From patchwork Mon Dec 4 14:13:03 2017
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10090423
From: Steve Capper <steve.capper@arm.com>
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Subject: [PATCH 02/12] arm64: KVM: Enforce injective kern_hyp_va mappings
Date: Mon, 4 Dec 2017 14:13:03 +0000
Message-Id: <20171204141313.31604-3-steve.capper@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20171204141313.31604-1-steve.capper@arm.com>
References: <20171204141313.31604-1-steve.capper@arm.com>
Cc: catalin.marinas@arm.com, ard.biesheuvel@linaro.org, James Morse,
 Steve Capper, Suzuki.Poulose@arm.com

For systems that are not executing with VHE, we need to create page
tables for HYP/EL2 mode in order to access data from the kernel running
at EL1. In addition to parts of the kernel address space being mapped to
EL2, we also need to make space for an identity mapping of the
__hyp_idmap_text area (as this code is responsible for activating the
EL2 MMU).

In order to create these page tables, we need a mechanism to map from
the address space pointed to by TTBR1_EL1 (addresses prefixed with
0xFF...) to the one addressed by TTBR0_EL2 (addresses prefixed with
0x00...). There are two ways of performing this mapping, depending on
the physical address of __hyp_idmap_text_start.

If PA[VA_BITS - 2] == 0b:
 1) HYP_VA = KERN_VA & GENMASK(VA_BITS - 2, 0) - we mask in the lower
    bits of the kernel address. This is a bijective mapping.

If PA[VA_BITS - 2] == 1b:
 2) HYP_VA = KERN_VA & GENMASK(VA_BITS - 3, 0) - the top bit of our HYP
    VA will always be zero. This mapping is no longer injective: each
    HYP VA can be obtained from two different kernel VAs.

These mappings guarantee that kernel addresses in the direct linear
mapping will not give a HYP VA that collides with the identity mapping
for __hyp_idmap_text. Unfortunately, with the second mapping we run the
risk of HYP VAs derived from kernel addresses in the direct linear map
colliding with those derived from kernel addresses returned by ioremap.

This patch addresses the issue by switching to the following logic:

If PA[VA_BITS - 2] == 0b:
 3) HYP_VA = KERN_VA XOR GENMASK(63, VA_BITS - 1) - we toggle off the
    top bits of the kernel address rather than ANDing in the bottom
    bits.

If PA[VA_BITS - 2] == 1b:
 4) HYP_VA = KERN_VA XOR GENMASK(63, VA_BITS - 2) - this no longer maps
    to a reduced address space; we have a bijective mapping.

Now there is no possibility of collision between HYP VAs obtained from
kernel addresses.

Note that the new mappings are no longer idempotent, so the following
code sequence will behave differently after this patch is applied:

	testva = kern_hyp_va(kern_hyp_va(sourceva));
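
As an illustration only (a userspace sketch, not part of the patch;
VA_BITS = 48, the GENMASK64 helper, and the two example addresses are
assumptions chosen to make the collision visible, following formulas
2) and 4) above), the old reduced mask collapses a linear-map VA and an
ioremap VA onto the same HYP VA, while the new XOR constant keeps every
pair of distinct kernel VAs distinct:

/*
 * Illustrative userspace sketch, not part of the patch. VA_BITS = 48
 * and the example addresses are assumed values for demonstration.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define VA_BITS		48
#define GENMASK64(h, l)	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

/* Case 2: AND with a reduced mask (used when PA[VA_BITS - 2] == 1). */
static uint64_t old_map(uint64_t va)
{
	return va & GENMASK64(VA_BITS - 3, 0);
}

/* Case 4: XOR the top bits away instead; bijective by construction. */
static uint64_t new_map(uint64_t va)
{
	return va ^ GENMASK64(63, VA_BITS - 2);
}

int main(void)
{
	uint64_t linear = 0xFFFF800012345000ULL; /* direct linear map VA */
	uint64_t iomap  = 0xFFFF000012345000ULL; /* ioremap-style VA */

	/* Old scheme: both kernel VAs collapse onto the same HYP VA. */
	printf("old: %016" PRIx64 " vs %016" PRIx64 "\n",
	       old_map(linear), old_map(iomap));

	/* New scheme: XOR with a constant cannot collide. */
	printf("new: %016" PRIx64 " vs %016" PRIx64 "\n",
	       new_map(linear), new_map(iomap));
	return 0;
}

Running the sketch prints the same HYP VA twice for the old scheme and
two different values for the new one.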
Cc: James Morse
Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/cpucaps.h |  2 +-
 arch/arm64/include/asm/kvm_mmu.h | 36 +++++++++++++++++-------------------
 arch/arm64/kernel/cpufeature.c   |  8 ++++----
 3 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 2ff7c5e8efab..3de31a1010ee 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -32,7 +32,7 @@
 #define ARM64_HAS_VIRT_HOST_EXTN		11
 #define ARM64_WORKAROUND_CAVIUM_27456		12
 #define ARM64_HAS_32BIT_EL0			13
-#define ARM64_HYP_OFFSET_LOW			14
+#define ARM64_HYP_MAP_FLIP			14
 #define ARM64_MISMATCHED_CACHE_LINE_SIZE	15
 #define ARM64_HAS_NO_FPSIMD			16
 #define ARM64_WORKAROUND_REPEAT_TLBI		17
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 672c8684d5c2..d74d5236c26c 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -69,8 +69,8 @@
  * mappings, and none of this applies in that case.
  */

-#define HYP_PAGE_OFFSET_HIGH_MASK	((UL(1) << VA_BITS) - 1)
-#define HYP_PAGE_OFFSET_LOW_MASK	((UL(1) << (VA_BITS - 1)) - 1)
+#define HYP_MAP_KERNEL_BITS	(UL(0xffffffffffffffff) << VA_BITS)
+#define HYP_MAP_HIGH_BIT	(UL(1) << (VA_BITS - 1))

 #ifdef __ASSEMBLY__

@@ -82,26 +82,24 @@
  * reg: VA to be converted.
  *
  * This generates the following sequences:
- * - High mask:
- *		and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
+ *
+ * - Flip the kernel bits:
+ *		eor x0, x0, #HYP_MAP_KERNEL_BITS
  *		nop
- * - Low mask:
- *		and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
- *		and x0, x0, #HYP_PAGE_OFFSET_LOW_MASK
+ *
+ * - Flip the kernel bits and upper HYP bit:
+ *		eor x0, x0, #HYP_MAP_KERNEL_BITS
+ *		eor x0, x0, #HYP_MAP_HIGH_BIT
  * - VHE:
  *		nop
  *		nop
- *
- * The "low mask" version works because the mask is a strict subset of
- * the "high mask", hence performing the first mask for nothing.
- * Should be completely invisible on any viable CPU.
  */
 .macro kern_hyp_va	reg
 alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
-	and	\reg, \reg, #HYP_PAGE_OFFSET_HIGH_MASK
+	eor	\reg, \reg, #HYP_MAP_KERNEL_BITS
 alternative_else_nop_endif
-alternative_if ARM64_HYP_OFFSET_LOW
-	and	\reg, \reg, #HYP_PAGE_OFFSET_LOW_MASK
+alternative_if ARM64_HYP_MAP_FLIP
+	eor	\reg, \reg, #HYP_MAP_HIGH_BIT
 alternative_else_nop_endif
 .endm

@@ -115,16 +113,16 @@ alternative_else_nop_endif

 static inline unsigned long __kern_hyp_va(unsigned long v)
 {
-	asm volatile(ALTERNATIVE("and %0, %0, %1",
+	asm volatile(ALTERNATIVE("eor %0, %0, %1",
 				 "nop",
 				 ARM64_HAS_VIRT_HOST_EXTN)
 		     : "+r" (v)
-		     : "i" (HYP_PAGE_OFFSET_HIGH_MASK));
+		     : "i" (HYP_MAP_KERNEL_BITS));
 	asm volatile(ALTERNATIVE("nop",
-				 "and %0, %0, %1",
-				 ARM64_HYP_OFFSET_LOW)
+				 "eor %0, %0, %1",
+				 ARM64_HYP_MAP_FLIP)
 		     : "+r" (v)
-		     : "i" (HYP_PAGE_OFFSET_LOW_MASK));
+		     : "i" (HYP_MAP_HIGH_BIT));
 	return v;
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c5ba0097887f..5a6e1f3611eb 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -824,7 +824,7 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
 	return is_kernel_in_hyp_mode();
 }

-static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
+static bool hyp_flip_space(const struct arm64_cpu_capabilities *entry,
 			   int __unused)
 {
 	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
@@ -926,10 +926,10 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
 	},
 	{
-		.desc = "Reduced HYP mapping offset",
-		.capability = ARM64_HYP_OFFSET_LOW,
+		.desc = "HYP mapping flipped",
+		.capability = ARM64_HYP_MAP_FLIP,
 		.def_scope = SCOPE_SYSTEM,
-		.matches = hyp_offset_low,
+		.matches = hyp_flip_space,
 	},
 	{
 		/* FP/SIMD is not implemented */
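
As a final illustration (again a standalone sketch rather than part of
the patch, with VA_BITS = 48 assumed and the macro names mirroring the
ones added above), the two EOR immediates patched in for the flip case
are disjoint, so back to back they behave as a single XOR with their
bitwise OR, and XOR with a constant is an involution; this is the
non-idempotence noted in the commit message:

/*
 * Standalone sketch, not part of the patch. VA_BITS = 48 and the
 * example address are assumed values for demonstration.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

#define VA_BITS			48
#define HYP_MAP_KERNEL_BITS	(~0ULL << VA_BITS)
#define HYP_MAP_HIGH_BIT	(1ULL << (VA_BITS - 1))

/* The flip case: both patched-in EOR instructions, back to back. */
static uint64_t kern_hyp_va_flip(uint64_t va)
{
	va ^= HYP_MAP_KERNEL_BITS;	/* first patched EOR */
	va ^= HYP_MAP_HIGH_BIT;		/* second patched EOR */
	return va;
}

int main(void)
{
	uint64_t sourceva = 0xFFFF800012345000ULL;

	/* Disjoint immediates: together they act as one XOR constant. */
	printf("combined constant: %016" PRIx64 "\n",
	       (uint64_t)(HYP_MAP_KERNEL_BITS | HYP_MAP_HIGH_BIT));

	/*
	 * XOR with a constant is an involution: applying the mapping
	 * twice returns the original kernel VA, unlike the old
	 * AND-based mapping, for which (va & m) & m == va & m.
	 */
	assert(kern_hyp_va_flip(kern_hyp_va_flip(sourceva)) == sourceva);
	return 0;
}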