From patchwork Mon Feb 18 17:02:37 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10818481
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, Steve Capper, marc.zyngier@arm.com,
 catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com
Subject: [PATCH 1/9] arm/arm64: KVM: Formalise end of direct linear map
Date: Mon, 18 Feb 2019 17:02:37 +0000
Message-Id: <20190218170245.14915-2-steve.capper@arm.com>
In-Reply-To: <20190218170245.14915-1-steve.capper@arm.com>
References: <20190218170245.14915-1-steve.capper@arm.com>

We assume that the direct linear map ends at ~0 in the KVM HYP map
intersection checking code. This assumption will become invalid later on
for arm64 when the kernel address space is re-arranged.

This patch introduces a new constant, PAGE_OFFSET_END, for both arm and
arm64, and defines it to be ~0UL.

Signed-off-by: Steve Capper
---
 arch/arm/include/asm/memory.h   | 1 +
 arch/arm64/include/asm/memory.h | 1 +
 virt/kvm/arm/mmu.c              | 4 ++--
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index ed8fd0d19a3e..45c211fd50da 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -24,6 +24,7 @@
 /* PAGE_OFFSET - the virtual address of the start of the kernel image */
 #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
+#define PAGE_OFFSET_END		(~0UL)
 
 #ifdef CONFIG_MMU

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0c656850eeea..617071dbad06 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -52,6 +52,7 @@
 				(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 				(UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET_END		(~0UL)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 30251e288629..94496bdffb49 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2179,10 +2179,10 @@ int kvm_mmu_init(void)
 	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
 	kvm_debug("HYP VA range: %lx:%lx\n",
 		  kern_hyp_va(PAGE_OFFSET),
-		  kern_hyp_va((unsigned long)high_memory - 1));
+		  kern_hyp_va(PAGE_OFFSET_END));
 
 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
-	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
+	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
 	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
 		/*
 		 * The idmap page is intersecting with the VA space,
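As a quick aside on what "end of the direct linear map" feeds into: below
is a minimal stand-alone sketch (not part of the patch) of the shape of
the intersection check in kvm_mmu_init(). The values and the identity
kern_hyp_va() are hypothetical stand-ins for the real kernel definitions,
assuming a pre-flip 48-bit VA layout:

	#include <stdio.h>

	/* Hypothetical 48-bit values; linear map in the upper half,
	 * PAGE_OFFSET_END per this patch. */
	#define PAGE_OFFSET	(~0UL - (1UL << 47) + 1)	/* start of linear map */
	#define PAGE_OFFSET_END	(~0UL)				/* end of linear map */

	static unsigned long kern_hyp_va(unsigned long va)
	{
		return va;	/* identity here; the kernel applies a mask/tag */
	}

	int main(void)
	{
		unsigned long hyp_idmap_start = 0x40000000UL;	/* example idmap address */

		/* Same shape as the check in kvm_mmu_init() above. */
		if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
		    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END))
			printf("idmap intersects the HYP VA range\n");
		else
			printf("idmap is outside the HYP VA range\n");
		return 0;
	}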
From patchwork Mon Feb 18 17:02:38 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10818495
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, Steve Capper, marc.zyngier@arm.com,
 catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com
Subject: [PATCH 2/9] arm64: mm: Flip kernel VA space
Date: Mon, 18 Feb 2019 17:02:38 +0000
Message-Id: <20190218170245.14915-3-steve.capper@arm.com>
In-Reply-To: <20190218170245.14915-1-steve.capper@arm.com>
References: <20190218170245.14915-1-steve.capper@arm.com>

Put the direct linear map in the lower addresses of the kernel VA range,
and everything else in the higher ranges.

This allows us to make room for an inline KASAN shadow that operates
under both 48-bit and 52-bit kernel VA sizes. For example, with a 52-bit
VA, if KASAN_SHADOW_END < 0xFFF8000000000000 (i.e. it is in the lower
addresses of the kernel VA range), it will fall below the start of the
minimum 48-bit kernel VA address of 0xFFFF000000000000.

We need to adjust:
 *) the KASAN shadow region placement logic,
 *) the KASAN_SHADOW_OFFSET computation logic,
 *) the virt_to_phys and phys_to_virt checks,
 *) the page table dumper.

These are all small changes that need to take place atomically, so they
are bundled into this commit.
Signed-off-by: Steve Capper
---
 arch/arm64/Makefile              |  2 +-
 arch/arm64/include/asm/memory.h  | 10 +++++-----
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/arm64/mm/dump.c             |  8 ++++----
 arch/arm64/mm/init.c             |  9 +--------
 arch/arm64/mm/kasan_init.c       |  6 +++---
 arch/arm64/mm/mmu.c              |  4 ++--
 7 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index b025304bde46..2dad2ae6b181 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -115,7 +115,7 @@ KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
+	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
 	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
 	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 617071dbad06..46a7aba44e9b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -49,10 +49,10 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define VA_START		(UL(0xffffffffffffffff) - \
-				(UL(1) << VA_BITS) + 1)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 				(UL(1) << (VA_BITS - 1)) + 1)
-#define PAGE_OFFSET_END		(~0UL)
+#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
+				(UL(1) << VA_BITS) + 1)
+#define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
@@ -60,7 +60,7 @@
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(BPF_JIT_REGION_END)
 #define MODULES_VSIZE		(SZ_128M)
-#define VMEMMAP_START		(PAGE_OFFSET - VMEMMAP_SIZE)
+#define VMEMMAP_START		(-VMEMMAP_SIZE)
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
@@ -243,7 +243,7 @@ extern u64			vabits_user;
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index de70c1eabf33..766def2ed788 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -32,7 +32,7 @@
  * and fixed mappings
  */
 #define VMALLOC_START		(MODULES_END)
-#define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
+#define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 99bb8facb5cb..3dd9b884bd39 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -30,6 +30,8 @@
 #include
 
 static const struct addr_marker address_markers[] = {
+	{ PAGE_OFFSET,			"Linear Mapping start" },
+	{ VA_START,			"Linear Mapping end" },
 #ifdef CONFIG_KASAN
 	{ KASAN_SHADOW_START,		"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
@@ -43,10 +45,8 @@ static const struct addr_marker address_markers[] = {
 	{ PCI_IO_START,			"PCI I/O start" },
 	{ PCI_IO_END,			"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-	{ VMEMMAP_START,		"vmemmap start" },
-	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
+	{ VMEMMAP_START,		"vmemmap" },
 #endif
-	{ PAGE_OFFSET,			"Linear mapping" },
 	{ -1,				NULL },
 };
 
@@ -380,7 +380,7 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= VA_START,
+	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 7205a9085b4d..0574e17fd28d 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -354,7 +354,7 @@ static void __init fdt_enforce_memory_region(void)
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+	const s64 linear_region_size = BIT(VA_BITS - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -362,13 +362,6 @@ void __init arm64_memblock_init(void)
 	/* Remove memory above our supported physical address size */
 	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
 
-	/*
-	 * Ensure that the linear region takes up exactly half of the kernel
-	 * virtual address space. This way, we can distinguish a linear address
-	 * from a kernel/module/vmalloc address by testing a single bit.
-	 */
-	BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
-
 	/*
 	 * Select a suitable value for the base of physical memory.
 	 */

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 4b55b15707a3..ee5ec343d009 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -219,10 +219,10 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
-				    (void *)mod_shadow_start);
+	kasan_populate_early_shadow(kasan_mem_to_shadow((void *) VA_START),
+				    (void *)mod_shadow_start);
 	kasan_populate_early_shadow((void *)kimg_shadow_end,
-				    kasan_mem_to_shadow((void *)PAGE_OFFSET));
+				    (void *)KASAN_SHADOW_END);
 
 	if (kimg_shadow_start > mod_shadow_end)
 		kasan_populate_early_shadow((void *)mod_shadow_end,

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b6f5aa52ac67..7401a8481f78 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -389,7 +389,7 @@ static phys_addr_t pgd_pgtable_alloc(void)
 static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 				  phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
@@ -416,7 +416,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
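To make the flip concrete, here is a small stand-alone sketch (not part
of the patch) that prints the linear map boundaries before and after
this change; the arithmetic mirrors the PAGE_OFFSET/VA_START macro
definitions above:

	#include <stdio.h>

	/* Illustrative only: linear map boundaries before and after the
	 * flip, for a given VA_BITS. */
	static void show_layout(unsigned int va_bits)
	{
		unsigned long old_page_offset = ~0UL - (1UL << (va_bits - 1)) + 1;
		unsigned long new_page_offset = ~0UL - (1UL << va_bits) + 1;
		unsigned long new_va_start    = ~0UL - (1UL << (va_bits - 1)) + 1;

		printf("VA_BITS=%u\n", va_bits);
		printf("  old: linear map %016lx..%016lx (upper half)\n",
		       old_page_offset, ~0UL);
		printf("  new: linear map %016lx..%016lx (lower half)\n",
		       new_page_offset, new_va_start - 1);
	}

	int main(void)
	{
		show_layout(48);	/* ffff000000000000..ffff7fffffffffff after */
		show_layout(52);
		return 0;
	}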
From patchwork Mon Feb 18 17:02:39 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10818499
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, Steve Capper, marc.zyngier@arm.com,
 catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com
Subject: [PATCH 3/9] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
Date: Mon, 18 Feb 2019 17:02:39 +0000
Message-Id: <20190218170245.14915-4-steve.capper@arm.com>
In-Reply-To: <20190218170245.14915-1-steve.capper@arm.com>
References: <20190218170245.14915-1-steve.capper@arm.com>

KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:

	*ptr1 = val;

the compiler will insert logic similar to the below:

	shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET);
	if (somethingWrong(shadowValue))
		flagAnError();

This code sequence is inserted into many places, thus KASAN_SHADOW_OFFSET
is essentially baked into many places in the kernel text.

If we want to run a single kernel binary with multiple address spaces,
then we need to do this with KASAN_SHADOW_OFFSET fixed.

Thankfully, due to the way KASAN_SHADOW_OFFSET is used to provide shadow
addresses, we know that the end of the shadow region is constant w.r.t.
VA space size:

	KASAN_SHADOW_END = (~0 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of
the KASAN region expands into lower addresses whilst the end of the
KASAN region stays fixed.

Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
build scripts, with the VA size used as a parameter. (There are also
build-time checks in the C code to ensure that the expected values are
being derived.) It is sufficient, and indeed a simplification, to remove
the build scripts (and build-time checks) entirely and instead provide
fixed KASAN_SHADOW_OFFSET values.
This patch removes the logic that computes KASAN_SHADOW_OFFSET in the
arm64 Makefile; instead we adopt the approach used by x86 and supply the
offset values via Kconfig. To help debug/develop future VA space
changes, the Makefile logic has been preserved in a script file in the
arm64 Documentation folder.

Signed-off-by: Steve Capper
---
Changed in V3: rebased to include KASAN_SHADOW_SCALE_SHIFT; wording
tidied up.
---
 Documentation/arm64/kasan-offsets.sh | 27 +++++++++++++++++++++++++++
 arch/arm64/Kconfig                   | 15 +++++++++++++++
 arch/arm64/Makefile                  |  8 --------
 arch/arm64/include/asm/kasan.h       | 11 ++++-------
 arch/arm64/include/asm/memory.h      |  8 +++++---
 arch/arm64/mm/kasan_init.c           |  2 --
 6 files changed, 51 insertions(+), 20 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
new file mode 100644
index 000000000000..2b7a021db363
--- /dev/null
+++ b/Documentation/arm64/kasan-offsets.sh
@@ -0,0 +1,27 @@
+#!/bin/sh
+
+# Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
+# start address at the mid-point of the kernel VA space
+
+print_kasan_offset () {
+	printf "%02d\t" $1
+	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+			+ (1 << ($1 - 32 - $2)) \
+			- (1 << (64 - 32 - $2)) ))
+}
+
+echo KASAN_SHADOW_SCALE_SHIFT = 3
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 3
+print_kasan_offset 47 3
+print_kasan_offset 42 3
+print_kasan_offset 39 3
+print_kasan_offset 36 3
+echo
+echo KASAN_SHADOW_SCALE_SHIFT = 4
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 4
+print_kasan_offset 47 4
+print_kasan_offset 42 4
+print_kasan_offset 39 4
+print_kasan_offset 36 4

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a4168d366127..db155ae86fed 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -291,6 +291,21 @@ config ARCH_SUPPORTS_UPROBES
 config ARCH_PROC_KCORE_TEXT
 	def_bool y
 
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
+	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
+	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
+	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
+	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+	default 0xffffffffffffffff
+
 source "arch/arm64/Kconfig.platforms"
 
 menu "Kernel Features"

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2dad2ae6b181..764201c76d77 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -111,14 +111,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 
-# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
-#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
-# in 32-bit arithmetic
-KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
-	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
-	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
-
 export	TEXT_OFFSET GZFLAGS
 
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..10d2add842da 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -18,11 +18,8 @@
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
  * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
  * where N = (1 << KASAN_SHADOW_SCALE_SHIFT).
- */
-#define KASAN_SHADOW_START	(VA_START)
-#define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
-
-/*
+ *
+ * KASAN_SHADOW_OFFSET:
  * This value is used to map an address to the corresponding shadow
  * address by the following formula:
  *     shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
@@ -33,8 +30,8 @@
  *      KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
-#define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_END - (1ULL << \
-					(64 - KASAN_SHADOW_SCALE_SHIFT)))
+#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_START	_KASAN_SHADOW_START(VA_BITS)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 46a7aba44e9b..dde0b9b75b22 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -54,7 +54,7 @@
 				(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
-#define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
+#define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
 #define BPF_JIT_REGION_END	(BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
@@ -80,15 +80,17 @@
  * significantly, so double the (minimum) stack size when they are in use.
  */
 #ifdef CONFIG_KASAN
-#define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
+#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
+					+ KASAN_SHADOW_OFFSET)
 #ifdef CONFIG_KASAN_EXTRA
 #define KASAN_THREAD_SHIFT	2
 #else
 #define KASAN_THREAD_SHIFT	1
 #endif /* CONFIG_KASAN_EXTRA */
 #else
-#define KASAN_SHADOW_SIZE	(0)
 #define KASAN_THREAD_SHIFT	0
+#define KASAN_SHADOW_END	(VA_START)
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index ee5ec343d009..66ea44568aa5 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -148,8 +148,6 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
-		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
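The "fixed end, moving start" property of the shadow region can be
checked numerically. Below is a stand-alone sketch, assuming the 48-bit
generic-KASAN offset value from the Kconfig table above; mem_to_shadow()
is the same formula the inline sanitiser bakes into each instrumented
access:

	#include <stdio.h>

	#define KASAN_SHADOW_SCALE_SHIFT	3
	/* Hypothetical: the 48-bit, !KASAN_SW_TAGS offset from the table above. */
	#define KASAN_SHADOW_OFFSET		0xdfffa00000000000UL

	static unsigned long mem_to_shadow(unsigned long addr)
	{
		return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
	}

	int main(void)
	{
		unsigned int sizes[] = { 48, 52 };

		/* The shadow of the top of the address space is VA-size invariant... */
		printf("KASAN_SHADOW_END = %016lx\n", mem_to_shadow(~0UL) + 1);

		/* ...while the shadow of the lowest kernel address moves down
		 * as the VA space grows. */
		for (int i = 0; i < 2; i++)
			printf("KASAN_SHADOW_START(VA_BITS=%u) = %016lx\n",
			       sizes[i],
			       mem_to_shadow(~0UL - (1UL << sizes[i]) + 1));
		return 0;
	}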
From patchwork Mon Feb 18 17:02:40 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10818477
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, Steve Capper, marc.zyngier@arm.com,
 catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com
Subject: [PATCH 4/9] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's
Date: Mon, 18 Feb 2019 17:02:40 +0000
Message-Id: <20190218170245.14915-5-steve.capper@arm.com>
In-Reply-To: <20190218170245.14915-1-steve.capper@arm.com>
References: <20190218170245.14915-1-steve.capper@arm.com>

In order to prepare for a variable VA_BITS we need to account for a
variable-size VMEMMAP, which in turn means the position of the fixed map
is no longer known at compile time. Thus, we need to replace the
BUILD_BUG_ONs that check the fixed map position with BUG_ONs.

Signed-off-by: Steve Capper
---
 arch/arm64/mm/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 7401a8481f78..c24bc447ed9c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -830,7 +830,7 @@ void __init early_fixmap_init(void)
 	 * The boot-ioremap range spans multiple pmds, for which
 	 * we are not prepared:
 	 */
-	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+	BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
 		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
 
 	if ((pmdp != fixmap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)))
@@ -898,9 +898,9 @@ void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
 	 * On 4k pages, we'll use section mappings for the FDT so we only
 	 * have to be in the same PUD.
 	 */
-	BUILD_BUG_ON(dt_virt_base % SZ_2M);
+	BUG_ON(dt_virt_base % SZ_2M);
 
-	BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
+	BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
 		     __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
 
 	offset = dt_phys % SWAPPER_BLOCK_SIZE;
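For readers less familiar with the distinction: BUILD_BUG_ON() only
accepts integer constant expressions, so once an address is computed at
boot the assertion has to move to runtime. A minimal sketch of the idea
(fixmap_base is a hypothetical stand-in, and this simplified
BUILD_BUG_ON is only similar in spirit to the kernel's):

	#include <assert.h>

	/* Compile-time assert via a negative array size; rejects anything
	 * that is not an integer constant expression. */
	#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

	static unsigned long fixmap_base;	/* now only known at boot */

	void check_layout(void)
	{
		/* Fine: both operands are compile-time constants. */
		BUILD_BUG_ON(sizeof(long) != 8);

		/* BUILD_BUG_ON(fixmap_base % 4096) is rejected once
		 * fixmap_base is a runtime value, hence the move to a
		 * runtime check (BUG_ON in the kernel). */
		assert((fixmap_base % 4096) == 0);
	}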
From patchwork Mon Feb 18 17:02:41 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10818479
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, Steve Capper, marc.zyngier@arm.com,
 catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com
Subject: [PATCH 5/9] arm64: dump: Make kernel page table dumper dynamic again
Date: Mon, 18 Feb 2019 17:02:41 +0000
Message-Id: <20190218170245.14915-6-steve.capper@arm.com>
In-Reply-To: <20190218170245.14915-1-steve.capper@arm.com>
References: <20190218170245.14915-1-steve.capper@arm.com>

The kernel page table dumper assumes that the placement of VA regions is
constant and determined at compile time. As we are about to introduce
variable VA logic, we need to be able to determine certain regions at
boot time instead.

This patch adds logic to the kernel page table dumper so that these
regions can be computed at boot time.

Signed-off-by: Steve Capper
---
 arch/arm64/mm/dump.c | 58 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 47 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 3dd9b884bd39..2196131b8e4f 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -29,23 +29,45 @@
 #include
 #include
 
-static const struct addr_marker address_markers[] = {
-	{ PAGE_OFFSET,			"Linear Mapping start" },
-	{ VA_START,			"Linear Mapping end" },
+
+enum address_markers_idx {
+	PAGE_OFFSET_NR = 0,
+	VA_START_NR,
+#ifdef CONFIG_KASAN
+	KASAN_START_NR,
+	KASAN_END_NR,
+#endif
+	MODULES_START_NR,
+	MODULES_END_NR,
+	VMALLOC_START_NR,
+	VMALLOC_END_NR,
+	FIXADDR_START_NR,
+	FIXADDR_END_NR,
+	PCI_START_NR,
+	PCI_END_NR,
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	VMEMMAP_START_NR,
+#endif
+	END_NR
+};
+
+static struct addr_marker address_markers[] = {
+	{ 0 /* PAGE_OFFSET */,		"Linear Mapping start" },
+	{ 0 /* VA_START */,		"Linear Mapping end" },
 #ifdef CONFIG_KASAN
-	{ KASAN_SHADOW_START,		"Kasan shadow start" },
+	{ 0 /* KASAN_SHADOW_START */,	"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
 #endif
 	{ MODULES_VADDR,		"Modules start" },
 	{ MODULES_END,			"Modules end" },
 	{ VMALLOC_START,		"vmalloc() area" },
-	{ VMALLOC_END,			"vmalloc() end" },
-	{ FIXADDR_START,		"Fixmap start" },
-	{ FIXADDR_TOP,			"Fixmap end" },
-	{ PCI_IO_START,			"PCI I/O start" },
-	{ PCI_IO_END,			"PCI I/O end" },
+	{ 0 /* VMALLOC_END */,		"vmalloc() end" },
+	{ 0 /* FIXADDR_START */,	"Fixmap start" },
+	{ 0 /* FIXADDR_TOP */,		"Fixmap end" },
+	{ 0 /* PCI_IO_START */,		"PCI I/O start" },
+	{ 0 /* PCI_IO_END */,		"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-	{ VMEMMAP_START,		"vmemmap" },
+	{ 0 /* VMEMMAP_START */,	"vmemmap" },
 #endif
 	{ -1,				NULL },
 };
@@ -380,7 +402,6 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
@@ -406,6 +427,21 @@ void ptdump_check_wx(void)
 static int ptdump_init(void)
 {
 	ptdump_initialize();
+	kernel_ptdump_info.base_addr = PAGE_OFFSET;
+	address_markers[PAGE_OFFSET_NR].start_address = PAGE_OFFSET;
+	address_markers[VA_START_NR].start_address = VA_START;
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
+#endif
+	address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
+	address_markers[FIXADDR_START_NR].start_address = FIXADDR_START;
+	address_markers[FIXADDR_END_NR].start_address = FIXADDR_TOP;
+	address_markers[PCI_START_NR].start_address = PCI_IO_START;
+	address_markers[PCI_END_NR].start_address = PCI_IO_END;
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	address_markers[VMEMMAP_START_NR].start_address = VMEMMAP_START;
+#endif
+
 	return ptdump_debugfs_register(&kernel_ptdump_info,
 					"kernel_page_tables");
 }
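The pattern above, static placeholders filled in once the layout is
known, is common enough to be worth a stripped-down sketch. Everything
here is illustrative; the names mirror, but are not, the kernel's:

	#include <stdio.h>

	struct addr_marker {
		unsigned long start_address;
		const char *name;
	};

	static struct addr_marker markers[] = {
		{ 0 /* filled at boot */, "Linear Mapping start" },
		{ 0 /* filled at boot */, "Linear Mapping end" },
		{ ~0UL, NULL },
	};

	static unsigned long page_offset, va_start;	/* computed at boot */

	static int dumper_init(void)
	{
		/* Runs after the VA layout has been decided, so the
		 * markers can no longer be compile-time initialisers. */
		markers[0].start_address = page_offset;
		markers[1].start_address = va_start;
		return 0;
	}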
From patchwork Mon Feb 18 17:02:42 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10818503
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, Steve Capper, marc.zyngier@arm.com,
 catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com
Subject: [PATCH 6/9] arm64: mm: Introduce VA_BITS_MIN
Date: Mon, 18 Feb 2019 17:02:42 +0000
Message-Id: <20190218170245.14915-7-steve.capper@arm.com>
In-Reply-To: <20190218170245.14915-1-steve.capper@arm.com>
References: <20190218170245.14915-1-steve.capper@arm.com>

In order to support 52-bit kernel addresses detectable at boot time, the
kernel needs to know the most conservative VA_BITS possible, should it
need to fall back to this quantity due to lack of hardware support.

A new compile-time constant, VA_BITS_MIN, is introduced in this patch
and employed for the KASAN end address, KASLR, and the EFI stub.

For Arm, if 52-bit VA support is unavailable the fallback is to 48 bits.
In other words: VA_BITS_MIN = min(48, VA_BITS).

Signed-off-by: Steve Capper
---
 arch/arm64/Kconfig                 | 4 ++++
 arch/arm64/include/asm/efi.h       | 4 ++--
 arch/arm64/include/asm/memory.h    | 5 ++++-
 arch/arm64/include/asm/processor.h | 2 +-
 arch/arm64/kernel/head.S           | 2 +-
 arch/arm64/kernel/kaslr.c          | 6 +++---
 arch/arm64/mm/kasan_init.c         | 3 ++-
 7 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index db155ae86fed..0f933d3a1614 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -756,6 +756,10 @@ config ARM64_VA_BITS
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
 
+config ARM64_VA_BITS_MIN
+	int
+	default ARM64_VA_BITS
+
 choice
 	prompt "Physical address space size"
 	default ARM64_PA_BITS_48

diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 7ed320895d1f..4a84353bc9f7 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -68,7 +68,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
- * which is a 1 GB aligned region of size '1UL << (VA_BITS - 1)' that is
+ * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
  * guaranteed to cover the kernel Image.
  *
  * Since the EFI stub is part of the kernel Image, we can relax the
@@ -79,7 +79,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 static inline unsigned long efi_get_max_initrd_addr(unsigned long dram_base,
 						    unsigned long image_addr)
 {
-	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS - 1));
+	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS_MIN - 1));
 }
 
 #define efi_call_early(f, ...)		sys_table_arg->boottime->f(__VA_ARGS__)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index dde0b9b75b22..03db258eb354 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -64,6 +64,9 @@
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
+#define VA_BITS_MIN		(CONFIG_ARM64_VA_BITS_MIN)
+#define _VA_START(va)		(UL(0xffffffffffffffff) - \
+				(UL(1) << ((va) - 1)) + 1)
 
 #define KERNEL_START		_text
 #define KERNEL_END		_end
@@ -90,7 +93,7 @@
 #endif /* CONFIG_KASAN_EXTRA */
 #else
 #define KASAN_THREAD_SHIFT	0
-#define KASAN_SHADOW_END	(VA_START)
+#define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index f1a7ab18faf3..48e94a671234 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -53,7 +53,7 @@
  * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
  */
 
-#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
+#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS_MIN)
 #define TASK_SIZE_64		(UL(1) << vabits_user)
 
 #ifdef CONFIG_COMPAT

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 15d79a8e5e5e..b6c15b97ec9d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -325,7 +325,7 @@ __create_page_tables:
 	mov	x5, #52
 	cbnz	x6, 1f
 #endif
-	mov	x5, #VA_BITS
+	mov	x5, #VA_BITS_MIN
 1:
 	adr_l	x6, vabits_user
 	str	x5, [x6]

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index b09b6f75f759..6f0075f983c7 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -119,15 +119,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
	 * the lower and upper quarters to avoid colliding with other
	 * allocations.
	 * Even if we could randomize at page granularity for 16k and 64k pages,
	 * let's always round to 2 MB so we don't interfere with the ability to
	 * map using contiguous PTEs
	 */
-	mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-	offset = BIT(VA_BITS - 3) + (seed & mask);
+	mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(SZ_2M - 1);
+	offset = BIT(VA_BITS_MIN - 3) + (seed & mask);
 
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 66ea44568aa5..3996d46b5d0e 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -148,7 +148,8 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
 			   true);
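A quick stand-alone sketch of the fallback rule and one of its users,
assuming the convention stated above (hardware either supports 52-bit
kernel VAs or we drop to 48 bits); the KASLR mask expression mirrors the
kaslr.c change:

	#include <stdio.h>

	#define CONFIG_ARM64_VA_BITS	52	/* hypothetical build config */
	#define VA_BITS			CONFIG_ARM64_VA_BITS
	#define VA_BITS_MIN		(VA_BITS > 48 ? 48 : VA_BITS)	/* min(48, VA_BITS) */

	int main(void)
	{
		/* Quantities that must be safe regardless of what the CPU
		 * offers (e.g. the KASLR window) use VA_BITS_MIN rather
		 * than VA_BITS. */
		unsigned long mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(0x200000UL - 1);

		printf("VA_BITS=%d VA_BITS_MIN=%d kaslr mask=%016lx\n",
		       VA_BITS, VA_BITS_MIN, mask);
		return 0;
	}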
with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1gvmLL-0004ax-5G for linux-arm-kernel@lists.infradead.org; Mon, 18 Feb 2019 17:04:37 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1B96B1993; Mon, 18 Feb 2019 09:04:19 -0800 (PST) Received: from capper-debian.emea.arm.com (C02R32KKFVH8.manchester.arm.com [10.32.102.153]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3F1B63F740; Mon, 18 Feb 2019 09:04:17 -0800 (PST) From: Steve Capper To: linux-arm-kernel@lists.infradead.org Subject: [PATCH 7/9] arm64: mm: Introduce VA_BITS_ACTUAL Date: Mon, 18 Feb 2019 17:02:43 +0000 Message-Id: <20190218170245.14915-8-steve.capper@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190218170245.14915-1-steve.capper@arm.com> References: <20190218170245.14915-1-steve.capper@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190218_090435_219217_129B7A89 X-CRM114-Status: GOOD ( 17.62 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: crecklin@redhat.com, Steve Capper , marc.zyngier@arm.com, catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP In order to support 52-bit kernel addresses detectable at boot time, one needs to know the actual VA_BITS detected. A new variable VA_BITS_ACTUAL is introduced in this commit and employed for the KVM hypervisor layout, KASAN, fault handling and phys-to/from-virt translation where there would normally be compile time constants. In order to maintain performance in phys_to_virt, another variable physvirt_offset is introduced. Signed-off-by: Steve Capper --- arch/arm64/include/asm/kasan.h | 2 +- arch/arm64/include/asm/memory.h | 17 ++++++++++------- arch/arm64/include/asm/mmu_context.h | 2 +- arch/arm64/kernel/head.S | 5 +++++ arch/arm64/kvm/va_layout.c | 14 +++++++------- arch/arm64/mm/fault.c | 4 ++-- arch/arm64/mm/init.c | 7 ++++++- arch/arm64/mm/mmu.c | 3 +++ 8 files changed, 35 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h index 10d2add842da..ff991dc86ae1 100644 --- a/arch/arm64/include/asm/kasan.h +++ b/arch/arm64/include/asm/kasan.h @@ -31,7 +31,7 @@ * (1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT)) */ #define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT))) -#define KASAN_SHADOW_START _KASAN_SHADOW_START(VA_BITS) +#define KASAN_SHADOW_START _KASAN_SHADOW_START(VA_BITS_ACTUAL) void kasan_init(void); void kasan_copy_shadow(pgd_t *pgdir); diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index 03db258eb354..a51056a157dd 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -48,10 +48,6 @@ * VA_START - the first kernel virtual address. 
*/ #define VA_BITS (CONFIG_ARM64_VA_BITS) -#define VA_START (UL(0xffffffffffffffff) - \ - (UL(1) << (VA_BITS - 1)) + 1) -#define PAGE_OFFSET (UL(0xffffffffffffffff) - \ - (UL(1) << VA_BITS) + 1) #define PAGE_OFFSET_END (VA_START) #define KIMAGE_VADDR (MODULES_END) #define BPF_JIT_REGION_START (KASAN_SHADOW_END) @@ -178,10 +174,17 @@ #endif #ifndef __ASSEMBLY__ +extern u64 vabits_actual; +#define VA_BITS_ACTUAL ({vabits_actual;}) +#define VA_START (_VA_START(VA_BITS_ACTUAL)) +#define PAGE_OFFSET (UL(0xffffffffffffffff) - \ + (UL(1) << VA_BITS_ACTUAL) + 1) +#define PAGE_OFFSET_END (VA_START) #include #include +extern s64 physvirt_offset; extern s64 memstart_addr; /* PHYS_OFFSET - the physical address of the start of memory. */ #define PHYS_OFFSET ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; }) @@ -248,9 +251,9 @@ extern u64 vabits_user; * space. Testing the top bit for the start of the region is a * sufficient check. */ -#define __is_lm_address(addr) (!((addr) & BIT(VA_BITS - 1))) +#define __is_lm_address(addr) (!((addr) & BIT(VA_BITS_ACTUAL - 1))) -#define __lm_to_phys(addr) (((addr) & ~PAGE_OFFSET) + PHYS_OFFSET) +#define __lm_to_phys(addr) (((addr) + physvirt_offset)) #define __kimg_to_phys(addr) ((addr) - kimage_voffset) #define __virt_to_phys_nodebug(x) ({ \ @@ -269,7 +272,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x); #define __phys_addr_symbol(x) __pa_symbol_nodebug(x) #endif -#define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET) +#define __phys_to_virt(x) ((unsigned long)((x) - physvirt_offset)) #define __phys_to_kimg(x) ((unsigned long)((x) + kimage_voffset)) /* diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index 2da3e478fd8f..133ecb65b602 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -106,7 +106,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz) isb(); } -#define cpu_set_default_tcr_t0sz() __cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS)) +#define cpu_set_default_tcr_t0sz() __cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS_ACTUAL)) #define cpu_set_idmap_tcr_t0sz() __cpu_set_tcr_t0sz(idmap_t0sz) /* diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index b6c15b97ec9d..68c391b26858 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -332,6 +332,11 @@ __create_page_tables: dmb sy dc ivac, x6 // Invalidate potentially stale cache line + adr_l x6, vabits_actual + str x5, [x6] + dmb sy + dc ivac, x6 // Invalidate potentially stale cache line + /* * VA_BITS may be too small to allow for an ID mapping to be created * that covers system RAM if that is located sufficiently high in the diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c index c712a7376bc1..c9a1debb45bd 100644 --- a/arch/arm64/kvm/va_layout.c +++ b/arch/arm64/kvm/va_layout.c @@ -40,25 +40,25 @@ static void compute_layout(void) int kva_msb; /* Where is my RAM region? */ - hyp_va_msb = idmap_addr & BIT(VA_BITS - 1); - hyp_va_msb ^= BIT(VA_BITS - 1); + hyp_va_msb = idmap_addr & BIT(VA_BITS_ACTUAL - 1); + hyp_va_msb ^= BIT(VA_BITS_ACTUAL - 1); kva_msb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^ (u64)(high_memory - 1)); - if (kva_msb == (VA_BITS - 1)) { + if (kva_msb == (VA_BITS_ACTUAL - 1)) { /* * No space in the address, let's compute the mask so - * that it covers (VA_BITS - 1) bits, and the region + * that it covers (VA_BITS_ACTUAL - 1) bits, and the region * bit. The tag stays set to zero. 
 		 */
-		va_mask = BIT(VA_BITS - 1) - 1;
+		va_mask = BIT(VA_BITS_ACTUAL - 1) - 1;
 		va_mask |= hyp_va_msb;
 	} else {
 		/*
 		 * We do have some free bits to insert a random tag.
 		 * Hyp VAs are now created from kernel linear map VAs
-		 * using the following formula (with V == VA_BITS):
+		 * using the following formula (with V == VA_BITS_ACTUAL):
 		 *
 		 *  63 ... V |     V-1    | V-2 .. tag_lsb | tag_lsb - 1 .. 0
 		 *  ---------------------------------------------------------
 		 */
 		tag_lsb = kva_msb;
 		va_mask = GENMASK_ULL(tag_lsb - 1, 0);
-		tag_val = get_random_long() & GENMASK_ULL(VA_BITS - 2, tag_lsb);
+		tag_val = get_random_long() & GENMASK_ULL(VA_BITS_ACTUAL - 2, tag_lsb);
 		tag_val |= hyp_va_msb;
 		tag_val >>= tag_lsb;
 	}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index efb7b2cbead5..15eca262f6e6 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -171,9 +171,9 @@ void show_pte(unsigned long addr)
 		return;
 	}
 
-	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp = %p\n",
+	pr_alert("%s pgtable: %luk pages, %llu-bit VAs, pgdp = %p\n",
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
-		 mm == &init_mm ? VA_BITS : (int) vabits_user, mm->pgd);
+		 mm == &init_mm ? VA_BITS_ACTUAL : (int) vabits_user, mm->pgd);
 	pgdp = pgd_offset(mm, addr);
 	pgd = READ_ONCE(*pgdp);
 	pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd));
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 0574e17fd28d..25bdb71c6c5a 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -61,6 +61,9 @@
 s64 memstart_addr __ro_after_init = -1;
 EXPORT_SYMBOL(memstart_addr);
 
+s64 physvirt_offset __ro_after_init;
+EXPORT_SYMBOL(physvirt_offset);
+
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
@@ -354,7 +357,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = BIT(VA_BITS - 1);
+	const s64 linear_region_size = BIT(VA_BITS_ACTUAL - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -368,6 +371,8 @@ void __init arm64_memblock_init(void)
 	memstart_addr = round_down(memblock_start_of_DRAM(),
 				   ARM64_MEMSTART_ALIGN);
 
+	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index c24bc447ed9c..50fe776ccb37 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -55,6 +55,9 @@
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 u64 vabits_user __ro_after_init;
 EXPORT_SYMBOL(vabits_user);
 
+u64 __section(".mmuoff.data.write") vabits_actual;
+EXPORT_SYMBOL(vabits_actual);
+
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
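A quick illustration of the linear-map test that this patch rewrites:
once the linear map occupies the bottom half of the kernel VA space,
__is_lm_address() reduces to a single bit test against the boot-time VA
size. A hedged C sketch (vabits stands in for the kernel's vabits_actual;
the example addresses are illustrative only):

#include <stdio.h>
#include <stdint.h>

/*
 * Mirrors __is_lm_address(): for a 52-bit VA kernel, linear-map
 * addresses have bit 51 clear, while kimage/vmalloc addresses above
 * VA_START have it set.
 */
static int is_lm_address(uint64_t addr, unsigned int vabits)
{
	return !(addr & (1ULL << (vabits - 1)));
}

int main(void)
{
	unsigned int vabits = 52;              /* as if vabits_actual == 52 */
	uint64_t lm   = 0xfff0000000001000ULL; /* just above PAGE_OFFSET */
	uint64_t kimg = 0xffff000010080000ULL; /* somewhere in the kimage */

	printf("%#llx -> %d\n", (unsigned long long)lm,
	       is_lm_address(lm, vabits));     /* 1: linear map */
	printf("%#llx -> %d\n", (unsigned long long)kimg,
	       is_lm_address(kimg, vabits));   /* 0: not linear map */
	return 0;
}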
From patchwork Mon Feb 18 17:02:44 2019
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, Steve Capper, marc.zyngier@arm.com,
 catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com
Subject: [PATCH 8/9] arm64: mm: Logic to make offset_ttbr1 conditional
Date: Mon, 18 Feb 2019 17:02:44 +0000
Message-Id: <20190218170245.14915-9-steve.capper@arm.com>
In-Reply-To: <20190218170245.14915-1-steve.capper@arm.com>
References: <20190218170245.14915-1-steve.capper@arm.com>

When running with a 52-bit userspace VA and a 48-bit kernel VA, we offset
ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to be
used for both userspace and kernel.

Moving on to a 52-bit kernel VA, we no longer require this offset to
ttbr1_el1 when running on a system with HW support for 52-bit VAs.

This patch introduces alternative logic for offset_ttbr1 and expands out
the very early case in head.S. We need to use the alternative framework
because offset_ttbr1 is used in places where it is not possible to safely
load the address of a kernel constant with adrp (such as the kpti paths);
code patching is therefore the safer route.

Signed-off-by: Steve Capper
Signed-off-by: Bhupesh Sharma
---
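The run-time decision that the alternatives (and the expanded head.S
sequence) implement can be modelled in C as below. This is only a sketch
of the logic: the constant's value assumes 64K pages with
PGDIR_SHIFT == 42, and in the kernel the test must live in assembly and
be alternative-patched rather than branched:

#include <stdio.h>
#include <stdint.h>

/* Assumed value: 64K pages, PGDIR_SHIFT == 42 (see pgtable-hwdef.h). */
#define TTBR1_BADDR_4852_OFFSET	0x1e00ULL

/*
 * Model of what offset_ttbr1 now does: the offset is applied only when
 * a 52-bit-VA-capable kernel finds itself on hardware that supports
 * 48-bit VAs only, i.e. when the detected VA size is not 52.
 */
static uint64_t offset_ttbr1(uint64_t ttbr1, unsigned int vabits)
{
	if (vabits != 52)
		ttbr1 |= TTBR1_BADDR_4852_OFFSET;
	return ttbr1;
}

int main(void)
{
	uint64_t ttbr1 = 0x40000000ULL;	/* illustrative pgd base */

	printf("52-bit HW: %#llx\n",
	       (unsigned long long)offset_ttbr1(ttbr1, 52));
	printf("48-bit HW: %#llx\n",
	       (unsigned long long)offset_ttbr1(ttbr1, 48));
	return 0;
}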
 arch/arm64/include/asm/assembler.h | 10 +++++++++-
 arch/arm64/include/asm/cpucaps.h   |  3 ++-
 arch/arm64/kernel/cpufeature.c     | 18 ++++++++++++++++++
 arch/arm64/kernel/head.S           | 14 +++++++++++++-
 arch/arm64/kernel/hibernate-asm.S  |  1 +
 5 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 4feb6119c3c9..58ed5d086e1e 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -551,6 +551,14 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
 	.macro	offset_ttbr1, ttbr
 #ifdef CONFIG_ARM64_USER_VA_BITS_52
 	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+#endif
+
+#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
+alternative_if_not ARM64_HAS_52BIT_VA
+	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+alternative_else
+	nop
+alternative_endif
 #endif
 	.endm
 
@@ -560,7 +568,7 @@
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_KERNEL_VA_BITS_52)
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 82e9099834ae..d71aecb6d6db 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -60,7 +60,8 @@
 #define ARM64_HAS_ADDRESS_AUTH_IMP_DEF		39
 #define ARM64_HAS_GENERIC_AUTH_ARCH		40
 #define ARM64_HAS_GENERIC_AUTH_IMP_DEF		41
+#define ARM64_HAS_52BIT_VA			42
 
-#define ARM64_NCAPS				42
+#define ARM64_NCAPS				43
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f6d84e2c92fe..2e150c564f2a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -944,6 +944,16 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_cpuid_feature(entry, scope);
 }
 
+#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
+extern u64 vabits_actual;
+static bool __maybe_unused
+has_52bit_kernel_va(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	return vabits_actual == 52;
+}
+
+#endif /* CONFIG_ARM64_USER_KERNEL_VA_BITS_52 */
+
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
@@ -1480,6 +1490,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 	},
 #endif /* CONFIG_ARM64_PTR_AUTH */
+#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
+	{
+		.desc = "52-bit kernel VA",
+		.capability = ARM64_HAS_52BIT_VA,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_52bit_kernel_va,
+	},
+#endif /* CONFIG_ARM64_USER_KERNEL_VA_BITS_52 */
 	{},
 };
 
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 68c391b26858..4877b82d2091 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -789,7 +789,19 @@ ENTRY(__enable_mmu)
 	phys_to_ttbr x1, x1
 	phys_to_ttbr x2, x2
 	msr	ttbr0_el1, x2			// load TTBR0
-	offset_ttbr1 x1
+
+#if defined(CONFIG_ARM64_USER_VA_BITS_52)
+	orr	x1, x1, #TTBR1_BADDR_4852_OFFSET
+#endif
+
+#if defined(CONFIG_ARM64_USER_KERNEL_VA_BITS_52)
+	ldr_l	x3, vabits_actual
+	cmp	x3, #52
+	b.eq	1f
+	orr	x1, x1, #TTBR1_BADDR_4852_OFFSET
+1:
+#endif
+
 	msr	ttbr1_el1, x1			// load TTBR1
 	isb
 	msr	sctlr_el1, x0
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index fe36d85c60bd..d32725a2b77f 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -19,6 +19,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
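For concreteness, the TTBR1_BADDR_4852_OFFSET used above follows from the
page-table geometry given in pgtable-hwdef.h later in this series; a
quick check of the arithmetic (assuming 64K pages with 3-level tables,
i.e. PGDIR_SHIFT == 42):

#include <stdio.h>

int main(void)
{
	const unsigned long pgdir_shift = 42; /* 64K pages, 3 levels (assumed) */
	const unsigned long entries_52 = 1UL << (52 - pgdir_shift); /* 1024 */
	const unsigned long entries_48 = 1UL << (48 - pgdir_shift); /*   64 */
	const unsigned long offset = (entries_52 - entries_48) * 8; /* bytes */

	/* 8 bytes per pgd entry; expect 0x1e00 (7680) */
	printf("TTBR1_BADDR_4852_OFFSET = %#lx\n", offset);
	return 0;
}

With 64 KiB granules, a 52-bit pgd has 1024 entries and a 48-bit one 64,
so a 48-bit-VA boot points TTBR1 960 entries (0x1e00 bytes) into the
52-bit sized table, where the 48-bit kernel addresses start indexing.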
From patchwork Mon Feb 18 17:02:45 2019
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, Steve Capper, marc.zyngier@arm.com,
 catalin.marinas@arm.com, ard.biesheuvel@linaro.org, will.deacon@arm.com
Subject: [PATCH 9/9] arm64: mm: Introduce 52-bit Kernel VAs
Date: Mon, 18 Feb 2019 17:02:45 +0000
Message-Id: <20190218170245.14915-10-steve.capper@arm.com>
In-Reply-To: <20190218170245.14915-1-steve.capper@arm.com>
References: <20190218170245.14915-1-steve.capper@arm.com>

Most of the machinery is now in place to enable 52-bit kernel VAs that
are detectable at boot time.

This patch adds a Kconfig option for 52-bit user and kernel addresses,
plumbs in the requisite CONFIG_ macros, and sets TCR.T1SZ at early boot.

Signed-off-by: Steve Capper
---
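The TCR_EL1.TxSZ fields encode 64 minus the VA size, and the
tcr_set_t1sz macro added below inserts that value into the T1SZ field.
A C model of the bitfield update (field offsets per the ARMv8
architecture; the standalone function is illustrative, not kernel code):

#include <stdio.h>
#include <stdint.h>

#define TCR_T1SZ_OFFSET	16	/* TCR_EL1.T1SZ lives in bits [21:16] */
#define TCR_TxSZ_WIDTH	6

/* C equivalent of: bfi \valreg, \t1sz, #TCR_T1SZ_OFFSET, #TCR_TxSZ_WIDTH */
static uint64_t tcr_set_t1sz(uint64_t tcr, uint64_t t1sz)
{
	uint64_t mask = ((1ULL << TCR_TxSZ_WIDTH) - 1) << TCR_T1SZ_OFFSET;

	return (tcr & ~mask) | ((t1sz << TCR_T1SZ_OFFSET) & mask);
}

int main(void)
{
	/* TxSZ encodes 64 - VA bits: 12 for 52-bit VAs, 16 for 48-bit. */
	printf("T1SZ field for 52-bit VAs: %#llx\n",
	       (unsigned long long)tcr_set_t1sz(0, 64 - 52));
	printf("T1SZ field for 48-bit VAs: %#llx\n",
	       (unsigned long long)tcr_set_t1sz(0, 64 - 48));
	return 0;
}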
 arch/arm64/Kconfig                     | 31 ++++++++++++++++++++++----
 arch/arm64/include/asm/assembler.h     |  9 +++++++-
 arch/arm64/include/asm/memory.h        |  2 +-
 arch/arm64/include/asm/mmu_context.h   |  2 +-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 +-
 arch/arm64/kernel/head.S               |  4 ++--
 arch/arm64/mm/proc.S                   |  6 ++++-
 7 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0f933d3a1614..1a673a7d0a9b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -276,11 +276,14 @@ config KERNEL_MODE_NEON
 config FIX_EARLYCON_MEM
 	def_bool y
 
+config HAS_VA_BITS_52
+	def_bool n
+
 config PGTABLE_LEVELS
 	int
 	default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
 	default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
-	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52)
+	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || HAS_VA_BITS_52)
 	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
@@ -294,12 +297,12 @@ config ARCH_PROC_KCORE_TEXT
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
-	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || HAS_VA_BITS_52) && !KASAN_SW_TAGS
 	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
 	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
 	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
 	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
-	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || HAS_VA_BITS_52) && KASAN_SW_TAGS
 	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
 	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
 	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
@@ -721,6 +724,7 @@ config ARM64_VA_BITS_48
 config ARM64_USER_VA_BITS_52
 	bool "52-bit (user)"
 	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
+	select HAS_VA_BITS_52
 	help
 	  Enable 52-bit virtual addressing for userspace when explicitly
 	  requested via a hint to mmap(). The kernel will continue to
@@ -733,11 +737,28 @@ config ARM64_USER_VA_BITS_52
 
 	  If unsure, select 48-bit virtual addressing instead.
 
+config ARM64_USER_KERNEL_VA_BITS_52
+	bool "52-bit (user & kernel)"
+	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
+	select HAS_VA_BITS_52
+	help
+	  Enable 52-bit virtual addressing for userspace when explicitly
+	  requested via a hint to mmap(). The kernel will also use 52-bit
+	  virtual addresses for its own mappings (provided HW support for
+	  this feature is available, otherwise it reverts to 48-bit).
+
+	  NOTE: Enabling 52-bit virtual addressing in conjunction with
+	  ARMv8.3 Pointer Authentication will result in the PAC being
+	  reduced from 7 bits to 3 bits, which may have a significant
+	  impact on its susceptibility to brute-force attacks.
+
+	  If unsure, select 48-bit virtual addressing instead.
+
 endchoice
 
 config ARM64_FORCE_52BIT
 	bool "Force 52-bit virtual addresses for userspace"
-	depends on ARM64_USER_VA_BITS_52 && EXPERT
+	depends on HAS_VA_BITS_52 && EXPERT
 	help
 	  For systems with 52-bit userspace VAs enabled, the kernel will attempt
 	  to maintain compatibility with older software by providing 48-bit VAs
@@ -755,9 +776,11 @@ config ARM64_VA_BITS
 	default 42 if ARM64_VA_BITS_42
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
+	default 52 if ARM64_USER_KERNEL_VA_BITS_52
 
 config ARM64_VA_BITS_MIN
 	int
+	default 48 if ARM64_USER_KERNEL_VA_BITS_52
 	default ARM64_VA_BITS
 
 choice
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 58ed5d086e1e..c7c3ca2d70f8 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -363,6 +363,13 @@ alternative_endif
 	bfi	\valreg, \t0sz, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
 	.endm
 
+/*
+ * tcr_set_t1sz - update TCR.T1SZ
+ */
+	.macro	tcr_set_t1sz, valreg, t1sz
+	bfi	\valreg, \t1sz, #TCR_T1SZ_OFFSET, #TCR_TxSZ_WIDTH
+	.endm
+
 /*
  * tcr_compute_pa_size - set TCR.(I)PS to the highest supported
  * ID_AA64MMFR0_EL1.PARange value
@@ -568,7 +575,7 @@ alternative_endif
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_KERNEL_VA_BITS_52)
+#ifdef CONFIG_HAS_VA_BITS_52
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index a51056a157dd..f17777f4272c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -67,7 +67,7 @@
 #define KERNEL_START	_text
 #define KERNEL_END	_end
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 #define MAX_USER_VA_BITS	52
 #else
 #define MAX_USER_VA_BITS	VA_BITS
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 133ecb65b602..1ae26b357e1e 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -74,7 +74,7 @@ extern u64 idmap_ptrs_per_pgd;
 
 static inline bool __cpu_uses_extended_idmap(void)
 {
-	if (IS_ENABLED(CONFIG_ARM64_USER_VA_BITS_52))
+	if (IS_ENABLED(CONFIG_HAS_VA_BITS_52))
 		return false;
 
 	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index e9b0a7d75184..15379996ffe2 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -315,7 +315,7 @@
 #define TTBR_BADDR_MASK_52	(((UL(1) << 46) - 1) << 2)
 #endif
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 /* Must be at least 64-byte aligned to prevent corruption of the TTBR */
 #define TTBR1_BADDR_4852_OFFSET	(((UL(1) << (52 - PGDIR_SHIFT)) - \
 				 (UL(1) << (48 - PGDIR_SHIFT))) * 8)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 4877b82d2091..c486e66a9ba6 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -319,7 +319,7 @@ __create_page_tables:
 	adrp	x0, idmap_pg_dir
 	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 	mrs_s	x6, SYS_ID_AA64MMFR2_EL1
 	and	x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
 	mov	x5, #52
@@ -818,7 +818,7 @@ ENTRY(__enable_mmu)
 ENDPROC(__enable_mmu)
 
 ENTRY(__cpu_secondary_check52bitva)
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 	ldr_l	x0, vabits_user
 	cmp	x0, #52
 	b.ne	2f
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 73886a5f1f30..e85500dfe89a 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -457,7 +457,7 @@ ENTRY(__cpu_setup)
 			TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
 			TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 	ldr_l		x9, vabits_user
 	sub		x9, xzr, x9
 	add		x9, x9, #64
@@ -466,6 +466,10 @@ ENTRY(__cpu_setup)
 #endif
 	tcr_set_t0sz	x10, x9
 
+#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
+	tcr_set_t1sz	x10, x9
+#endif
+
 	/*
 	 * Set the IPS bits in TCR_EL1.
 	 */
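Taken together, the series makes the base of the kernel VA space a
boot-time quantity. A short sketch, using the PAGE_OFFSET and VA_START
formulas from patch 7, of what the layout constants come out to for the
two VA sizes a 52-bit-configured kernel can end up with (values derived
from the formulas, not read from a live system):

#include <stdio.h>
#include <stdint.h>

static uint64_t page_offset(unsigned int va_bits)
{
	return UINT64_MAX - (1ULL << va_bits) + 1;	/* start of linear map */
}

static uint64_t va_start(unsigned int va_bits)
{
	return UINT64_MAX - (1ULL << (va_bits - 1)) + 1; /* first non-linear kernel VA */
}

int main(void)
{
	unsigned int sizes[] = { 48, 52 };	/* vabits_actual candidates */

	for (int i = 0; i < 2; i++)
		printf("VA_BITS_ACTUAL=%u: PAGE_OFFSET=%#llx VA_START=%#llx\n",
		       sizes[i],
		       (unsigned long long)page_offset(sizes[i]),
		       (unsigned long long)va_start(sizes[i]));
	return 0;
}

For 48 bits this prints PAGE_OFFSET=0xffff000000000000 and
VA_START=0xffff800000000000; for 52 bits, PAGE_OFFSET=0xfff0000000000000
and VA_START=0xfff8000000000000, which is why the KVM intersection check
in patch 1 needed PAGE_OFFSET_END rather than a hard-coded end address.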