From patchwork Tue May 28 16:10:15 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10965303
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com,
 catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com
Subject: [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map
Date: Tue, 28 May 2019 17:10:15 +0100
Message-Id: <20190528161026.13193-2-steve.capper@arm.com>
In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com>
References: <20190528161026.13193-1-steve.capper@arm.com>
We assume that the direct linear map ends at ~0 in the KVM HYP map
intersection checking code. This assumption will become invalid later on
for arm64 when the address space of the kernel is re-arranged.

This patch introduces a new constant, PAGE_OFFSET_END, for both arm and
arm64 and defines it to be ~0UL.
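To make the intent concrete, the intersection test in kvm_mmu_init()
becomes a half-open range check; a minimal sketch of the generalised
condition (my illustration, not a quote of the patch below):

	/*
	 * Sketch only: with PAGE_OFFSET_END == ~0UL this behaves exactly
	 * like the old "linear map runs to the top of the address space"
	 * assumption; a later patch can narrow the window simply by
	 * redefining PAGE_OFFSET_END.
	 */
	static inline bool idmap_in_linear_hyp_range(unsigned long idmap_start)
	{
		return idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
		       idmap_start <  kern_hyp_va(PAGE_OFFSET_END);
	}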
Signed-off-by: Steve Capper
---
 arch/arm/include/asm/memory.h   | 1 +
 arch/arm64/include/asm/memory.h | 1 +
 virt/kvm/arm/mmu.c              | 4 ++--
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index ed8fd0d19a3e..45c211fd50da 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -24,6 +24,7 @@
 /* PAGE_OFFSET - the virtual address of the start of the kernel image */
 #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
+#define PAGE_OFFSET_END		(~0UL)
 
 #ifdef CONFIG_MMU
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 8ffcf5a512bb..9fd387a63b9b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -52,6 +52,7 @@
 			(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 			(UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET_END		(~0UL)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 74b6582eaa3c..e1a777275b37 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2202,10 +2202,10 @@ int kvm_mmu_init(void)
 	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
 	kvm_debug("HYP VA range: %lx:%lx\n",
 		  kern_hyp_va(PAGE_OFFSET),
-		  kern_hyp_va((unsigned long)high_memory - 1));
+		  kern_hyp_va(PAGE_OFFSET_END));
 
 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
-	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
+	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
 	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
 		/*
 		 * The idmap page is intersecting with the VA space,

From patchwork Tue May 28 16:10:16 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10965313
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com,
 catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com
Subject: [PATCH v2 02/12] arm64: mm: Flip kernel VA space
Date: Tue, 28 May 2019 17:10:16 +0100
Message-Id: <20190528161026.13193-3-steve.capper@arm.com>
In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com>
References: <20190528161026.13193-1-steve.capper@arm.com>

Put the direct linear map in the lower addresses of the kernel VA range
and everything else in the higher ranges.

This allows us to make room for an inline KASAN shadow that operates
under both 48 and 52 bit kernel VA sizes. For example with a 52-bit VA,
if KASAN_SHADOW_END < 0xFFF8000000000000 (it is in the lower addresses
of the kernel VA range), this will be below the start of the minimum
48-bit kernel VA address of 0xFFFF000000000000.

We need to adjust:
 *) KASAN shadow region placement logic,
 *) KASAN_SHADOW_OFFSET computation logic,
 *) virt_to_phys, phys_to_virt checks,
 *) page table dumper.

These are all small changes that need to take place atomically, so they
are bundled into this commit.
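As a worked example (my arithmetic, plugging VA_BITS = 48 into the new
macro definitions in the diff below):

	/*
	 * Flipped layout for VA_BITS = 48:
	 *
	 *   PAGE_OFFSET     = ~0UL - (1UL << 48) + 1 = 0xffff000000000000
	 *   VA_START        = ~0UL - (1UL << 47) + 1 = 0xffff800000000000
	 *   PAGE_OFFSET_END = VA_START
	 *
	 * so the linear map covers [0xffff000000000000, 0xffff800000000000)
	 * and KASAN/modules/vmalloc/fixmap/vmemmap all live above VA_START.
	 */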
Signed-off-by: Steve Capper
---
 arch/arm64/Makefile              |  2 +-
 arch/arm64/include/asm/memory.h  | 10 +++++-----
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/arm64/kernel/hibernate.c    |  2 +-
 arch/arm64/mm/dump.c             |  8 ++++----
 arch/arm64/mm/init.c             |  9 +--------
 arch/arm64/mm/kasan_init.c       |  6 +++---
 arch/arm64/mm/mmu.c              |  4 ++--
 8 files changed, 18 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index b025304bde46..2dad2ae6b181 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -115,7 +115,7 @@ KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
+	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9fd387a63b9b..d19bf9965d34 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -49,10 +49,10 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << (VA_BITS - 1)) + 1)
-#define PAGE_OFFSET_END		(~0UL)
+#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
+	(UL(1) << VA_BITS) + 1)
+#define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
@@ -60,7 +60,7 @@
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(BPF_JIT_REGION_END)
 #define MODULES_VSIZE		(SZ_128M)
-#define VMEMMAP_START		(PAGE_OFFSET - VMEMMAP_SIZE)
+#define VMEMMAP_START		(-VMEMMAP_SIZE)
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
@@ -239,7 +239,7 @@ extern u64 vabits_user;
  * space. Testing the top bit for the start of the region is a
 * sufficient check.
  */
-#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 2c41b04708fe..d0ab784304e9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -32,7 +32,7 @@
  * and fixed mappings
  */
 #define VMALLOC_START		(MODULES_END)
-#define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
+#define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 9859e1178e6b..002653b214c4 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -497,7 +497,7 @@ int swsusp_arch_resume(void)
 		rc = -ENOMEM;
 		goto out;
 	}
-	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
+	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, PAGE_OFFSET_END);
 	if (rc)
 		goto out;
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 14fe23cd5932..ee4e5bea8944 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -30,6 +30,8 @@
 #include
 
 static const struct addr_marker address_markers[] = {
+	{ PAGE_OFFSET,			"Linear Mapping start" },
+	{ VA_START,			"Linear Mapping end" },
 #ifdef CONFIG_KASAN
 	{ KASAN_SHADOW_START,		"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
@@ -43,10 +45,8 @@ static const struct addr_marker address_markers[] = {
 	{ PCI_IO_START,			"PCI I/O start" },
 	{ PCI_IO_END,			"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-	{ VMEMMAP_START,		"vmemmap start" },
-	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
+	{ VMEMMAP_START,		"vmemmap" },
 #endif
-	{ PAGE_OFFSET,			"Linear mapping" },
 	{ -1,				NULL },
 };
@@ -380,7 +380,7 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= VA_START,
+	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d2adffb81b5d..574ed1d4be19 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -311,7 +311,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+	const s64 linear_region_size = BIT(VA_BITS - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -319,13 +319,6 @@ void __init arm64_memblock_init(void)
 	/* Remove memory above our supported physical address size */
 	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
 
-	/*
-	 * Ensure that the linear region takes up exactly half of the kernel
-	 * virtual address space. This way, we can distinguish a linear address
-	 * from a kernel/module/vmalloc address by testing a single bit.
-	 */
-	BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
-
 	/*
	 * Select a suitable value for the base of physical memory.
	 */
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 296de39ddee5..8066621052db 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -229,10 +229,10 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
-				    (void *)mod_shadow_start);
+	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VA_START),
+				    (void *)mod_shadow_start);
 	kasan_populate_early_shadow((void *)kimg_shadow_end,
-				    kasan_mem_to_shadow((void *)PAGE_OFFSET));
+				    (void *)KASAN_SHADOW_END);
 
 	if (kimg_shadow_start > mod_shadow_end)
 		kasan_populate_early_shadow((void *)mod_shadow_end,
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a170c6369a68..b0401b2ec4da 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -409,7 +409,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 				  phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
@@ -436,7 +436,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;

From patchwork Tue May 28 16:10:17 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10965305
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com,
 catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com
Subject: [PATCH v2 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
Date: Tue, 28 May 2019 17:10:17 +0100
Message-Id: <20190528161026.13193-4-steve.capper@arm.com>
In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com>
References: <20190528161026.13193-1-steve.capper@arm.com>

KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:
	*ptr1 = val;
the compiler will insert logic similar to the below:
	shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET);
	if (somethingWrong(shadowValue))
		flagAnError();

This code sequence is inserted into many places, thus KASAN_SHADOW_OFFSET
is essentially baked into many places in the kernel text.

If we want to run a single kernel binary with multiple address spaces,
then we need to do this with KASAN_SHADOW_OFFSET fixed.

Thankfully, due to the way the KASAN_SHADOW_OFFSET is used to provide
shadow addresses, we know that the end of the shadow region is constant
w.r.t. VA space size:
	KASAN_SHADOW_END = (~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of
the KASAN region expands into lower addresses whilst the end of the
KASAN region is fixed.

Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
build scripts with the VA size used as a parameter. (There are build
time checks in the C code too to ensure that expected values are being
derived.) It is sufficient, and indeed is a simplification, to remove
the build scripts (and build time checks) entirely and instead provide
KASAN_SHADOW_OFFSET values.

This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the
arm64 Makefile, and instead we adopt the approach used by x86 to supply
offset values in Kconfig.
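As a quick sanity check of one of the new Kconfig defaults (my
arithmetic, not part of the patch): for VA_BITS = 48 and
KASAN_SHADOW_SCALE_SHIFT = 3, with the shadow start placed at the
mid-point of the kernel VA space:

	/*
	 * KASAN_SHADOW_END    = VA_START + (1UL << (48 - 3))
	 *                     = 0xffff800000000000 + 0x0000200000000000
	 *                     = 0xffffa00000000000
	 * KASAN_SHADOW_OFFSET = KASAN_SHADOW_END - (1UL << (64 - 3))
	 *                     = 0xffffa00000000000 - 0x2000000000000000
	 *                     = 0xdfffa00000000000
	 */

which matches the ARM64_VA_BITS_48 && !KASAN_SW_TAGS default added to
arch/arm64/Kconfig in this patch.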
To help debug/develop future VA space changes, the Makefile logic has
been preserved in a script file in the arm64 Documentation folder.

Signed-off-by: Steve Capper
---
Changed in V3, rebase to include KASAN_SHADOW_SCALE_SHIFT, wording
tidied up.
---
 Documentation/arm64/kasan-offsets.sh | 27 +++++++++++++++++++++++++++
 arch/arm64/Kconfig                   | 15 +++++++++++++++
 arch/arm64/Makefile                  |  8 --------
 arch/arm64/include/asm/kasan.h       | 11 ++++-------
 arch/arm64/include/asm/memory.h      | 12 +++++++++---
 arch/arm64/mm/kasan_init.c           |  2 --
 6 files changed, 55 insertions(+), 20 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
new file mode 100644
index 000000000000..2b7a021db363
--- /dev/null
+++ b/Documentation/arm64/kasan-offsets.sh
@@ -0,0 +1,27 @@
+#!/bin/sh
+
+# Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
+# start address at the mid-point of the kernel VA space
+
+print_kasan_offset () {
+	printf "%02d\t" $1
+	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+			+ (1 << ($1 - 32 - $2)) \
+			- (1 << (64 - 32 - $2)) ))
+}
+
+echo KASAN_SHADOW_SCALE_SHIFT = 3
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 3
+print_kasan_offset 47 3
+print_kasan_offset 42 3
+print_kasan_offset 39 3
+print_kasan_offset 36 3
+echo
+echo KASAN_SHADOW_SCALE_SHIFT = 4
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 4
+print_kasan_offset 47 4
+print_kasan_offset 42 4
+print_kasan_offset 39 4
+print_kasan_offset 36 4
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4780eb7af842..d30046c7de7f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -291,6 +291,21 @@ config ARCH_SUPPORTS_UPROBES
 config ARCH_PROC_KCORE_TEXT
 	def_bool y
 
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
+	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
+	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
+	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
+	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+	default 0xffffffffffffffff
+
 source "arch/arm64/Kconfig.platforms"
 
 menu "Kernel Features"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2dad2ae6b181..764201c76d77 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -111,14 +111,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 
-# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
-#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
-# in 32-bit arithmetic
-KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
-	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
-	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
-
 export	TEXT_OFFSET GZFLAGS
 
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
diff --git
 a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..10d2add842da 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -18,11 +18,8 @@
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
  * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
  * where N = (1 << KASAN_SHADOW_SCALE_SHIFT).
- */
-#define KASAN_SHADOW_START	(VA_START)
-#define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
-
-/*
+ *
+ * KASAN_SHADOW_OFFSET:
  * This value is used to map an address to the corresponding shadow
  * address by the following formula:
  *	shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
@@ -33,8 +30,8 @@
  *	KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
-#define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_END - (1ULL << \
-					(64 - KASAN_SHADOW_SCALE_SHIFT)))
+#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_START	_KASAN_SHADOW_START(VA_BITS)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d19bf9965d34..fbd841f986ff 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -54,7 +54,7 @@
 			(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
-#define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
+#define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
 #define BPF_JIT_REGION_END	(BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
@@ -80,11 +80,17 @@
 * significantly, so double the (minimum) stack size when they are in use.
  */
 #ifdef CONFIG_KASAN
-#define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
+#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
+					+ KASAN_SHADOW_OFFSET)
+#ifdef CONFIG_KASAN_EXTRA
+#define KASAN_THREAD_SHIFT	2
+#else
 #define KASAN_THREAD_SHIFT	1
+#endif
 #else
-#define KASAN_SHADOW_SIZE	(0)
 #define KASAN_THREAD_SHIFT	0
+#define KASAN_SHADOW_END	(VA_START)
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 8066621052db..0d3027be7bf8 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -158,8 +158,6 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
-		     KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,

From patchwork Tue May 28 16:10:18 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10965307
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com,
 catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com
Subject: [PATCH v2 04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's
Date: Tue, 28 May 2019 17:10:18 +0100
Message-Id: <20190528161026.13193-5-steve.capper@arm.com>
In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com>
References: <20190528161026.13193-1-steve.capper@arm.com>

In order to prepare for a variable VA_BITS we need to account for a
variable-size VMEMMAP, which in turn means the position of the fixed map
is no longer known at compile time. Thus, we need to replace the
BUILD_BUG_ON's that check the fixed map position with BUG_ON's.
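The trade-off, sketched for illustration (not code from the patch):
BUILD_BUG_ON() needs an integer constant expression and rejects anything
only computed at boot, so these checks have to move to runtime:

	BUILD_BUG_ON(sizeof(long) != 8);	/* compile-time: constant expression */
	BUG_ON(va_bits_at_boot < 36);		/* runtime: hypothetical boot-time value */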
Signed-off-by: Steve Capper
---
 arch/arm64/mm/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b0401b2ec4da..1b88d9d81954 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -846,7 +846,7 @@ void __init early_fixmap_init(void)
 	 * The boot-ioremap range spans multiple pmds, for which
 	 * we are not prepared:
 	 */
-	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+	BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
 
 	if ((pmdp != fixmap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)))
@@ -914,9 +914,9 @@ void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
	 * On 4k pages, we'll use section mappings for the FDT so we only
	 * have to be in the same PUD.
	 */
-	BUILD_BUG_ON(dt_virt_base % SZ_2M);
+	BUG_ON(dt_virt_base % SZ_2M);
 
-	BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
+	BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
		     __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
 
 	offset = dt_phys % SWAPPER_BLOCK_SIZE;

From patchwork Tue May 28 16:10:19 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10965309
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com,
 catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com
Subject: [PATCH v2 05/12] arm64: dump: Make kernel page table dumper dynamic again
Date: Tue, 28 May 2019 17:10:19 +0100
Message-Id: <20190528161026.13193-6-steve.capper@arm.com>
In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com>
References: <20190528161026.13193-1-steve.capper@arm.com>
The kernel page table dumper assumes that the placement of VA regions is
constant and determined at compile time. As we are about to introduce
variable VA logic, we need to be able to determine certain regions at
boot time.

This patch adds logic to the kernel page table dumper such that these
regions can be computed at boot time.
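The pattern being applied is roughly the following (an illustrative
sketch, not code from the patch): static initialisers must be
compile-time constants, so any marker whose address is only known at
boot is zeroed in the array and filled in from an initcall before the
dumper runs.

	static struct addr_marker markers[] = {
		{ 0 /* PAGE_OFFSET, only known at boot */, "Linear Mapping start" },
	};

	static int __init markers_init(void)
	{
		/* patch the runtime-only value in before first use */
		markers[0].start_address = PAGE_OFFSET;
		return 0;
	}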
Signed-off-by: Steve Capper
---
 arch/arm64/mm/dump.c | 58 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 47 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index ee4e5bea8944..f5fd6d6557fc 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -29,23 +29,45 @@
 #include
 #include
 
-static const struct addr_marker address_markers[] = {
-	{ PAGE_OFFSET,			"Linear Mapping start" },
-	{ VA_START,			"Linear Mapping end" },
+
+enum address_markers_idx {
+	PAGE_OFFSET_NR = 0,
+	VA_START_NR,
+#ifdef CONFIG_KASAN
+	KASAN_START_NR,
+	KASAN_END_NR,
+#endif
+	MODULES_START_NR,
+	MODULES_END_NR,
+	VMALLOC_START_NR,
+	VMALLOC_END_NR,
+	FIXADDR_START_NR,
+	FIXADDR_END_NR,
+	PCI_START_NR,
+	PCI_END_NR,
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	VMEMMAP_START_NR,
+#endif
+	END_NR
+};
+
+static struct addr_marker address_markers[] = {
+	{ 0 /* PAGE_OFFSET */,		"Linear Mapping start" },
+	{ 0 /* VA_START */,		"Linear Mapping end" },
 #ifdef CONFIG_KASAN
-	{ KASAN_SHADOW_START,		"Kasan shadow start" },
+	{ 0 /* KASAN_SHADOW_START */,	"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
 #endif
 	{ MODULES_VADDR,		"Modules start" },
 	{ MODULES_END,			"Modules end" },
 	{ VMALLOC_START,		"vmalloc() area" },
-	{ VMALLOC_END,			"vmalloc() end" },
-	{ FIXADDR_START,		"Fixmap start" },
-	{ FIXADDR_TOP,			"Fixmap end" },
-	{ PCI_IO_START,			"PCI I/O start" },
-	{ PCI_IO_END,			"PCI I/O end" },
+	{ 0 /* VMALLOC_END */,		"vmalloc() end" },
+	{ 0 /* FIXADDR_START */,	"Fixmap start" },
+	{ 0 /* FIXADDR_TOP */,		"Fixmap end" },
+	{ 0 /* PCI_IO_START */,		"PCI I/O start" },
+	{ 0 /* PCI_IO_END */,		"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-	{ VMEMMAP_START,		"vmemmap" },
+	{ 0 /* VMEMMAP_START */,	"vmemmap" },
 #endif
 	{ -1,				NULL },
 };
@@ -380,7 +402,6 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
@@ -405,6 +426,21 @@ void ptdump_check_wx(void)
 
 static int ptdump_init(void)
 {
+	kernel_ptdump_info.base_addr = PAGE_OFFSET;
+	address_markers[PAGE_OFFSET_NR].start_address = PAGE_OFFSET;
+	address_markers[VA_START_NR].start_address = VA_START;
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
+#endif
+	address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
+	address_markers[FIXADDR_START_NR].start_address = FIXADDR_START;
+	address_markers[FIXADDR_END_NR].start_address = FIXADDR_TOP;
+	address_markers[PCI_START_NR].start_address = PCI_IO_START;
+	address_markers[PCI_END_NR].start_address = PCI_IO_END;
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	address_markers[VMEMMAP_START_NR].start_address = VMEMMAP_START;
+#endif
+
 	ptdump_initialize();
 	ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
 	return 0;

From patchwork Tue May 28 16:10:20 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10965311
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com,
 catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com
Subject: [PATCH v2 06/12] arm64: mm: Introduce VA_BITS_MIN
Date: Tue, 28 May 2019 17:10:20 +0100
Message-Id: <20190528161026.13193-7-steve.capper@arm.com>
In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com>
References: <20190528161026.13193-1-steve.capper@arm.com>
In order to support 52-bit kernel addresses detectable at boot time, the
kernel needs to know the most conservative VA_BITS possible should it
need to fall back to this quantity due to lack of hardware support.

A new compile time constant VA_BITS_MIN is introduced in this patch and
it is employed in the KASAN end address, KASLR, and EFI stub.

For Arm, if 52-bit VA support is unavailable the fallback is to 48 bits.
In other words:
	VA_BITS_MIN = min(48, VA_BITS)
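An illustrative expansion (my sketch, not from the patch): with the
Kconfig default below, VA_BITS_MIN simply mirrors VA_BITS today, e.g.

	/*
	 *   CONFIG_ARM64_VA_BITS=39  ->  VA_BITS_MIN = 39
	 *   CONFIG_ARM64_VA_BITS=48  ->  VA_BITS_MIN = 48
	 *
	 * and a later 52-bit configuration is expected to cap it at 48,
	 * giving VA_BITS_MIN = min(48, VA_BITS) as described above.
	 */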
Signed-off-by: Steve Capper
---
 arch/arm64/Kconfig                 | 4 ++++
 arch/arm64/include/asm/efi.h       | 4 ++--
 arch/arm64/include/asm/memory.h    | 5 ++++-
 arch/arm64/include/asm/processor.h | 2 +-
 arch/arm64/kernel/head.S           | 2 +-
 arch/arm64/kernel/kaslr.c          | 6 +++---
 arch/arm64/mm/kasan_init.c         | 3 ++-
 7 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index d30046c7de7f..0cedcb4a0a01 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -774,6 +774,10 @@ config ARM64_VA_BITS
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
 
+config ARM64_VA_BITS_MIN
+	int
+	default ARM64_VA_BITS
+
 choice
 	prompt "Physical address space size"
 	default ARM64_PA_BITS_48
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index c9e9a6978e73..a87e78f3f58d 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -79,7 +79,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
- * which is a 1 GB aligned region of size '1UL << (VA_BITS - 1)' that is
+ * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
  * guaranteed to cover the kernel Image.
  *
  * Since the EFI stub is part of the kernel Image, we can relax the
@@ -90,7 +90,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 static inline unsigned long efi_get_max_initrd_addr(unsigned long dram_base,
						    unsigned long image_addr)
 {
-	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS - 1));
+	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS_MIN - 1));
 }
 
 #define efi_call_early(f, ...)		sys_table_arg->boottime->f(__VA_ARGS__)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index fbd841f986ff..c62df8f9ef86 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -64,6 +64,9 @@
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
+#define VA_BITS_MIN		(CONFIG_ARM64_VA_BITS_MIN)
+#define _VA_START(va)		(UL(0xffffffffffffffff) - \
+				(UL(1) << ((va) - 1)) + 1)
 
 #define KERNEL_START		_text
 #define KERNEL_END		_end
@@ -90,7 +93,7 @@
 #endif
 #else
 #define KASAN_THREAD_SHIFT	0
-#define KASAN_SHADOW_END	(VA_START)
+#define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index fcd0e691b1ea..307cd2173813 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -53,7 +53,7 @@
 * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
 */
 
-#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
+#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS_MIN)
 #define TASK_SIZE_64		(UL(1) << vabits_user)
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index fcae3f85c6cd..ab68c3fe7a19 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -325,7 +325,7 @@ __create_page_tables:
 	mov	x5, #52
 	cbnz	x6, 1f
 #endif
-	mov	x5, #VA_BITS
+	mov	x5, #VA_BITS_MIN
 1:
 	adr_l	x6, vabits_user
 	str	x5, [x6]
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index b09b6f75f759..6f0075f983c7 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -119,15 +119,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
	 * the lower and upper quarters to avoid colliding with other
	 * allocations.
	 * Even if we could randomize at page granularity for 16k and 64k pages,
	 * let's always round to 2 MB so we don't interfere with the ability to
	 * map using contiguous PTEs
	 */
-	mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-	offset = BIT(VA_BITS - 3) + (seed & mask);
+	mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(SZ_2M - 1);
+	offset = BIT(VA_BITS_MIN - 3) + (seed & mask);
 
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 0d3027be7bf8..0e0b69af0aaa 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -158,7 +158,8 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
			   true);

From patchwork Tue May 28 16:10:21 2019
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 10965317
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org
Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com,
 catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com
Subject: [PATCH v2 07/12] arm64: mm: Introduce VA_BITS_ACTUAL
Date: Tue, 28 May 2019 17:10:21 +0100
Message-Id: <20190528161026.13193-8-steve.capper@arm.com>
In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com>
References: <20190528161026.13193-1-steve.capper@arm.com>
In order to support 52-bit kernel addresses detectable at boot time, one
needs to know the actual VA_BITS detected. A new variable VA_BITS_ACTUAL
is introduced in this commit and employed for the KVM hypervisor layout,
KASAN, fault handling and phys-to/from-virt translation where there
would normally be compile-time constants.

In order to maintain performance in phys_to_virt, another variable
physvirt_offset is introduced.
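Why a single offset suffices (my derivation, not text from the patch):
with physvirt_offset = PHYS_OFFSET - PAGE_OFFSET, the translation

	/*
	 *   virt = phys - physvirt_offset
	 *        = phys - PHYS_OFFSET + PAGE_OFFSET
	 */

matches the old '((x) - PHYS_OFFSET) | PAGE_OFFSET' form now that the
linear map sits at the bottom of the kernel VA range, while folding the
two boot-time quantities into one variable so the hot path stays a
single load plus a subtraction.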
Signed-off-by: Steve Capper
---
 arch/arm64/include/asm/kasan.h       |  2 +-
 arch/arm64/include/asm/memory.h      | 12 +++++++-----
 arch/arm64/include/asm/mmu_context.h |  2 +-
 arch/arm64/kernel/head.S             |  5 +++++
 arch/arm64/kvm/va_layout.c           | 14 +++++++-------
 arch/arm64/mm/fault.c                |  4 ++--
 arch/arm64/mm/init.c                 |  7 ++++++-
 arch/arm64/mm/mmu.c                  |  3 +++
 8 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 10d2add842da..ff991dc86ae1 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -31,7 +31,7 @@
 *	(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
 */
 #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
-#define KASAN_SHADOW_START	_KASAN_SHADOW_START(VA_BITS)
+#define KASAN_SHADOW_START	_KASAN_SHADOW_START(VA_BITS_ACTUAL)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c62df8f9ef86..c02373035533 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -48,8 +48,6 @@
 * VA_START - the first kernel virtual address.
 */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << (VA_BITS - 1)) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
	(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET_END		(VA_START)
@@ -178,10 +176,14 @@
 #endif
 
 #ifndef __ASSEMBLY__
+extern u64 vabits_actual;
+#define VA_BITS_ACTUAL		({vabits_actual;})
+#define VA_START		(_VA_START(VA_BITS_ACTUAL))
 
 #include
 #include
 
+extern s64 physvirt_offset;
 extern s64 memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
@@ -248,9 +250,9 @@
 * space. Testing the top bit for the start of the region is a
 * sufficient check.
 */
-#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS_ACTUAL - 1)))
 
-#define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+#define __lm_to_phys(addr)	(((addr) + physvirt_offset))
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
@@ -269,7 +271,7 @@
 extern phys_addr_t __phys_addr_symbol(unsigned long x);
 #define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
 #endif
 
-#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+#define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 2da3e478fd8f..133ecb65b602 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -106,7 +106,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 	isb();
 }
 
-#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS))
+#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS_ACTUAL))
 #define cpu_set_idmap_tcr_t0sz()	__cpu_set_tcr_t0sz(idmap_t0sz)
 
 /*
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index ab68c3fe7a19..b3335e639b6d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -332,6 +332,11 @@ __create_page_tables:
 	dmb	sy
 	dc	ivac, x6		// Invalidate potentially stale cache line
 
+	adr_l	x6, vabits_actual
+	str	x5, [x6]
+	dmb	sy
+	dc	ivac, x6		// Invalidate potentially stale cache line
+
 	/*
	 * VA_BITS may be too small to allow for an ID mapping to be created
	 * that covers system RAM if that is located sufficiently high in the
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index c712a7376bc1..c9a1debb45bd 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -40,25 +40,25 @@ static void compute_layout(void)
 	int kva_msb;
 
 	/* Where is my RAM region? */
-	hyp_va_msb  = idmap_addr & BIT(VA_BITS - 1);
-	hyp_va_msb ^= BIT(VA_BITS - 1);
+	hyp_va_msb  = idmap_addr & BIT(VA_BITS_ACTUAL - 1);
+	hyp_va_msb ^= BIT(VA_BITS_ACTUAL - 1);
 
 	kva_msb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^
			(u64)(high_memory - 1));
 
-	if (kva_msb == (VA_BITS - 1)) {
+	if (kva_msb == (VA_BITS_ACTUAL - 1)) {
		/*
		 * No space in the address, let's compute the mask so
-		 * that it covers (VA_BITS - 1) bits, and the region
+		 * that it covers (VA_BITS_ACTUAL - 1) bits, and the region
		 * bit. The tag stays set to zero.
		 */
-		va_mask  = BIT(VA_BITS - 1) - 1;
+		va_mask  = BIT(VA_BITS_ACTUAL - 1) - 1;
		va_mask |= hyp_va_msb;
	} else {
		/*
		 * We do have some free bits to insert a random tag.
* Hyp VAs are now created from kernel linear map VAs - * using the following formula (with V == VA_BITS): + * using the following formula (with V == VA_BITS_ACTUAL): * * 63 ... V | V-1 | V-2 .. tag_lsb | tag_lsb - 1 .. 0 * --------------------------------------------------------- @@ -66,7 +66,7 @@ static void compute_layout(void) */ tag_lsb = kva_msb; va_mask = GENMASK_ULL(tag_lsb - 1, 0); - tag_val = get_random_long() & GENMASK_ULL(VA_BITS - 2, tag_lsb); + tag_val = get_random_long() & GENMASK_ULL(VA_BITS_ACTUAL - 2, tag_lsb); tag_val |= hyp_va_msb; tag_val >>= tag_lsb; } diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 0cb0e09995e1..6c672600ea8a 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -171,9 +171,9 @@ static void show_pte(unsigned long addr) return; } - pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp = %p\n", + pr_alert("%s pgtable: %luk pages, %llu-bit VAs, pgdp = %p\n", mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K, - mm == &init_mm ? VA_BITS : (int) vabits_user, mm->pgd); + mm == &init_mm ? VA_BITS_ACTUAL : (int) vabits_user, mm->pgd); pgdp = pgd_offset(mm, addr); pgd = READ_ONCE(*pgdp); pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd)); diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 574ed1d4be19..d89341df2d0e 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -61,6 +61,9 @@ s64 memstart_addr __ro_after_init = -1; EXPORT_SYMBOL(memstart_addr); +s64 physvirt_offset __ro_after_init; +EXPORT_SYMBOL(physvirt_offset); + phys_addr_t arm64_dma_phys_limit __ro_after_init; #ifdef CONFIG_KEXEC_CORE @@ -311,7 +314,7 @@ static void __init fdt_enforce_memory_region(void) void __init arm64_memblock_init(void) { - const s64 linear_region_size = BIT(VA_BITS - 1); + const s64 linear_region_size = BIT(VA_BITS_ACTUAL - 1); /* Handle linux,usable-memory-range property */ fdt_enforce_memory_region(); @@ -325,6 +328,8 @@ void __init arm64_memblock_init(void) memstart_addr = round_down(memblock_start_of_DRAM(), ARM64_MEMSTART_ALIGN); + physvirt_offset = PHYS_OFFSET - PAGE_OFFSET; + /* * Remove the memory that we will not be able to cover with the * linear mapping. 
Take care not to clip the kernel which may be diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 1b88d9d81954..57d4b72219c9 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -54,6 +54,9 @@ u64 idmap_ptrs_per_pgd = PTRS_PER_PGD; u64 vabits_user __ro_after_init; EXPORT_SYMBOL(vabits_user); +u64 __section(".mmuoff.data.write") vabits_actual; +EXPORT_SYMBOL(vabits_actual); + u64 kimage_voffset __ro_after_init; EXPORT_SYMBOL(kimage_voffset); From patchwork Tue May 28 16:10:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Capper X-Patchwork-Id: 10965315 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3CB85912 for ; Tue, 28 May 2019 16:12:11 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 26D95286E4 for ; Tue, 28 May 2019 16:12:11 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 17E8C28738; Tue, 28 May 2019 16:12:11 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 744CE286E4 for ; Tue, 28 May 2019 16:12:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=LqIAx03yhK0bY8L4esU6MKh5IXHSS1jiYvgXGB2bzoM=; b=pA5hSIb5jg47iR rx7BU0dxDbcxGT25ZGIACEFzbbRidmJJ9WURnrBAw7WIqe+HYyh+92FI5qunva1nzAt+ZIGqFdRel tk6c9jJ9fW+3ww0k7xFNpvcLVphw0gDS+4b4cwXTjLvh4fYa2D624REJwv0S1nCYcjy3w+CgWm/Ka Ni264NNZvcfyArrnTkijS6XIBzyM8/cq/E6JVohHnYW1VPBydx3qJYz+Ewj+Ri/8Bw22mTlzvLJwZ 0VQ6h+U0aoNBvwIfYH9sIrsikKlBg/PxbDEBB5y5lZcFvUx/dNPbkvgnEeMDm0xWXjhZrkAf9Q4RN rolbdgynIiOOns74FpFw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVeho-0005s9-NE; Tue, 28 May 2019 16:12:04 +0000 Received: from foss.arm.com ([217.140.101.70]) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVegp-0004hg-HB for linux-arm-kernel@lists.infradead.org; Tue, 28 May 2019 16:11:07 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 625FB169E; Tue, 28 May 2019 09:11:03 -0700 (PDT) Received: from capper-debian.arm.com (unknown [10.37.12.41]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id E154A3F59C; Tue, 28 May 2019 09:11:01 -0700 (PDT) From: Steve Capper To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 08/12] arm64: mm: Logic to make offset_ttbr1 conditional Date: Tue, 28 May 2019 17:10:22 +0100 Message-Id: <20190528161026.13193-9-steve.capper@arm.com> X-Mailer: 
git-send-email 2.20.1 In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com> References: <20190528161026.13193-1-steve.capper@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190528_091103_886792_5AAF49B0 X-CRM114-Status: GOOD ( 17.07 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP When running with a 52-bit userspace VA and a 48-bit kernel VA, we offset ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to be used for both userspace and kernel. Moving on to a 52-bit kernel VA, we no longer require this offset to ttbr1_el1 should we be running on a system with HW support for 52-bit VAs. This patch introduces alternative logic to offset_ttbr1 and expands out the very early case in head.S. We need to use the alternative framework as offset_ttbr1 is used in places in the kernel where it is not possible to safely use adrp to address kernel constants (such as the kpti paths); thus code patching is the safer route. Signed-off-by: Steve Capper --- arch/arm64/include/asm/assembler.h | 10 +++++++++- arch/arm64/include/asm/cpucaps.h | 3 ++- arch/arm64/kernel/cpufeature.c | 18 ++++++++++++++++++ arch/arm64/kernel/head.S | 14 +++++++++++++- arch/arm64/kernel/hibernate-asm.S | 1 + 5 files changed, 43 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 039fbd822ec6..a42c392ed1e1 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -548,6 +548,14 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU .macro offset_ttbr1, ttbr #ifdef CONFIG_ARM64_USER_VA_BITS_52 orr \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET +#endif + +#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52 +alternative_if_not ARM64_HAS_52BIT_VA + orr \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET +alternative_else + nop +alternative_endif #endif .endm @@ -557,7 +565,7 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU * to be nop'ed out when dealing with 52-bit kernel VAs.
*/ .macro restore_ttbr1, ttbr -#ifdef CONFIG_ARM64_USER_VA_BITS_52 +#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_KERNEL_VA_BITS_52) bic \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET #endif .endm diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h index defdc67d9ab4..b317b8761744 100644 --- a/arch/arm64/include/asm/cpucaps.h +++ b/arch/arm64/include/asm/cpucaps.h @@ -62,7 +62,8 @@ #define ARM64_HAS_GENERIC_AUTH_IMP_DEF 41 #define ARM64_HAS_IRQ_PRIO_MASKING 42 #define ARM64_HAS_DCPODP 43 +#define ARM64_HAS_52BIT_VA 44 -#define ARM64_NCAPS 44 +#define ARM64_NCAPS 45 #endif /* __ASM_CPUCAPS_H */ diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index ca27e08e3d8a..f9d8a5c8d8ce 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -957,6 +957,16 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope) return has_cpuid_feature(entry, scope); } +#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52 +extern u64 vabits_actual; +static bool __maybe_unused +has_52bit_kernel_va(const struct arm64_cpu_capabilities *entry, int scope) +{ + return vabits_actual == 52; +} + +#endif /* CONFIG_ARM64_USER_KERNEL_VA_BITS_52 */ + static bool __meltdown_safe = true; static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */ @@ -1558,6 +1568,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .min_field_value = 1, }, #endif +#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52 + { + .desc = "52-bit kernel VA", + .capability = ARM64_HAS_52BIT_VA, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .matches = has_52bit_kernel_va, + }, +#endif /* CONFIG_ARM64_USER_KERNEL_VA_BITS_52 */ {}, }; diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index b3335e639b6d..8bc1b533a912 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -788,7 +788,19 @@ ENTRY(__enable_mmu) phys_to_ttbr x1, x1 phys_to_ttbr x2, x2 msr ttbr0_el1, x2 // load TTBR0 - offset_ttbr1 x1 + +#if defined(CONFIG_ARM64_USER_VA_BITS_52) + orr x1, x1, #TTBR1_BADDR_4852_OFFSET +#endif + +#if defined(CONFIG_ARM64_USER_KERNEL_VA_BITS_52) + ldr_l x3, vabits_actual + cmp x3, #52 + b.eq 1f + orr x1, x1, #TTBR1_BADDR_4852_OFFSET +1: +#endif + msr ttbr1_el1, x1 // load TTBR1 isb msr sctlr_el1, x0 diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S index fe36d85c60bd..d32725a2b77f 100644 --- a/arch/arm64/kernel/hibernate-asm.S +++ b/arch/arm64/kernel/hibernate-asm.S @@ -19,6 +19,7 @@ #include #include +#include #include #include #include From patchwork Tue May 28 16:10:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Capper X-Patchwork-Id: 10965319 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2F92E912 for ; Tue, 28 May 2019 16:13:18 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1AB1E28733 for ; Tue, 28 May 2019 16:13:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0D26328757; Tue, 28 May 2019 16:13:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham 
version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 9C38928733 for ; Tue, 28 May 2019 16:13:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=8+oIrE1BL/Z5nPpq9ow3/CPzsn0nLXVchKBk6pMNCE8=; b=NmBva8V08KNxNF RogDvtoDxhyc8aWxQxDsHFtlUogI6phM8ANYPrA5OBeRCevJAK5mUyW8m7pX9BsQR2YmciK/bkLzw r/6ZDWa0Nu3T7559mxkZ/FYo0eLjRFRRcGtS2/EP4PyPa7IgGk/GWFYfGPUrooDZBr36oYGffRO6y IFtXxsH8XgntxxcVEavQLfmuGq4CYo/GmF4+y95XmL4gOG9sFeXGSKFLekKhyVdS5wjIQrFagrG24 Tf8AJK/qSxaJ2H1GyaLWlBdMlwCCkfKya483584USXlABircsiU2VWUiXCKsvuNG3nkIYzV5sSopy AoB7RPNtxP5L3jaaMIsw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVeiw-0006jO-0U; Tue, 28 May 2019 16:13:14 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVegx-0004n4-Mw for linux-arm-kernel@lists.infradead.org; Tue, 28 May 2019 16:11:13 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 779F6341; Tue, 28 May 2019 09:11:11 -0700 (PDT) Received: from capper-debian.arm.com (unknown [10.37.12.41]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A789F3F59C; Tue, 28 May 2019 09:11:03 -0700 (PDT) From: Steve Capper To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 09/12] arm64: mm: Separate out vmemmap Date: Tue, 28 May 2019 17:10:23 +0100 Message-Id: <20190528161026.13193-10-steve.capper@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com> References: <20190528161026.13193-1-steve.capper@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190528_091111_868767_2BB404C9 X-CRM114-Status: GOOD ( 15.81 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP vmemmap is a preprocessor definition that depends on a variable, memstart_addr. In a later patch we will need to expand the size of the VMEMMAP region and optionally modify vmemmap depending upon whether or not hardware support is available for 52-bit virtual addresses. This patch changes vmemmap to be a variable. As the old definition depended on a variable load, this should not affect performance noticeably. 
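For illustration, a minimal before/after sketch of this change (everything here is taken from the diff below; VMEMMAP_START, PAGE_SHIFT and memstart_addr are the existing arm64 definitions):

    /* Before: every use of vmemmap expands to this expression and
     * performs a load of memstart_addr. */
    #define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))

    /* After: vmemmap is an ordinary variable, computed once in
     * arm64_memblock_init() and loaded directly at each use. */
    struct page *vmemmap __ro_after_init;

    vmemmap = (struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT);

Either form costs one memory load per use, hence the expectation that performance is unaffected.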
Signed-off-by: Steve Capper --- arch/arm64/include/asm/pgtable.h | 4 ++-- arch/arm64/mm/init.c | 5 +++++ 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index d0ab784304e9..60c52c1da61a 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -34,8 +34,6 @@ #define VMALLOC_START (MODULES_END) #define VMALLOC_END (- PUD_SIZE - VMEMMAP_SIZE - SZ_64K) -#define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT)) - #define FIRST_USER_ADDRESS 0UL #ifndef __ASSEMBLY__ @@ -46,6 +44,8 @@ #include #include +extern struct page *vmemmap; + extern void __pte_error(const char *file, int line, unsigned long val); extern void __pmd_error(const char *file, int line, unsigned long val); extern void __pud_error(const char *file, int line, unsigned long val); diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index d89341df2d0e..6844365c0a51 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -64,6 +64,9 @@ EXPORT_SYMBOL(memstart_addr); s64 physvirt_offset __ro_after_init; EXPORT_SYMBOL(physvirt_offset); +struct page *vmemmap __ro_after_init; +EXPORT_SYMBOL(vmemmap); + phys_addr_t arm64_dma_phys_limit __ro_after_init; #ifdef CONFIG_KEXEC_CORE @@ -330,6 +333,8 @@ void __init arm64_memblock_init(void) physvirt_offset = PHYS_OFFSET - PAGE_OFFSET; + vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT)); + /* * Remove the memory that we will not be able to cover with the * linear mapping. Take care not to clip the kernel which may be From patchwork Tue May 28 16:10:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Capper X-Patchwork-Id: 10965323 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D92ED76 for ; Tue, 28 May 2019 16:13:34 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C63AD28733 for ; Tue, 28 May 2019 16:13:34 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id BA9A828757; Tue, 28 May 2019 16:13:34 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 6403928733 for ; Tue, 28 May 2019 16:13:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=HEkG88b+KbYcTwSsx47F1VgpSEoY+Svi0NzdfZGupb0=; b=Ba0kFC6+VmzNU9 kgP/3eCHEFW2BhwWIR3fjkfJJlG/Q88lFFrb6ZJybT/rdJb6EdlwfnW/Yaaj29J6jsje7dnTEFxxx Ti/VQyRbEgtwmbRk4+zVdzfi8cp2wWuVbAQwgdkZ7Q0bfwd3SyeTldanC2WKReRf2mGVihGoZR3Vm 
4I3ngQnj3+zt8QRyeaVggK5DJuL2O96JBJvXGtiFfzWISkwyDbUSNuQrzoMyAZsrhhnnLZx6+qT/W GaiFpOOVBGKpu2fjyBOwlFMxbVG5ne7ombz/kVMhVx2H1r3qoYHKrfSLy3eW+oW6UA1Nt73IWeyZ5 17PYxyHbqY4zuqmIBmcA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVejA-00073Z-Cl; Tue, 28 May 2019 16:13:28 +0000 Received: from foss.arm.com ([217.140.101.70]) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVeh3-0004ot-Bl for linux-arm-kernel@lists.infradead.org; Tue, 28 May 2019 16:11:18 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1EA67A78; Tue, 28 May 2019 09:11:17 -0700 (PDT) Received: from capper-debian.arm.com (unknown [10.37.12.41]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id BFB183F59C; Tue, 28 May 2019 09:11:13 -0700 (PDT) From: Steve Capper To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 10/12] arm64: mm: Modify calculation of VMEMMAP_SIZE Date: Tue, 28 May 2019 17:10:24 +0100 Message-Id: <20190528161026.13193-11-steve.capper@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com> References: <20190528161026.13193-1-steve.capper@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190528_091117_422136_91CE471F X-CRM114-Status: GOOD ( 15.63 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP In a later patch we will need to have a slightly larger VMEMMAP region to accommodate boot-time selection between 48/52-bit kernel VAs. This patch modifies the formula for computing VMEMMAP_SIZE to depend explicitly on the PAGE_OFFSET and start of kernel addressable memory. (This allows for a slightly larger direct linear map in future). This patch also removes the use of bitmasking in the computation of __page_to_voff, as this can cause problems if we decide to move regions around in future. Signed-off-by: Steve Capper --- arch/arm64/include/asm/memory.h | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index c02373035533..e9af8aa36612 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -37,8 +37,15 @@ /* * VMEMMAP_SIZE - allows the whole linear region to be covered by * a struct page array + * + * If we are configured with a 52-bit kernel VA then our VMEMMAP_SIZE + * needs to cover the memory region from the beginning of the 52-bit + * PAGE_OFFSET all the way to VA_START for 48-bit. This allows us to + * keep a constant PAGE_OFFSET and "fall back" to using the higher end + * of the VMEMMAP where 52-bit support is not available in hardware.
*/ -#define VMEMMAP_SIZE (UL(1) << (VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT)) +#define VMEMMAP_SIZE ((_VA_START(VA_BITS_MIN) - PAGE_OFFSET) \ + >> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)) /* * PAGE_OFFSET - the virtual address of the start of the linear map (top @@ -319,7 +326,7 @@ static inline void *phys_to_virt(phys_addr_t x) #define _virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT) #else #define __virt_to_pgoff(kaddr) (((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page)) -#define __page_to_voff(kaddr) (((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page)) +#define __page_to_voff(kaddr) (((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page)) #define page_to_virt(page) ({ \ unsigned long __addr = \ From patchwork Tue May 28 16:10:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Capper X-Patchwork-Id: 10965325 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3F65776 for ; Tue, 28 May 2019 16:13:47 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2E6D528733 for ; Tue, 28 May 2019 16:13:47 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2269928757; Tue, 28 May 2019 16:13:47 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id BEAE428733 for ; Tue, 28 May 2019 16:13:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Bip5XUmb03ZDSiTHESUHAGN+6SJOT2eIL8kuEeR86SI=; b=axFPzjvq+DMsvp Ib5jzSZgAiNt09q7IflxhmA8Dgp1GaDlWIKRVgvTR3kDzfzBESeY5WrR0qQ+lhEEIgKt+sT6zmKgm bP3nP+WJX5Zo5fPgD/TB4hAUVA/0erO1IxKB81a9/uVP3sl9ljCFXzmiMeeIakAlRRQZxfHoQ4j9M oAkacAz8/CoJ1Cw3oXyq5v0Yhyk090G4FiLt8goaiAF8h96En25siC6uCOH7scYuaDbSgkhO3wxMl mK6/mTMCUkTQ122PyBy+Qec9deC+TTdXLZzG80Bvd8LlrizB/rIt37TiNuvMh8k0FtJJBVR4iSphh xcsRWUHaR/3gBaE84G0g==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVejM-0007IH-EJ; Tue, 28 May 2019 16:13:40 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVeh5-0004rV-E9 for linux-arm-kernel@lists.infradead.org; Tue, 28 May 2019 16:11:22 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DCDF615AB; Tue, 28 May 2019 09:11:18 -0700 (PDT) Received: from capper-debian.arm.com (unknown [10.37.12.41]) by 
usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 659E73F59C; Tue, 28 May 2019 09:11:17 -0700 (PDT) From: Steve Capper To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 11/12] arm64: mm: Tweak PAGE_OFFSET logic Date: Tue, 28 May 2019 17:10:25 +0100 Message-Id: <20190528161026.13193-12-steve.capper@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com> References: <20190528161026.13193-1-steve.capper@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190528_091119_870516_CE3F38C9 X-CRM114-Status: GOOD ( 13.60 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP This patch introduces _PAGE_OFFSET(va) to allow for computing the largest possible direct linear map for use with 48/52-bit selectable kernel VAs. Also, this patch removes the PAGE_OFFSET bit masking logic in virt_to_pgoff as this optimisation can paint us into a corner for permissible PAGE_OFFSET values in future. Lastly, this patch modifies the definition of _virt_addr_valid to use __virt_to_phys rather than a bitmasked PAGE_OFFSET formula. Signed-off-by: Steve Capper --- arch/arm64/include/asm/memory.h | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index e9af8aa36612..c817ee71e201 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -55,8 +55,9 @@ * VA_START - the first kernel virtual address. 
*/ #define VA_BITS (CONFIG_ARM64_VA_BITS) -#define PAGE_OFFSET (UL(0xffffffffffffffff) - \ - (UL(1) << VA_BITS) + 1) +#define _PAGE_OFFSET(va) (UL(0xffffffffffffffff) - \ + (UL(1) << (va)) + 1) +#define PAGE_OFFSET (_PAGE_OFFSET(VA_BITS)) #define PAGE_OFFSET_END (VA_START) #define KIMAGE_VADDR (MODULES_END) #define BPF_JIT_REGION_START (KASAN_SHADOW_END) @@ -325,7 +326,7 @@ static inline void *phys_to_virt(phys_addr_t x) #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT) #define _virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT) #else -#define __virt_to_pgoff(kaddr) (((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page)) +#define __virt_to_pgoff(kaddr) (((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page)) #define __page_to_voff(kaddr) (((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page)) #define page_to_virt(page) ({ \ @@ -338,8 +339,7 @@ static inline void *phys_to_virt(phys_addr_t x) #define virt_to_page(vaddr) ((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START)) -#define _virt_addr_valid(kaddr) pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \ - + PHYS_OFFSET) >> PAGE_SHIFT) +#define _virt_addr_valid(kaddr) pfn_valid(__virt_to_phys((u64)(kaddr)) >> PAGE_SHIFT) #endif #endif From patchwork Tue May 28 16:10:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Capper X-Patchwork-Id: 10965327 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D1A511575 for ; Tue, 28 May 2019 16:13:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C0E9828733 for ; Tue, 28 May 2019 16:13:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B505828757; Tue, 28 May 2019 16:13:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id F170C28733 for ; Tue, 28 May 2019 16:13:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=iSnsFCT1DNPZ94lOz452d8fKWyPN/e//oAC4F2hc388=; b=VT3/1nC+JjkVkL gCYQOHNtSZKj3fRn+xnkH3ahd8yUtMa9WV7y7R1fVGq/Nne8Loc1Wmpy6dwV50ahgmCuEwSY5xZCN ExBxopDnOtvvEeA3Q53MdlLvpW7FKzMiYgxYg6kcUCOG7/bWmk/rfSXm8U2Z/aZqJ6e413ogfyu9r yqglDbTA2Nulxv49u/v6lfgrODy9mc57fDdUHv1oAJW0ZRVf1zjis8n8F9hBh4Y3hIdSqYjhpMLN3 gr+CQ1xW6U8FF4QQl0AemX1j3j460NtxC+m7eYSzcVgcZpHwPFhfKmnuGS1V7OJspQiCC7Pfp9bl9 brqht4CSBPcV6uahgDsQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVejV-0007WQ-O7; Tue, 28 May 2019 16:13:49 +0000 Received: from 
foss.arm.com ([217.140.101.70]) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1hVeh7-0004uY-Fq for linux-arm-kernel@lists.infradead.org; Tue, 28 May 2019 16:11:29 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3123915AD; Tue, 28 May 2019 09:11:21 -0700 (PDT) Received: from capper-debian.arm.com (unknown [10.37.12.41]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4A5753F59C; Tue, 28 May 2019 09:11:19 -0700 (PDT) From: Steve Capper To: linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 12/12] arm64: mm: Introduce 52-bit Kernel VAs Date: Tue, 28 May 2019 17:10:26 +0100 Message-Id: <20190528161026.13193-13-steve.capper@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190528161026.13193-1-steve.capper@arm.com> References: <20190528161026.13193-1-steve.capper@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190528_091121_811641_76AEA648 X-CRM114-Status: GOOD ( 20.21 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: crecklin@redhat.com, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, catalin.marinas@arm.com, bhsharma@redhat.com, will.deacon@arm.com Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Most of the machinery is now in place to enable 52-bit kernel VAs that are detectable at boot time. This patch adds a Kconfig option for 52-bit user and kernel addresses and plumbs in the requisite CONFIG_ macros as well as sets TCR.T1SZ, physvirt_offset and vmemmap at early boot. 
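As a rough C rendering of the boot-time selection this enables (a sketch only: the real check runs in early assembly in head.S before the MMU is enabled, and the 48-bit fallback shown here is an assumption based on ARM64_VA_BITS_MIN defaulting to 48 for this configuration):

    /* Read ID_AA64MMFR2_EL1.LVA to see whether the CPU implements
     * 52-bit virtual addressing. */
    u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

    if ((mmfr2 >> ID_AA64MMFR2_LVA_SHIFT) & 0xf)
        vabits_actual = 52;    /* hardware supports 52-bit VAs */
    else
        vabits_actual = 48;    /* revert to the 48-bit minimum */

TCR_EL1.T1SZ, physvirt_offset and vmemmap are then derived from vabits_actual at boot rather than from compile-time constants.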
Signed-off-by: Steve Capper --- arch/arm64/Kconfig | 31 ++++++++++++++++++++++---- arch/arm64/include/asm/assembler.h | 9 +++++++- arch/arm64/include/asm/memory.h | 2 +- arch/arm64/include/asm/mmu_context.h | 2 +- arch/arm64/include/asm/pgtable-hwdef.h | 2 +- arch/arm64/kernel/head.S | 4 ++-- arch/arm64/mm/init.c | 10 +++++++++ arch/arm64/mm/proc.S | 6 ++++- 8 files changed, 55 insertions(+), 11 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 0cedcb4a0a01..d22432c2caca 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -276,11 +276,14 @@ config KERNEL_MODE_NEON config FIX_EARLYCON_MEM def_bool y +config HAS_VA_BITS_52 + def_bool n + config PGTABLE_LEVELS int default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36 default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42 - default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) + default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || HAS_VA_BITS_52) default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39 default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47 default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48 @@ -294,12 +297,12 @@ config ARCH_PROC_KCORE_TEXT config KASAN_SHADOW_OFFSET hex depends on KASAN - default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS + default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || HAS_VA_BITS_52) && !KASAN_SW_TAGS default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS - default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS + default 0xefff900000000000 if (ARM64_VA_BITS_48 || HAS_VA_BITS_52) && KASAN_SW_TAGS default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS @@ -739,6 +742,7 @@ config ARM64_VA_BITS_48 config ARM64_USER_VA_BITS_52 bool "52-bit (user)" depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN) + select HAS_VA_BITS_52 help Enable 52-bit virtual addressing for userspace when explicitly requested via a hint to mmap(). The kernel will continue to @@ -751,11 +755,28 @@ config ARM64_USER_VA_BITS_52 If unsure, select 48-bit virtual addressing instead. +config ARM64_USER_KERNEL_VA_BITS_52 + bool "52-bit (user & kernel)" + depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN) + select HAS_VA_BITS_52 + help + Enable 52-bit virtual addressing for userspace when explicitly + requested via a hint to mmap(). The kernel will also use 52-bit + virtual addresses for its own mappings (provided HW support for + this feature is available, otherwise it reverts to 48-bit). + + NOTE: Enabling 52-bit virtual addressing in conjunction with + ARMv8.3 Pointer Authentication will result in the PAC being + reduced from 7 bits to 3 bits, which may have a significant + impact on its susceptibility to brute-force attacks. + + If unsure, select 48-bit virtual addressing instead. 
+ endchoice config ARM64_FORCE_52BIT bool "Force 52-bit virtual addresses for userspace" - depends on ARM64_USER_VA_BITS_52 && EXPERT + depends on HAS_VA_BITS_52 && EXPERT help For systems with 52-bit userspace VAs enabled, the kernel will attempt to maintain compatibility with older software by providing 48-bit VAs @@ -773,9 +794,11 @@ config ARM64_VA_BITS default 42 if ARM64_VA_BITS_42 default 47 if ARM64_VA_BITS_47 default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52 + default 52 if ARM64_USER_KERNEL_VA_BITS_52 config ARM64_VA_BITS_MIN int + default 48 if ARM64_USER_KERNEL_VA_BITS_52 default ARM64_VA_BITS choice diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index a42c392ed1e1..1f8d7b891ec9 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -356,6 +356,13 @@ alternative_endif bfi \valreg, \t0sz, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH .endm +/* + * tcr_set_t1sz - update TCR.T1SZ + */ + .macro tcr_set_t1sz, valreg, t1sz + bfi \valreg, \t1sz, #TCR_T1SZ_OFFSET, #TCR_TxSZ_WIDTH + .endm + /* * tcr_compute_pa_size - set TCR.(I)PS to the highest supported * ID_AA64MMFR0_EL1.PARange value @@ -565,7 +572,7 @@ alternative_endif * to be nop'ed out when dealing with 52-bit kernel VAs. */ .macro restore_ttbr1, ttbr -#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_KERNEL_VA_BITS_52) +#ifdef CONFIG_HAS_VA_BITS_52 bic \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET #endif .endm diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index c817ee71e201..8e3c12a3414d 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -77,7 +77,7 @@ #define KERNEL_START _text #define KERNEL_END _end -#ifdef CONFIG_ARM64_USER_VA_BITS_52 +#ifdef CONFIG_HAS_VA_BITS_52 #define MAX_USER_VA_BITS 52 #else #define MAX_USER_VA_BITS VA_BITS diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index 133ecb65b602..1ae26b357e1e 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -74,7 +74,7 @@ extern u64 idmap_ptrs_per_pgd; static inline bool __cpu_uses_extended_idmap(void) { - if (IS_ENABLED(CONFIG_ARM64_USER_VA_BITS_52)) + if (IS_ENABLED(CONFIG_HAS_VA_BITS_52)) return false; return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS)); diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index a69259cc1f16..73dc4a8c0df3 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -316,7 +316,7 @@ #define TTBR_BADDR_MASK_52 (((UL(1) << 46) - 1) << 2) #endif -#ifdef CONFIG_ARM64_USER_VA_BITS_52 +#ifdef CONFIG_HAS_VA_BITS_52 /* Must be at least 64-byte aligned to prevent corruption of the TTBR */ #define TTBR1_BADDR_4852_OFFSET (((UL(1) << (52 - PGDIR_SHIFT)) - \ (UL(1) << (48 - PGDIR_SHIFT))) * 8) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 8bc1b533a912..31c43abf30aa 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -319,7 +319,7 @@ __create_page_tables: adrp x0, idmap_pg_dir adrp x3, __idmap_text_start // __pa(__idmap_text_start) -#ifdef CONFIG_ARM64_USER_VA_BITS_52 +#ifdef CONFIG_HAS_VA_BITS_52 mrs_s x6, SYS_ID_AA64MMFR2_EL1 and x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT) mov x5, #52 @@ -817,7 +817,7 @@ ENTRY(__enable_mmu) ENDPROC(__enable_mmu) ENTRY(__cpu_secondary_check52bitva) -#ifdef CONFIG_ARM64_USER_VA_BITS_52 +#ifdef CONFIG_HAS_VA_BITS_52 ldr_l x0, vabits_user cmp x0, #52 b.ne 2f diff --git 
a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 6844365c0a51..c0c6d2dc39b2 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -335,6 +335,16 @@ void __init arm64_memblock_init(void) vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT)); + /* + * If we are running with a 52-bit kernel VA config on a system that + * does not support it, we have to offset our vmemmap and physvirt_offset + * so that we avoid the 52-bit portion of the direct linear map + */ + if (IS_ENABLED(CONFIG_ARM64_USER_KERNEL_VA_BITS_52) && (VA_BITS_ACTUAL != 52)) { + vmemmap += (_PAGE_OFFSET(48) - _PAGE_OFFSET(52)) >> PAGE_SHIFT; + physvirt_offset = PHYS_OFFSET - _PAGE_OFFSET(48); + } + /* * Remove the memory that we will not be able to cover with the * linear mapping. Take care not to clip the kernel which may be diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index fdd626d34274..62554d92e903 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -449,7 +449,7 @@ ENTRY(__cpu_setup) TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS tcr_clear_errata_bits x10, x9, x5 -#ifdef CONFIG_ARM64_USER_VA_BITS_52 +#ifdef CONFIG_HAS_VA_BITS_52 ldr_l x9, vabits_user sub x9, xzr, x9 add x9, x9, #64 @@ -458,6 +458,10 @@ ENTRY(__cpu_setup) #endif tcr_set_t0sz x10, x9 +#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52 + tcr_set_t1sz x10, x9 +#endif + /* * Set the IPS bits in TCR_EL1. */
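To make the fallback arithmetic in the init.c hunk above concrete (a worked sketch using _PAGE_OFFSET(va) = UL(0xffffffffffffffff) - (UL(1) << va) + 1 from the previous patch):

    /* _PAGE_OFFSET(48) - _PAGE_OFFSET(52)
     *   == (2^64 - 2^48) - (2^64 - 2^52)
     *   == (1UL << 52) - (1UL << 48)
     * i.e. the span of linear map VAs that only exists with 52-bit
     * hardware support. */
    vmemmap += ((1UL << 52) - (1UL << 48)) >> PAGE_SHIFT;
    physvirt_offset = PHYS_OFFSET - _PAGE_OFFSET(48);

Advancing vmemmap by that number of pages, and rebasing physvirt_offset on the 48-bit PAGE_OFFSET, means page_to_virt(), virt_to_page() and phys_to_virt() work unchanged whichever VA size is detected at boot.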