From patchwork Wed Apr 15 15:34:21 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 6221671
From: Ard Biesheuvel
To: mark.rutland@arm.com, catalin.marinas@arm.com,
 linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel
Subject: [PATCH v4 10/13] arm64: move kernel mapping out of linear region
Date: Wed, 15 Apr 2015 17:34:21 +0200
Message-Id: <1429112064-19952-11-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
This moves the primary mapping of the kernel Image out of the linear
region. This is a preparatory step towards allowing the kernel Image
to reside anywhere in physical memory without affecting the ability
to map all of it efficiently.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/boot.h   |  7 +++++++
 arch/arm64/include/asm/memory.h | 28 ++++++++++++++++++++++++----
 arch/arm64/kernel/head.S        |  8 ++++----
 arch/arm64/kernel/vmlinux.lds.S | 11 +++++++++--
 arch/arm64/mm/mmu.c             | 11 ++++++++++-
 5 files changed, 54 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/boot.h b/arch/arm64/include/asm/boot.h
index 81151b67b26b..092d1096ce9a 100644
--- a/arch/arm64/include/asm/boot.h
+++ b/arch/arm64/include/asm/boot.h
@@ -11,4 +11,11 @@
 #define MIN_FDT_ALIGN		8
 #define MAX_FDT_SIZE		SZ_2M
 
+/*
+ * arm64 requires the kernel image to be 2 MB aligned and
+ * not exceed 64 MB in size.
+ */
+#define MIN_KIMG_ALIGN		SZ_2M
+#define MAX_KIMG_SIZE		SZ_64M
+
 #endif
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index f800d45ea226..801331793bd3 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -24,6 +24,7 @@
 #include <linux/compiler.h>
 #include <linux/const.h>
 #include <linux/types.h>
+#include <asm/boot.h>
 #include <asm/sizes.h>
 
 /*
@@ -39,7 +40,12 @@
 #define PCI_IO_SIZE		SZ_16M
 
 /*
- * PAGE_OFFSET - the virtual address of the start of the kernel image (top
+ * Offset below PAGE_OFFSET where to map the kernel Image.
+ */
+#define KIMAGE_OFFSET		MAX_KIMG_SIZE
+
+/*
+ * PAGE_OFFSET - the virtual address of the base of the linear mapping (top
  *		 (VA_BITS - 1))
  * VA_BITS - the maximum number of bits for virtual addresses.
  * TASK_SIZE - the maximum size of a user space task.
@@ -49,7 +55,8 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) << (VA_BITS - 1))
-#define MODULES_END		(PAGE_OFFSET)
+#define KIMAGE_VADDR		(PAGE_OFFSET - KIMAGE_OFFSET)
+#define MODULES_END		KIMAGE_VADDR
 #define MODULES_VADDR		(MODULES_END - SZ_64M)
 #define PCI_IO_END		(MODULES_VADDR - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
@@ -77,7 +84,11 @@
  * private definitions which should NOT be used outside memory.h
  * files. Use virt_to_phys/phys_to_virt/__pa/__va instead.
  */
-#define __virt_to_phys(x)	(((phys_addr_t)(x) - PAGE_OFFSET + PHYS_OFFSET))
+#define __virt_to_phys(x) ({					\
+	long __x = (long)(x) - PAGE_OFFSET;			\
+	__x >= 0 ? (phys_addr_t)(__x + PHYS_OFFSET) :		\
+		   (phys_addr_t)(__x + PHYS_OFFSET + kernel_va_offset); })
+
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
 
 /*
@@ -111,7 +122,16 @@
 extern phys_addr_t		memstart_addr;
 
 /* PHYS_OFFSET - the physical address of the start of memory. */
-#define PHYS_OFFSET		({ memstart_addr; })
+#define PHYS_OFFSET		({ memstart_addr + phys_offset_bias; })
+
+/*
+ * Before the linear mapping has been set up, __va() translations will
+ * not produce usable virtual addresses unless we tweak PHYS_OFFSET to
+ * compensate for the offset between the kernel mapping and the base of
+ * the linear mapping. We will undo this in map_mem().
+ */
+extern u64 phys_offset_bias;
+extern u64 kernel_va_offset;
 
 /*
  * PFNs are used to describe any physical page; this means
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c0ff3ce4299e..3bf1d339dd8d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -36,8 +36,6 @@
 #include <asm/page.h>
 #include <asm/virt.h>
 
-#define __PHYS_OFFSET	(KERNEL_START - TEXT_OFFSET)
-
 #if (TEXT_OFFSET & 0xfff) != 0
 #error TEXT_OFFSET must be at least 4KB aligned
 #elif (PAGE_OFFSET & 0x1fffff) != 0
@@ -58,6 +56,8 @@
 #define KERNEL_START	_text
 #define KERNEL_END	_end
 
+#define KERNEL_BASE	(KERNEL_START - TEXT_OFFSET)
+
 /*
  * Initial memory map attributes.
 */
@@ -235,7 +235,7 @@ section_table:
ENTRY(stext)
 	bl	preserve_boot_args
 	bl	el2_setup			// Drop to EL1, w20=cpu_boot_mode
-	adrp	x24, __PHYS_OFFSET
+	adrp	x24, KERNEL_BASE
 	bl	set_cpu_boot_mode_flag
 	bl	__create_page_tables		// x25=TTBR0, x26=TTBR1
 	/*
@@ -411,7 +411,7 @@ __create_page_tables:
 	 * Map the kernel image (starting with PHYS_OFFSET).
 	 */
 	mov	x0, x26				// swapper_pg_dir
-	mov	x5, #PAGE_OFFSET
+	ldr	x5, =KERNEL_BASE
 	create_pgd_entry x0, x5, x3, x6
 	ldr	x6, =KERNEL_END			// __va(KERNEL_END)
 	mov	x3, x24				// phys offset
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 338eaa7bcbfd..8dbb816c0338 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -6,6 +6,7 @@
 
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/thread_info.h>
+#include <asm/boot.h>
 #include <asm/memory.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
@@ -95,7 +96,7 @@ SECTIONS
 		*(.discard.*)
 	}
 
-	. = PAGE_OFFSET + TEXT_OFFSET;
+	. = KIMAGE_VADDR + TEXT_OFFSET;
 
 	.head.text : {
 		_text = .;
@@ -203,4 +204,10 @@ ASSERT(SIZEOF(.pgdir) < ALIGNOF(.pgdir), ".pgdir size exceeds its alignment")
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
-ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
+ASSERT(_text == (KIMAGE_VADDR + TEXT_OFFSET), "HEAD is misaligned")
+
+/*
+ * Make sure the memory footprint of the kernel Image does not exceed the limit.
+ */
+ASSERT(_end - _text + TEXT_OFFSET <= MAX_KIMG_SIZE,
+	"Kernel Image memory footprint exceeds MAX_KIMG_SIZE")
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 93e5a2497f01..b457b7e425cc 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -50,6 +50,9 @@ u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 struct page *empty_zero_page;
 EXPORT_SYMBOL(empty_zero_page);
 
+u64 phys_offset_bias __read_mostly = KIMAGE_OFFSET;
+u64 kernel_va_offset __read_mostly;
+
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
@@ -386,6 +389,9 @@ static void __init bootstrap_linear_mapping(unsigned long va_offset)
 	 * Bootstrap the linear range that covers swapper_pg_dir so that the
 	 * statically allocated page tables as well as newly allocated ones
 	 * are accessible via the linear mapping.
+	 * Since at this point, PHYS_OFFSET is still biased to redirect __va()
+	 * translations into the kernel text mapping, we need to apply an
+	 * explicit va_offset to calculate virtual linear addresses.
 	 */
 	static struct bootstrap_pgtables linear_bs_pgtables __pgdir;
 	const phys_addr_t swapper_phys = __pa(swapper_pg_dir);
@@ -441,7 +447,10 @@ static void __init map_mem(void)
 {
 	struct memblock_region *reg;
 
-	bootstrap_linear_mapping(0);
+	bootstrap_linear_mapping(KIMAGE_OFFSET);
+
+	kernel_va_offset = KIMAGE_OFFSET;
+	phys_offset_bias = 0;
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
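
To make the PHYS_OFFSET biasing easier to follow, here is a stand-alone
user-space sketch of the translation arithmetic. It is not part of the
patch: VA_BITS, the RAM base and the 0x80000 text offset are assumed
example values, and the map_mem() handover is performed by hand.

#include <stdint.h>
#include <stdio.h>

#define VA_BITS		48			/* assumed CONFIG_ARM64_VA_BITS */
#define SZ_64M		0x4000000ULL

#define PAGE_OFFSET	(0xffffffffffffffffULL << (VA_BITS - 1))
#define KIMAGE_OFFSET	SZ_64M			/* == MAX_KIMG_SIZE */
#define KIMAGE_VADDR	(PAGE_OFFSET - KIMAGE_OFFSET)

static uint64_t memstart_addr = 0x40000000ULL;	  /* assumed RAM base */
static uint64_t phys_offset_bias = KIMAGE_OFFSET; /* early boot value */
static uint64_t kernel_va_offset;		  /* 0 until map_mem() */

#define PHYS_OFFSET	(memstart_addr + phys_offset_bias)

/* mirrors the patched __virt_to_phys(): VAs below PAGE_OFFSET belong to
 * the kernel image mapping and get the extra kernel_va_offset term */
static uint64_t virt_to_phys(uint64_t va)
{
	int64_t x = (int64_t)(va - PAGE_OFFSET);

	return x >= 0 ? x + PHYS_OFFSET
		      : x + PHYS_OFFSET + kernel_va_offset;
}

int main(void)
{
	uint64_t text = KIMAGE_VADDR + 0x80000;	/* assumed TEXT_OFFSET */

	/* early boot: the bias redirects __pa(image VA) to the right place */
	printf("early:  %#llx\n", (unsigned long long)virt_to_phys(text));

	/* what map_mem() does once the linear mapping is up */
	kernel_va_offset = KIMAGE_OFFSET;
	phys_offset_bias = 0;

	/* image VAs still resolve to the same physical address ... */
	printf("mapped: %#llx\n", (unsigned long long)virt_to_phys(text));
	/* ... and linear VAs now translate with the plain PHYS_OFFSET */
	printf("linear: %#llx\n",
	       (unsigned long long)virt_to_phys(PAGE_OFFSET + 0x1000));
	return 0;
}

With these numbers, both image translations print 0x40080000 and the
linear one prints 0x40001000: the handover in map_mem() changes how a
kernel-image VA is decomposed without changing the physical address it
resolves to.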