From patchwork Wed Apr 15 15:34:20 2015
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 6221651
From: Ard Biesheuvel
To: mark.rutland@arm.com, catalin.marinas@arm.com,
	linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel
Subject: [PATCH v4 09/13] arm64: mm: explicitly bootstrap the linear mapping
Date: Wed, 15 Apr 2015 17:34:20 +0200
Message-Id: <1429112064-19952-10-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1429112064-19952-1-git-send-email-ard.biesheuvel@linaro.org>

In preparation for moving the kernel text out of the linear mapping,
ensure that the part of the kernel Image that contains the statically
allocated page tables is made accessible via the linear mapping before
performing
the actual mapping of all of memory. This is needed by the normal
mapping routines, which rely on the linear mapping to walk the page
tables while manipulating them.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/vmlinux.lds.S | 18 ++++++++-
 arch/arm64/mm/mmu.c             | 89 +++++++++++++++++++++++++++--------------
 2 files changed, 75 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index ceec4def354b..338eaa7bcbfd 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -68,6 +68,17 @@ PECOFF_FILE_ALIGNMENT = 0x200;
 #define ALIGN_DEBUG_RO_MIN(min)	. = ALIGN(min);
 #endif
 
+/*
+ * The pgdir region needs to be mappable using a single PMD or PUD sized region,
+ * so it should not cross a 512 MB or 1 GB alignment boundary, respectively
+ * (depending on page size). So align to an upper bound of its size.
+ */
+#if CONFIG_ARM64_PGTABLE_LEVELS == 2
+#define PGDIR_ALIGN	(8 * PAGE_SIZE)
+#else
+#define PGDIR_ALIGN	(16 * PAGE_SIZE)
+#endif
+
 SECTIONS
 {
 	/*
@@ -160,7 +171,7 @@ SECTIONS
 
 	BSS_SECTION(0, 0, 0)
 
-	.pgdir (NOLOAD) : ALIGN(PAGE_SIZE) {
+	.pgdir (NOLOAD) : ALIGN(PGDIR_ALIGN) {
 		idmap_pg_dir = .;
 		. += IDMAP_DIR_SIZE;
 		swapper_pg_dir = .;
@@ -185,6 +196,11 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 	"ID map text too big or misaligned")
 
 /*
+ * Check that the chosen PGDIR_ALIGN value is sufficient.
+ */
+ASSERT(SIZEOF(.pgdir) < ALIGNOF(.pgdir), ".pgdir size exceeds its alignment")
+
+/*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
 */
ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index c27ab20a5ba9..93e5a2497f01 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -380,26 +380,68 @@ static void __init bootstrap_early_mapping(unsigned long addr,
 	}
 }
 
+static void __init bootstrap_linear_mapping(unsigned long va_offset)
+{
+	/*
+	 * Bootstrap the linear range that covers swapper_pg_dir so that the
+	 * statically allocated page tables as well as newly allocated ones
+	 * are accessible via the linear mapping.
+	 */
+	static struct bootstrap_pgtables linear_bs_pgtables __pgdir;
+	const phys_addr_t swapper_phys = __pa(swapper_pg_dir);
+	unsigned long swapper_virt = __phys_to_virt(swapper_phys) + va_offset;
+	struct memblock_region *reg;
+
+	bootstrap_early_mapping(swapper_virt, &linear_bs_pgtables,
+				IS_ENABLED(CONFIG_ARM64_64K_PAGES));
+
+	/* now find the memblock that covers swapper_pg_dir, and clip */
+	for_each_memblock(memory, reg) {
+		phys_addr_t start = reg->base;
+		phys_addr_t end = start + reg->size;
+		unsigned long vstart, vend;
+
+		if (start > swapper_phys || end <= swapper_phys)
+			continue;
+
+#ifdef CONFIG_ARM64_64K_PAGES
+		/* clip the region to PMD size */
+		vstart = max(swapper_virt & PMD_MASK,
+			     round_up(__phys_to_virt(start + va_offset),
+				      PAGE_SIZE));
+		vend = min(round_up(swapper_virt, PMD_SIZE),
+			   round_down(__phys_to_virt(end + va_offset),
+				      PAGE_SIZE));
+#else
+		/* clip the region to PUD size */
+		vstart = max(swapper_virt & PUD_MASK,
+			     round_up(__phys_to_virt(start + va_offset),
+				      PMD_SIZE));
+		vend = min(round_up(swapper_virt, PUD_SIZE),
+			   round_down(__phys_to_virt(end + va_offset),
+				      PMD_SIZE));
+#endif
+
+		create_mapping(__pa(vstart - va_offset), vstart, vend - vstart,
+			       PAGE_KERNEL_EXEC);
+
+		/*
+		 * Temporarily limit the memblock range. We need to do this as
+		 * create_mapping requires puds, pmds and ptes to be allocated
+		 * from memory addressable from the early linear mapping.
+		 */
+		memblock_set_current_limit(__pa(vend - va_offset));
+
+		return;
+	}
+	BUG();
+}
+
 static void __init map_mem(void)
 {
 	struct memblock_region *reg;
-	phys_addr_t limit;
 
-	/*
-	 * Temporarily limit the memblock range. We need to do this as
-	 * create_mapping requires puds, pmds and ptes to be allocated from
-	 * memory addressable from the initial direct kernel mapping.
-	 *
-	 * The initial direct kernel mapping, located at swapper_pg_dir, gives
-	 * us PUD_SIZE (4K pages) or PMD_SIZE (64K pages) memory starting from
-	 * PHYS_OFFSET (which must be aligned to 2MB as per
-	 * Documentation/arm64/booting.txt).
-	 */
-	if (IS_ENABLED(CONFIG_ARM64_64K_PAGES))
-		limit = PHYS_OFFSET + PMD_SIZE;
-	else
-		limit = PHYS_OFFSET + PUD_SIZE;
-	memblock_set_current_limit(limit);
+	bootstrap_linear_mapping(0);
 
 	/* map all the memory banks */
 	for_each_memblock(memory, reg) {
@@ -409,21 +451,6 @@ static void __init map_mem(void)
 		if (start >= end)
 			break;
 
-#ifndef CONFIG_ARM64_64K_PAGES
-		/*
-		 * For the first memory bank align the start address and
-		 * current memblock limit to prevent create_mapping() from
-		 * allocating pte page tables from unmapped memory.
-		 * When 64K pages are enabled, the pte page table for the
-		 * first PGDIR_SIZE is already present in swapper_pg_dir.
-		 */
-		if (start < limit)
-			start = ALIGN(start, PMD_SIZE);
-		if (end < limit) {
-			limit = end & PMD_MASK;
-			memblock_set_current_limit(limit);
-		}
-#endif
 		__map_memblock(start, end);
 	}