From patchwork Thu Oct 24 07:12:58 2013
X-Patchwork-Submitter: Russell King - ARM Linux
X-Patchwork-Id: 3090391
Date: Thu, 24 Oct 2013 08:12:58 +0100
From: Russell King - ARM Linux
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH] Simple NX lowmem mappings
Message-ID: <20131024071258.GB16735@n2100.arm.linux.org.uk>

This patch has similar functionality to Laura's patch set from earlier this
month, except that I've only done the basic change: we don't increase the
size of the kernel for this, so there should be no kernel-size-related
problems with it.  We mark sections which do not overlap any kernel text as
non-executable.
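As a rough illustration (not part of the patch) of why the image doesn't
grow: the executable window is simply the existing kernel text/init region
rounded out to section boundaries, which are already mapped anyway.  A
minimal userspace sketch of that arithmetic, assuming 1MB sections and
made-up addresses for _stext and __init_end:

#include <stdio.h>

#define SECTION_SIZE	0x00100000UL	/* assumed: 1MB sections (2-level tables) */
#define round_down(x, y)	((x) & ~((y) - 1))
#define round_up(x, y)		round_down((x) + (y) - 1, (y))

int main(void)
{
	/* hypothetical physical addresses of _stext and __init_end */
	unsigned long stext = 0x60008240UL;
	unsigned long init_end = 0x60a2c000UL;

	/* the executable window snaps out to section boundaries that are
	 * already mapped, so no padding is added to the kernel image */
	printf("X window: %#lx - %#lx\n",
	       round_down(stext, SECTION_SIZE),
	       round_up(init_end, SECTION_SIZE));
	return 0;
}

With the numbers above this prints "X window: 0x60000000 - 0x60b00000"; the
rest of that memory bank gets the non-executable mapping type introduced
below.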
8<====
From: Russell King

ARM: Implement basic NX support for kernel lowmem mappings

Add basic NX support for kernel lowmem mappings.  We mark any section
which does not overlap kernel text as non-executable, preventing it from
being used to write code and then execute directly from there.

This does not change the alignment of the sections, so the kernel image
doesn't grow significantly as a result, and we can do this without
needing a config option.

Signed-off-by: Russell King
---
 arch/arm/include/asm/mach/map.h |   27 ++++++++++--------
 arch/arm/mm/mmu.c               |   55 +++++++++++++++++++++++++++++++++++---
 2 files changed, 65 insertions(+), 17 deletions(-)

diff --git a/arch/arm/include/asm/mach/map.h b/arch/arm/include/asm/mach/map.h
index 2fe141f..d43f100 100644
--- a/arch/arm/include/asm/mach/map.h
+++ b/arch/arm/include/asm/mach/map.h
@@ -22,18 +22,21 @@ struct map_desc {
 };
 
 /* types 0-3 are defined in asm/io.h */
-#define MT_UNCACHED		4
-#define MT_CACHECLEAN		5
-#define MT_MINICLEAN		6
-#define MT_LOW_VECTORS		7
-#define MT_HIGH_VECTORS		8
-#define MT_MEMORY		9
-#define MT_ROM			10
-#define MT_MEMORY_NONCACHED	11
-#define MT_MEMORY_DTCM		12
-#define MT_MEMORY_ITCM		13
-#define MT_MEMORY_SO		14
-#define MT_MEMORY_DMA_READY	15
+enum {
+	MT_UNCACHED = 4,
+	MT_CACHECLEAN,
+	MT_MINICLEAN,
+	MT_LOW_VECTORS,
+	MT_HIGH_VECTORS,
+	MT_MEMORY,
+	MT_MEMORY_NX,
+	MT_ROM,
+	MT_MEMORY_NONCACHED,
+	MT_MEMORY_DTCM,
+	MT_MEMORY_ITCM,
+	MT_MEMORY_SO,
+	MT_MEMORY_DMA_READY,
+};
 
 #ifdef CONFIG_MMU
 extern void iotable_init(struct map_desc *, int);
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 53cdbd3..e8f0f94 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -291,6 +292,13 @@ static struct mem_type mem_types[] = {
 		.prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE,
 		.domain    = DOMAIN_KERNEL,
 	},
+	[MT_MEMORY_NX] = {
+		.prot_pte  = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
+			     L_PTE_XN,
+		.prot_l1   = PMD_TYPE_TABLE,
+		.prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE,
+		.domain    = DOMAIN_KERNEL,
+	},
 	[MT_ROM] = {
 		.prot_sect = PMD_TYPE_SECT,
 		.domain    = DOMAIN_KERNEL,
@@ -408,6 +416,9 @@ static void __init build_mem_type_table(void)
 		mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_XN;
 		mem_types[MT_DEVICE_CACHED].prot_sect |= PMD_SECT_XN;
 		mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_XN;
+
+		/* Also setup NX memory mapping */
+		mem_types[MT_MEMORY_NX].prot_sect |= PMD_SECT_XN;
 	}
 	if (cpu_arch >= CPU_ARCH_ARMv7 && (cr & CR_TRE)) {
 		/*
@@ -487,6 +498,8 @@ static void __init build_mem_type_table(void)
 		mem_types[MT_DEVICE_CACHED].prot_pte |= L_PTE_SHARED;
 		mem_types[MT_MEMORY].prot_sect |= PMD_SECT_S;
 		mem_types[MT_MEMORY].prot_pte |= L_PTE_SHARED;
+		mem_types[MT_MEMORY_NX].prot_sect |= PMD_SECT_S;
+		mem_types[MT_MEMORY_NX].prot_pte |= L_PTE_SHARED;
 		mem_types[MT_MEMORY_DMA_READY].prot_pte |= L_PTE_SHARED;
 		mem_types[MT_MEMORY_NONCACHED].prot_sect |= PMD_SECT_S;
 		mem_types[MT_MEMORY_NONCACHED].prot_pte |= L_PTE_SHARED;
@@ -543,6 +556,8 @@ static void __init build_mem_type_table(void)
 	mem_types[MT_HIGH_VECTORS].prot_l1 |= ecc_mask;
 	mem_types[MT_MEMORY].prot_sect |= ecc_mask | cp->pmd;
 	mem_types[MT_MEMORY].prot_pte |= kern_pgprot;
+	mem_types[MT_MEMORY_NX].prot_sect |= ecc_mask | cp->pmd;
+	mem_types[MT_MEMORY_NX].prot_pte |= kern_pgprot;
 	mem_types[MT_MEMORY_DMA_READY].prot_pte |= kern_pgprot;
 	mem_types[MT_MEMORY_NONCACHED].prot_sect |= ecc_mask;
 	mem_types[MT_ROM].prot_sect |= cp->pmd;
@@ -1294,6 +1309,8 @@ static void __init kmap_init(void)
 static void __init map_lowmem(void)
 {
 	struct memblock_region *reg;
+	unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
+	unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
 
 	/* Map all the lowmem memory banks. */
 	for_each_memblock(memory, reg) {
@@ -1306,12 +1323,40 @@ static void __init map_lowmem(void)
 		if (start >= end)
 			break;
 
-		map.pfn = __phys_to_pfn(start);
-		map.virtual = __phys_to_virt(start);
-		map.length = end - start;
-		map.type = MT_MEMORY;
+		if (end < kernel_x_start || start >= kernel_x_end) {
+			map.pfn = __phys_to_pfn(start);
+			map.virtual = __phys_to_virt(start);
+			map.length = end - start;
+			map.type = MT_MEMORY;
 
-		create_mapping(&map);
+			create_mapping(&map);
+		} else {
+			/* This better cover the entire kernel */
+			if (start < kernel_x_start) {
+				map.pfn = __phys_to_pfn(start);
+				map.virtual = __phys_to_virt(start);
+				map.length = kernel_x_start - start;
+				map.type = MT_MEMORY_NX;
+
+				create_mapping(&map);
+			}
+
+			map.pfn = __phys_to_pfn(kernel_x_start);
+			map.virtual = __phys_to_virt(kernel_x_start);
+			map.length = kernel_x_end - kernel_x_start;
+			map.type = MT_MEMORY;
+
+			create_mapping(&map);
+
+			if (kernel_x_end < end) {
+				map.pfn = __phys_to_pfn(kernel_x_end);
+				map.virtual = __phys_to_virt(kernel_x_end);
+				map.length = end - kernel_x_end;
+				map.type = MT_MEMORY_NX;
+
+				create_mapping(&map);
+			}
+		}
 	}
 }
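
P.S. If anyone wants to sanity-check the splitting above without booting it,
here is a rough userspace model of the new map_lowmem() behaviour (not
kernel code; the kernel_x_* window and the bank addresses are made up).  A
lowmem bank containing the kernel is split into NX / executable / NX
mappings, while a bank that doesn't overlap the kernel text window is mapped
as MT_MEMORY exactly as before:

#include <stdio.h>

/* Hypothetical, section-rounded kernel text window (values made up). */
static const unsigned long kernel_x_start = 0x60000000UL;
static const unsigned long kernel_x_end   = 0x60b00000UL;

static void create_mapping(unsigned long start, unsigned long end,
			   const char *type)
{
	printf("%#010lx - %#010lx : %s\n", start, end, type);
}

/* Mirrors the per-memblock splitting in the new map_lowmem(). */
static void map_region(unsigned long start, unsigned long end)
{
	if (end < kernel_x_start || start >= kernel_x_end) {
		/* Bank does not overlap the executable kernel window. */
		create_mapping(start, end, "MT_MEMORY");
	} else {
		if (start < kernel_x_start)
			create_mapping(start, kernel_x_start, "MT_MEMORY_NX");

		create_mapping(kernel_x_start, kernel_x_end, "MT_MEMORY");

		if (kernel_x_end < end)
			create_mapping(kernel_x_end, end, "MT_MEMORY_NX");
	}
}

int main(void)
{
	map_region(0x60000000UL, 0x80000000UL);	/* bank containing the kernel */
	map_region(0x80000000UL, 0xa0000000UL);	/* bank away from the kernel */
	return 0;
}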