From patchwork Fri Nov 21 21:50:44 2014
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 5358201
From: Laura Abbott
To: Will Deacon, Steve Capper, Mark Rutland
Cc: Laura Abbott, Kees Cook, Ard Biesheuvel, Catalin Marinas, Leif Lindholm,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCHv6 7/8] arm64: efi: Use ioremap_exec for code sections
Date: Fri, 21 Nov 2014 13:50:44 -0800
Message-Id: <1416606645-25633-8-git-send-email-lauraa@codeaurora.org>
X-Mailer: git-send-email 1.8.2.1
In-Reply-To: <1416606645-25633-1-git-send-email-lauraa@codeaurora.org>
References: <1416606645-25633-1-git-send-email-lauraa@codeaurora.org>

ioremap is not guaranteed to return memory with the proper execution
permissions. Introduce ioremap_exec, which ensures that the permission
bits are set as expected for EFI code sections.
Tested-by: Kees Cook
Signed-off-by: Laura Abbott
---
 arch/arm64/include/asm/io.h      |  1 +
 arch/arm64/include/asm/pgtable.h |  1 +
 arch/arm64/kernel/efi.c          | 12 +++++++++++-
 arch/arm64/mm/ioremap.c          | 11 +++++++++++
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 79f1d519..7dd8465 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -230,6 +230,7 @@ extern void __memset_io(volatile void __iomem *, int, size_t);
 extern void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot);
 extern void __iounmap(volatile void __iomem *addr);
 extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
+extern void __iomem *ioremap_exec(phys_addr_t phys_addr, size_t size);
 
 #define ioremap(addr, size)		__ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
 #define ioremap_nocache(addr, size)	__ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 41a43bf..9b1d9d0 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -65,6 +65,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
 #define PROT_DEVICE_nGnRE	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRE))
 #define PROT_NORMAL_NC		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL_NC))
 #define PROT_NORMAL		(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL))
+#define PROT_NORMAL_EXEC	(PROT_DEFAULT | PTE_UXN | PTE_ATTRINDX(MT_NORMAL))
 
 #define PROT_SECT_DEVICE_nGnRE	(PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE))
 #define PROT_SECT_NORMAL	(PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 95c49eb..9e41f95 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -47,6 +47,14 @@ static int __init is_normal_ram(efi_memory_desc_t *md)
 	return 0;
 }
 
+static int __init is_code(efi_memory_desc_t *md)
+{
+	if (md->attribute & EFI_RUNTIME_SERVICES_CODE)
+		return 1;
+	return 0;
+}
+
+
 static void __init efi_setup_idmap(void)
 {
 	struct memblock_region *r;
@@ -338,7 +346,9 @@ static int __init remap_region(efi_memory_desc_t *md, void **new)
 	memrange_efi_to_native(&paddr, &npages);
 	size = npages << PAGE_SHIFT;
 
-	if (is_normal_ram(md))
+	if (is_code(md))
+		vaddr = (__force u64)ioremap_exec(paddr, size);
+	else if (is_normal_ram(md))
 		vaddr = (__force u64)ioremap_cache(paddr, size);
 	else
 		vaddr = (__force u64)ioremap(paddr, size);
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index cbb99c8..b998441 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -103,6 +103,17 @@ void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
 }
 EXPORT_SYMBOL(ioremap_cache);
 
+void __iomem *ioremap_exec(phys_addr_t phys_addr, size_t size)
+{
+	/* For normal memory we already have a cacheable mapping. */
+	if (pfn_valid(__phys_to_pfn(phys_addr)))
+		return (void __iomem *)__phys_to_virt(phys_addr);
+
+	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL_EXEC),
+				__builtin_return_address(0));
+}
+EXPORT_SYMBOL(ioremap_exec);
+
 /*
  * Must be called after early_fixmap_init
  */
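
For context when reviewing: a minimal, illustrative sketch of how a caller is
expected to use the new helper, based only on the changelog and the hunks
above. It is not part of the patch, and the wrapper name and parameters are
made up for the example:

	#include <linux/io.h>

	/*
	 * Map a firmware-provided region that the kernel must branch into.
	 * ioremap_exec() returns a cacheable mapping without PTE_PXN set, so
	 * execution is permitted; plain ioremap()/ioremap_cache() keep PXN
	 * and the kernel would fault when jumping into the mapping.
	 */
	static void __iomem *map_runtime_code(phys_addr_t code_phys, size_t code_size)
	{
		return ioremap_exec(code_phys, code_size);
	}

As with ioremap_cache() in the hunk above, when the physical address passes
pfn_valid() the helper hands back the existing linear-map alias rather than
creating a second mapping.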