From patchwork Fri Feb 10 17:16:45 2017
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com,
	will.deacon@arm.com, catalin.marinas@arm.com, keescook@chromium.org,
	labbott@fedoraproject.org, james.morse@arm.com
Cc: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, kernel-hardening@lists.openwall.com,
	andre.przywara@arm.com, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Date: Fri, 10 Feb 2017 17:16:45 +0000
Message-Id: <1486747005-15973-5-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1486747005-15973-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1486747005-15973-1-git-send-email-ard.biesheuvel@linaro.org>
Subject: [kernel-hardening] [PATCH 4/4] arm64: mmu: apply strict permissions
 to .init.text and .init.data

To avoid having mappings that are writable and executable at the same
time, split the init region into a
.init.text region that is mapped read-only, and a .init.data region
that is mapped non-executable.

This is possible now that the alternative patching occurs via the
linear mapping, and the linear alias of the init region is always
mapped writable (but never executable).

Since the alternatives descriptions themselves are read-only data,
move those into the .init.text region. The .rela section does not
have to be mapped at all after applying the relocations, so drop
that from the init mapping entirely.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
(Note: an illustrative sketch of patching via the linear alias
follows the diff below.)

 arch/arm64/include/asm/sections.h |  3 +-
 arch/arm64/kernel/vmlinux.lds.S   | 32 ++++++++++++++------
 arch/arm64/mm/init.c              |  3 +-
 arch/arm64/mm/mmu.c               | 12 +++++---
 4 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 4e7e7067afdb..22582819b2e5 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -24,7 +24,8 @@ extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
 extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
 extern char __hyp_text_start[], __hyp_text_end[];
 extern char __idmap_text_start[], __idmap_text_end[];
+extern char __initdata_begin[], __initdata_end[];
+extern char __inittext_begin[], __inittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
-
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index b8deffa9e1bf..fa144d16bc91 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -143,12 +143,27 @@ SECTIONS
 
 	. = ALIGN(SEGMENT_ALIGN);
 	__init_begin = .;
+	__inittext_begin = .;
 
 	INIT_TEXT_SECTION(8)
 	.exit.text : {
 		ARM_EXIT_KEEP(EXIT_TEXT)
 	}
 
+	. = ALIGN(4);
+	.altinstructions : {
+		__alt_instructions = .;
+		*(.altinstructions)
+		__alt_instructions_end = .;
+	}
+	.altinstr_replacement : {
+		*(.altinstr_replacement)
+	}
+
+	. = ALIGN(PAGE_SIZE);
+	__inittext_end = .;
+	__initdata_begin = .;
+
 	.init.data : {
 		INIT_DATA
 		INIT_SETUP(16)
@@ -164,15 +179,14 @@ SECTIONS
 
 	PERCPU_SECTION(L1_CACHE_BYTES)
 
-	. = ALIGN(4);
-	.altinstructions : {
-		__alt_instructions = .;
-		*(.altinstructions)
-		__alt_instructions_end = .;
-	}
-	.altinstr_replacement : {
-		*(.altinstr_replacement)
-	}
+	. = ALIGN(PAGE_SIZE);
+	__initdata_end = .;
+
+	/*
+	 * The .rela section is not covered by __inittext or __initdata since
+	 * there is no reason to keep it mapped when we switch to the permanent
+	 * swapper page tables.
+	 */
 	.rela : ALIGN(8) {
 		*(.rela .rela*)
 	}
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 8a2713018f2f..6a55feaf46c8 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -493,7 +493,8 @@ void free_initmem(void)
 	 * prevents the region from being reused for kernel modules, which
 	 * is not supported by kallsyms.
 	 */
-	unmap_kernel_range((u64)__init_begin, (u64)(__init_end - __init_begin));
+	unmap_kernel_range((u64)__inittext_begin,
+			   (u64)(__initdata_end - __inittext_begin));
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5b0dbb9156ce..e6a4bf2acd59 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -482,12 +482,16 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
  */
 static void __init map_kernel(pgd_t *pgd)
 {
-	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
+	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext,
+				vmlinux_initdata, vmlinux_data;
 
 	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_ROX, &vmlinux_text);
-	map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
-	map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
-			   &vmlinux_init);
+	map_kernel_segment(pgd, __start_rodata, __inittext_begin, PAGE_KERNEL,
+			   &vmlinux_rodata);
+	map_kernel_segment(pgd, __inittext_begin, __inittext_end, PAGE_KERNEL_ROX,
+			   &vmlinux_inittext);
+	map_kernel_segment(pgd, __initdata_begin, __initdata_end, PAGE_KERNEL,
+			   &vmlinux_initdata);
 	map_kernel_segment(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
 
 	if (!pgd_val(*pgd_offset_raw(pgd, FIXADDR_START))) {
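
Illustrative note (not part of the patch): the commit message relies
on the fact that alternatives patching writes instructions through the
writable linear alias of the kernel image rather than through the now
read-only vmalloc mapping of .init.text. The sketch below shows the
general idea. lm_alias() is the kernel macro the arm64 alternatives
code uses for this purpose; patch_insn() and its cache-maintenance
details are hypothetical, modeled loosely on
arch/arm64/kernel/alternative.c.

	#include <linux/mm.h>		/* lm_alias() */
	#include <asm/cacheflush.h>	/* flush_icache_range() */

	/*
	 * Rewrite one instruction that lives in a read-only (ROX)
	 * kernel text mapping, by writing through the RW linear alias
	 * of the same physical page. Hypothetical helper, sketched for
	 * illustration only.
	 */
	static void __init patch_insn(u32 *insn_rx, u32 new_insn)
	{
		u32 *insn_rw = lm_alias(insn_rx);	/* RW alias */

		WRITE_ONCE(*insn_rw, new_insn);

		/* Publish the new instruction to the I-side. */
		flush_icache_range((unsigned long)insn_rx,
				   (unsigned long)insn_rx + sizeof(new_insn));
	}

Because deliberate code modification goes through the linear alias
(which this series keeps writable but never executable), the vmalloc
mapping of .init.text never needs to be writable, which is what makes
the PAGE_KERNEL_ROX permission above possible.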