From patchwork Wed Oct 12 22:32:02 2016
X-Patchwork-Submitter: Laura Abbott <labbott@redhat.com>
X-Patchwork-Id: 9373867
From: Laura Abbott <labbott@redhat.com>
To: AKASHI Takahiro, Mark Rutland, Ard Biesheuvel, David Brown,
	Will Deacon, Catalin Marinas
Cc: Laura Abbott, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Kees Cook,
	kernel-hardening@lists.openwall.com
Date: Wed, 12 Oct 2016 15:32:02 -0700
Message-Id: <1476311522-15381-5-git-send-email-labbott@redhat.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1476311522-15381-1-git-send-email-labbott@redhat.com>
References: <1476311522-15381-1-git-send-email-labbott@redhat.com>
Subject: [kernel-hardening] [PATCHv2 4/4] arm64: dump: Add checking for
 writable and executable pages

Page mappings with full RWX permissions are a security risk. x86 has an
option to walk the page tables and dump any bad pages (see e1a58320a38d
("x86/mm: Warn on W^X mappings")). Add a similar implementation for
arm64.

Signed-off-by: Laura Abbott
Reviewed-by: Mark Rutland
Tested-by: Mark Rutland
---
v2: Check only init_mm, style cleanups, UXN checks, compilation fixes
    for the disabled case.
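
As a rough illustration of the checks added in the diff below (not part of
the patch itself), the stand-alone C sketch here mirrors the predicate used
by note_prot_wx() and note_prot_uxn(): a kernel mapping is flagged as W+X
when it is neither read-only (PTE_RDONLY clear) nor privileged-execute-never
(PTE_PXN clear), and it is separately expected to carry PTE_UXN so user
space can never execute it. The bit positions follow
arch/arm64/include/asm/pgtable-hwdef.h; the helper names and sample
protection values are made up for the example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_RDONLY	(1ULL << 7)	/* AP[2]: clear means writable */
#define PTE_PXN		(1ULL << 53)	/* Privileged execute-never */
#define PTE_UXN		(1ULL << 54)	/* User execute-never */

/* Writable and kernel-executable at the same time: the W+X case. */
static bool is_insecure_wx(uint64_t prot)
{
	if (prot & PTE_RDONLY)
		return false;		/* read-only, so not writable */
	if (prot & PTE_PXN)
		return false;		/* not executable by the kernel */
	return true;
}

/* Kernel mappings should always set UXN. */
static bool is_missing_uxn(uint64_t prot)
{
	return !(prot & PTE_UXN);
}

int main(void)
{
	uint64_t kernel_text = PTE_RDONLY | PTE_UXN;	/* RX, kernel only */
	uint64_t bad_mapping = PTE_UXN;			/* writable + executable */

	printf("kernel_text W+X? %d\n", is_insecure_wx(kernel_text));	/* 0 */
	printf("bad_mapping W+X? %d\n", is_insecure_wx(bad_mapping));	/* 1 */
	printf("bad_mapping missing UXN? %d\n", is_missing_uxn(bad_mapping)); /* 0 */
	return 0;
}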
---
 arch/arm64/Kconfig.debug        | 30 ++++++++++++++++++++++++
 arch/arm64/include/asm/ptdump.h |  8 +++++++
 arch/arm64/mm/dump.c            | 52 +++++++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/mmu.c             |  2 ++
 4 files changed, 92 insertions(+)

diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index 21a5b74..3f627c94 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -42,6 +42,36 @@ config ARM64_RANDOMIZE_TEXT_OFFSET
 	  of TEXT_OFFSET and platforms must not require a specific
 	  value.
 
+config DEBUG_WX
+	bool "Warn on W+X mappings at boot"
+	select ARM64_PTDUMP_CORE
+	---help---
+	  Generate a warning if any W+X mappings are found at boot.
+
+	  This is useful for discovering cases where the kernel is leaving
+	  W+X mappings after applying NX, as such mappings are a security
+	  risk.
+	  This check also includes UXN, which should be set on all kernel
+	  mappings.
+
+	  Look for a message in dmesg output like this:
+
+	    arm64/mm: Checked W+X mappings: passed, no W+X pages found.
+
+	  or like this, if the check failed:
+
+	    arm64/mm: Checked W+X mappings: FAILED, W+X pages found.
+
+	  Note that even if the check fails, your kernel is possibly
+	  still fine, as W+X mappings are not a security hole in
+	  themselves, what they do is that they make the exploitation
+	  of other unfixed kernel bugs easier.
+
+	  There is no runtime or memory usage effect of this option
+	  once the kernel has booted up - it's a one time check.
+
+	  If in doubt, say "Y".
+
 config DEBUG_SET_MODULE_RONX
 	bool "Set loadable kernel module data as NX and text as RO"
 	depends on MODULES

diff --git a/arch/arm64/include/asm/ptdump.h b/arch/arm64/include/asm/ptdump.h
index 8fc0957..6afd847 100644
--- a/arch/arm64/include/asm/ptdump.h
+++ b/arch/arm64/include/asm/ptdump.h
@@ -42,5 +42,13 @@ static inline int ptdump_debugfs_register(struct ptdump_info *info,
 	return 0;
 }
 #endif
+void ptdump_check_wx(void);
+#endif /* CONFIG_ARM64_PTDUMP_CORE */
+
+#ifdef CONFIG_DEBUG_WX
+#define debug_checkwx()	ptdump_check_wx()
+#else
+#define debug_checkwx()	do { } while (0)
+#endif
 
 #endif /* __ASM_PTDUMP_H */

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index bb36649..4913af5 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -74,6 +74,8 @@ struct pg_state {
 	unsigned long start_address;
 	unsigned level;
 	u64 current_prot;
+	bool check_wx;
+	unsigned long wx_pages;
 };
 
 struct prot_bits {
@@ -202,6 +204,35 @@ static void dump_prot(struct pg_state *st, const struct prot_bits *bits,
 	}
 }
 
+static void note_prot_uxn(struct pg_state *st, unsigned long addr)
+{
+	if (!st->check_wx)
+		return;
+
+	if ((st->current_prot & PTE_UXN) == PTE_UXN)
+		return;
+
+	WARN_ONCE(1, "arm64/mm: Found non-UXN mapping at address %p/%pS\n",
+		  (void *)st->start_address, (void *)st->start_address);
+
+	st->wx_pages += (addr - st->start_address) / PAGE_SIZE;
+}
+
+static void note_prot_wx(struct pg_state *st, unsigned long addr)
+{
+	if (!st->check_wx)
+		return;
+	if ((st->current_prot & PTE_RDONLY) == PTE_RDONLY)
+		return;
+	if ((st->current_prot & PTE_PXN) == PTE_PXN)
+		return;
+
+	WARN_ONCE(1, "arm64/mm: Found insecure W+X mapping at address %p/%pS\n",
+		  (void *)st->start_address, (void *)st->start_address);
+
+	st->wx_pages += (addr - st->start_address) / PAGE_SIZE;
+}
+
 static void note_page(struct pg_state *st, unsigned long addr, unsigned level,
 				u64 val)
 {
@@ -219,6 +250,8 @@ static void note_page(struct pg_state *st, unsigned long addr, unsigned level,
 		unsigned long delta;
 
 		if (st->current_prot) {
+			note_prot_uxn(st, addr);
+			note_prot_wx(st, addr);
 			pt_dump_seq_printf(st->seq, "0x%016lx-0x%016lx   ",
 					   st->start_address, addr);
 
@@ -344,6 +377,25 @@ static struct ptdump_info kernel_ptdump_info = {
 	.base_addr	= VA_START,
 };
 
+void ptdump_check_wx(void)
+{
+	struct pg_state st = {
+		.seq = NULL,
+		.marker = (struct addr_marker[]) {
+			{ -1, NULL},
+		},
+		.check_wx = true,
+	};
+
+	walk_pgd(&st, &init_mm, 0);
+	note_page(&st, 0, 0, 0);
+	if (st.wx_pages)
+		pr_info("Checked W+X mappings: FAILED, %lu W+X pages found\n",
+			st.wx_pages);
+	else
+		pr_info("Checked W+X mappings: passed, no W+X pages found\n");
+}
+
 static int ptdump_init(void)
 {
 	ptdump_initialize();

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 05615a3..2cbe2fe 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include <asm/ptdump.h>
 
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 
@@ -396,6 +397,7 @@ void mark_rodata_ro(void)
 	section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
 	create_mapping_late(__pa(__start_rodata), (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
+	debug_checkwx();
 }
 
 static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
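
As a worked example of the accounting done by note_prot_wx() and
note_prot_uxn() above: when note_page() closes a region whose permissions
stayed constant, any offending region contributes
(addr - start_address) / PAGE_SIZE pages to wx_pages, and the total is
reported by ptdump_check_wx(). Below is a minimal stand-alone sketch of
that arithmetic, not part of the patch, assuming 4K pages and made-up
addresses (the kernel code uses unsigned long on arm64):

#include <stdio.h>

#define PAGE_SIZE	4096ULL		/* assumption: 4K pages */

int main(void)
{
	/* Hypothetical region that was found writable and executable. */
	unsigned long long start_address = 0xffff000008000000ULL;
	unsigned long long addr          = 0xffff000008200000ULL;	/* 2 MiB later */
	unsigned long long wx_pages = 0;

	/* Pages spanned by [start_address, addr), as in note_prot_wx(). */
	wx_pages += (addr - start_address) / PAGE_SIZE;

	printf("%llu W+X pages in this region\n", wx_pages);	/* prints 512 */
	return 0;
}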