From patchwork Tue Dec 5 23:51:07 2017
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 10094165
Subject: Re: [kernel-hardening][PATCH v3 3/3] arm: mm: dump: add checking for writable and executable pages
To: Jinbum Park, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Cc: mark.rutland@arm.com, vladimir.murzin@arm.com, keescook@chromium.org, arnd@arndb.de, gregkh@linuxfoundation.org, linux@armlinux.org.uk, afzal.mohd.ma@gmail.com
From: Laura Abbott
Message-ID: <82ab0116-ac67-c80a-73d5-a812e38eb547@redhat.com>
Date: Tue, 5 Dec 2017 15:51:07 -0800
In-Reply-To: <20171204142709.GA3376@pjb1027-Latitude-E5410>
References: <20171204142709.GA3376@pjb1027-Latitude-E5410>

On 12/04/2017 06:27 AM, Jinbum Park wrote:
> Page mappings with full RWX permissions are a security risk.
> x86, arm64 has an option to walk the page tables
> and dump any bad pages.
>
> (1404d6f13e47
> ("arm64: dump: Add checking for writable and exectuable pages"))
> Add a similar implementation for arm.
>
> Signed-off-by: Jinbum Park
> ---
> v3: Reuse pg_level, prot_bits to check ro, nx prot.
>
>  arch/arm/Kconfig.debug        | 27 +++++++++++++++++++++++
>  arch/arm/include/asm/ptdump.h |  8 +++++++
>  arch/arm/mm/dump.c            | 51 +++++++++++++++++++++++++++++++++++++++++++
>  arch/arm/mm/init.c            |  2 ++
>  4 files changed, 88 insertions(+)
>
> diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
> index e7b94db..78a6470 100644
> --- a/arch/arm/Kconfig.debug
> +++ b/arch/arm/Kconfig.debug
> @@ -20,6 +20,33 @@ config ARM_PTDUMP_DEBUGFS
>  	  kernel.
>  	  If in doubt, say "N"
>
> +config DEBUG_WX
> +	bool "Warn on W+X mappings at boot"
> +	select ARM_PTDUMP_CORE
> +	---help---
> +	  Generate a warning if any W+X mappings are found at boot.
> +
> +	  This is useful for discovering cases where the kernel is leaving
> +	  W+X mappings after applying NX, as such mappings are a security risk.
> +
> +	  Look for a message in dmesg output like this:
> +
> +	    arm/mm: Checked W+X mappings: passed, no W+X pages found.
> +
> +	  or like this, if the check failed:
> +
> +	    arm/mm: Checked W+X mappings: FAILED, W+X pages found.
> +
> +	  Note that even if the check fails, your kernel is possibly
> +	  still fine, as W+X mappings are not a security hole in
> +	  themselves, what they do is that they make the exploitation
> +	  of other unfixed kernel bugs easier.
> +
> +	  There is no runtime or memory usage effect of this option
> +	  once the kernel has booted up - it's a one time check.
> +
> +	  If in doubt, say "Y".
> +
>  # RMK wants arm kernels compiled with frame pointers or stack unwinding.
>  # If you know what you are doing and are willing to live without stack
>  # traces, you can get a slightly smaller kernel by setting this option to
> diff --git a/arch/arm/include/asm/ptdump.h b/arch/arm/include/asm/ptdump.h
> index 3a6c0b7..b6a0162 100644
> --- a/arch/arm/include/asm/ptdump.h
> +++ b/arch/arm/include/asm/ptdump.h
> @@ -43,6 +43,14 @@ static inline int ptdump_debugfs_register(struct ptdump_info *info,
>  }
>  #endif /* CONFIG_ARM_PTDUMP_DEBUGFS */
>
> +void ptdump_check_wx(void);
> +
>  #endif /* CONFIG_ARM_PTDUMP_CORE */
>
> +#ifdef CONFIG_DEBUG_WX
> +#define debug_checkwx() ptdump_check_wx()
> +#else
> +#define debug_checkwx() do { } while (0)
> +#endif
> +
>  #endif /* __ASM_PTDUMP_H */
> diff --git a/arch/arm/mm/dump.c b/arch/arm/mm/dump.c
> index 43a2bee..3e2e6f0 100644
> --- a/arch/arm/mm/dump.c
> +++ b/arch/arm/mm/dump.c
> @@ -52,6 +52,8 @@ struct pg_state {
>  	unsigned long start_address;
>  	unsigned level;
>  	u64 current_prot;
> +	bool check_wx;
> +	unsigned long wx_pages;
>  	const char *current_domain;
>  };
>
> @@ -194,6 +196,8 @@ struct pg_level {
>  	const struct prot_bits *bits;
>  	size_t num;
>  	u64 mask;
> +	const struct prot_bits *ro_bit;
> +	const struct prot_bits *nx_bit;
>  };
>
>  static struct pg_level pg_level[] = {
> @@ -203,9 +207,17 @@ struct pg_level {
>  	}, { /* pmd */
>  		.bits = section_bits,
>  		.num = ARRAY_SIZE(section_bits),
> +	#ifdef CONFIG_ARM_LPAE
> +		.ro_bit = section_bits + 1,
> +	#else
> +		.ro_bit = section_bits,
> +	#endif
> +		.nx_bit = section_bits + ARRAY_SIZE(section_bits) - 2,
> +	}, { /* pte */
>  		.bits = pte_bits,
>  		.num = ARRAY_SIZE(pte_bits),
> +		.ro_bit = pte_bits + 1,
> +		.nx_bit = pte_bits + 2,
>  	},
>  };
>

This is better, but computing the ro/nx entries as fixed offsets into the array is still prone to breakage if we add entries.
Maybe something like this on top of yours:

> @@ -226,6 +238,23 @@ static void dump_prot(struct pg_state *st, const struct prot_bits *bits, size_t
>  	}
>  }
>
> +static void note_prot_wx(struct pg_state *st, unsigned long addr)
> +{
> +	if (!st->check_wx)
> +		return;
> +	if ((st->current_prot & pg_level[st->level].ro_bit->mask) ==
> +				pg_level[st->level].ro_bit->val)
> +		return;
> +	if ((st->current_prot & pg_level[st->level].nx_bit->mask) ==
> +				pg_level[st->level].nx_bit->val)
> +		return;
> +
> +	WARN_ONCE(1, "arm/mm: Found insecure W+X mapping at address %p/%pS\n",
> +		  (void *)st->start_address, (void *)st->start_address);
> +

With the new %p hashing, printing just %p is not useful, so drop it and keep only the %pS.

Thanks,
Laura

diff --git a/arch/arm/mm/dump.c b/arch/arm/mm/dump.c
index 3e2e6f06e4d9..572cbc4dc247 100644
--- a/arch/arm/mm/dump.c
+++ b/arch/arm/mm/dump.c
@@ -62,6 +62,8 @@ struct prot_bits {
 	u64 val;
 	const char *set;
 	const char *clear;
+	bool ro_bit;
+	bool x_bit;
 };
 
 static const struct prot_bits pte_bits[] = {
@@ -75,11 +77,13 @@ static const struct prot_bits pte_bits[] = {
 		.val	= L_PTE_RDONLY,
 		.set	= "ro",
 		.clear	= "RW",
+		.ro_bit	= true,
 	}, {
 		.mask	= L_PTE_XN,
 		.val	= L_PTE_XN,
 		.set	= "NX",
 		.clear	= "x ",
+		.x_bit	= true,
 	}, {
 		.mask	= L_PTE_SHARED,
 		.val	= L_PTE_SHARED,
@@ -143,11 +147,13 @@ static const struct prot_bits section_bits[] = {
 		.val	= L_PMD_SECT_RDONLY | PMD_SECT_AP2,
 		.set	= "ro",
 		.clear	= "RW",
+		.ro_bit	= true,
 #elif __LINUX_ARM_ARCH__ >= 6
 	{
 		.mask	= PMD_SECT_APX | PMD_SECT_AP_READ | PMD_SECT_AP_WRITE,
 		.val	= PMD_SECT_APX | PMD_SECT_AP_WRITE,
 		.set	= "  ro",
+		.ro_bit	= true,
 	}, {
 		.mask	= PMD_SECT_APX | PMD_SECT_AP_READ | PMD_SECT_AP_WRITE,
 		.val	= PMD_SECT_AP_WRITE,
@@ -166,6 +172,7 @@ static const struct prot_bits section_bits[] = {
 		.mask	= PMD_SECT_AP_READ | PMD_SECT_AP_WRITE,
 		.val	= 0,
 		.set	= "  ro",
+		.ro_bit	= true,
 	}, {
 		.mask	= PMD_SECT_AP_READ | PMD_SECT_AP_WRITE,
 		.val	= PMD_SECT_AP_WRITE,
@@ -184,6 +191,7 @@ static const struct prot_bits section_bits[] = {
 		.val	= PMD_SECT_XN,
 		.set	= "NX",
 		.clear	= "x ",
+		.x_bit	= true,
 	}, {
 		.mask	= PMD_SECT_S,
 		.val	= PMD_SECT_S,
@@ -207,17 +215,9 @@ static struct pg_level pg_level[] = {
 	}, { /* pmd */
 		.bits = section_bits,
 		.num = ARRAY_SIZE(section_bits),
-	#ifdef CONFIG_ARM_LPAE
-		.ro_bit = section_bits + 1,
-	#else
-		.ro_bit = section_bits,
-	#endif
-		.nx_bit = section_bits + ARRAY_SIZE(section_bits) - 2,
 	}, { /* pte */
 		.bits = pte_bits,
 		.num = ARRAY_SIZE(pte_bits),
-		.ro_bit = pte_bits + 1,
-		.nx_bit = pte_bits + 2,
 	},
 };
 
@@ -410,8 +410,13 @@ static void ptdump_initialize(void)
 
 	for (i = 0; i < ARRAY_SIZE(pg_level); i++)
 		if (pg_level[i].bits)
-			for (j = 0; j < pg_level[i].num; j++)
+			for (j = 0; j < pg_level[i].num; j++) {
 				pg_level[i].mask |= pg_level[i].bits[j].mask;
+				if (pg_level[i].bits[j].ro_bit)
+					pg_level[i].ro_bit = &pg_level[i].bits[j];
+				if (pg_level[i].bits[j].x_bit)
+					pg_level[i].nx_bit = &pg_level[i].bits[j];
+			}
 
 	address_markers[2].start_address = VMALLOC_START;
 }