From patchwork Mon Feb 27 20:42:59 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9594205
Delivered-To: mailing list kernel-hardening@lists.openwall.com
From: Kees Cook
To: kernel-hardening@lists.openwall.com
Cc: Kees Cook, Mark Rutland, Andy Lutomirski, Hoeun Ryu, PaX Team, Emese Revfy, Russell King, x86@kernel.org
Date: Mon, 27 Feb 2017 12:42:59 -0800
Message-Id: <1488228186-110679-2-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1488228186-110679-1-git-send-email-keescook@chromium.org>
References: <1488228186-110679-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [RFC][PATCH 1/8] Introduce rare_write() infrastructure

Several types of data storage exist in the kernel: read-write data (.data,
.bss), read-only data (.rodata), and RO-after-init. This introduces the
infrastructure for another type: write-rarely, which is intended for data
that is either only rarely modified or especially security-sensitive. The
intent is to further reduce the internal attack surface of the kernel by
making this storage read-only when "at rest". This makes such data much
harder to subvert for attackers who have a kernel-write flaw, since they
cannot directly change the memory contents. Variables declared __wr_rare
will be made const when an architecture supports HAVE_ARCH_RARE_WRITE.
To change these variables, either the rare_write() macro can be used, or
multiple uses of __rare_write() can be wrapped in the
rare_write_enable()/rare_write_disable() macros. These macros are handled
by the arch-specific functions that perform the actions needed to write to
otherwise read-only memory. The arch-specific helpers must not allow
non-current CPUs to write the memory area, must run non-preemptibly to
avoid accidentally leaving memory writable, and must be defined inline to
avoid becoming desirable ROP targets for attackers.

Signed-off-by: Kees Cook
---
 arch/Kconfig             | 15 +++++++++++++++
 include/linux/compiler.h | 38 ++++++++++++++++++++++++++++++++++++++
 include/linux/preempt.h  |  6 ++++--
 3 files changed, 57 insertions(+), 2 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 99839c23d453..2446de19f66d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -781,4 +781,19 @@ config VMAP_STACK
 	  the stack to map directly to the KASAN shadow map using a formula
 	  that is incorrect if the stack is in vmalloc space.
 
+config HAVE_ARCH_RARE_WRITE
+	def_bool n
+	help
+	  An arch should select this option if it has defined the functions
+	  __arch_rare_write_map() and __arch_rare_write_unmap() to
+	  respectively enable and disable writing to read-only memory. The
+	  routines must meet the following requirements:
+	  - read-only memory writing must only be available on the current
+	    CPU (to make sure other CPUs can't race to make changes too).
+	  - the routines must be declared inline (to discourage ROP use).
+	  - the routines must not be preemptible (likely they will call
+	    preempt_disable() and preempt_enable_no_resched() respectively).
+	  - the routines must validate expected state (e.g. when enabling
+	    writes, BUG() if writes are already enabled).
+
 source "kernel/gcov/Kconfig"
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index cf0fa5d86059..f95603a8ee72 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -325,6 +325,44 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 	__u.__val;					\
 })
 
+/*
+ * Build "write rarely" infrastructure for flipping memory r/w
+ * on a per-CPU basis.
+ */
+#ifndef CONFIG_HAVE_ARCH_RARE_WRITE
+# define __wr_rare
+# define __wr_rare_type
+# define __rare_write_type(v)	typeof(v)
+# define __rare_write_ptr(v)	(&(v))
+# define __rare_write(__var, __val)	({	\
+	__var = __val;				\
+	__var;					\
+})
+# define rare_write_enable()	do { } while (0)
+# define rare_write_disable()	do { } while (0)
+#else
+# define __wr_rare	__ro_after_init
+# define __wr_rare_type	const
+# define __rare_write_type(v)	typeof((typeof(v))0)
+# define __rare_write_ptr(v)	((__rare_write_type(v) *)&(v))
+# define __rare_write(__var, __val) ({		\
+	__rare_write_type(__var) *__rw_var;	\
+						\
+	__rw_var = __rare_write_ptr(__var);	\
+	*__rw_var = (__val);			\
+	__var;					\
+})
+# define rare_write_enable()	__arch_rare_write_map()
+# define rare_write_disable()	__arch_rare_write_unmap()
+#endif
+
+#define rare_write(__var, __val) ({		\
+	rare_write_enable();			\
+	__rare_write(__var, __val);		\
+	rare_write_disable();			\
+	__var;					\
+})
+
 #endif /* __KERNEL__ */
 
 #endif /* __ASSEMBLY__ */
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 7eeceac52dea..183c1d7a8594 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -237,10 +237,12 @@ do { \
 /*
  * Modules have no business playing preemption tricks.
  */
-#undef sched_preempt_enable_no_resched
-#undef preempt_enable_no_resched
 #undef preempt_enable_no_resched_notrace
 #undef preempt_check_resched
+#ifndef CONFIG_HAVE_ARCH_RARE_WRITE
+#undef sched_preempt_enable_no_resched
+#undef preempt_enable_no_resched
+#endif
 
 #endif
 
 #define preempt_set_need_resched()	\