From patchwork Thu Jul 6 00:27:18 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9827357
Date: Wed, 5 Jul 2017 17:27:18 -0700
From: Kees Cook
To: Andrew Morton
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    "Paul E. McKenney", Ingo Molnar, Josh Triplett, Andy Lutomirski,
    Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
    Sergey Senozhatsky, Helge Deller, Rik van Riel, linux-mm@kvack.org,
    Tycho Andersen, linux-kernel@vger.kernel.org,
    kernel-hardening@lists.openwall.com
Message-ID: <20170706002718.GA102852@beast>
Subject: [kernel-hardening] [PATCH v3] mm: Add SLUB free list pointer obfuscation

This SLUB free list pointer obfuscation code is modified from Brad
Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
on my understanding of the code.
Changes or omissions from the original code are mine and don't reflect the
original grsecurity/PaX code.

This adds a per-cache random value to SLUB caches that is XORed with
their freelist pointer address and value. This adds nearly zero overhead
and frustrates the very common heap overflow exploitation method of
overwriting freelist pointers. A recent example of the attack is written
up here:

http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit

This is based on patches by Daniel Micay, and refactored to minimize the
use of #ifdef.

Under 200-count cycles of "hackbench -g 20 -l 1000" I saw the following
run times:

 before:
	mean 10.11882499999999999995
	variance .03320378329145728642
	stdev .18221905304181911048

 after:
	mean 10.12654000000000000014
	variance .04700556623115577889
	stdev .21680767106160192064

The difference gets lost in the noise, but if the above is to be taken
literally, using CONFIG_SLAB_FREELIST_HARDENED is 0.07% slower.

Suggested-by: Daniel Micay
Cc: Christoph Lameter
Cc: Rik van Riel
Cc: Tycho Andersen
Signed-off-by: Kees Cook
---
v3:
- use static inlines instead of macros (akpm).
v2:
- rename CONFIG_SLAB_HARDENED to CONFIG_SLAB_FREELIST_HARDENED (labbott).
---
 include/linux/slub_def.h |  4 ++++
 init/Kconfig             |  9 +++++++++
 mm/slub.c                | 42 +++++++++++++++++++++++++++++++++++++-----
 3 files changed, 50 insertions(+), 5 deletions(-)
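Not part of the patch, but for anyone skimming the diff below, here is a tiny
standalone userspace sketch of what the obfuscation amounts to (the struct and
function names are made up for illustration, and a fixed constant stands in
for get_random_long()): the value stored in an object's freelist slot is the
real next pointer XORed with the per-cache secret and with the address of the
slot itself, so the same XOR encodes on free and decodes on allocation.

/* cc -Wall -o freelist_demo freelist_demo.c */
#include <stdio.h>

struct fake_cache {
	unsigned long random;	/* stands in for the new kmem_cache->random */
};

/* Same XOR trick as freelist_ptr(): encoding and decoding are one operation. */
static void *fake_freelist_ptr(const struct fake_cache *s, void *ptr,
			       unsigned long ptr_addr)
{
	return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
}

int main(void)
{
	struct fake_cache s = { .random = 0x5a5af00dUL };	/* fixed for the demo */
	int object;	/* pretend "next free object" */
	void *slot;	/* the word that would hold the freelist pointer */

	/* set_freepointer() equivalent: store the obfuscated value. */
	slot = fake_freelist_ptr(&s, &object, (unsigned long)&slot);

	/* get_freepointer() equivalent: the same XOR recovers the pointer. */
	void *decoded = fake_freelist_ptr(&s, slot, (unsigned long)&slot);

	printf("real pointer  : %p\n", (void *)&object);
	printf("stored in slot: %p\n", slot);
	printf("decoded again : %p\n", decoded);
	return 0;
}

Because the slot address takes part in the XOR, the same next pointer encodes
differently at every storage location, and an attacker overwriting the slot
with a constant cannot make it decode to a chosen address without knowing both
the per-cache secret and the slot address.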
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 07ef550c6627..d7990a83b416 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -93,6 +93,10 @@ struct kmem_cache {
 #endif
 #endif
 
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	unsigned long random;
+#endif
+
 #ifdef CONFIG_NUMA
 	/*
 	 * Defragmentation by allocating from a remote node.
diff --git a/init/Kconfig b/init/Kconfig
index 1d3475fc9496..04ee3e507b9e 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1900,6 +1900,15 @@ config SLAB_FREELIST_RANDOM
 	  security feature reduces the predictability of the kernel slab
 	  allocator against heap overflows.
 
+config SLAB_FREELIST_HARDENED
+	bool "Harden slab freelist metadata"
+	depends on SLUB
+	help
+	  Many kernel heap attacks try to target slab cache metadata and
+	  other infrastructure. This option makes minor performance
+	  sacrifices to harden the kernel slab allocator against common
+	  freelist exploit methods.
+
 config SLUB_CPU_PARTIAL
 	default y
 	depends on SLUB && SMP
diff --git a/mm/slub.c b/mm/slub.c
index 57e5156f02be..eae0628d3346 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -34,6 +34,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/random.h>
 
 #include <trace/events/kmem.h>
 
@@ -238,30 +239,58 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
 *			Core slab cache functions
 *******************************************************************/
 
+/*
+ * Returns freelist pointer (ptr). With hardening, this is obfuscated
+ * with an XOR of the address where the pointer is held and a per-cache
+ * random number.
+ */
+static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
+				 unsigned long ptr_addr)
+{
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
+#else
+	return ptr;
+#endif
+}
+
+/* Returns the freelist pointer recorded at location ptr_addr. */
+static inline void *freelist_dereference(const struct kmem_cache *s,
+					 void *ptr_addr)
+{
+	return freelist_ptr(s, (void *)*(unsigned long *)(ptr_addr),
+			    (unsigned long)ptr_addr);
+}
+
 static inline void *get_freepointer(struct kmem_cache *s, void *object)
 {
-	return *(void **)(object + s->offset);
+	return freelist_dereference(s, object + s->offset);
 }
 
 static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 {
-	prefetch(object + s->offset);
+	if (object)
+		prefetch(freelist_dereference(s, object + s->offset));
 }
 
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
+	unsigned long freepointer_addr;
 	void *p;
 
 	if (!debug_pagealloc_enabled())
 		return get_freepointer(s, object);
 
-	probe_kernel_read(&p, (void **)(object + s->offset), sizeof(p));
-	return p;
+	freepointer_addr = (unsigned long)object + s->offset;
+	probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
+	return freelist_ptr(s, p, freepointer_addr);
 }
 
 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 {
-	*(void **)(object + s->offset) = fp;
+	unsigned long freeptr_addr = (unsigned long)object + s->offset;
+
+	*(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr);
 }
 
 /* Loop over all objects in a slab */
@@ -3536,6 +3565,9 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
 	s->reserved = 0;
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	s->random = get_random_long();
+#endif
 
 	if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
 		s->reserved = sizeof(struct rcu_head);
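As a usage note (again, not part of the patch): on a SLUB-based kernel,
turning the new hardening on is just a matter of setting the option in the
kernel configuration, along the lines of this hypothetical .config fragment:

CONFIG_SLUB=y
CONFIG_SLAB_FREELIST_HARDENED=y

The existing CONFIG_SLAB_FREELIST_RANDOM option visible in the Kconfig context
above is a separate hardening feature and is not required for this one.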