From patchwork Wed Aug 2 18:06:09 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9877363
Date: Wed, 2 Aug 2017 11:06:09 -0700
From: Kees Cook
To: Andrew Morton
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 "Paul E. McKenney", Ingo Molnar, Tejun Heo, Andy Lutomirski,
 Nicolas Pitre, linux-mm@kvack.org, Rik van Riel, Tycho Andersen,
 Alexander Popov, linux-kernel@vger.kernel.org,
 kernel-hardening@lists.openwall.com
Message-ID: <20170802180609.GA66807@beast>
Subject: [kernel-hardening] [RESEND][PATCH v4] mm: Add SLUB free list pointer
 obfuscation

This SLUB free list pointer obfuscation code is modified from Brad
Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
on my understanding of the code. Changes or omissions from the original
code are mine and don't reflect the original grsecurity/PaX code.

This adds a per-cache random value to SLUB caches that is XORed with
their freelist pointer address and value. This adds nearly zero overhead
and frustrates the very common heap overflow exploitation method of
overwriting freelist pointers.
A recent example of the attack is written up here:
http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit
and there is a section dedicated to the technique in the book "A Guide
to Kernel Exploitation: Attacking the Core".

This is based on patches by Daniel Micay, and refactored to minimize the
use of #ifdef.

With 200-count cycles of "hackbench -g 20 -l 1000" I saw the following
run times:

 before:
	mean 10.11882499999999999995
	variance .03320378329145728642
	stdev .18221905304181911048

 after:
	mean 10.12654000000000000014
	variance .04700556623115577889
	stdev .21680767106160192064

The difference gets lost in the noise, but if the above is to be taken
literally, using CONFIG_SLAB_FREELIST_HARDENED is 0.07% slower.

Suggested-by: Daniel Micay
Cc: Christoph Lameter
Cc: Rik van Riel
Cc: Tycho Andersen
Cc: Alexander Popov
Signed-off-by: Kees Cook
---
Andrew, can you please take this for -mm? It is a simple solution to a
common heap attack, and several people have already spoken up about how
they'd also like to be using it. (This is a distinct and non-overlapping
protection, separate from the proposed double-free protection, which
would just use the same CONFIG for toggling enablement.)

v4:
- add another reference to how common this exploit technique is.
v3:
- use static inlines instead of macros (akpm).
v2:
- rename CONFIG_SLAB_HARDENED to CONFIG_FREELIST_HARDENED (labbott).
---
 include/linux/slub_def.h |  4 ++++
 init/Kconfig             |  9 +++++++++
 mm/slub.c                | 42 +++++++++++++++++++++++++++++++++++++-----
 3 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index cc0faf3a90be..0783b622311e 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -115,6 +115,10 @@ struct kmem_cache {
 #endif
 #endif
 
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	unsigned long random;
+#endif
+
 #ifdef CONFIG_NUMA
 	/*
 	 * Defragmentation by allocating from a remote node.
diff --git a/init/Kconfig b/init/Kconfig
index 8514b25db21c..3dbb980cb70b 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1571,6 +1571,15 @@ config SLAB_FREELIST_RANDOM
 	  security feature reduces the predictability of the kernel slab
 	  allocator against heap overflows.
 
+config SLAB_FREELIST_HARDENED
+	bool "Harden slab freelist metadata"
+	depends on SLUB
+	help
+	  Many kernel heap attacks try to target slab cache metadata and
+	  other infrastructure. This option makes minor performance
+	  sacrifices to harden the kernel slab allocator against common
+	  freelist exploit methods.
+
 config SLUB_CPU_PARTIAL
 	default y
 	depends on SLUB && SMP
diff --git a/mm/slub.c b/mm/slub.c
index 1d3f9835f4ea..c92d6369f5e0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -34,6 +34,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/random.h>
 
 #include <...>
 
@@ -238,30 +239,58 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
 *			Core slab cache functions
 *******************************************************************/
 
+/*
+ * Returns freelist pointer (ptr). With hardening, this is obfuscated
+ * with an XOR of the address where the pointer is held and a per-cache
+ * random number.
+ */
+static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
+				 unsigned long ptr_addr)
+{
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
+#else
+	return ptr;
+#endif
+}
+
+/* Returns the freelist pointer recorded at location ptr_addr. */
+static inline void *freelist_dereference(const struct kmem_cache *s,
+					 void *ptr_addr)
+{
+	return freelist_ptr(s, (void *)*(unsigned long *)(ptr_addr),
+			    (unsigned long)ptr_addr);
+}
+
 static inline void *get_freepointer(struct kmem_cache *s, void *object)
 {
-	return *(void **)(object + s->offset);
+	return freelist_dereference(s, object + s->offset);
 }
 
 static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 {
-	prefetch(object + s->offset);
+	if (object)
+		prefetch(freelist_dereference(s, object + s->offset));
 }
 
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
+	unsigned long freepointer_addr;
 	void *p;
 
 	if (!debug_pagealloc_enabled())
 		return get_freepointer(s, object);
 
-	probe_kernel_read(&p, (void **)(object + s->offset), sizeof(p));
-	return p;
+	freepointer_addr = (unsigned long)object + s->offset;
+	probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
+	return freelist_ptr(s, p, freepointer_addr);
 }
 
 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 {
-	*(void **)(object + s->offset) = fp;
+	unsigned long freeptr_addr = (unsigned long)object + s->offset;
+
+	*(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr);
 }
 
 /* Loop over all objects in a slab */
@@ -3563,6 +3592,9 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
 	s->reserved = 0;
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	s->random = get_random_long();
+#endif
 
 	if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
 		s->reserved = sizeof(struct rcu_head);