From patchwork Fri Jun 23 01:50:10 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9805525
Delivered-To: mailing list kernel-hardening@lists.openwall.com
Date: Thu, 22 Jun 2017 18:50:10 -0700
From: Kees Cook
To: Christoph Lameter, Andrew Morton
Cc: Laura Abbott, Daniel Micay, Pekka Enberg, David Rientjes,
    Joonsoo Kim, "Paul E. McKenney", Ingo Molnar, Josh Triplett,
    Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
    Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
    Rik van Riel, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kernel-hardening@lists.openwall.com
Message-ID: <20170623015010.GA137429@beast>
Subject: [kernel-hardening] [PATCH v2] mm: Add SLUB free list pointer obfuscation

This SLUB free list pointer obfuscation code is modified from Brad
Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
on my understanding of the code. Changes or omissions from the original
code are mine and don't reflect the original grsecurity/PaX code.

This adds a per-cache random value to SLUB caches that is XORed with
their freelist pointers. This adds nearly zero overhead and frustrates
the very common heap overflow exploitation method of overwriting
freelist pointers.
A recent example of the attack is written up here:
http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit

This is based on patches by Daniel Micay, and refactored to avoid lots
of #ifdef code.

Suggested-by: Daniel Micay
Signed-off-by: Kees Cook
---
v2:
- renamed Kconfig to SLAB_FREELIST_HARDENED; labbott.
---
 include/linux/slub_def.h |  4 ++++
 init/Kconfig             |  9 +++++++++
 mm/slub.c                | 32 +++++++++++++++++++++++++++-----
 3 files changed, 40 insertions(+), 5 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 07ef550c6627..d7990a83b416 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -93,6 +93,10 @@ struct kmem_cache {
 #endif
 #endif
 
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	unsigned long random;
+#endif
+
 #ifdef CONFIG_NUMA
 	/*
 	 * Defragmentation by allocating from a remote node.
diff --git a/init/Kconfig b/init/Kconfig
index 1d3475fc9496..04ee3e507b9e 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1900,6 +1900,15 @@ config SLAB_FREELIST_RANDOM
 	  security feature reduces the predictability of the kernel slab
 	  allocator against heap overflows.
 
+config SLAB_FREELIST_HARDENED
+	bool "Harden slab freelist metadata"
+	depends on SLUB
+	help
+	  Many kernel heap attacks try to target slab cache metadata and
+	  other infrastructure. This option makes minor performance
+	  sacrifices to harden the kernel slab allocator against common
+	  freelist exploit methods.
+
 config SLUB_CPU_PARTIAL
 	default y
 	depends on SLUB && SMP
diff --git a/mm/slub.c b/mm/slub.c
index 57e5156f02be..590e7830aaed 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -34,6 +34,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/random.h>
 
 #include <trace/events/kmem.h>
@@ -238,30 +239,50 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
  *			Core slab cache functions
  *******************************************************************/
 
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+# define initialize_random(s)					\
+	do {							\
+		s->random = get_random_long();			\
+	} while (0)
+# define FREEPTR_VAL(ptr, ptr_addr, s)				\
+	(void *)((unsigned long)(ptr) ^ s->random ^ (ptr_addr))
+#else
+# define initialize_random(s)		do { } while (0)
+# define FREEPTR_VAL(ptr, addr, s)	((void *)(ptr))
+#endif
+#define FREELIST_ENTRY(ptr_addr, s)				\
+	FREEPTR_VAL(*(unsigned long *)(ptr_addr),		\
+		    (unsigned long)ptr_addr, s)
+
 static inline void *get_freepointer(struct kmem_cache *s, void *object)
 {
-	return *(void **)(object + s->offset);
+	return FREELIST_ENTRY(object + s->offset, s);
 }
 
 static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 {
-	prefetch(object + s->offset);
+	if (object)
+		prefetch(FREELIST_ENTRY(object + s->offset, s));
 }
 
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
+	unsigned long freepointer_addr;
 	void *p;
 
 	if (!debug_pagealloc_enabled())
 		return get_freepointer(s, object);
 
-	probe_kernel_read(&p, (void **)(object + s->offset), sizeof(p));
-	return p;
+	freepointer_addr = (unsigned long)object + s->offset;
+	probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
+	return FREEPTR_VAL(p, freepointer_addr, s);
 }
 
 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 {
-	*(void **)(object + s->offset) = fp;
+	unsigned long freeptr_addr = (unsigned long)object + s->offset;
+
+	*(void **)freeptr_addr = FREEPTR_VAL(fp, freeptr_addr, s);
 }
 
 /* Loop over all objects in a slab */
@@ -3536,6 +3557,7 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
 	s->reserved = 0;
+	initialize_random(s);
 
 	if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
 		s->reserved = sizeof(struct rcu_head);