From patchwork Tue Jun 20 03:01:12 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9798427
Date: Mon, 19 Jun 2017 20:01:12 -0700
From: Kees Cook
To: Christoph Lameter
Cc: Daniel Micay, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, "Paul E. McKenney", Ingo Molnar, Andy Lutomirski,
	Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
	Sergey Senozhatsky, Helge Deller, Rik van Riel, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Message-ID: <20170620030112.GA140256@beast>
Subject: [kernel-hardening] [PATCH] mm: Add SLUB free list pointer obfuscation

This SLUB free list pointer obfuscation code is modified from Brad
Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
on my understanding of the code.
Changes or omissions from the original code are mine and don't reflect
the original grsecurity/PaX code.

This adds a per-cache random value to SLUB caches that is XORed with
their freelist pointers. This adds nearly zero overhead and frustrates
the very common heap overflow exploitation method of overwriting
freelist pointers. A recent example of the attack is written up here:

http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit

This is based on patches by Daniel Micay, and refactored to avoid lots
of #ifdef code.

Suggested-by: Daniel Micay
Signed-off-by: Kees Cook
---
 include/linux/slub_def.h |  4 ++++
 init/Kconfig             | 10 ++++++++++
 mm/slub.c                | 32 +++++++++++++++++++++++++++-----
 3 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 07ef550c6627..0258d6d74e9c 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -93,6 +93,10 @@ struct kmem_cache {
 #endif
 #endif
 
+#ifdef CONFIG_SLAB_HARDENED
+	unsigned long random;
+#endif
+
 #ifdef CONFIG_NUMA
 	/*
 	 * Defragmentation by allocating from a remote node.
diff --git a/init/Kconfig b/init/Kconfig
index 1d3475fc9496..eb91082546bf 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1900,6 +1900,16 @@ config SLAB_FREELIST_RANDOM
 	  security feature reduces the predictability of the kernel slab
 	  allocator against heap overflows.
 
+config SLAB_HARDENED
+	bool "Harden slab cache infrastructure"
+	default y
+	depends on SLAB_FREELIST_RANDOM && SLUB
+	help
+	  Many kernel heap attacks try to target slab cache metadata and
+	  other infrastructure. This option makes minor performance
+	  sacrifices to harden the kernel slab allocator against common
+	  exploit methods.
+
 config SLUB_CPU_PARTIAL
 	default y
 	depends on SLUB && SMP
diff --git a/mm/slub.c b/mm/slub.c
index 57e5156f02be..ffede2e0c5c1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -34,6 +34,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/random.h>
 
 #include <trace/events/kmem.h>
 
@@ -238,30 +239,50 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
  *			Core slab cache functions
  *******************************************************************/
 
+#ifdef CONFIG_SLAB_HARDENED
+# define initialize_random(s) \
+	do { \
+		s->random = get_random_long(); \
+	} while (0)
+# define FREEPTR_VAL(ptr, ptr_addr, s) \
+	(void *)((unsigned long)(ptr) ^ s->random ^ (ptr_addr))
+#else
+# define initialize_random(s) do { } while (0)
+# define FREEPTR_VAL(ptr, addr, s) ((void *)(ptr))
+#endif
+#define FREELIST_ENTRY(ptr_addr, s) \
+	FREEPTR_VAL(*(unsigned long *)(ptr_addr), \
+		    (unsigned long)ptr_addr, s)
+
 static inline void *get_freepointer(struct kmem_cache *s, void *object)
 {
-	return *(void **)(object + s->offset);
+	return FREELIST_ENTRY(object + s->offset, s);
 }
 
 static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 {
-	prefetch(object + s->offset);
+	if (object)
+		prefetch(FREELIST_ENTRY(object + s->offset, s));
 }
 
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
+	unsigned long freepointer_addr;
 	void *p;
 
 	if (!debug_pagealloc_enabled())
 		return get_freepointer(s, object);
 
-	probe_kernel_read(&p, (void **)(object + s->offset), sizeof(p));
-	return p;
+	freepointer_addr = (unsigned long)object + s->offset;
+	probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
+	return FREEPTR_VAL(p, freepointer_addr, s);
 }
 
 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 {
-	*(void **)(object + s->offset) = fp;
+	unsigned long freeptr_addr = (unsigned long)object + s->offset;
+
+	*(void **)freeptr_addr = FREEPTR_VAL(fp, freeptr_addr, s);
 }
 
 /* Loop over all objects in a slab */
@@ -3536,6 +3557,7 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
 	s->reserved = 0;
+	initialize_random(s);
 
 	if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
 		s->reserved = sizeof(struct rcu_head);
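
For readers following along outside the kernel tree, below is a minimal
user-space sketch (not part of the patch) of the XOR scheme FREEPTR_VAL()
implements: the stored free pointer is XORed with a per-cache secret and
with the address it is stored at, so an attacker needs both to forge a
useful freelist entry. The names demo_cache, obfuscate_ptr() and
deobfuscate_ptr() are invented for illustration, and the fixed secret here
stands in for the get_random_long() value the patch uses.

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for struct kmem_cache: only the per-cache secret matters here. */
struct demo_cache {
	unsigned long random;
};

/* Encode a free pointer before storing it, as set_freepointer() does. */
static void *obfuscate_ptr(const struct demo_cache *s, void *ptr,
			   unsigned long ptr_addr)
{
	return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
}

/* The same XOR decodes the stored value, as get_freepointer() does. */
static void *deobfuscate_ptr(const struct demo_cache *s, void *stored,
			     unsigned long ptr_addr)
{
	return (void *)((unsigned long)stored ^ s->random ^ ptr_addr);
}

int main(void)
{
	/* Fixed secret for the demo; the patch uses get_random_long(). */
	struct demo_cache s = { .random = 0xc0dec0deUL };
	void *next_free = malloc(64);	/* the "next free object" */
	void *slot;			/* the freelist pointer storage slot */
	unsigned long slot_addr = (unsigned long)&slot;

	slot = obfuscate_ptr(&s, next_free, slot_addr);
	printf("raw pointer  : %p\n", next_free);
	printf("stored value : %p\n", slot);
	printf("round trip ok: %d\n",
	       deobfuscate_ptr(&s, slot, slot_addr) == next_free);

	free(next_free);
	return 0;
}

The stored value is what an overflow would overwrite; without knowing
s->random (and the slot address), writing a chosen target address into the
slot no longer yields that address back from the decode step.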