From patchwork Tue May 24 21:15:22 2016
X-Patchwork-Submitter: Thomas Garnier <thgarnie@google.com>
X-Patchwork-Id: 9134291
From: Thomas Garnier <thgarnie@google.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, "Paul E. McKenney",
McKenney" , Pranith Kumar , David Howells , Tejun Heo , Johannes Weiner , David Woodhouse , Thomas Garnier , Petr Mladek , Kees Cook Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, gthelen@google.com, kernel-hardening@lists.openwall.com Date: Tue, 24 May 2016 14:15:22 -0700 Message-Id: <1464124523-43051-2-git-send-email-thgarnie@google.com> X-Mailer: git-send-email 2.8.0.rc3.226.g39d4020 In-Reply-To: <1464124523-43051-1-git-send-email-thgarnie@google.com> References: <1464124523-43051-1-git-send-email-thgarnie@google.com> Subject: [kernel-hardening] [RFC v2 1/2] mm: Reorganize SLAB freelist randomization X-Virus-Scanned: ClamAV using ClamSMTP This commit reorganizes the previous SLAB freelist randomization to prepare for the SLUB implementation. It moves functions that will be shared to slab_common. It also move the definition of freelist_idx_t in the slab_def header so a similar type can be used for all common functions. The entropy functions are changed to align with the SLUB implementation, now using get_random_* functions. Signed-off-by: Thomas Garnier --- Based on 0e01df100b6bf22a1de61b66657502a6454153c5 --- include/linux/slab_def.h | 11 +++++++- mm/slab.c | 68 ++---------------------------------------------- mm/slab.h | 16 ++++++++++++ mm/slab_common.c | 48 ++++++++++++++++++++++++++++++++++ 4 files changed, 76 insertions(+), 67 deletions(-) diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h index 8694f7a..e05a871 100644 --- a/include/linux/slab_def.h +++ b/include/linux/slab_def.h @@ -3,6 +3,15 @@ #include +#define FREELIST_BYTE_INDEX (((PAGE_SIZE >> BITS_PER_BYTE) \ + <= SLAB_OBJ_MIN_SIZE) ? 1 : 0) + +#if FREELIST_BYTE_INDEX +typedef unsigned char freelist_idx_t; +#else +typedef unsigned short freelist_idx_t; +#endif + /* * Definitions unique to the original Linux SLAB allocator. */ @@ -81,7 +90,7 @@ struct kmem_cache { #endif #ifdef CONFIG_SLAB_FREELIST_RANDOM - void *random_seq; + freelist_idx_t *random_seq; #endif struct kmem_cache_node *node[MAX_NUMNODES]; diff --git a/mm/slab.c b/mm/slab.c index cc8bbc1..8e32562 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -157,15 +157,6 @@ #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN #endif -#define FREELIST_BYTE_INDEX (((PAGE_SIZE >> BITS_PER_BYTE) \ - <= SLAB_OBJ_MIN_SIZE) ? 
Based on 0e01df100b6bf22a1de61b66657502a6454153c5
---
 include/linux/slab_def.h | 11 +++++++-
 mm/slab.c                | 68 ++----------------------------------------------
 mm/slab.h                | 16 ++++++++++++
 mm/slab_common.c         | 48 ++++++++++++++++++++++++++++++++++
 4 files changed, 76 insertions(+), 67 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 8694f7a..e05a871 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -3,6 +3,15 @@
 
 #include <linux/reciprocal_div.h>
 
+#define FREELIST_BYTE_INDEX (((PAGE_SIZE >> BITS_PER_BYTE) \
+				<= SLAB_OBJ_MIN_SIZE) ? 1 : 0)
+
+#if FREELIST_BYTE_INDEX
+typedef unsigned char freelist_idx_t;
+#else
+typedef unsigned short freelist_idx_t;
+#endif
+
 /*
  * Definitions unique to the original Linux SLAB allocator.
  */
@@ -81,7 +90,7 @@ struct kmem_cache {
 #endif
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
-	void *random_seq;
+	freelist_idx_t *random_seq;
 #endif
 
 	struct kmem_cache_node *node[MAX_NUMNODES];
diff --git a/mm/slab.c b/mm/slab.c
index cc8bbc1..8e32562 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -157,15 +157,6 @@
 #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
 #endif
 
-#define FREELIST_BYTE_INDEX (((PAGE_SIZE >> BITS_PER_BYTE) \
-				<= SLAB_OBJ_MIN_SIZE) ? 1 : 0)
-
-#if FREELIST_BYTE_INDEX
-typedef unsigned char freelist_idx_t;
-#else
-typedef unsigned short freelist_idx_t;
-#endif
-
 #define SLAB_OBJ_MAX_NUM ((1 << sizeof(freelist_idx_t) * BITS_PER_BYTE) - 1)
 
 /*
@@ -1236,61 +1227,6 @@ static void __init set_up_node(struct kmem_cache *cachep, int index)
 	}
 }
 
-#ifdef CONFIG_SLAB_FREELIST_RANDOM
-static void freelist_randomize(struct rnd_state *state, freelist_idx_t *list,
-			size_t count)
-{
-	size_t i;
-	unsigned int rand;
-
-	for (i = 0; i < count; i++)
-		list[i] = i;
-
-	/* Fisher-Yates shuffle */
-	for (i = count - 1; i > 0; i--) {
-		rand = prandom_u32_state(state);
-		rand %= (i + 1);
-		swap(list[i], list[rand]);
-	}
-}
-
-/* Create a random sequence per cache */
-static int cache_random_seq_create(struct kmem_cache *cachep, gfp_t gfp)
-{
-	unsigned int seed, count = cachep->num;
-	struct rnd_state state;
-
-	if (count < 2)
-		return 0;
-
-	/* If it fails, we will just use the global lists */
-	cachep->random_seq = kcalloc(count, sizeof(freelist_idx_t), gfp);
-	if (!cachep->random_seq)
-		return -ENOMEM;
-
-	/* Get best entropy at this stage */
-	get_random_bytes_arch(&seed, sizeof(seed));
-	prandom_seed_state(&state, seed);
-
-	freelist_randomize(&state, cachep->random_seq, count);
-	return 0;
-}
-
-/* Destroy the per-cache random freelist sequence */
-static void cache_random_seq_destroy(struct kmem_cache *cachep)
-{
-	kfree(cachep->random_seq);
-	cachep->random_seq = NULL;
-}
-#else
-static inline int cache_random_seq_create(struct kmem_cache *cachep, gfp_t gfp)
-{
-	return 0;
-}
-static inline void cache_random_seq_destroy(struct kmem_cache *cachep) { }
-#endif /* CONFIG_SLAB_FREELIST_RANDOM */
-
-
 /*
  * Initialisation. Called after the page allocator have been initialised and
  * before smp_init().
@@ -2554,7 +2490,7 @@ static bool freelist_state_initialize(union freelist_init_state *state,
 	unsigned int rand;
 
 	/* Use best entropy available to define a random shift */
-	get_random_bytes_arch(&rand, sizeof(rand));
+	rand = get_random_int();
 
 	/* Use a random state if the pre-computed list is not available */
 	if (!cachep->random_seq) {
@@ -3979,7 +3915,7 @@ static int enable_cpucache(struct kmem_cache *cachep, gfp_t gfp)
 	int shared = 0;
 	int batchcount = 0;
 
-	err = cache_random_seq_create(cachep, gfp);
+	err = cache_random_seq_create(cachep, cachep->num, gfp);
 	if (err)
 		goto end;
diff --git a/mm/slab.h b/mm/slab.h
index dedb1a9..2c33bf3 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -42,6 +42,7 @@ struct kmem_cache {
 #include
 #include
 #include
+#include <linux/random.h>
 
 /*
  * State of the slab allocator.
@@ -464,4 +465,19 @@ int memcg_slab_show(struct seq_file *m, void *p);
 
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr);
 
+#ifdef CONFIG_SLAB_FREELIST_RANDOM
+void freelist_randomize(struct rnd_state *state, freelist_idx_t *list,
+			size_t count);
+int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
+			gfp_t gfp);
+void cache_random_seq_destroy(struct kmem_cache *cachep);
+#else
+static inline int cache_random_seq_create(struct kmem_cache *cachep,
+					unsigned int count, gfp_t gfp)
+{
+	return 0;
+}
+static inline void cache_random_seq_destroy(struct kmem_cache *cachep) { }
+#endif /* CONFIG_SLAB_FREELIST_RANDOM */
+
 #endif /* MM_SLAB_H */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index a65dad7..657ecf1 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1142,6 +1142,54 @@ int memcg_slab_show(struct seq_file *m, void *p)
 }
 #endif
 
+#ifdef CONFIG_SLAB_FREELIST_RANDOM
+/* Randomize a generic freelist */
+void freelist_randomize(struct rnd_state *state, freelist_idx_t *list,
+			size_t count)
+{
+	size_t i;
+	unsigned int rand;
+
+	for (i = 0; i < count; i++)
+		list[i] = i;
+
+	/* Fisher-Yates shuffle */
+	for (i = count - 1; i > 0; i--) {
+		rand = prandom_u32_state(state);
+		rand %= (i + 1);
+		swap(list[i], list[rand]);
+	}
+}
+
+/* Create a random sequence per cache */
+int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
+			gfp_t gfp)
+{
+	struct rnd_state state;
+
+	if (count < 2)
+		return 0;
+
+	/* If it fails, we will just use the global lists */
+	cachep->random_seq = kcalloc(count, sizeof(freelist_idx_t), gfp);
+	if (!cachep->random_seq)
+		return -ENOMEM;
+
+	/* Get best entropy at this stage */
+	prandom_seed_state(&state, get_random_long());
+
+	freelist_randomize(&state, cachep->random_seq, count);
+	return 0;
+}
+
+/* Destroy the per-cache random freelist sequence */
+void cache_random_seq_destroy(struct kmem_cache *cachep)
+{
+	kfree(cachep->random_seq);
+	cachep->random_seq = NULL;
+}
+#endif /* CONFIG_SLAB_FREELIST_RANDOM */
+
 /*
  * slabinfo_op - iterator that generates /proc/slabinfo
  *