From patchwork Thu Apr 18 15:42:06 2019
Subject: [PATCH 1/3] mm: security: introduce the init_allocations=1 boot option
From: Alexander Potapenko
To: akpm@linux-foundation.org, cl@linux.com, dvyukov@google.com,
    keescook@chromium.org, labbott@redhat.com
Cc: linux-mm@kvack.org, linux-security-module@vger.kernel.org,
    kernel-hardening@lists.openwall.com
Date: Thu, 18 Apr 2019 17:42:06 +0200
Message-Id: <20190418154208.131118-2-glider@google.com>
In-Reply-To: <20190418154208.131118-1-glider@google.com>
References: <20190418154208.131118-1-glider@google.com>

This option makes it possible to initialize newly allocated pages and heap
objects with zeroes. This is needed to prevent possible information leaks
and to make control-flow bugs that depend on uninitialized values more
deterministic.

Initialization is done at allocation time, at the places where checks for
__GFP_ZERO are already performed. Slab caches with constructors are not
initialized, to preserve their semantics. To reduce the runtime cost of
checking cachep->ctor on every allocation, the call to memset() is replaced
with a call to cachep->poison_fn, which only writes to the object if it
actually needs to be initialized.

For kernel testing purposes, filling allocations with a nonzero pattern
would be more suitable, but that may require platform-specific code. To have
a simple baseline we decided to start with zero-initialization.

No performance optimizations have been done yet to avoid double
initialization of memory regions.
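
The decision logic added below in include/linux/mm.h is compact enough to
model outside the kernel. Here is a standalone C sketch (illustrative only;
the flag value and the plain bool standing in for the static key are
simplifications of this sketch, not kernel definitions) of how
init_allocations overrides the usual __GFP_ZERO check:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int gfp_t;
#define __GFP_ZERO 0x100u		/* illustrative value, not the kernel's */

static bool init_allocations;		/* stands in for the jump-label key */

static bool want_init_memory(gfp_t flags)
{
	if (init_allocations)		/* boot option enabled: always init */
		return true;
	return flags & __GFP_ZERO;	/* otherwise, the old __GFP_ZERO rule */
}

int main(void)
{
	printf("%d\n", want_init_memory(0));	/* 0: no initialization */
	init_allocations = true;		/* as if booted with init_allocations=1 */
	printf("%d\n", want_init_memory(0));	/* 1: zeroed anyway */
	return 0;
}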

Signed-off-by: Alexander Potapenko
Cc: Andrew Morton
Cc: James Morris
Cc: "Serge E. Hallyn"
Cc: Nick Desaulniers
Cc: Kostya Serebryany
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Sandeep Patil
Cc: Laura Abbott
Cc: Randy Dunlap
Cc: Jann Horn
Cc: Mark Rutland
Cc: Qian Cai
Cc: Vlastimil Babka
Cc: linux-mm@kvack.org
Cc: linux-security-module@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
 drivers/infiniband/core/uverbs_ioctl.c |  2 +-
 include/linux/mm.h                     |  8 ++++++++
 include/linux/slab_def.h               |  1 +
 include/linux/slub_def.h               |  1 +
 kernel/kexec_core.c                    |  2 +-
 mm/dmapool.c                           |  2 +-
 mm/page_alloc.c                        | 18 +++++++++++++++++-
 mm/slab.c                              | 12 ++++++------
 mm/slab.h                              |  1 +
 mm/slab_common.c                       | 15 +++++++++++++++
 mm/slob.c                              |  2 +-
 mm/slub.c                              |  8 ++++----
 net/core/sock.c                        |  2 +-
 13 files changed, 58 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c
index e1379949e663..f31234906be2 100644
--- a/drivers/infiniband/core/uverbs_ioctl.c
+++ b/drivers/infiniband/core/uverbs_ioctl.c
@@ -127,7 +127,7 @@ __malloc void *_uverbs_alloc(struct uverbs_attr_bundle *bundle, size_t size,
 	res = (void *)pbundle->internal_buffer + pbundle->internal_used;
 	pbundle->internal_used =
 		ALIGN(new_used, sizeof(*pbundle->internal_buffer));
-	if (flags & __GFP_ZERO)
+	if (want_init_memory(flags))
 		memset(res, 0, size);
 	return res;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 76769749b5a5..b38b71a5efaa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2597,6 +2597,14 @@ static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
 #endif
 
+DECLARE_STATIC_KEY_FALSE(init_allocations);
+static inline bool want_init_memory(gfp_t flags)
+{
+	if (static_branch_unlikely(&init_allocations))
+		return true;
+	return flags & __GFP_ZERO;
+}
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 extern bool _debug_pagealloc_enabled;
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 9a5eafb7145b..9dfe9eb639d7 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -37,6 +37,7 @@ struct kmem_cache {
 
 	/* constructor func */
 	void (*ctor)(void *obj);
+	void (*poison_fn)(struct kmem_cache *c, void *object);
 
 	/* 4) cache creation/removal */
 	const char *name;
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d2153789bd9f..afb928cb7c20 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -99,6 +99,7 @@ struct kmem_cache {
 	gfp_t allocflags;		/* gfp flags to use on each alloc */
 	int refcount;			/* Refcount for slab cache destroy */
 	void (*ctor)(void *);
+	void (*poison_fn)(struct kmem_cache *c, void *object);
 	unsigned int inuse;		/* Offset to metadata */
 	unsigned int align;		/* Alignment */
 	unsigned int red_left_pad;	/* Left redzone padding size */
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d7140447be75..be84f5f95c97 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -315,7 +315,7 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
 		arch_kexec_post_alloc_pages(page_address(pages), count,
 					    gfp_mask);
 
-		if (gfp_mask & __GFP_ZERO)
+		if (want_init_memory(gfp_mask))
 			for (i = 0; i < count; i++)
 				clear_highpage(pages + i);
 	}
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 76a160083506..796e38160d39 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -381,7 +381,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 #endif
 	spin_unlock_irqrestore(&pool->lock, flags);
 
-	if (mem_flags & __GFP_ZERO)
+	if (want_init_memory(mem_flags))
 		memset(retval, 0, pool->size);
 
 	return retval;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d96ca5bc555b..e2a21d866ac9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -133,6 +133,22 @@ unsigned long totalcma_pages __read_mostly;
 
 int percpu_pagelist_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
+bool want_init_allocations __read_mostly;
+EXPORT_SYMBOL(want_init_allocations);
+DEFINE_STATIC_KEY_FALSE(init_allocations);
+
+static int __init early_init_allocations(char *buf)
+{
+	int ret;
+
+	if (!buf)
+		return -EINVAL;
+	ret = kstrtobool(buf, &want_init_allocations);
+	if (want_init_allocations)
+		static_branch_enable(&init_allocations);
+	return ret;
+}
+early_param("init_allocations", early_init_allocations);
 
 /*
  * A cached value of the page's pageblock's migratetype, used when the page is
@@ -2014,7 +2030,7 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
 
 	post_alloc_hook(page, order, gfp_flags);
 
-	if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))
+	if (!free_pages_prezeroed() && want_init_memory(gfp_flags))
 		for (i = 0; i < (1 << order); i++)
 			clear_highpage(page + i);
diff --git a/mm/slab.c b/mm/slab.c
index 47a380a486ee..dcc5b73cf767 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3331,8 +3331,8 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
 
-	if (unlikely(flags & __GFP_ZERO) && ptr)
-		memset(ptr, 0, cachep->object_size);
+	if (unlikely(want_init_memory(flags)) && ptr)
+		cachep->poison_fn(cachep, ptr);
 
 	slab_post_alloc_hook(cachep, flags, 1, &ptr);
 	return ptr;
@@ -3388,8 +3388,8 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
 
-	if (unlikely(flags & __GFP_ZERO) && objp)
-		memset(objp, 0, cachep->object_size);
+	if (unlikely(want_init_memory(flags)) && objp)
+		cachep->poison_fn(cachep, objp);
 
 	slab_post_alloc_hook(cachep, flags, 1, &objp);
 	return objp;
@@ -3596,9 +3596,9 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);
 
 	/* Clear memory outside IRQ disabled section */
-	if (unlikely(flags & __GFP_ZERO))
+	if (unlikely(want_init_memory(flags)))
 		for (i = 0; i < size; i++)
-			memset(p[i], 0, s->object_size);
+			s->poison_fn(s, p[i]);
 
 	slab_post_alloc_hook(s, flags, size, p);
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
diff --git a/mm/slab.h b/mm/slab.h
index 43ac818b8592..3b541e8970ee 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -27,6 +27,7 @@ struct kmem_cache {
 	const char *name;	/* Slab name for sysfs */
 	int refcount;		/* Use counter */
 	void (*ctor)(void *);	/* Called on object slot creation */
+	void (*poison_fn)(struct kmem_cache *c, void *object);
 	struct list_head list;	/* List of all slab caches on the system */
 };
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 58251ba63e4a..37810114b2ea 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -360,6 +360,16 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 	return NULL;
 }
 
+static void poison_zero(struct kmem_cache *c, void *object)
+{
+	memset(object, 0, c->object_size);
+}
+
+static void poison_dont(struct kmem_cache *c, void *object)
+{
+	/* Do nothing. Use for caches with constructors. */
+}
+
 static struct kmem_cache *create_cache(const char *name,
 		unsigned int object_size, unsigned int align,
 		slab_flags_t flags, unsigned int useroffset,
@@ -381,6 +391,10 @@ static struct kmem_cache *create_cache(const char *name,
 	s->size = s->object_size = object_size;
 	s->align = align;
 	s->ctor = ctor;
+	if (ctor)
+		s->poison_fn = poison_dont;
+	else
+		s->poison_fn = poison_zero;
 	s->useroffset = useroffset;
 	s->usersize = usersize;
 
@@ -974,6 +988,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name,
 	s->align = calculate_alignment(flags, ARCH_KMALLOC_MINALIGN, size);
 	s->useroffset = useroffset;
 	s->usersize = usersize;
+	s->poison_fn = poison_zero;
 
 	slab_init_memcg_params(s);
diff --git a/mm/slob.c b/mm/slob.c
index 307c2c9feb44..18981a71e962 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -330,7 +330,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 		BUG_ON(!b);
 		spin_unlock_irqrestore(&slob_lock, flags);
 	}
-	if (unlikely(gfp & __GFP_ZERO))
+	if (unlikely(want_init_memory(gfp)))
 		memset(b, 0, size);
 	return b;
 }
diff --git a/mm/slub.c b/mm/slub.c
index d30ede89f4a6..e4efb6575510 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2750,8 +2750,8 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 		stat(s, ALLOC_FASTPATH);
 	}
 
-	if (unlikely(gfpflags & __GFP_ZERO) && object)
-		memset(object, 0, s->object_size);
+	if (unlikely(want_init_memory(gfpflags)) && object)
+		s->poison_fn(s, object);
 
 	slab_post_alloc_hook(s, gfpflags, 1, &object);
 
@@ -3172,11 +3172,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	local_irq_enable();
 
 	/* Clear memory outside IRQ disabled fastpath loop */
-	if (unlikely(flags & __GFP_ZERO)) {
+	if (unlikely(want_init_memory(flags))) {
 		int j;
 
 		for (j = 0; j < i; j++)
-			memset(p[j], 0, s->object_size);
+			s->poison_fn(s, p[j]);
 	}
 
 	/* memcg and kmem_cache debug support */
diff --git a/net/core/sock.c b/net/core/sock.c
index 782343bb925b..99b288a19b39 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1601,7 +1601,7 @@ static struct sock *sk_prot_alloc(struct proto *prot, gfp_t priority,
 		sk = kmem_cache_alloc(slab, priority & ~__GFP_ZERO);
 		if (!sk)
 			return sk;
-		if (priority & __GFP_ZERO)
+		if (want_init_memory(priority))
 			sk_prot_clear_nulls(sk, prot->obj_size);
 	} else
 		sk = kmalloc(prot->obj_size, priority);

From patchwork Thu Apr 18 15:42:07 2019
Subject: [PATCH 2/3] gfp: mm: introduce __GFP_NOINIT
From: Alexander Potapenko
To: akpm@linux-foundation.org, cl@linux.com, dvyukov@google.com,
    keescook@chromium.org, labbott@redhat.com
Cc: linux-mm@kvack.org, linux-security-module@vger.kernel.org,
    kernel-hardening@lists.openwall.com
Date: Thu, 18 Apr 2019 17:42:07 +0200
Message-Id: <20190418154208.131118-3-glider@google.com>
In-Reply-To: <20190418154208.131118-1-glider@google.com>
References: <20190418154208.131118-1-glider@google.com>

When passed to an allocator (either the page allocator or SL[AOU]B),
__GFP_NOINIT tells it not to initialize the requested memory when the
init_allocations boot option is enabled. This is useful in cases where the
newly allocated memory is going to be initialized by the caller right away.

__GFP_NOINIT effectively defeats the hardening against information leaks
provided by the init_allocations feature, so it should be used with caution.

This patch also adds __GFP_NOINIT to the alloc_pages() calls made by
SL[AOU]B, so that the pages backing slab objects are not cleared twice: the
objects themselves are already initialized at allocation time.
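
To illustrate the intended use, here is a sketch of a hypothetical caller
(copy_blob(), src and len are invented for this example, not part of the
series) that can safely opt out of initialization because it overwrites
every byte of the allocation immediately:

static void *copy_blob(const void *src, size_t len)
{
	/*
	 * The buffer is fully overwritten right after allocation, so
	 * skipping the automatic zeroing loses no hardening here.
	 */
	void *buf = kmalloc(len, GFP_KERNEL | __GFP_NOINIT);

	if (!buf)
		return NULL;
	memcpy(buf, src, len);	/* every byte written before first use */
	return buf;
}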

Signed-off-by: Alexander Potapenko
Cc: Andrew Morton
Cc: Masahiro Yamada
Cc: James Morris
Cc: "Serge E. Hallyn"
Cc: Nick Desaulniers
Cc: Kostya Serebryany
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Sandeep Patil
Cc: Laura Abbott
Cc: Randy Dunlap
Cc: Jann Horn
Cc: Mark Rutland
Cc: Qian Cai
Cc: Vlastimil Babka
Cc: linux-mm@kvack.org
Cc: linux-security-module@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
 include/linux/gfp.h | 6 +++++-
 include/linux/mm.h  | 2 +-
 kernel/kexec_core.c | 2 +-
 mm/slab.c           | 2 +-
 mm/slob.c           | 1 +
 mm/slub.c           | 1 +
 6 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index fdab7de7490d..66d7f5604fe2 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -44,6 +44,7 @@ struct vm_area_struct;
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
+#define ___GFP_NOINIT		0x1000000u
 /* If the above are modified, __GFP_BITS_SHIFT may need updating */
 
 /*
@@ -208,16 +209,19 @@ struct vm_area_struct;
  * %__GFP_COMP address compound page metadata.
  *
  * %__GFP_ZERO returns a zeroed page on success.
+ *
+ * %__GFP_NOINIT requests non-initialized memory from the underlying allocator.
  */
 #define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
+#define __GFP_NOINIT	((__force gfp_t)___GFP_NOINIT)
 
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (25)
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b38b71a5efaa..8f03334a9033 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2601,7 +2601,7 @@ DECLARE_STATIC_KEY_FALSE(init_allocations);
 static inline bool want_init_memory(gfp_t flags)
 {
 	if (static_branch_unlikely(&init_allocations))
-		return true;
+		return !(flags & __GFP_NOINIT);
 	return flags & __GFP_ZERO;
 }
 
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index be84f5f95c97..f9d1f1236cd0 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -302,7 +302,7 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
 	struct page *pages;
 
-	pages = alloc_pages(gfp_mask & ~__GFP_ZERO, order);
+	pages = alloc_pages((gfp_mask & ~__GFP_ZERO) | __GFP_NOINIT, order);
 	if (pages) {
 		unsigned int count, i;
diff --git a/mm/slab.c b/mm/slab.c
index dcc5b73cf767..762cb0e7bcc1 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1393,7 +1393,7 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 	struct page *page;
 	int nr_pages;
 
-	flags |= cachep->allocflags;
+	flags |= (cachep->allocflags | __GFP_NOINIT);
 
 	page = __alloc_pages_node(nodeid, flags, cachep->gfporder);
 	if (!page) {
diff --git a/mm/slob.c b/mm/slob.c
index 18981a71e962..867d2d68a693 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -192,6 +192,7 @@ static void *slob_new_pages(gfp_t gfp, int order, int node)
 {
 	void *page;
 
+	gfp |= __GFP_NOINIT;
 #ifdef CONFIG_NUMA
 	if (node != NUMA_NO_NODE)
 		page = __alloc_pages_node(node, gfp, order);
diff --git a/mm/slub.c b/mm/slub.c
index e4efb6575510..a79b4cb768a2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1493,6 +1493,7 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 	struct page *page;
 	unsigned int order = oo_order(oo);
 
+	flags |= __GFP_NOINIT;
 	if (node == NUMA_NO_NODE)
 		page = alloc_pages(flags, order);
 	else

From patchwork Thu Apr 18 15:42:08 2019
Subject: [PATCH 3/3] RFC: net: apply __GFP_NOINIT to AF_UNIX sk_buff allocations
From: Alexander Potapenko
To: akpm@linux-foundation.org, cl@linux.com, dvyukov@google.com,
    keescook@chromium.org, labbott@redhat.com
Cc: linux-mm@kvack.org, linux-security-module@vger.kernel.org,
    kernel-hardening@lists.openwall.com
Date: Thu, 18 Apr 2019 17:42:08 +0200
Message-Id: <20190418154208.131118-4-glider@google.com>
In-Reply-To: <20190418154208.131118-1-glider@google.com>
References: <20190418154208.131118-1-glider@google.com>

Add sock_alloc_send_pskb_noinit(), which is similar to
sock_alloc_send_pskb() but allocates with __GFP_NOINIT, and use it for
AF_UNIX sk_buff allocations. This reduces the slowdown that
init_allocations causes on hackbench from 9% to 0.1%.
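
Skipping initialization is acceptable here because the AF_UNIX sendmsg()
paths fill the freshly allocated skb with user data before it is queued
anywhere. A simplified sketch of the unix_dgram_sendmsg() flow (error
handling and SCM details omitted; the helpers after the allocation are
paraphrased from the surrounding kernel code, not part of this patch):

	skb = sock_alloc_send_pskb_noinit(sk, len - data_len, data_len,
					  msg->msg_flags & MSG_DONTWAIT, &err,
					  PAGE_ALLOC_COSTLY_ORDER);
	if (skb == NULL)
		goto out;

	/*
	 * Every byte of the payload is overwritten with user data before
	 * the skb becomes visible to a receiver, so pages obtained with
	 * __GFP_NOINIT are never observable while uninitialized.
	 */
	skb_put(skb, len - data_len);
	skb->data_len = data_len;
	skb->len = len;
	err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, len);
	if (err)
		goto out_free;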

Signed-off-by: Alexander Potapenko
Cc: Andrew Morton
Cc: Masahiro Yamada
Cc: James Morris
Cc: "Serge E. Hallyn"
Cc: Nick Desaulniers
Cc: Kostya Serebryany
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Sandeep Patil
Cc: Laura Abbott
Cc: Randy Dunlap
Cc: Jann Horn
Cc: Mark Rutland
Cc: Qian Cai
Cc: Vlastimil Babka
Cc: Eric Dumazet
Cc: David S. Miller
Cc: linux-mm@kvack.org
Cc: linux-security-module@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
 include/net/sock.h |  5 +++++
 net/core/sock.c    | 29 +++++++++++++++++++++++++----
 net/unix/af_unix.c | 13 +++++++------
 3 files changed, 37 insertions(+), 10 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 8de5ee258b93..37fcdda23884 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1612,6 +1612,11 @@ struct sk_buff *sock_alloc_send_skb(struct sock *sk, unsigned long size,
 struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
 				     unsigned long data_len, int noblock,
 				     int *errcode, int max_page_order);
+struct sk_buff *sock_alloc_send_pskb_noinit(struct sock *sk,
+					    unsigned long header_len,
+					    unsigned long data_len,
+					    int noblock, int *errcode,
+					    int max_page_order);
 void *sock_kmalloc(struct sock *sk, int size, gfp_t priority);
 void sock_kfree_s(struct sock *sk, void *mem, int size);
 void sock_kzfree_s(struct sock *sk, void *mem, int size);
diff --git a/net/core/sock.c b/net/core/sock.c
index 99b288a19b39..0a2af1e1fa1c 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2187,9 +2187,11 @@ static long sock_wait_for_wmem(struct sock *sk, long timeo)
  *	Generic send/receive buffer handlers
  */
 
-struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
-				     unsigned long data_len, int noblock,
-				     int *errcode, int max_page_order)
+struct sk_buff *sock_alloc_send_pskb_internal(struct sock *sk,
+					      unsigned long header_len,
+					      unsigned long data_len,
+					      int noblock, int *errcode,
+					      int max_page_order, gfp_t gfp)
 {
 	struct sk_buff *skb;
 	long timeo;
@@ -2218,7 +2220,7 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
 		timeo = sock_wait_for_wmem(sk, timeo);
 	}
 	skb = alloc_skb_with_frags(header_len, data_len, max_page_order,
-				   errcode, sk->sk_allocation);
+				   errcode, sk->sk_allocation | gfp);
 	if (skb)
 		skb_set_owner_w(skb, sk);
 	return skb;
@@ -2229,8 +2231,27 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
 	*errcode = err;
 	return NULL;
 }
+
+struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
+				     unsigned long data_len, int noblock,
+				     int *errcode, int max_page_order)
+{
+	return sock_alloc_send_pskb_internal(sk, header_len, data_len,
+			noblock, errcode, max_page_order, /*gfp*/0);
+}
 EXPORT_SYMBOL(sock_alloc_send_pskb);
 
+struct sk_buff *sock_alloc_send_pskb_noinit(struct sock *sk,
+					    unsigned long header_len,
+					    unsigned long data_len,
+					    int noblock, int *errcode,
+					    int max_page_order)
+{
+	return sock_alloc_send_pskb_internal(sk, header_len, data_len,
+			noblock, errcode, max_page_order, /*gfp*/__GFP_NOINIT);
+}
+EXPORT_SYMBOL(sock_alloc_send_pskb_noinit);
+
 struct sk_buff *sock_alloc_send_skb(struct sock *sk, unsigned long size,
 				    int noblock, int *errcode)
 {
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index ddb838a1b74c..9a45824c3c48 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1627,9 +1627,9 @@ static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
 		BUILD_BUG_ON(SKB_MAX_ALLOC < PAGE_SIZE);
 	}
 
-	skb = sock_alloc_send_pskb(sk, len - data_len, data_len,
-				   msg->msg_flags & MSG_DONTWAIT, &err,
-				   PAGE_ALLOC_COSTLY_ORDER);
+	skb = sock_alloc_send_pskb_noinit(sk, len - data_len, data_len,
+					  msg->msg_flags & MSG_DONTWAIT, &err,
+					  PAGE_ALLOC_COSTLY_ORDER);
 	if (skb == NULL)
 		goto out;
 
@@ -1824,9 +1824,10 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
 
 	data_len = min_t(size_t, size, PAGE_ALIGN(data_len));
 
-	skb = sock_alloc_send_pskb(sk, size - data_len, data_len,
-				   msg->msg_flags & MSG_DONTWAIT, &err,
-				   get_order(UNIX_SKB_FRAGS_SZ));
+	skb = sock_alloc_send_pskb_noinit(sk, size - data_len, data_len,
+					  msg->msg_flags & MSG_DONTWAIT,
+					  &err,
+					  get_order(UNIX_SKB_FRAGS_SZ));
 	if (!skb)
 		goto out_err;