From patchwork Thu May 23 12:42:14 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 10957551
Date: Thu, 23 May 2019 14:42:14 +0200
In-Reply-To: <20190523124216.40208-1-glider@google.com>
Message-Id: <20190523124216.40208-2-glider@google.com>
References: <20190523124216.40208-1-glider@google.com>
Subject: [PATCH v3 1/3] mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options
From: Alexander Potapenko
To: akpm@linux-foundation.org, cl@linux.com, keescook@chromium.org
Cc: kernel-hardening@lists.openwall.com, linux-mm@kvack.org,
    linux-security-module@vger.kernel.org, Masahiro Yamada, Michal Hocko,
    James Morris, "Serge E. Hallyn", Nick Desaulniers, Kostya Serebryany,
    Dmitry Vyukov, Sandeep Patil, Laura Abbott, Randy Dunlap, Jann Horn,
    Mark Rutland

The new options are needed to prevent possible information leaks and make
control-flow bugs that depend on uninitialized values more deterministic.

init_on_alloc=1 makes the kernel initialize newly allocated pages and heap
objects with zeroes. Initialization is done at allocation time at the
places where checks for __GFP_ZERO are performed.

init_on_free=1 makes the kernel initialize freed pages and heap objects
with zeroes upon their deletion. This helps to ensure sensitive data
doesn't leak via use-after-free accesses.

Both init_on_alloc=1 and init_on_free=1 guarantee that the allocator
returns zeroed memory. The only exception is slab caches with
constructors: those are never zero-initialized, to preserve their
semantics.

For the SLOB allocator, init_on_free=1 also implies init_on_alloc=1 behavior, i.e.
objects are zeroed at both allocation and deallocation time. This is done
because SLOB may otherwise return multiple freelist pointers in the
allocated object. For SLAB and SLUB, enabling either init_on_alloc or
init_on_free leads to one-time initialization of the object.

Both init_on_alloc and init_on_free default to zero, but those defaults
can be overridden with CONFIG_INIT_ON_ALLOC_DEFAULT_ON and
CONFIG_INIT_ON_FREE_DEFAULT_ON.

Slowdown for the new features compared to init_on_free=0, init_on_alloc=0:

 hackbench, init_on_free=1:  +7.62% sys time (st.err 0.74%)
 hackbench, init_on_alloc=1: +7.75% sys time (st.err 2.14%)

 Linux build with -j12, init_on_free=1:  +8.38% wall time (st.err 0.39%)
 Linux build with -j12, init_on_free=1:  +24.42% sys time (st.err 0.52%)
 Linux build with -j12, init_on_alloc=1: -0.13% wall time (st.err 0.42%)
 Linux build with -j12, init_on_alloc=1: +0.57% sys time (st.err 0.40%)

The slowdown for init_on_free=0, init_on_alloc=0 compared to the baseline
is within the standard error.

The new features are also going to pave the way for hardware memory
tagging (e.g. arm64's MTE), which will require both on_alloc and on_free
hooks to set the tags for heap objects. With MTE, tagging will have the
same cost as memory initialization.

Although init_on_free is rather costly, there are paranoid use-cases where
in-memory data lifetime is desired to be minimized. There are various
arguments for/against the realism of the associated threat models, but
given that we'll need the infrastructure for MTE anyway, and there are
people who want wipe-on-free behavior no matter what the performance cost,
it seems reasonable to include it in this series.

Signed-off-by: Alexander Potapenko
To: Andrew Morton
To: Christoph Lameter
To: Kees Cook
Cc: Masahiro Yamada
Cc: Michal Hocko
Cc: James Morris
Cc: "Serge E.
Hallyn"
Cc: Nick Desaulniers
Cc: Kostya Serebryany
Cc: Dmitry Vyukov
Cc: Sandeep Patil
Cc: Laura Abbott
Cc: Randy Dunlap
Cc: Jann Horn
Cc: Mark Rutland
Cc: linux-mm@kvack.org
Cc: linux-security-module@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
v2:
 - unconditionally initialize pages in kernel_init_free_pages()
 - comment from Randy Dunlap: drop 'default false' lines from
   Kconfig.hardening
v3:
 - don't call kernel_init_free_pages() from memblock_free_pages()
 - adopted some Kees' comments for the patch description
---
 .../admin-guide/kernel-parameters.txt  |  8 +++
 drivers/infiniband/core/uverbs_ioctl.c |  2 +-
 include/linux/mm.h                     | 22 +++++++
 kernel/kexec_core.c                    |  2 +-
 mm/dmapool.c                           |  2 +-
 mm/page_alloc.c                        | 63 ++++++++++++++++---
 mm/slab.c                              | 16 ++++-
 mm/slab.h                              | 16 +++++
 mm/slob.c                              | 22 ++++++-
 mm/slub.c                              | 27 ++++++--
 net/core/sock.c                        |  2 +-
 security/Kconfig.hardening             | 14 +++++
 12 files changed, 175 insertions(+), 21 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 52e6fbb042cc..68fb6fa41cc1 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1673,6 +1673,14 @@
 	initrd=		[BOOT]	Specify the location of the initial ramdisk

+	init_on_alloc=	[MM]	Fill newly allocated pages and heap objects with
+			zeroes.
+			Format: 0 | 1
+			Default set by CONFIG_INIT_ON_ALLOC_DEFAULT_ON.
+	init_on_free=	[MM]	Fill freed pages and heap objects with zeroes.
+			Format: 0 | 1
+			Default set by CONFIG_INIT_ON_FREE_DEFAULT_ON.
+
 	init_pkru=	[x86]	Specify the default memory protection keys rights
 			register contents for all processes. 0x55555554 by
 			default (disallow access to all but pkey 0).
			Can
diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c
index 829b0c6944d8..61758201d9b2 100644
--- a/drivers/infiniband/core/uverbs_ioctl.c
+++ b/drivers/infiniband/core/uverbs_ioctl.c
@@ -127,7 +127,7 @@ __malloc void *_uverbs_alloc(struct uverbs_attr_bundle *bundle, size_t size,
 	res = (void *)pbundle->internal_buffer + pbundle->internal_used;
 	pbundle->internal_used =
 		ALIGN(new_used, sizeof(*pbundle->internal_buffer));
-	if (flags & __GFP_ZERO)
+	if (want_init_on_alloc(flags))
 		memset(res, 0, size);
 	return res;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0e8834ac32b7..7733a341c0c4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2685,6 +2685,28 @@ static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
 #endif

+#ifdef CONFIG_INIT_ON_ALLOC_DEFAULT_ON
+DECLARE_STATIC_KEY_TRUE(init_on_alloc);
+#else
+DECLARE_STATIC_KEY_FALSE(init_on_alloc);
+#endif
+static inline bool want_init_on_alloc(gfp_t flags)
+{
+	if (static_branch_unlikely(&init_on_alloc))
+		return true;
+	return flags & __GFP_ZERO;
+}
+
+#ifdef CONFIG_INIT_ON_FREE_DEFAULT_ON
+DECLARE_STATIC_KEY_TRUE(init_on_free);
+#else
+DECLARE_STATIC_KEY_FALSE(init_on_free);
+#endif
+static inline bool want_init_on_free(void)
+{
+	return static_branch_unlikely(&init_on_free);
+}
+
 extern bool _debug_pagealloc_enabled;

 static inline bool debug_pagealloc_enabled(void)
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index fd5c95ff9251..2f75dd0d0d81 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -315,7 +315,7 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
 		arch_kexec_post_alloc_pages(page_address(pages), count, gfp_mask);

-		if (gfp_mask & __GFP_ZERO)
+		if (want_init_on_alloc(gfp_mask))
 			for (i = 0; i < count; i++)
 				clear_highpage(pages + i);
 	}
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 76a160083506..493d151067cb 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -381,7 +381,7
@@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 #endif
 	spin_unlock_irqrestore(&pool->lock, flags);

-	if (mem_flags & __GFP_ZERO)
+	if (want_init_on_alloc(mem_flags))
 		memset(retval, 0, pool->size);

 	return retval;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3b13d3914176..14ded6620aa0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -135,6 +135,48 @@ unsigned long totalcma_pages __read_mostly;
 int percpu_pagelist_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;

+#ifdef CONFIG_INIT_ON_ALLOC_DEFAULT_ON
+DEFINE_STATIC_KEY_TRUE(init_on_alloc);
+#else
+DEFINE_STATIC_KEY_FALSE(init_on_alloc);
+#endif
+#ifdef CONFIG_INIT_ON_FREE_DEFAULT_ON
+DEFINE_STATIC_KEY_TRUE(init_on_free);
+#else
+DEFINE_STATIC_KEY_FALSE(init_on_free);
+#endif
+
+static int __init early_init_on_alloc(char *buf)
+{
+	int ret;
+	bool bool_result;
+
+	if (!buf)
+		return -EINVAL;
+	ret = kstrtobool(buf, &bool_result);
+	if (bool_result)
+		static_branch_enable(&init_on_alloc);
+	else
+		static_branch_disable(&init_on_alloc);
+	return ret;
+}
+early_param("init_on_alloc", early_init_on_alloc);
+
+static int __init early_init_on_free(char *buf)
+{
+	int ret;
+	bool bool_result;
+
+	if (!buf)
+		return -EINVAL;
+	ret = kstrtobool(buf, &bool_result);
+	if (bool_result)
+		static_branch_enable(&init_on_free);
+	else
+		static_branch_disable(&init_on_free);
+	return ret;
+}
+early_param("init_on_free", early_init_on_free);

 /*
  * A cached value of the page's pageblock's migratetype, used when the page is
@@ -1089,6 +1131,14 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 	return ret;
 }

+static void kernel_init_free_pages(struct page *page, int numpages)
+{
+	int i;
+
+	for (i = 0; i < numpages; i++)
+		clear_highpage(page + i);
+}
+
 static __always_inline bool free_pages_prepare(struct page *page,
 					unsigned int order, bool check_free)
 {
@@ -1141,6 +1191,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	}
 	arch_free_page(page,
			order);
 	kernel_poison_pages(page, 1 << order, 0);
+	if (want_init_on_free())
+		kernel_init_free_pages(page, 1 << order);

 	if (debug_pagealloc_enabled())
 		kernel_map_pages(page, 1 << order, 0);
@@ -2019,8 +2071,8 @@ static inline int check_new_page(struct page *page)

 static inline bool free_pages_prezeroed(void)
 {
-	return IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
-		page_poisoning_enabled();
+	return (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
+		page_poisoning_enabled()) || want_init_on_free();
 }

 #ifdef CONFIG_DEBUG_VM
@@ -2074,13 +2126,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 							unsigned int alloc_flags)
 {
-	int i;
-
 	post_alloc_hook(page, order, gfp_flags);

-	if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))
-		for (i = 0; i < (1 << order); i++)
-			clear_highpage(page + i);
+	if (!free_pages_prezeroed() && want_init_on_alloc(gfp_flags))
+		kernel_init_free_pages(page, 1 << order);

 	if (order && (gfp_flags & __GFP_COMP))
 		prep_compound_page(page, order);
diff --git a/mm/slab.c b/mm/slab.c
index 2915d912e89a..d42eb11f8f50 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1853,6 +1853,14 @@ static bool set_objfreelist_slab_cache(struct kmem_cache *cachep,

 	cachep->num = 0;

+	/*
+	 * If slab auto-initialization on free is enabled, store the freelist
+	 * off-slab, so that its contents don't end up in one of the allocated
+	 * objects.
+	 */
+	if (unlikely(slab_want_init_on_free(cachep)))
+		return false;
+
 	if (cachep->ctor || flags & SLAB_TYPESAFE_BY_RCU)
 		return false;

@@ -3293,7 +3301,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);

-	if (unlikely(flags & __GFP_ZERO) && ptr)
+	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
 		memset(ptr, 0, cachep->object_size);

 	slab_post_alloc_hook(cachep, flags, 1, &ptr);
@@ -3350,7 +3358,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);

-	if (unlikely(flags & __GFP_ZERO) && objp)
+	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
 		memset(objp, 0, cachep->object_size);

 	slab_post_alloc_hook(cachep, flags, 1, &objp);
@@ -3471,6 +3479,8 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
 	struct array_cache *ac = cpu_cache_get(cachep);

 	check_irq_off();
+	if (unlikely(slab_want_init_on_free(cachep)))
+		memset(objp, 0, cachep->object_size);
 	kmemleak_free_recursive(objp, cachep->flags);
 	objp = cache_free_debugcheck(cachep, objp, caller);

@@ -3558,7 +3568,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);

 	/* Clear memory outside IRQ disabled section */
-	if (unlikely(flags & __GFP_ZERO))
+	if (unlikely(slab_want_init_on_alloc(flags, s)))
 		for (i = 0; i < size; i++)
 			memset(p[i], 0, s->object_size);

diff --git a/mm/slab.h b/mm/slab.h
index 43ac818b8592..24ae887359b8 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -524,4 +524,20 @@ static inline int cache_random_seq_create(struct kmem_cache *cachep,
 static inline void cache_random_seq_destroy(struct kmem_cache *cachep) { }
 #endif /* CONFIG_SLAB_FREELIST_RANDOM */

+static inline bool slab_want_init_on_alloc(gfp_t flags, struct kmem_cache *c)
+{
+	if
	    (static_branch_unlikely(&init_on_alloc))
+		return !(c->ctor);
+	else
+		return flags & __GFP_ZERO;
+}
+
+static inline bool slab_want_init_on_free(struct kmem_cache *c)
+{
+	if (static_branch_unlikely(&init_on_free))
+		return !(c->ctor);
+	else
+		return false;
+}
+
 #endif /* MM_SLAB_H */
diff --git a/mm/slob.c b/mm/slob.c
index 84aefd9b91ee..1b565ee7f479 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -212,6 +212,19 @@ static void slob_free_pages(void *b, int order)
 	free_pages((unsigned long)b, order);
 }

+/*
+ * init_on_free=1 also implies initialization at allocation time.
+ * This is because newly allocated objects may contain freelist pointers
+ * somewhere in the middle.
+ */
+static inline bool slob_want_init_on_alloc(gfp_t flags, struct kmem_cache *c)
+{
+	if (static_branch_unlikely(&init_on_alloc) ||
+	    static_branch_unlikely(&init_on_free))
+		return c ? (!c->ctor) : true;
+	return flags & __GFP_ZERO;
+}
+
 /*
  * slob_page_alloc() - Allocate a slob block within a given slob_page sp.
  * @sp: Page to look in.
@@ -353,8 +366,6 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 		BUG_ON(!b);
 		spin_unlock_irqrestore(&slob_lock, flags);
 	}
-	if (unlikely(gfp & __GFP_ZERO))
-		memset(b, 0, size);
 	return b;
 }

@@ -389,6 +400,9 @@ static void slob_free(void *block, int size)
 		return;
 	}

+	if (unlikely(want_init_on_free()))
+		memset(block, 0, size);
+
 	if (!slob_page_free(sp)) {
 		/* This slob page is about to become partially free. Easy!
 */
 		sp->units = units;
@@ -484,6 +498,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 	}

 	kmemleak_alloc(ret, size, 1, gfp);
+	if (unlikely(slob_want_init_on_alloc(gfp, 0)))
+		memset(ret, 0, size);
 	return ret;
 }

@@ -582,6 +598,8 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 		WARN_ON_ONCE(flags & __GFP_ZERO);
 		c->ctor(b);
 	}
+	if (unlikely(slob_want_init_on_alloc(flags, c)))
+		memset(b, 0, c->size);

 	kmemleak_alloc_recursive(b, c->size, 1, c->flags, flags);
 	return b;
diff --git a/mm/slub.c b/mm/slub.c
index cd04dbd2b5d0..5fcb3f71cf84 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1424,6 +1424,19 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
 static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 					   void **head, void **tail)
 {
+
+	void *object;
+	void *next = *head;
+	void *old_tail = *tail ? *tail : *head;
+
+	if (slab_want_init_on_free(s))
+		do {
+			object = next;
+			next = get_freepointer(s, object);
+			memset(object, 0, s->size);
+			set_freepointer(s, object, next);
+		} while (object != old_tail);
+
 	/*
 	 * Compiler cannot detect this function can be removed if slab_free_hook()
 	 * evaluates to nothing. Thus, catch all relevant config debug options here.
@@ -1433,9 +1446,7 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 	defined(CONFIG_DEBUG_OBJECTS_FREE) ||	\
 	defined(CONFIG_KASAN)

-	void *object;
-	void *next = *head;
-	void *old_tail = *tail ? *tail : *head;
+	next = *head;

 	/* Head and tail of the reconstructed freelist */
 	*head = NULL;
@@ -2741,8 +2752,14 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 		prefetch_freepointer(s, next_object);
 		stat(s, ALLOC_FASTPATH);
 	}
+	/*
+	 * If the object has been wiped upon free, make sure it's fully
+	 * initialized by zeroing out freelist pointer.
+	 */
+	if (slab_want_init_on_free(s))
+		*(void **)object = 0;

-	if (unlikely(gfpflags & __GFP_ZERO) && object)
+	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
 		memset(object, 0, s->object_size);

 	slab_post_alloc_hook(s, gfpflags, 1, &object);
@@ -3163,7 +3180,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	local_irq_enable();

 	/* Clear memory outside IRQ disabled fastpath loop */
-	if (unlikely(flags & __GFP_ZERO)) {
+	if (unlikely(slab_want_init_on_alloc(flags, s))) {
 		int j;

 		for (j = 0; j < i; j++)
diff --git a/net/core/sock.c b/net/core/sock.c
index 75b1c950b49f..9ceb90c875bc 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1602,7 +1602,7 @@ static struct sock *sk_prot_alloc(struct proto *prot, gfp_t priority,
 		sk = kmem_cache_alloc(slab, priority & ~__GFP_ZERO);
 		if (!sk)
 			return sk;
-		if (priority & __GFP_ZERO)
+		if (want_init_on_alloc(priority))
 			sk_prot_clear_nulls(sk, prot->obj_size);
 	} else
 		sk = kmalloc(prot->obj_size, priority);
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 0a1d4ca314f4..87883e3e3c2a 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -159,6 +159,20 @@ config STACKLEAK_RUNTIME_DISABLE
 	  runtime to control kernel stack erasing for kernels built with
 	  CONFIG_GCC_PLUGIN_STACKLEAK.

+config INIT_ON_ALLOC_DEFAULT_ON
+	bool "Set init_on_alloc=1 by default"
+	help
+	  Enable init_on_alloc=1 by default, making the kernel initialize every
+	  page and heap allocation with zeroes.
+	  init_on_alloc can be overridden via command line.
+
+config INIT_ON_FREE_DEFAULT_ON
+	bool "Set init_on_free=1 by default"
+	help
+	  Enable init_on_free=1 by default, making the kernel initialize freed
+	  pages and slab memory with zeroes.
+	  init_on_free can be overridden via command line.
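The decision helpers this patch introduces (want_init_on_alloc(), slab_want_init_on_alloc(), slab_want_init_on_free()) can be modeled in plain userspace C to show how the static keys, __GFP_ZERO, and the constructor exception interact. This is a hedged sketch, not the kernel code: the static keys become ordinary booleans, __GFP_ZERO an arbitrary flag bit, and struct kmem_cache a minimal stand-in; all `model_*` names are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins -- the real kernel uses static keys and gfp_t. */
#define MODEL_GFP_ZERO 0x1u

bool model_init_on_alloc; /* models the init_on_alloc static key */
bool model_init_on_free;  /* models the init_on_free static key */

struct model_cache {
	void (*ctor)(void *); /* non-NULL models a slab cache constructor */
};

/* Page allocator: wipe if the key is on, or the caller asked for zeroes. */
bool model_want_init_on_alloc(unsigned int flags)
{
	if (model_init_on_alloc)
		return true;
	return flags & MODEL_GFP_ZERO;
}

/* Slab allocator: caches with constructors are exempt from auto-init. */
bool model_slab_want_init_on_alloc(unsigned int flags,
				   const struct model_cache *c)
{
	if (model_init_on_alloc)
		return !c->ctor;
	return flags & MODEL_GFP_ZERO;
}

bool model_slab_want_init_on_free(const struct model_cache *c)
{
	if (model_init_on_free)
		return !c->ctor;
	return false;
}

void model_ctor(void *obj) { (void)obj; }
```

With both keys off, only __GFP_ZERO requests are zeroed, matching the old behavior; turning a key on widens the set of wiped allocations, except for constructor-backed caches, whose contents must keep their constructed state across a free/alloc cycle.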
+
 endmenu

 endmenu

From patchwork Thu May 23 12:42:15 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 10957555
Date: Thu, 23 May 2019 14:42:15 +0200
In-Reply-To: <20190523124216.40208-1-glider@google.com>
Message-Id: <20190523124216.40208-3-glider@google.com>
References: <20190523124216.40208-1-glider@google.com>
Subject: [PATCH 2/3] mm: init: report memory auto-initialization features at boot time
From: Alexander Potapenko
To: akpm@linux-foundation.org, cl@linux.com, keescook@chromium.org
Cc: kernel-hardening@lists.openwall.com, linux-mm@kvack.org,
    linux-security-module@vger.kernel.org, Dmitry Vyukov, James Morris,
    Jann Horn, Kostya Serebryany, Laura Abbott, Mark Rutland,
    Masahiro Yamada, Matthew Wilcox, Nick Desaulniers, Randy Dunlap,
    Sandeep Patil, "Serge E. Hallyn", Souptick Joarder

Print the currently enabled stack and heap initialization modes.

The possible options for stack are:
 - "all" for CONFIG_INIT_STACK_ALL;
 - "byref_all" for CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL;
 - "byref" for CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF;
 - "__user" for CONFIG_GCC_PLUGIN_STRUCTLEAK_USER;
 - "off" otherwise.

Depending on the values of the init_on_alloc and init_on_free boot-time
options we also report "heap alloc" and "heap free" as "on"/"off".

In the init_on_free mode initializing pages at boot time may take some
time, so print a notice about that as well.
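The stack-mode selection above is a simple first-match-wins chain, and can be sketched as a small userspace C function. This is only a model: the kernel's compile-time IS_ENABLED() checks are replaced by plain ints, the struct and function names are hypothetical, and only the precedence order mirrors report_meminit().

```c
#include <assert.h>
#include <string.h>

/* One int per Kconfig symbol; in the kernel these are IS_ENABLED() checks. */
struct stack_init_cfg {
	int init_stack_all;  /* CONFIG_INIT_STACK_ALL */
	int byref_all;       /* CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL */
	int byref;           /* CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF */
	int user;            /* CONFIG_GCC_PLUGIN_STRUCTLEAK_USER */
};

/* Mirrors the if/else-if chain in report_meminit(): first match wins. */
const char *stack_init_mode(const struct stack_init_cfg *cfg)
{
	if (cfg->init_stack_all)
		return "all";
	if (cfg->byref_all)
		return "byref_all";
	if (cfg->byref)
		return "byref";
	if (cfg->user)
		return "__user";
	return "off";
}
```

If several options were set at once, the chain would report the strongest mode first, which is why the listing above is ordered from "all" down to "off".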
Signed-off-by: Alexander Potapenko
Suggested-by: Kees Cook
To: Andrew Morton
To: Christoph Lameter
Cc: Dmitry Vyukov
Cc: James Morris
Cc: Jann Horn
Cc: Kostya Serebryany
Cc: Laura Abbott
Cc: Mark Rutland
Cc: Masahiro Yamada
Cc: Matthew Wilcox
Cc: Nick Desaulniers
Cc: Randy Dunlap
Cc: Sandeep Patil
Cc: "Serge E. Hallyn"
Cc: Souptick Joarder
Cc: kernel-hardening@lists.openwall.com
Cc: linux-mm@kvack.org
Cc: linux-security-module@vger.kernel.org
---
 init/main.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/init/main.c b/init/main.c
index 5a2c69b4d7b3..90f721c58e61 100644
--- a/init/main.c
+++ b/init/main.c
@@ -519,6 +519,29 @@ static inline void initcall_debug_enable(void)
 }
 #endif

+/* Report memory auto-initialization states for this boot. */
+void __init report_meminit(void)
+{
+	const char *stack;
+
+	if (IS_ENABLED(CONFIG_INIT_STACK_ALL))
+		stack = "all";
+	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL))
+		stack = "byref_all";
+	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF))
+		stack = "byref";
+	else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_USER))
+		stack = "__user";
+	else
+		stack = "off";
+
+	pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s\n",
+		stack, want_init_on_alloc(GFP_KERNEL) ? "on" : "off",
+		want_init_on_free() ? "on" : "off");
+	if (want_init_on_free())
+		pr_info("Clearing system memory may take some time...\n");
+}
+
 /*
  * Set up kernel memory allocators
  */
@@ -529,6 +552,7 @@ static void __init mm_init(void)
 	 * bigger than MAX_ORDER unless SPARSEMEM.
*/ page_ext_init_flatmem(); + report_meminit(); mem_init(); kmem_cache_init(); pgtable_init(); From patchwork Thu May 23 12:42:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 10957559 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0BE566C5 for ; Thu, 23 May 2019 12:42:43 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E637628496 for ; Thu, 23 May 2019 12:42:42 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id DA18128498; Thu, 23 May 2019 12:42:42 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.5 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE, USER_IN_DEF_DKIM_WL autolearn=ham version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 117FA28496 for ; Thu, 23 May 2019 12:42:42 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C38626B0008; Thu, 23 May 2019 08:42:40 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id BC1E46B000A; Thu, 23 May 2019 08:42:40 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A14386B000C; Thu, 23 May 2019 08:42:40 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-yw1-f72.google.com (mail-yw1-f72.google.com [209.85.161.72]) by kanga.kvack.org (Postfix) with ESMTP id 7AA8A6B0008 for ; Thu, 23 May 2019 08:42:40 -0400 (EDT) Received: by 
Date: Thu, 23 May 2019 14:42:16 +0200
Message-Id: <20190523124216.40208-4-glider@google.com>
In-Reply-To: <20190523124216.40208-1-glider@google.com>
References: <20190523124216.40208-1-glider@google.com>
Subject: [PATCH 3/3] lib: introduce test_meminit module
From: Alexander Potapenko
To: akpm@linux-foundation.org, cl@linux.com, keescook@chromium.org
Cc: kernel-hardening@lists.openwall.com, linux-mm@kvack.org,
    linux-security-module@vger.kernel.org, Nick Desaulniers,
    Kostya Serebryany, Dmitry Vyukov, Sandeep Patil, Laura Abbott,
    Jann Horn

Add tests for heap and pagealloc initialization. These can be used to
check the init_on_alloc and init_on_free implementations, as well as
other approaches to initialization.

Expected test output in case the kernel provides heap initialization
(e.g. when running with either init_on_alloc=1 or init_on_free=1):

  test_meminit: all 10 tests in test_pages passed
  test_meminit: all 40 tests in test_kvmalloc passed
  test_meminit: all 20 tests in test_kmemcache passed
  test_meminit: all 70 tests passed!
Signed-off-by: Alexander Potapenko
To: Kees Cook
To: Andrew Morton
To: Christoph Lameter
Cc: Nick Desaulniers
Cc: Kostya Serebryany
Cc: Dmitry Vyukov
Cc: Sandeep Patil
Cc: Laura Abbott
Cc: Jann Horn
Cc: linux-mm@kvack.org
Cc: linux-security-module@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
v3:
 - added example test output to the description
 - fixed a missing include spotted by kbuild test robot
 - added a missing MODULE_LICENSE
 - call do_kmem_cache_size() with size >= sizeof(void *) to unbreak
   debug builds
---
 lib/Kconfig.debug  |   8 ++
 lib/Makefile       |   1 +
 lib/test_meminit.c | 208 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 217 insertions(+)
 create mode 100644 lib/test_meminit.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index fdfa173651eb..036e8ef03831 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2043,6 +2043,14 @@ config TEST_STACKINIT
 
	  If unsure, say N.
 
+config TEST_MEMINIT
+	tristate "Test level of heap/page initialization"
+	help
+	  Test if the kernel is zero-initializing heap and page allocations.
+	  This can be useful to test init_on_alloc and init_on_free features.
+
+	  If unsure, say N.
+
 endif # RUNTIME_TESTING_MENU
 
 config MEMTEST
diff --git a/lib/Makefile b/lib/Makefile
index fb7697031a79..05980c802500 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -91,6 +91,7 @@ obj-$(CONFIG_TEST_DEBUG_VIRTUAL) += test_debug_virtual.o
 obj-$(CONFIG_TEST_MEMCAT_P) += test_memcat_p.o
 obj-$(CONFIG_TEST_OBJAGG) += test_objagg.o
 obj-$(CONFIG_TEST_STACKINIT) += test_stackinit.o
+obj-$(CONFIG_TEST_MEMINIT) += test_meminit.o
 obj-$(CONFIG_TEST_LIVEPATCH) += livepatch/
diff --git a/lib/test_meminit.c b/lib/test_meminit.c
new file mode 100644
index 000000000000..d46e2b8c8e8e
--- /dev/null
+++ b/lib/test_meminit.c
@@ -0,0 +1,208 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test cases for SL[AOU]B/page initialization at alloc/free time.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/vmalloc.h>
+
+#define GARBAGE_INT (0x09A7BA9E)
+#define GARBAGE_BYTE (0x9E)
+
+#define REPORT_FAILURES_IN_FN() \
+	do {	\
+		if (failures) \
+			pr_info("%s failed %d out of %d times\n", \
+				__func__, failures, num_tests); \
+		else \
+			pr_info("all %d tests in %s passed\n", \
+				num_tests, __func__); \
+	} while (0)
+
+/* Calculate the number of uninitialized bytes in the buffer. */
+static int count_nonzero_bytes(void *ptr, size_t size)
+{
+	int i, ret = 0;
+	unsigned char *p = (unsigned char *)ptr;
+
+	for (i = 0; i < size; i++)
+		if (p[i])
+			ret++;
+	return ret;
+}
+
+static void fill_with_garbage(void *ptr, size_t size)
+{
+	unsigned int *p = (unsigned int *)ptr;
+	int i = 0;
+
+	while (size >= sizeof(*p)) {
+		p[i] = GARBAGE_INT;
+		i++;
+		size -= sizeof(*p);
+	}
+	if (size)
+		memset(&p[i], GARBAGE_BYTE, size);
+}
+
+static int __init do_alloc_pages_order(int order, int *total_failures)
+{
+	struct page *page;
+	void *buf;
+	size_t size = PAGE_SIZE << order;
+
+	page = alloc_pages(GFP_KERNEL, order);
+	buf = page_address(page);
+	fill_with_garbage(buf, size);
+	__free_pages(page, order);
+
+	page = alloc_pages(GFP_KERNEL, order);
+	buf = page_address(page);
+	if (count_nonzero_bytes(buf, size))
+		(*total_failures)++;
+	fill_with_garbage(buf, size);
+	__free_pages(page, order);
+	return 1;
+}
+
+static int __init test_pages(int *total_failures)
+{
+	int failures = 0, num_tests = 0;
+	int i;
+
+	for (i = 0; i < 10; i++)
+		num_tests += do_alloc_pages_order(i, &failures);
+
+	REPORT_FAILURES_IN_FN();
+	*total_failures += failures;
+	return num_tests;
+}
+
+static int __init do_kmalloc_size(size_t size, int *total_failures)
+{
+	void *buf;
+
+	buf = kmalloc(size, GFP_KERNEL);
+	fill_with_garbage(buf, size);
+	kfree(buf);
+
+	buf = kmalloc(size, GFP_KERNEL);
+	if (count_nonzero_bytes(buf, size))
+		(*total_failures)++;
+	fill_with_garbage(buf, size);
+	kfree(buf);
+	return 1;
+}
+
+static int __init do_vmalloc_size(size_t size, int *total_failures)
+{
+	void *buf;
+
+	buf = vmalloc(size);
+	fill_with_garbage(buf, size);
+	vfree(buf);
+
+	buf = vmalloc(size);
+	if (count_nonzero_bytes(buf, size))
+		(*total_failures)++;
+	fill_with_garbage(buf, size);
+	vfree(buf);
+	return 1;
+}
+
+static int __init test_kvmalloc(int *total_failures)
+{
+	int failures = 0, num_tests = 0;
+	int i, size;
+
+	for (i = 0; i < 20; i++) {
+		size = 1 << i;
+		num_tests += do_kmalloc_size(size, &failures);
+		num_tests += do_vmalloc_size(size, &failures);
+	}
+
+	REPORT_FAILURES_IN_FN();
+	*total_failures += failures;
+	return num_tests;
+}
+
+#define CTOR_BYTES 4
+/* Initialize the first 4 bytes of the object. */
+void some_ctor(void *obj)
+{
+	memset(obj, 'A', CTOR_BYTES);
+}
+
+static int __init do_kmem_cache_size(size_t size, bool want_ctor,
+				     int *total_failures)
+{
+	struct kmem_cache *c;
+	void *buf;
+	int iter, bytes = 0;
+	int fail = 0;
+
+	c = kmem_cache_create("test_cache", size, 1, 0,
+			      want_ctor ? some_ctor : NULL);
+	for (iter = 0; iter < 10; iter++) {
+		buf = kmem_cache_alloc(c, GFP_KERNEL);
+		if (!want_ctor || iter == 0)
+			bytes = count_nonzero_bytes(buf, size);
+		if (want_ctor) {
+			/*
+			 * Newly initialized memory must be initialized using
+			 * the constructor.
+			 */
+			if (iter == 0 && bytes < CTOR_BYTES)
+				fail = 1;
+		} else {
+			if (bytes)
+				fail = 1;
+		}
+		fill_with_garbage(buf, size);
+		kmem_cache_free(c, buf);
+	}
+	kmem_cache_destroy(c);
+
+	*total_failures += fail;
+	return 1;
+}
+
+static int __init test_kmemcache(int *total_failures)
+{
+	int failures = 0, num_tests = 0;
+	int i, size;
+
+	for (i = 0; i < 10; i++) {
+		size = 8 << i;
+		num_tests += do_kmem_cache_size(size, false, &failures);
+		num_tests += do_kmem_cache_size(size, true, &failures);
+	}
+	REPORT_FAILURES_IN_FN();
+	*total_failures += failures;
+	return num_tests;
+}
+
+static int __init test_meminit_init(void)
+{
+	int failures = 0, num_tests = 0;
+
+	num_tests += test_pages(&failures);
+	num_tests += test_kvmalloc(&failures);
+	num_tests += test_kmemcache(&failures);
+
+	if (failures == 0)
+		pr_info("all %d tests passed!\n", num_tests);
+	else
+		pr_info("failures: %d out of %d\n", failures, num_tests);
+
+	return failures ? -EINVAL : 0;
+}
+module_init(test_meminit_init);
+
+MODULE_LICENSE("GPL");