mm: security: introduce CONFIG_INIT_HEAP_ALL

Message ID 20190412124501.132678-1-glider@google.com (mailing list archive)
State New, archived
Series mm: security: introduce CONFIG_INIT_HEAP_ALL

Commit Message

Alexander Potapenko April 12, 2019, 12:45 p.m. UTC
This config option adds the possibility to initialize newly allocated
pages and heap objects with zeroes. This is needed to prevent possible
information leaks and make the control-flow bugs that depend on
uninitialized values more deterministic.

Initialization is done at allocation time at the places where checks for
__GFP_ZERO are performed. We don't initialize slab caches with
constructors or SLAB_TYPESAFE_BY_RCU to preserve their semantics.

For kernel testing purposes filling allocations with a nonzero pattern
would be more suitable, but may require platform-specific code. To have
a simple baseline we've decided to start with zero-initialization.

No performance optimizations are done at the moment to reduce double
initialization of memory regions.

Signed-off-by: Alexander Potapenko <glider@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Sandeep Patil <sspatil@android.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Jann Horn <jannh@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-mm@kvack.org
Cc: linux-security-module@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com
---
This patch applies on top of the "Refactor memory initialization
hardening" patch series by Kees Cook: https://lkml.org/lkml/2019/4/10/748
---
 drivers/infiniband/core/uverbs_ioctl.c |  2 +-
 include/linux/gfp.h                    |  8 ++++++++
 kernel/kexec_core.c                    |  2 +-
 mm/dmapool.c                           |  2 +-
 mm/page_alloc.c                        |  2 +-
 mm/slab.c                              |  6 +++---
 mm/slab.h                              | 10 ++++++++++
 mm/slob.c                              |  2 +-
 mm/slub.c                              |  4 ++--
 net/core/sock.c                        |  2 +-
 security/Kconfig.hardening             | 10 ++++++++++
 11 files changed, 39 insertions(+), 11 deletions(-)

Comments

Qian Cai April 12, 2019, 2:16 p.m. UTC | #1
On Fri, 2019-04-12 at 14:45 +0200, Alexander Potapenko wrote:
> This config option adds the possibility to initialize newly allocated
> pages and heap objects with zeroes. This is needed to prevent possible
> information leaks and make the control-flow bugs that depend on
> uninitialized values more deterministic.
> 
> Initialization is done at allocation time at the places where checks for
> __GFP_ZERO are performed. We don't initialize slab caches with
> constructors or SLAB_TYPESAFE_BY_RCU to preserve their semantics.
> 
> For kernel testing purposes filling allocations with a nonzero pattern
> would be more suitable, but may require platform-specific code. To have
> a simple baseline we've decided to start with zero-initialization.
> 
> No performance optimizations are done at the moment to reduce double
> initialization of memory regions.

Sounds like this has already existed to some degree, i.e.,

CONFIG_PAGE_POISONING_ZERO
Alexander Potapenko April 12, 2019, 3:23 p.m. UTC | #2
On Fri, Apr 12, 2019 at 4:16 PM Qian Cai <cai@lca.pw> wrote:
>
> On Fri, 2019-04-12 at 14:45 +0200, Alexander Potapenko wrote:
> > This config option adds the possibility to initialize newly allocated
> > pages and heap objects with zeroes. This is needed to prevent possible
> > information leaks and make the control-flow bugs that depend on
> > uninitialized values more deterministic.
> >
> > Initialization is done at allocation time at the places where checks for
> > __GFP_ZERO are performed. We don't initialize slab caches with
> > constructors or SLAB_TYPESAFE_BY_RCU to preserve their semantics.
> >
> > For kernel testing purposes filling allocations with a nonzero pattern
> > would be more suitable, but may require platform-specific code. To have
> > a simple baseline we've decided to start with zero-initialization.
> >
> > No performance optimizations are done at the moment to reduce double
> > initialization of memory regions.
>
> Sounds like this has already existed to some degree, i.e.,
>
> CONFIG_PAGE_POISONING_ZERO
Note that CONFIG_PAGE_POISONING[_ZERO] initializes freed pages,
whereas the proposed patch initializes newly allocated pages.
It's debatable whether initializing pages on kmalloc()/alloc_pages()
is better or worse than doing so in kfree()/free_pages() from the
security perspective.
But the approach proposed in the patch makes it possible to use a
special GFP flag to request uninitialized memory from the underlying
allocator, so that we don't wipe it twice.
This will be harder to do in the functions that free memory, because
they don't accept GFP flags.
Andrew Morton April 16, 2019, 2:02 a.m. UTC | #3
On Fri, 12 Apr 2019 14:45:01 +0200 Alexander Potapenko <glider@google.com> wrote:

> This config option adds the possibility to initialize newly allocated
> pages and heap objects with zeroes.

At what cost?  Some performance test results would help this along.

> This is needed to prevent possible
> information leaks and make the control-flow bugs that depend on
> uninitialized values more deterministic.
> 
> Initialization is done at allocation time at the places where checks for
> __GFP_ZERO are performed. We don't initialize slab caches with
> constructors or SLAB_TYPESAFE_BY_RCU to preserve their semantics.
> 
> For kernel testing purposes filling allocations with a nonzero pattern
> would be more suitable, but may require platform-specific code. To have
> a simple baseline we've decided to start with zero-initialization.
> 
> No performance optimizations are done at the moment to reduce double
> initialization of memory regions.

Requiring a kernel rebuild is rather user-hostile.  A boot option
(early_param()) would be much more useful and I expect that the loss in
coverage would be small and acceptable?  Could possibly use the
static_branch infrastructure.

> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -167,6 +167,16 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			      SLAB_TEMPORARY | \
>  			      SLAB_ACCOUNT)
>  
> +/*
> + * Do we need to initialize this allocation?
> + * Always true for __GFP_ZERO, CONFIG_INIT_HEAP_ALL enforces initialization
> + * of caches without constructors and RCU.
> + */
> +#define SLAB_WANT_INIT(cache, gfp_flags) \
> +	((GFP_INIT_ALWAYS_ON && !(cache)->ctor && \
> +	  !((cache)->flags & SLAB_TYPESAFE_BY_RCU)) || \
> +	 (gfp_flags & __GFP_ZERO))

Is there any reason why this *must* be implemented as a macro?  If not,
it should be written in C please.
Vlastimil Babka April 16, 2019, 8:30 a.m. UTC | #4
On 4/12/19 2:45 PM, Alexander Potapenko wrote:
> +config INIT_HEAP_ALL
> +	bool "Initialize kernel heap allocations"

Referring to slab and page allocations together as "heap" is rather
uncommon in the kernel, I think. But I don't have a better word right now.

> +	default n
> +	help
> +	  Enforce initialization of pages allocated from page allocator
> +	  and objects returned by kmalloc and friends.
> +	  Allocated memory is initialized with zeroes, preventing possible
> +	  information leaks and making the control-flow bugs that depend
> +	  on uninitialized values more deterministic.
> +
>  config GCC_PLUGIN_STRUCTLEAK_VERBOSE
>  	bool "Report forcefully initialized variables"
>  	depends on GCC_PLUGIN_STRUCTLEAK
>
Vlastimil Babka April 16, 2019, 8:33 a.m. UTC | #5
On 4/16/19 4:02 AM, Andrew Morton wrote:
> Requiring a kernel rebuild is rather user-hostile.  A boot option
> (early_param()) would be much more useful and I expect that the loss in
> coverage would be small and acceptable?  Could possibly use the
> static_branch infrastructure.

Agreed. There could be a config option to make it default-on if no
parameter is given. A config option to (not) compile this in at all
would then probably be superfluous, although small systems/architectures
without effective static keys might care.
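
For illustration, the early_param()/static_branch combination being
discussed might look roughly like this (the parameter name "init_heap"
and the key name "init_allocations" are assumptions):

#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/kernel.h>

DEFINE_STATIC_KEY_FALSE(init_allocations);

/* Parse "init_heap=<bool>" from the kernel command line. */
static int __init early_init_heap(char *buf)
{
        bool enable;
        int ret;

        ret = kstrtobool(buf, &enable);
        if (ret)
                return ret;
        if (enable)
                static_branch_enable(&init_allocations);
        return 0;
}
early_param("init_heap", early_init_heap);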
Alexander Potapenko April 16, 2019, 12:04 p.m. UTC | #6
On Tue, Apr 16, 2019 at 10:33 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 4/12/19 2:45 PM, Alexander Potapenko wrote:
> > +config INIT_HEAP_ALL
> > +     bool "Initialize kernel heap allocations"
>
> Referring to slab and page allocations together as "heap" is rather
> uncommon in the kernel, I think. But I don't have a better word right now.
We can provide two separate flags for the slab and page allocators to
avoid this. I cannot think of a situation where this level of control
is necessary, though (apart from benchmarking).
> > +     default n
> > +     help
> > +       Enforce initialization of pages allocated from page allocator
> > +       and objects returned by kmalloc and friends.
> > +       Allocated memory is initialized with zeroes, preventing possible
> > +       information leaks and making the control-flow bugs that depend
> > +       on uninitialized values more deterministic.
> > +
> >  config GCC_PLUGIN_STRUCTLEAK_VERBOSE
> >       bool "Report forcefully initialized variables"
> >       depends on GCC_PLUGIN_STRUCTLEAK
> >
>
Alexander Potapenko April 16, 2019, 12:21 p.m. UTC | #7
On Tue, Apr 16, 2019 at 4:02 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Fri, 12 Apr 2019 14:45:01 +0200 Alexander Potapenko <glider@google.com> wrote:
>
> > This config option adds the possibility to initialize newly allocated
> > pages and heap objects with zeroes.
>
> At what cost?  Some performance test results would help this along.
I'll make more measurements for the new implementation, but the
preliminary results are:
~0.17% sys time slowdown (~0% wall time slowdown) on hackbench (1 CPU);
1.3% sys time slowdown (0.2% wall time slowdown) when building Linux with -j12;
4% sys time slowdown (2.6% wall time slowdown) on af_inet_loopback benchmark;
up to 100% slowdown on netperf (caused by sk buffers being initialized
multiple times; netperf also runs too quickly to allow any precise
measurements).

Are there any benchmarks you can recommend?
> > This is needed to prevent possible
> > information leaks and make the control-flow bugs that depend on
> > uninitialized values more deterministic.
> >
> > Initialization is done at allocation time at the places where checks for
> > __GFP_ZERO are performed. We don't initialize slab caches with
> > constructors or SLAB_TYPESAFE_BY_RCU to preserve their semantics.
> >
> > For kernel testing purposes filling allocations with a nonzero pattern
> > would be more suitable, but may require platform-specific code. To have
> > a simple baseline we've decided to start with zero-initialization.
> >
> > No performance optimizations are done at the moment to reduce double
> > initialization of memory regions.
>
> Requiring a kernel rebuild is rather user-hostile.
This is intended to be used together with other hardening measures,
like CONFIG_INIT_STACK_ALL (see a patchset by Kees).
All of those require a kernel rebuild, but we assume users don't push
and pull that lever back and forth often.

> A boot option
> (early_param()) would be much more useful and I expect that the loss in
> coverage would be small and acceptable?  Could possibly use the
> static_branch infrastructure.
I'll try that out and see if there's a notable performance difference.

> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -167,6 +167,16 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> >                             SLAB_TEMPORARY | \
> >                             SLAB_ACCOUNT)
> >
> > +/*
> > + * Do we need to initialize this allocation?
> > + * Always true for __GFP_ZERO, CONFIG_INIT_HEAP_ALL enforces initialization
> > + * of caches without constructors and RCU.
> > + */
> > +#define SLAB_WANT_INIT(cache, gfp_flags) \
> > +     ((GFP_INIT_ALWAYS_ON && !(cache)->ctor && \
> > +       !((cache)->flags & SLAB_TYPESAFE_BY_RCU)) || \
> > +      (gfp_flags & __GFP_ZERO))
>
> Is there any reason why this *must* be implemented as a macro?  If not,
> it should be written in C please.
Agreed. Even if we want GFP_INIT_ALWAYS_ON to be known at compile
time, there's no reason for this to be a macro.
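
For reference, a direct transliteration of the macro into an inline
function might look like this:

static inline bool slab_want_init(struct kmem_cache *cache, gfp_t flags)
{
        /*
         * CONFIG_INIT_HEAP_ALL enforces initialization of caches
         * without constructors and without SLAB_TYPESAFE_BY_RCU.
         */
        if (GFP_INIT_ALWAYS_ON && !cache->ctor &&
            !(cache->flags & SLAB_TYPESAFE_BY_RCU))
                return true;
        /* __GFP_ZERO always requests zeroed memory. */
        return flags & __GFP_ZERO;
}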
>
Christoph Lameter (Ampere) April 16, 2019, 3:32 p.m. UTC | #8
On Fri, 12 Apr 2019, Alexander Potapenko wrote:

> diff --git a/mm/slab.h b/mm/slab.h
> index 43ac818b8592..4bb10af0031b 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -167,6 +167,16 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			      SLAB_TEMPORARY | \
>  			      SLAB_ACCOUNT)
>
> +/*
> + * Do we need to initialize this allocation?
> + * Always true for __GFP_ZERO, CONFIG_INIT_HEAP_ALL enforces initialization
> + * of caches without constructors and RCU.
> + */
> +#define SLAB_WANT_INIT(cache, gfp_flags) \
> +	((GFP_INIT_ALWAYS_ON && !(cache)->ctor && \
> +	  !((cache)->flags & SLAB_TYPESAFE_BY_RCU)) || \
> +	 (gfp_flags & __GFP_ZERO))

This is another complex thing to maintain when adding flags to the slab
allocator.

> +config INIT_HEAP_ALL
> +	bool "Initialize kernel heap allocations"

"Zero pages and objects allocated in the kernel"

> +	default n
> +	help
> +	  Enforce initialization of pages allocated from page allocator
> +	  and objects returned by kmalloc and friends.
> +	  Allocated memory is initialized with zeroes, preventing possible
> +	  information leaks and making the control-flow bugs that depend
> +	  on uninitialized values more deterministic.

Hmmm... But we already have debugging options that poison objects and
pages?
Alexander Potapenko April 16, 2019, 4:01 p.m. UTC | #9
On Tue, Apr 16, 2019 at 5:32 PM Christopher Lameter <cl@linux.com> wrote:
>
> On Fri, 12 Apr 2019, Alexander Potapenko wrote:
>
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 43ac818b8592..4bb10af0031b 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -167,6 +167,16 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> >                             SLAB_TEMPORARY | \
> >                             SLAB_ACCOUNT)
> >
> > +/*
> > + * Do we need to initialize this allocation?
> > + * Always true for __GFP_ZERO, CONFIG_INIT_HEAP_ALL enforces initialization
> > + * of caches without constructors and RCU.
> > + */
> > +#define SLAB_WANT_INIT(cache, gfp_flags) \
> > +     ((GFP_INIT_ALWAYS_ON && !(cache)->ctor && \
> > +       !((cache)->flags & SLAB_TYPESAFE_BY_RCU)) || \
> > +      (gfp_flags & __GFP_ZERO))
>
> This is another complex thing to maintain when adding flags to the slab
> allocator.
>
> > +config INIT_HEAP_ALL
> > +     bool "Initialize kernel heap allocations"
>
> "Zero pages and objects allocated in the kernel"
>
> > +     default n
> > +     help
> > +       Enforce initialization of pages allocated from page allocator
> > +       and objects returned by kmalloc and friends.
> > +       Allocated memory is initialized with zeroes, preventing possible
> > +       information leaks and making the control-flow bugs that depend
> > +       on uninitialized values more deterministic.
>
> Hmmm... But we already have debugging options that poison objects and
> pages?
Laura Abbott mentioned in one of the previous threads
(https://marc.info/?l=kernel-hardening&m=155474181528491&w=2) that:

"""
I've looked at doing something similar in the past (failing to find
the thread this morning...) and while this will work, it has pretty
serious performance issues. It's not actually the poisoning which
is expensive but that turning on debugging removes the cpu slab
which has significant performance penalties.

I'd rather go back to the proposal of just poisoning the slab
at alloc/free without using SLAB_POISON.
"""
, so slab poisoning is probably off the table.
Christoph Lameter (Ampere) April 16, 2019, 4:30 p.m. UTC | #10
On Tue, 16 Apr 2019, Alexander Potapenko wrote:

> > Hmmm... But we already have debugging options that poison objects and
> > pages?
> Laura Abbott mentioned in one of the previous threads
> (https://marc.info/?l=kernel-hardening&m=155474181528491&w=2) that:
>
> """
> I've looked at doing something similar in the past (failing to find
> the thread this morning...) and while this will work, it has pretty
> serious performance issues. It's not actually the poisoning which
> is expensive but that turning on debugging removes the cpu slab
> which has significant performance penalties.

Ok you could rework that logic to be able to keep the per cpu slabs?

Also if you do the zeroing then you need to do it in the hotpath. And this
patch introduces new instructions to that hotpath for checking and
executing the zeroing.
Alexander Potapenko April 17, 2019, 11:03 a.m. UTC | #11
On Tue, Apr 16, 2019 at 6:30 PM Christopher Lameter <cl@linux.com> wrote:
>
> On Tue, 16 Apr 2019, Alexander Potapenko wrote:
>
> > > Hmmm... But we already have debugging options that poison objects and
> > > pages?
> > Laura Abbott mentioned in one of the previous threads
> > (https://marc.info/?l=kernel-hardening&m=155474181528491&w=2) that:
> >
> > """
> > I've looked at doing something similar in the past (failing to find
> > the thread this morning...) and while this will work, it has pretty
> > serious performance issues. It's not actually the poisoning which
> > is expensive but that turning on debugging removes the cpu slab
> > which has significant performance penalties.
>
> Ok you could rework that logic to be able to keep the per cpu slabs?
I'll look into that. There's a lot going on with checking those
poisoned bytes, although we don't need that for hardening.

What do you think about the proposed approach to page initialization?
We could separate that part from slab poisoning.

> Also if you do the zeroing then you need to do it in the hotpath. And this
> patch introduces new instructions to that hotpath for checking and
> executing the zeroing.
Right now the patch doesn't slow down the default case when
CONFIG_INIT_HEAP_ALL=n, as GFP_INIT_ALWAYS_ON is 0.
When heap initialization is enabled we could probably omit the
gfp_flags check, as __GFP_ZERO will always be unset when a cache has a
constructor or the RCU flag set.
So we'll have two branches instead of one with CONFIG_INIT_HEAP_ALL=y.
Alexander Potapenko April 17, 2019, 5:04 p.m. UTC | #12
On Wed, Apr 17, 2019 at 1:03 PM Alexander Potapenko <glider@google.com> wrote:
>
> On Tue, Apr 16, 2019 at 6:30 PM Christopher Lameter <cl@linux.com> wrote:
> >
> > On Tue, 16 Apr 2019, Alexander Potapenko wrote:
> >
> > > > Hmmm... But we already have debugging options that poison objects and
> > > > pages?
> > > Laura Abbott mentioned in one of the previous threads
> > > (https://marc.info/?l=kernel-hardening&m=155474181528491&w=2) that:
> > >
> > > """
> > > I've looked at doing something similar in the past (failing to find
> > > the thread this morning...) and while this will work, it has pretty
> > > serious performance issues. It's not actually the poisoning which
> > > is expensive but that turning on debugging removes the cpu slab
> > > which has significant performance penalties.
> >
> > Ok you could rework that logic to be able to keep the per cpu slabs?
> I'll look into that. There's a lot going on with checking those
> poisoned bytes, although we don't need that for hardening.
>
> What do you think about the proposed approach to page initialization?
> We could separate that part from slab poisoning.
>
> > Also if you do the zeroing then you need to do it in the hotpath. And this
> > patch introduces new instructions to that hotpath for checking and
> > executing the zeroing.
> Right now the patch doesn't slow down the default case when
> CONFIG_INIT_HEAP_ALL=n, as GFP_INIT_ALWAYS_ON is 0.
> When heap initialization is enabled we could probably omit the
> gfp_flags check, as __GFP_ZERO will always be unset when a cache has a
> constructor or the RCU flag set.
> So we'll have two branches instead of one with CONFIG_INIT_HEAP_ALL=y.
>
Ok, I think we could do the same without extra branches.
Right now I'm working on a patch that uses static branches in the
function that checks GFP flags:

static inline bool want_init_memory(gfp_t flags)
{
        if (static_branch_unlikely(&init_allocations))
                return true;
        return flags & __GFP_ZERO;
}

and does the following in slab_alloc_node():

        if (unlikely(want_init_memory(gfpflags)) && object)
                s->poison_fn(s, object);
, where s->poison_fn is either memset(object, 0, s->object_size) for
normal SLAB caches or a no-op for SLAB caches that have ctors
(I _think_ I don't have to special-case SLAB_TYPESAFE_BY_RCU).

With init_allocations disabled this doesn't affect kernel performance
(hackbench shows a negative slowdown that is within the standard
deviation). Most likely the indirect call is not performed too often.
With init_allocations enabled this yields ~7% slowdown on hackbench. I
believe most of that is caused by double initialization, which we can
eliminate by passing an extra GFP flag to the page allocator.
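
For illustration, the per-cache hook could be wired up at cache
creation time roughly as follows (only poison_fn comes from the
description above; the other names are assumptions):

static void poison_zero(struct kmem_cache *s, void *object)
{
        memset(object, 0, s->object_size);
}

static void poison_noop(struct kmem_cache *s, void *object)
{
}

/*
 * Chosen once when the cache is created, so the allocation hotpath
 * pays a single indirect call instead of re-checking cache flags.
 */
static void set_poison_fn(struct kmem_cache *s)
{
        if (s->ctor)
                s->poison_fn = poison_noop;
        else
                s->poison_fn = poison_zero;
}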

Patch

diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c
index e1379949e663..34937cecac62 100644
--- a/drivers/infiniband/core/uverbs_ioctl.c
+++ b/drivers/infiniband/core/uverbs_ioctl.c
@@ -127,7 +127,7 @@  __malloc void *_uverbs_alloc(struct uverbs_attr_bundle *bundle, size_t size,
 	res = (void *)pbundle->internal_buffer + pbundle->internal_used;
 	pbundle->internal_used =
 		ALIGN(new_used, sizeof(*pbundle->internal_buffer));
-	if (flags & __GFP_ZERO)
+	if (GFP_WANT_INIT(flags))
 		memset(res, 0, size);
 	return res;
 }
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index fdab7de7490d..4f49a6a13f6f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -213,6 +213,14 @@  struct vm_area_struct;
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
 
+#ifdef CONFIG_INIT_HEAP_ALL
+#define GFP_WANT_INIT(flags) (1)
+#define GFP_INIT_ALWAYS_ON (1)
+#else
+#define GFP_WANT_INIT(flags) (unlikely((flags) & __GFP_ZERO))
+#define GFP_INIT_ALWAYS_ON (0)
+#endif
+
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d7140447be75..1ad0097695a1 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -315,7 +315,7 @@  static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
 		arch_kexec_post_alloc_pages(page_address(pages), count,
 					    gfp_mask);
 
-		if (gfp_mask & __GFP_ZERO)
+		if (GFP_WANT_INIT(gfp_mask))
 			for (i = 0; i < count; i++)
 				clear_highpage(pages + i);
 	}
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 76a160083506..d40d62145ca3 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -381,7 +381,7 @@  void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 #endif
 	spin_unlock_irqrestore(&pool->lock, flags);
 
-	if (mem_flags & __GFP_ZERO)
+	if (GFP_WANT_INIT(mem_flags))
 		memset(retval, 0, pool->size);
 
 	return retval;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d96ca5bc555b..ceddc4eeaff4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2014,7 +2014,7 @@  static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
 
 	post_alloc_hook(page, order, gfp_flags);
 
-	if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))
+	if (!free_pages_prezeroed() && GFP_WANT_INIT(gfp_flags))
 		for (i = 0; i < (1 << order); i++)
 			clear_highpage(page + i);
 
diff --git a/mm/slab.c b/mm/slab.c
index 47a380a486ee..848e47658667 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3331,7 +3331,7 @@  slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
 
-	if (unlikely(flags & __GFP_ZERO) && ptr)
+	if (SLAB_WANT_INIT(cachep, flags) && ptr)
 		memset(ptr, 0, cachep->object_size);
 
 	slab_post_alloc_hook(cachep, flags, 1, &ptr);
@@ -3388,7 +3388,7 @@  slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
 
-	if (unlikely(flags & __GFP_ZERO) && objp)
+	if (SLAB_WANT_INIT(cachep, flags) && objp)
 		memset(objp, 0, cachep->object_size);
 
 	slab_post_alloc_hook(cachep, flags, 1, &objp);
@@ -3596,7 +3596,7 @@  int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);
 
 	/* Clear memory outside IRQ disabled section */
-	if (unlikely(flags & __GFP_ZERO))
+	if (SLAB_WANT_INIT(s, flags))
 		for (i = 0; i < size; i++)
 			memset(p[i], 0, s->object_size);
 
diff --git a/mm/slab.h b/mm/slab.h
index 43ac818b8592..4bb10af0031b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -167,6 +167,16 @@  static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 			      SLAB_TEMPORARY | \
 			      SLAB_ACCOUNT)
 
+/*
+ * Do we need to initialize this allocation?
+ * Always true for __GFP_ZERO, CONFIG_INIT_HEAP_ALL enforces initialization
+ * of caches without constructors and RCU.
+ */
+#define SLAB_WANT_INIT(cache, gfp_flags) \
+	((GFP_INIT_ALWAYS_ON && !(cache)->ctor && \
+	  !((cache)->flags & SLAB_TYPESAFE_BY_RCU)) || \
+	 (gfp_flags & __GFP_ZERO))
+
 bool __kmem_cache_empty(struct kmem_cache *);
 int __kmem_cache_shutdown(struct kmem_cache *);
 void __kmem_cache_release(struct kmem_cache *);
diff --git a/mm/slob.c b/mm/slob.c
index 307c2c9feb44..0c402e819cf7 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -330,7 +330,7 @@  static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 		BUG_ON(!b);
 		spin_unlock_irqrestore(&slob_lock, flags);
 	}
-	if (unlikely(gfp & __GFP_ZERO))
+	if (GFP_WANT_INIT(gfp))
 		memset(b, 0, size);
 	return b;
 }
diff --git a/mm/slub.c b/mm/slub.c
index d30ede89f4a6..686ab9d49ced 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2750,7 +2750,7 @@  static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 		stat(s, ALLOC_FASTPATH);
 	}
 
-	if (unlikely(gfpflags & __GFP_ZERO) && object)
+	if (SLAB_WANT_INIT(s, gfpflags) && object)
 		memset(object, 0, s->object_size);
 
 	slab_post_alloc_hook(s, gfpflags, 1, &object);
@@ -3172,7 +3172,7 @@  int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	local_irq_enable();
 
 	/* Clear memory outside IRQ disabled fastpath loop */
-	if (unlikely(flags & __GFP_ZERO)) {
+	if (SLAB_WANT_INIT(s, flags)) {
 		int j;
 
 		for (j = 0; j < i; j++)
diff --git a/net/core/sock.c b/net/core/sock.c
index 782343bb925b..51b13d7fd82f 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1601,7 +1601,7 @@  static struct sock *sk_prot_alloc(struct proto *prot, gfp_t priority,
 		sk = kmem_cache_alloc(slab, priority & ~__GFP_ZERO);
 		if (!sk)
 			return sk;
-		if (priority & __GFP_ZERO)
+		if (GFP_WANT_INIT(priority))
 			sk_prot_clear_nulls(sk, prot->obj_size);
 	} else
 		sk = kmalloc(prot->obj_size, priority);
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index d744e20140b4..cb7d7dfb506f 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -93,6 +93,16 @@  choice
 
 endchoice
 
+config INIT_HEAP_ALL
+	bool "Initialize kernel heap allocations"
+	default n
+	help
+	  Enforce initialization of pages allocated from page allocator
+	  and objects returned by kmalloc and friends.
+	  Allocated memory is initialized with zeroes, preventing possible
+	  information leaks and making the control-flow bugs that depend
+	  on uninitialized values more deterministic.
+
 config GCC_PLUGIN_STRUCTLEAK_VERBOSE
 	bool "Report forcefully initialized variables"
 	depends on GCC_PLUGIN_STRUCTLEAK