PM: hibernate: fix crashes with init_on_free=1

Message ID 20200116110934.90531-1-glider@google.com
State Mainlined, archived

Commit Message

Alexander Potapenko Jan. 16, 2020, 11:09 a.m. UTC
Upon resuming from hibernation, free pages may contain stale data from
the kernel that initiated the resume. This breaks the invariant
enforced by init_on_free=1 that freed pages must be zeroed.

To deal with this problem, make clear_free_pages() also clear the free
pages when init_on_free is enabled.

Reported-by: Johannes Stezenbach <js@sig21.net>
Signed-off-by: Alexander Potapenko <glider@google.com>
Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
---
 kernel/power/snapshot.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

Comments

Rafael J. Wysocki Jan. 16, 2020, 10:55 p.m. UTC | #1
On Thu, Jan 16, 2020 at 12:10 PM <glider@google.com> wrote:
>
> Upon resuming from hibernation, free pages may contain stale data from
> the kernel that initiated the resume. This breaks the invariant
> enforced by init_on_free=1 that freed pages must be zeroed.
>
> To deal with this problem, make clear_free_pages() also clear the free
> pages when init_on_free is enabled.
>
> Reported-by: Johannes Stezenbach <js@sig21.net>
> Signed-off-by: Alexander Potapenko <glider@google.com>
> Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
> [...]

Queued up as a fix for 5.5-rc and "stable", thanks!
Alexander Potapenko Jan. 22, 2020, 10:54 a.m. UTC | #2
On Thu, Jan 16, 2020 at 11:56 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> [...]
>
> Queued up as a fix for 5.5-rc and "stable", thanks!

It's actually interesting that init_on_free=1 has introduced an
invariant that pages on the free lists must be initialized with zeros.
Code in mm/page_alloc.c relies on this invariant when allocating with
__GFP_ZERO, so if a page somehow ends up uninitialized, we get all
sorts of vulnerabilities.
We were lucky to see this firing in hibernate; I've grepped the
code for other occurrences of CONFIG_PAGE_POISONING_ZERO, but I'm not
sure we aren't missing other ways to put an uninitialized page back
on the free lists.

Patch

diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 26b9168321e7..d65f2d5ab694 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1147,24 +1147,24 @@  void free_basic_memory_bitmaps(void)
 
 void clear_free_pages(void)
 {
-#ifdef CONFIG_PAGE_POISONING_ZERO
 	struct memory_bitmap *bm = free_pages_map;
 	unsigned long pfn;
 
 	if (WARN_ON(!(free_pages_map)))
 		return;
 
-	memory_bm_position_reset(bm);
-	pfn = memory_bm_next_pfn(bm);
-	while (pfn != BM_END_OF_MAP) {
-		if (pfn_valid(pfn))
-			clear_highpage(pfn_to_page(pfn));
-
+	if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
+		memory_bm_position_reset(bm);
 		pfn = memory_bm_next_pfn(bm);
+		while (pfn != BM_END_OF_MAP) {
+			if (pfn_valid(pfn))
+				clear_highpage(pfn_to_page(pfn));
+
+			pfn = memory_bm_next_pfn(bm);
+		}
+		memory_bm_position_reset(bm);
+		pr_info("free pages cleared after restore\n");
 	}
-	memory_bm_position_reset(bm);
-	pr_info("free pages cleared after restore\n");
-#endif /* PAGE_POISONING_ZERO */
 }
 
 /**