
[1/2] binfmt_elf: fix calculations for bss padding

Message ID: 1468014494-25291-2-git-send-email-keescook@chromium.org (mailing list archive)
State: New, archived

Commit Message

Kees Cook July 8, 2016, 9:48 p.m. UTC
A double bug exists in the bss calculation code: an overflow can
happen in the "last_bss - elf_bss" calculation, but vm_brk() internally
page-aligns that length, which wraps the overflowed value back around
to a safe size. We shouldn't depend on these two bugs staying in sync,
so this cleans up the bss padding handling to avoid the overflow.
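
To make the interaction concrete, here is a small, hypothetical userspace
demonstration of the arithmetic (not the kernel code itself; it assumes
ELF_MIN_ALIGN == PAGE_SIZE == 0x1000 and made-up addresses):

#include <stdio.h>

#define ELF_MIN_ALIGN		0x1000UL
#define ELF_PAGESTART(v)	((v) & ~(ELF_MIN_ALIGN - 1))
#define PAGE_ALIGN(v)		(((v) + ELF_MIN_ALIGN - 1) & ~(ELF_MIN_ALIGN - 1))

int main(void)
{
	/* Hypothetical: file data ends at 0x2100, memory image at 0x2200. */
	unsigned long elf_bss = 0x2100;
	unsigned long last_bss = 0x2200;
	unsigned long len;

	/* Old code: last_bss > elf_bss, so round elf_bss up to an ELF page. */
	elf_bss = ELF_PAGESTART(elf_bss + ELF_MIN_ALIGN - 1);	/* 0x3000 */

	/* Bug 1: the length underflows to a huge unsigned value... */
	len = last_bss - elf_bss;
	printf("len passed to vm_brk(): %#lx\n", len);

	/* Bug 2: ...which the page-alignment inside vm_brk() wraps to 0. */
	printf("after PAGE_ALIGN():     %#lx\n", PAGE_ALIGN(len));
	return 0;
}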

This moves the bss padzero() before the last_bss > elf_bss case, since
the zero-filling of the ELF_PAGE should have nothing to do with the
relationship of last_bss and elf_bss: any trailing portion should be
zeroed, and a zero size is already handled by padzero().
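
For reference, padzero() only zeroes the fractional tail of the last
file-backed ELF page and does nothing when elf_bss is already
page-aligned; it looks roughly like this in fs/binfmt_elf.c (quoted as a
sketch, not a definitive copy):

static int padzero(unsigned long elf_bss)
{
	unsigned long nbyte;

	nbyte = ELF_PAGEOFFSET(elf_bss);
	if (nbyte) {
		nbyte = ELF_MIN_ALIGN - nbyte;
		/* Zero from elf_bss up to the next ELF page boundary. */
		if (clear_user((void __user *)elf_bss, nbyte))
			return -EFAULT;
	}
	return 0;
}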

Then it handles the math on elf_bss vs last_bss correctly. These both
need to be ELF_PAGE aligned for the comparison to be correct, since
that's the expected granularity of the mappings. Since elf_bss was
already padded up to its alignment boundary by padzero(), the "start"
of the new vm_brk() should be moved forward as in the original code.
However, since the "end" of the vm_brk() area will already be
PAGE_ALIGNed inside vm_brk(), last_bss should be aligned here
explicitly rather than leaving that hidden as a side-effect.
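
As a sketch of the new flow, annotating the hunk below with the same
made-up addresses as in the example above (elf_bss = 0x2100,
last_bss = 0x2200):

	if (padzero(elf_bss)) {			/* zeroes 0x2100..0x2fff */
		...
	}
	elf_bss = ELF_PAGEALIGN(elf_bss);	/* 0x3000 */
	last_bss = ELF_PAGEALIGN(last_bss);	/* 0x3000 */
	if (last_bss > elf_bss)			/* false: nothing left to map */
		error = vm_brk(elf_bss, last_bss - elf_bss);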

Additionally, this makes a cosmetic change to the initial last_bss
calculation so it's easier to compare with the load_addr calculation
above it (i.e. the only difference is p_filesz vs p_memsz).

Reported-by: Hector Marco-Gisbert <hecmargi@upv.es>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 fs/binfmt_elf.c | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

Comments

Kees Cook July 12, 2016, 10:32 p.m. UTC | #1
On Fri, Jul 8, 2016 at 5:48 PM, Kees Cook <keescook@chromium.org> wrote:
> A double bug exists in the bss calculation code: an overflow can
> happen in the "last_bss - elf_bss" calculation, but vm_brk() internally
> page-aligns that length, which wraps the overflowed value back around
> to a safe size. We shouldn't depend on these two bugs staying in sync,
> so this cleans up the bss padding handling to avoid the overflow.
>
> This moves the bss padzero() before the last_bss > elf_bss case, since
> the zero-filling of the ELF_PAGE should have nothing to do with the
> relationship of last_bss and elf_bss: any trailing portion should be
> zeroed, and a zero size is already handled by padzero().
>
> Then it handles the math on elf_bss vs last_bss correctly. These both
> need to be ELF_PAGE aligned for the comparison to be correct, since
> that's the expected granularity of the mappings. Since elf_bss was
> already padded up to its alignment boundary by padzero(), the "start"
> of the new vm_brk() should be moved forward as in the original code.
> However, since the "end" of the vm_brk() area will already be
> PAGE_ALIGNed inside vm_brk(), last_bss should be aligned here
> explicitly rather than leaving that hidden as a side-effect.
>
> Additionally, this makes a cosmetic change to the initial last_bss
> calculation so it's easier to compare with the load_addr calculation
> above it (i.e. the only difference is p_filesz vs p_memsz).
>
> Reported-by: Hector Marco-Gisbert <hecmargi@upv.es>
> Signed-off-by: Kees Cook <keescook@chromium.org>

Andrew or Al, can you pick this up for -next? It doesn't depend on the
do_brk() fix (patch 2/2)...

-Kees

> ---
>  fs/binfmt_elf.c | 34 ++++++++++++++++++----------------
>  1 file changed, 18 insertions(+), 16 deletions(-)
>
> diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
> index e158b22ef32f..fe948933bcc5 100644
> --- a/fs/binfmt_elf.c
> +++ b/fs/binfmt_elf.c
> @@ -605,28 +605,30 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
>                          * Do the same thing for the memory mapping - between
>                          * elf_bss and last_bss is the bss section.
>                          */
> -                       k = load_addr + eppnt->p_memsz + eppnt->p_vaddr;
> +                       k = load_addr + eppnt->p_vaddr + eppnt->p_memsz;
>                         if (k > last_bss)
>                                 last_bss = k;
>                 }
>         }
>
> +       /*
> +        * Now fill out the bss section: first pad the last page from
> +        * the file up to the page boundary, and zero it from elf_bss
> +        * up to the end of the page.
> +        */
> +       if (padzero(elf_bss)) {
> +               error = -EFAULT;
> +               goto out;
> +       }
> +       /*
> +        * Next, align both the file and mem bss up to the page size,
> +        * since this is where elf_bss was just zeroed up to, and where
> +        * last_bss will end after the vm_brk() below.
> +        */
> +       elf_bss = ELF_PAGEALIGN(elf_bss);
> +       last_bss = ELF_PAGEALIGN(last_bss);
> +       /* Finally, if there is still more bss to allocate, do it. */
>         if (last_bss > elf_bss) {
> -               /*
> -                * Now fill out the bss section.  First pad the last page up
> -                * to the page boundary, and then perform a mmap to make sure
> -                * that there are zero-mapped pages up to and including the
> -                * last bss page.
> -                */
> -               if (padzero(elf_bss)) {
> -                       error = -EFAULT;
> -                       goto out;
> -               }
> -
> -               /* What we have mapped so far */
> -               elf_bss = ELF_PAGESTART(elf_bss + ELF_MIN_ALIGN - 1);
> -
> -               /* Map the last of the bss segment */
>                 error = vm_brk(elf_bss, last_bss - elf_bss);
>                 if (error)
>                         goto out;
> --
> 2.7.4
>

Patch

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index e158b22ef32f..fe948933bcc5 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -605,28 +605,30 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
 			 * Do the same thing for the memory mapping - between
 			 * elf_bss and last_bss is the bss section.
 			 */
-			k = load_addr + eppnt->p_memsz + eppnt->p_vaddr;
+			k = load_addr + eppnt->p_vaddr + eppnt->p_memsz;
 			if (k > last_bss)
 				last_bss = k;
 		}
 	}
 
+	/*
+	 * Now fill out the bss section: first pad the last page from
+	 * the file up to the page boundary, and zero it from elf_bss
+	 * up to the end of the page.
+	 */
+	if (padzero(elf_bss)) {
+		error = -EFAULT;
+		goto out;
+	}
+	/*
+	 * Next, align both the file and mem bss up to the page size,
+	 * since this is where elf_bss was just zeroed up to, and where
+	 * last_bss will end after the vm_brk() below.
+	 */
+	elf_bss = ELF_PAGEALIGN(elf_bss);
+	last_bss = ELF_PAGEALIGN(last_bss);
+	/* Finally, if there is still more bss to allocate, do it. */
 	if (last_bss > elf_bss) {
-		/*
-		 * Now fill out the bss section.  First pad the last page up
-		 * to the page boundary, and then perform a mmap to make sure
-		 * that there are zero-mapped pages up to and including the
-		 * last bss page.
-		 */
-		if (padzero(elf_bss)) {
-			error = -EFAULT;
-			goto out;
-		}
-
-		/* What we have mapped so far */
-		elf_bss = ELF_PAGESTART(elf_bss + ELF_MIN_ALIGN - 1);
-
-		/* Map the last of the bss segment */
 		error = vm_brk(elf_bss, last_bss - elf_bss);
 		if (error)
 			goto out;