
[4/5] mm: split ET_DYN ASLR from mmap ASLR

Message ID 1425341988-1599-5-git-send-email-keescook@chromium.org (mailing list archive)
State New, archived

Commit Message

Kees Cook March 3, 2015, 12:19 a.m. UTC
This fixes the "offset2lib" weakness in ASLR for arm, arm64, mips,
powerpc, and x86. The problem is that if there is a leak of ASLR from
the executable (ET_DYN), it means a leak of shared library offset as
well (mmap), and vice versa. Further details and a PoC of this attack
are available here:
http://cybersecurity.upv.es/attacks/offset2lib/offset2lib.html
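
To make the weakness concrete, here is a small illustrative PIE program
(not part of this patch): it prints an address inside the executable and
one inside libc. Before this change the delta between the two is the same
on every run, so leaking either address reveals the other region; with
ET_DYN and mmap randomized separately, the delta changes on each execution.

/* offset2lib_demo.c - illustration only; build as a PIE:
 *   gcc -fPIE -pie offset2lib_demo.c -o offset2lib_demo
 * Run it a few times and compare the "delta" values.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* Address inside the executable (ET_DYN) image. */
        uintptr_t exe = (uintptr_t)&main;
        /* Address inside the libc mapping (resolved via the GOT in a PIE). */
        uintptr_t lib = (uintptr_t)&printf;

        printf("exe:   %#lx\n", (unsigned long)exe);
        printf("libc:  %#lx\n", (unsigned long)lib);
        printf("delta: %#lx\n", (unsigned long)(exe - lib));
        return 0;
}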

With this patch, a PIE linked executable (ET_DYN) has its own ASLR region:

$ ./show_mmaps_pie
54859ccd6000-54859ccd7000 r-xp  ...  /tmp/show_mmaps_pie
54859ced6000-54859ced7000 r--p  ...  /tmp/show_mmaps_pie
54859ced7000-54859ced8000 rw-p  ...  /tmp/show_mmaps_pie
7f75be764000-7f75be91f000 r-xp  ...  /lib/x86_64-linux-gnu/libc.so.6
7f75be91f000-7f75beb1f000 ---p  ...  /lib/x86_64-linux-gnu/libc.so.6
7f75beb1f000-7f75beb23000 r--p  ...  /lib/x86_64-linux-gnu/libc.so.6
7f75beb23000-7f75beb25000 rw-p  ...  /lib/x86_64-linux-gnu/libc.so.6
7f75beb25000-7f75beb2a000 rw-p  ...
7f75beb2a000-7f75beb4d000 r-xp  ...  /lib64/ld-linux-x86-64.so.2
7f75bed45000-7f75bed46000 rw-p  ...
7f75bed46000-7f75bed47000 r-xp  ...
7f75bed47000-7f75bed4c000 rw-p  ...
7f75bed4c000-7f75bed4d000 r--p  ...  /lib64/ld-linux-x86-64.so.2
7f75bed4d000-7f75bed4e000 rw-p  ...  /lib64/ld-linux-x86-64.so.2
7f75bed4e000-7f75bed4f000 rw-p  ...
7fffb3741000-7fffb3762000 rw-p  ...  [stack]
7fffb377b000-7fffb377d000 r--p  ...  [vvar]
7fffb377d000-7fffb377f000 r-xp  ...  [vdso]

The change is to add a call to the newly created arch_mmap_rnd() in the
ELF loader, so that ET_DYN ASLR is handled in a separate region from mmap
ASLR, as is already done on s390. This also removes
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE, which is no longer needed.
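
As a rough, self-contained illustration of the new placement, here is a
userspace mock of the calculation (PAGE_SIZE, ELF_ET_DYN_BASE and the
entropy width below are placeholders, not the real per-arch values; the
authoritative change is the fs/binfmt_elf.c hunk at the end of this mail):

/* mock_et_dyn_base.c - userspace mock of the new ET_DYN base calculation. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PAGE_SIZE        4096UL
#define ELF_PAGESTART(a) ((a) & ~(PAGE_SIZE - 1))
#define ELF_ET_DYN_BASE  0x555555554000UL   /* placeholder base address */

/* Stand-in for arch_mmap_rnd(): 28 bits of page-aligned entropy. */
static unsigned long mock_arch_mmap_rnd(void)
{
        return ((unsigned long)rand() & ((1UL << 28) - 1)) << 12;
}

int main(void)
{
        unsigned long vaddr = 0;   /* first PT_LOAD vaddr of a PIE is typically 0 */
        unsigned long load_bias;

        srand((unsigned)time(NULL));
        /* Same shape as the new loader code: base + own entropy, page-aligned. */
        load_bias = ELF_ET_DYN_BASE + mock_arch_mmap_rnd() - vaddr;
        load_bias = ELF_PAGESTART(load_bias);
        printf("ET_DYN load_bias: %#lx\n", load_bias);
        return 0;
}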

Reported-by: Hector Marco-Gisbert <hecmargi@upv.es>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/Kconfig            |  1 -
 arch/arm64/Kconfig          |  1 -
 arch/mips/Kconfig           |  1 -
 arch/powerpc/Kconfig        |  1 -
 arch/s390/include/asm/elf.h |  4 ++--
 arch/x86/Kconfig            |  1 -
 fs/Kconfig.binfmt           |  3 ---
 fs/binfmt_elf.c             | 17 ++---------------
 8 files changed, 4 insertions(+), 25 deletions(-)

Comments

Michael Ellerman March 4, 2015, 4:16 a.m. UTC | #1
On Mon, 2015-03-02 at 16:19 -0800, Kees Cook wrote:
> This fixes the "offset2lib" weakness in ASLR for arm, arm64, mips,
> powerpc, and x86. The problem is that if there is a leak of ASLR from
> the executable (ET_DYN), it means a leak of shared library offset as
> well (mmap), and vice versa. Further details and a PoC of this attack
> are available here:
> http://cybersecurity.upv.es/attacks/offset2lib/offset2lib.html
> 
> With this patch, a PIE linked executable (ET_DYN) has its own ASLR region:
> 
> $ ./show_mmaps_pie
> 54859ccd6000-54859ccd7000 r-xp  ...  /tmp/show_mmaps_pie
> 54859ced6000-54859ced7000 r--p  ...  /tmp/show_mmaps_pie
> 54859ced7000-54859ced8000 rw-p  ...  /tmp/show_mmaps_pie

Just to be clear, it's the fact that the above vmas are in a different
address range to those below that shows the patch is working, right?

> 7f75be764000-7f75be91f000 r-xp  ...  /lib/x86_64-linux-gnu/libc.so.6
> 7f75be91f000-7f75beb1f000 ---p  ...  /lib/x86_64-linux-gnu/libc.so.6


On powerpc I'm seeing:

# /bin/dash
# cat /proc/$$/maps
524e0000-52510000 r-xp 00000000 08:03 129814                             /bin/dash
52510000-52520000 rw-p 00020000 08:03 129814                             /bin/dash
10034f20000-10034f50000 rw-p 00000000 00:00 0                            [heap]
3fffaeaf0000-3fffaeca0000 r-xp 00000000 08:03 13529                      /lib/powerpc64le-linux-gnu/libc-2.19.so
3fffaeca0000-3fffaecb0000 rw-p 001a0000 08:03 13529                      /lib/powerpc64le-linux-gnu/libc-2.19.so
3fffaecc0000-3fffaecd0000 rw-p 00000000 00:00 0 
3fffaecd0000-3fffaecf0000 r-xp 00000000 00:00 0                          [vdso]
3fffaecf0000-3fffaed20000 r-xp 00000000 08:03 13539                      /lib/powerpc64le-linux-gnu/ld-2.19.so
3fffaed20000-3fffaed30000 rw-p 00020000 08:03 13539                      /lib/powerpc64le-linux-gnu/ld-2.19.so
3fffc7070000-3fffc70a0000 rw-p 00000000 00:00 0                          [stack]


Whereas previously the /bin/dash vmas were up at 3fff..

So looks good to me for powerpc.

Acked-by: Michael Ellerman <mpe@ellerman.id.au>

cheers



Kees Cook March 4, 2015, 9:13 p.m. UTC | #2
On Tue, Mar 3, 2015 at 8:16 PM, Michael Ellerman <mpe@ellerman.id.au> wrote:
> On Mon, 2015-03-02 at 16:19 -0800, Kees Cook wrote:
>> This fixes the "offset2lib" weakness in ASLR for arm, arm64, mips,
>> powerpc, and x86. The problem is that if there is a leak of ASLR from
>> the executable (ET_DYN), it means a leak of shared library offset as
>> well (mmap), and vice versa. Further details and a PoC of this attack
>> are available here:
>> http://cybersecurity.upv.es/attacks/offset2lib/offset2lib.html
>>
>> With this patch, a PIE linked executable (ET_DYN) has its own ASLR region:
>>
>> $ ./show_mmaps_pie
>> 54859ccd6000-54859ccd7000 r-xp  ...  /tmp/show_mmaps_pie
>> 54859ced6000-54859ced7000 r--p  ...  /tmp/show_mmaps_pie
>> 54859ced7000-54859ced8000 rw-p  ...  /tmp/show_mmaps_pie
>
> Just to be clear, it's the fact that the above vmas are in a different
> address range to those below that shows the patch is working, right?

That's correct, yes. I've called this out explicitly now in the 9/10
patch in v4.

>
>> 7f75be764000-7f75be91f000 r-xp  ...  /lib/x86_64-linux-gnu/libc.so.6
>> 7f75be91f000-7f75beb1f000 ---p  ...  /lib/x86_64-linux-gnu/libc.so.6
>
>
> On powerpc I'm seeing:
>
> # /bin/dash
> # cat /proc/$$/maps
> 524e0000-52510000 r-xp 00000000 08:03 129814                             /bin/dash
> 52510000-52520000 rw-p 00020000 08:03 129814                             /bin/dash
> 10034f20000-10034f50000 rw-p 00000000 00:00 0                            [heap]
> 3fffaeaf0000-3fffaeca0000 r-xp 00000000 08:03 13529                      /lib/powerpc64le-linux-gnu/libc-2.19.so
> 3fffaeca0000-3fffaecb0000 rw-p 001a0000 08:03 13529                      /lib/powerpc64le-linux-gnu/libc-2.19.so
> 3fffaecc0000-3fffaecd0000 rw-p 00000000 00:00 0
> 3fffaecd0000-3fffaecf0000 r-xp 00000000 00:00 0                          [vdso]
> 3fffaecf0000-3fffaed20000 r-xp 00000000 08:03 13539                      /lib/powerpc64le-linux-gnu/ld-2.19.so
> 3fffaed20000-3fffaed30000 rw-p 00020000 08:03 13539                      /lib/powerpc64le-linux-gnu/ld-2.19.so
> 3fffc7070000-3fffc70a0000 rw-p 00000000 00:00 0                          [stack]
>
>
> Whereas previously the /bin/dash vmas were up at 3fff..

Fantastic! Thanks very much for testing!

>
> So looks good to me for powerpc.
>
> Acked-by: Michael Ellerman <mpe@ellerman.id.au>

I had a question in the powerpc-specific change that may have gone unnoticed:

Can mmap ASLR be safely enabled in the legacy mmap case here? Other archs
use "mm->mmap_base = TASK_UNMAPPED_BASE + random_factor".
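
For context, this is the pattern being referred to, as a sketch modeled on
other architectures' arch_pick_mmap_layout() (helper names such as
mmap_is_legacy() and mmap_base() are illustrative here, not taken from the
powerpc code):

/* Sketch only: legacy vs. top-down mmap layout selection as done on
 * several architectures; the open question is whether powerpc's legacy
 * path can also add random_factor safely.
 */
void arch_pick_mmap_layout(struct mm_struct *mm)
{
        unsigned long random_factor = 0;

        if (current->flags & PF_RANDOMIZE)
                random_factor = arch_mmap_rnd();

        if (mmap_is_legacy()) {
                /* bottom-up: randomize the start of the mmap region */
                mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
                mm->get_unmapped_area = arch_get_unmapped_area;
        } else {
                /* top-down: randomize downward from the stack */
                mm->mmap_base = mmap_base(random_factor);
                mm->get_unmapped_area = arch_get_unmapped_area_topdown;
        }
}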

Separate from this series, do you happen to know if this improvement
can be made, or if the legacy mmap on powerpc can't handle this?

Thanks!

-Kees
Michael Ellerman March 4, 2015, 11:56 p.m. UTC | #3
On Wed, 2015-03-04 at 13:13 -0800, Kees Cook wrote:
> 
> I had a question in the powerpc-specific change that may have gone unnoticed:
> 
> Can mmap ASLR be safely enabled in the legacy mmap case here? Other archs
> use "mm->mmap_base = TASK_UNMAPPED_BASE + random_factor".
> 
> Separate from this series, do you happen to know if this improvement
> can be made, or if the legacy mmap on powerpc can't handle this?

Yeah I saw that. The short answer is I'm not sure.

I assume we have that distinction for some good reason, but whether we still
need it I don't know. I'll dig a bit and see if anyone can remember the details.

cheers


Russell King - ARM Linux March 9, 2015, 3:13 p.m. UTC | #4
On Mon, Mar 02, 2015 at 04:19:47PM -0800, Kees Cook wrote:
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 248d99cabaa8..e2f0ef9c6ee3 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1,7 +1,6 @@
>  config ARM
>  	bool
>  	default y
> -	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
>  	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
>  	select ARCH_HAS_ELF_RANDOMIZE
>  	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST

This doesn't mean much on its own...

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

Patch

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 248d99cabaa8..e2f0ef9c6ee3 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1,7 +1,6 @@ 
 config ARM
 	bool
 	default y
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5f469095e0e2..07e0fc7adc88 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1,6 +1,5 @@ 
 config ARM64
 	def_bool y
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 72ce5cece768..557c5f1772c1 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -23,7 +23,6 @@  config MIPS
 	select HAVE_KRETPROBES
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_SYSCALL_TRACEPOINTS
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select ARCH_HAS_ELF_RANDOMIZE
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES && 64BIT
 	select RTC_LIB if !MACH_LOONGSON
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 14fe1c411489..910fa4f9ad1e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -88,7 +88,6 @@  config PPC
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select BINFMT_ELF
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select ARCH_HAS_ELF_RANDOMIZE
 	select OF
 	select OF_EARLY_FLATTREE
diff --git a/arch/s390/include/asm/elf.h b/arch/s390/include/asm/elf.h
index 9ed68e7ee856..617f7fabdb0a 100644
--- a/arch/s390/include/asm/elf.h
+++ b/arch/s390/include/asm/elf.h
@@ -163,9 +163,9 @@  extern unsigned int vdso_enabled;
    the loader.  We need to make sure that it is out of the way of the program
    that it will "exec", and that there is sufficient room for the brk. 64-bit
    tasks are aligned to 4GB. */
-#define ELF_ET_DYN_BASE (arch_mmap_rnd() + (is_32bit_task() ? \
+#define ELF_ET_DYN_BASE	(is_32bit_task() ? \
 				(STACK_TOP / 3 * 2) : \
-				(STACK_TOP / 3 * 2) & ~((1UL << 32) - 1)))
+				(STACK_TOP / 3 * 2) & ~((1UL << 32) - 1))
 
 /* This yields a mask that user programs can use to figure out what
    instruction set this CPU supports. */
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9aa91727fbf8..328be0fab910 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -87,7 +87,6 @@  config X86
 	select HAVE_ARCH_KMEMCHECK
 	select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP
 	select HAVE_USER_RETURN_NOTIFIER
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
 	select ARCH_HAS_ELF_RANDOMIZE
 	select HAVE_ARCH_JUMP_LABEL
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
diff --git a/fs/Kconfig.binfmt b/fs/Kconfig.binfmt
index 270c48148f79..2d0cbbd14cfc 100644
--- a/fs/Kconfig.binfmt
+++ b/fs/Kconfig.binfmt
@@ -27,9 +27,6 @@  config COMPAT_BINFMT_ELF
 	bool
 	depends on COMPAT && BINFMT_ELF
 
-config ARCH_BINFMT_ELF_RANDOMIZE_PIE
-	bool
-
 config ARCH_BINFMT_ELF_STATE
 	bool
 
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index b1c5ef5d9322..203c2e6f9a25 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -910,21 +910,8 @@  static int load_elf_binary(struct linux_binprm *bprm)
 			 * default mmap base, as well as whatever program they
 			 * might try to exec.  This is because the brk will
 			 * follow the loader, and is not movable.  */
-#ifdef CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE
-			/* Memory randomization might have been switched off
-			 * in runtime via sysctl or explicit setting of
-			 * personality flags.
-			 * If that is the case, retain the original non-zero
-			 * load_bias value in order to establish proper
-			 * non-randomized mappings.
-			 */
-			if (current->flags & PF_RANDOMIZE)
-				load_bias = 0;
-			else
-				load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
-#else
-			load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
-#endif
+			load_bias = ELF_ET_DYN_BASE + arch_mmap_rnd() - vaddr;
+			load_bias = ELF_PAGESTART(load_bias);
 		}
 
 		error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,