| Message ID | 20170807201542.GA21271@beast (mailing list archive) |
| --- | --- |
| State | New, archived |
On Mon, Aug 07, 2017 at 01:15:42PM -0700, Kees Cook wrote:
> Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
> broke AddressSanitizer. This is a partial revert of:
>
> commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
> commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")
>
> The AddressSanitizer tool has hard-coded expectations about where
> executable mappings are loaded. The motivation for changing the PIE
> base in the above commits was to avoid the Stack-Clash CVEs that
> allowed executable mappings to get too close to heap and stack. This
> was mainly a problem on 32-bit, but the 64-bit bases were moved too,
> in an effort to proactively protect those systems (proofs of concept
> do exist that show 64-bit collisions, but other recent changes to fix
> stack accounting and setuid behaviors will minimize the impact).
>
> The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC
> base), so only the 64-bit PIE base needs to be reverted to let x86 and
> arm64 ASan binaries run again. Future changes to the 64-bit PIE base on
> these architectures can be made optional once a more dynamic method for
> dealing with AddressSanitizer is found. (e.g. always loading PIE into
> the mmap region for marked binaries.)
>
> Reported-by: Kostya Serebryany <kcc@google.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> arch/arm64/include/asm/elf.h | 4 ++--
> arch/x86/include/asm/elf.h | 4 ++--
> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
> index acae781f7359..3288c2b36731 100644
> --- a/arch/arm64/include/asm/elf.h
> +++ b/arch/arm64/include/asm/elf.h
> @@ -114,10 +114,10 @@
>
> /*
> * This is the base location for PIE (ET_DYN with INTERP) loads. On
> - * 64-bit, this is raised to 4GB to leave the entire 32-bit address
> + * 64-bit, this is above 4GB to leave the entire 32-bit address
> * space open for things that want to use the area for 32-bit pointers.
> */
> -#define ELF_ET_DYN_BASE 0x100000000UL
> +#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)
>
> #ifndef __ASSEMBLY__

Acked-by: Will Deacon <will.deacon@arm.com>

I assume this is going via akpm like the original commit?

Will
On Tue, Aug 8, 2017 at 2:34 AM, Will Deacon <will.deacon@arm.com> wrote:
> On Mon, Aug 07, 2017 at 01:15:42PM -0700, Kees Cook wrote:
>> Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
>> broke AddressSanitizer. This is a partial revert of:
>>
>> commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
>> commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")
>>
>> The AddressSanitizer tool has hard-coded expectations about where
>> executable mappings are loaded. The motivation for changing the PIE
>> base in the above commits was to avoid the Stack-Clash CVEs that
>> allowed executable mappings to get too close to heap and stack. This
>> was mainly a problem on 32-bit, but the 64-bit bases were moved too,
>> in an effort to proactively protect those systems (proofs of concept
>> do exist that show 64-bit collisions, but other recent changes to fix
>> stack accounting and setuid behaviors will minimize the impact).
>>
>> The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC
>> base), so only the 64-bit PIE base needs to be reverted to let x86 and
>> arm64 ASan binaries run again. Future changes to the 64-bit PIE base on
>> these architectures can be made optional once a more dynamic method for
>> dealing with AddressSanitizer is found. (e.g. always loading PIE into
>> the mmap region for marked binaries.)
>>
>> Reported-by: Kostya Serebryany <kcc@google.com>
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>> arch/arm64/include/asm/elf.h | 4 ++--
>> arch/x86/include/asm/elf.h | 4 ++--
>> 2 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
>> index acae781f7359..3288c2b36731 100644
>> --- a/arch/arm64/include/asm/elf.h
>> +++ b/arch/arm64/include/asm/elf.h
>> @@ -114,10 +114,10 @@
>>
>> /*
>> * This is the base location for PIE (ET_DYN with INTERP) loads. On
>> - * 64-bit, this is raised to 4GB to leave the entire 32-bit address
>> + * 64-bit, this is above 4GB to leave the entire 32-bit address
>> * space open for things that want to use the area for 32-bit pointers.
>> */
>> -#define ELF_ET_DYN_BASE 0x100000000UL
>> +#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)
>>
>> #ifndef __ASSEMBLY__
>
> Acked-by: Will Deacon <will.deacon@arm.com>
>
> I assume this is going via akpm like the original commit?

That's my expectation, yup. Thanks for the Ack!

-Kees
ping to akpm to please pick this up for -mm...

On Tue, Aug 8, 2017 at 7:00 PM, Kees Cook <keescook@chromium.org> wrote:
> On Tue, Aug 8, 2017 at 2:34 AM, Will Deacon <will.deacon@arm.com> wrote:
>> On Mon, Aug 07, 2017 at 01:15:42PM -0700, Kees Cook wrote:
>>> Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
>>> broke AddressSanitizer. This is a partial revert of:
>>>
>>> commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
>>> commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")
>>>
>>> The AddressSanitizer tool has hard-coded expectations about where
>>> executable mappings are loaded. The motivation for changing the PIE
>>> base in the above commits was to avoid the Stack-Clash CVEs that
>>> allowed executable mappings to get too close to heap and stack. This
>>> was mainly a problem on 32-bit, but the 64-bit bases were moved too,
>>> in an effort to proactively protect those systems (proofs of concept
>>> do exist that show 64-bit collisions, but other recent changes to fix
>>> stack accounting and setuid behaviors will minimize the impact).
>>>
>>> The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC
>>> base), so only the 64-bit PIE base needs to be reverted to let x86 and
>>> arm64 ASan binaries run again. Future changes to the 64-bit PIE base on
>>> these architectures can be made optional once a more dynamic method for
>>> dealing with AddressSanitizer is found. (e.g. always loading PIE into
>>> the mmap region for marked binaries.)
>>>
>>> Reported-by: Kostya Serebryany <kcc@google.com>
>>> Cc: stable@vger.kernel.org
>>> Signed-off-by: Kees Cook <keescook@chromium.org>
>>> ---
>>> arch/arm64/include/asm/elf.h | 4 ++--
>>> arch/x86/include/asm/elf.h | 4 ++--
>>> 2 files changed, 4 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
>>> index acae781f7359..3288c2b36731 100644
>>> --- a/arch/arm64/include/asm/elf.h
>>> +++ b/arch/arm64/include/asm/elf.h
>>> @@ -114,10 +114,10 @@
>>>
>>> /*
>>> * This is the base location for PIE (ET_DYN with INTERP) loads. On
>>> - * 64-bit, this is raised to 4GB to leave the entire 32-bit address
>>> + * 64-bit, this is above 4GB to leave the entire 32-bit address
>>> * space open for things that want to use the area for 32-bit pointers.
>>> */
>>> -#define ELF_ET_DYN_BASE 0x100000000UL
>>> +#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)
>>>
>>> #ifndef __ASSEMBLY__
>>
>> Acked-by: Will Deacon <will.deacon@arm.com>
>>
>> I assume this is going via akpm like the original commit?
>
> That's my expectation, yup. Thanks for the Ack!
>
> -Kees
>
> --
> Kees Cook
> Pixel Security
*extreme thread[1] necromancy*

On Mon, Aug 07, 2017 at 01:15:42PM -0700, Kees Cook wrote:
> Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
> broke AddressSanitizer. This is a partial revert of:
>
> commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
> commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")
>
> The AddressSanitizer tool has hard-coded expectations about where
> executable mappings are loaded. The motivation for changing the PIE
> base in the above commits was to avoid the Stack-Clash CVEs that
> allowed executable mappings to get too close to heap and stack. This
> was mainly a problem on 32-bit, but the 64-bit bases were moved too,
> in an effort to proactively protect those systems (proofs of concept
> do exist that show 64-bit collisions, but other recent changes to fix
> stack accounting and setuid behaviors will minimize the impact).

I happened to be looking at this again today, and wondered where things
stood. It seems like ASan's mappings are documented here:
https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#64-bit

This implies that it would be safe to move the ELF_ET_DYN_BASE from
0x555555554000 down to 0x200000000000, since the shadow map ends at
0x10007fff7fff. (Well, anything above there would work, I was just
picking a "round" number above it. We could just as well use
0x100080000000, I think.)

Is this correct? I'd like to open up some more room between mmap and
stack...

Thanks!

-Kees

[1] https://lore.kernel.org/lkml/20170807201542.GA21271@beast/
>
> The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC
> base), so only the 64-bit PIE base needs to be reverted to let x86 and
> arm64 ASan binaries run again. Future changes to the 64-bit PIE base on
> these architectures can be made optional once a more dynamic method for
> dealing with AddressSanitizer is found. (e.g. always loading PIE into
> the mmap region for marked binaries.)
>
> Reported-by: Kostya Serebryany <kcc@google.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> arch/arm64/include/asm/elf.h | 4 ++--
> arch/x86/include/asm/elf.h | 4 ++--
> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
> index acae781f7359..3288c2b36731 100644
> --- a/arch/arm64/include/asm/elf.h
> +++ b/arch/arm64/include/asm/elf.h
> @@ -114,10 +114,10 @@
>
> /*
> * This is the base location for PIE (ET_DYN with INTERP) loads. On
> - * 64-bit, this is raised to 4GB to leave the entire 32-bit address
> + * 64-bit, this is above 4GB to leave the entire 32-bit address
> * space open for things that want to use the area for 32-bit pointers.
> */
> -#define ELF_ET_DYN_BASE 0x100000000UL
> +#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)
>
> #ifndef __ASSEMBLY__
>
> diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
> index 1c18d83d3f09..9aeb91935ce0 100644
> --- a/arch/x86/include/asm/elf.h
> +++ b/arch/x86/include/asm/elf.h
> @@ -247,11 +247,11 @@ extern int force_personality32;
>
> /*
> * This is the base location for PIE (ET_DYN with INTERP) loads. On
> - * 64-bit, this is raised to 4GB to leave the entire 32-bit address
> + * 64-bit, this is above 4GB to leave the entire 32-bit address
> * space open for things that want to use the area for 32-bit pointers.
> */
> #define ELF_ET_DYN_BASE (mmap_is_ia32() ? 0x000400000UL : \
> - 0x100000000UL)
> + (TASK_SIZE / 3 * 2))
>
> /* This yields a mask that user programs can use to figure out what
> instruction set this CPU supports. This could be done in user space,
> --
> 2.7.4
>
>
> --
> Kees Cook
> Pixel Security
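A quick way to sanity-check the numbers in the question above is a few lines of C. This is only a sketch based on the single boundary quoted in this thread (the ASan x86-64 shadow region ending at 0x10007fff7fff); the `clears_asan_shadow()` helper and the candidate list are illustrative, not kernel or compiler-rt code, and they say nothing about the other sanitizers discussed below.

```c
#include <stdio.h>
#include <stdint.h>

/* Boundary quoted in the thread: the ASan 64-bit shadow/gap region is
 * said to end at 0x10007fff7fff, so a PIE base strictly above that
 * should stay clear of ASan's reserved low range (ASan only). */
#define ASAN_SHADOW_END 0x10007fff7fffULL

/* Hypothetical helper: does a candidate ELF_ET_DYN_BASE clear the shadow? */
static int clears_asan_shadow(uint64_t base)
{
	return base > ASAN_SHADOW_END;
}

int main(void)
{
	const uint64_t candidates[] = {
		0x555555554000ULL,  /* current 64-bit PIE base */
		0x200000000000ULL,  /* "round" number proposed above */
		0x100080000000ULL,  /* tighter alternative also mentioned */
	};

	for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++)
		printf("0x%012llx -> %s\n",
		       (unsigned long long)candidates[i],
		       clears_asan_shadow(candidates[i]) ?
				"above ASan shadow" : "inside/below shadow");
	return 0;
}
```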
On Sat, 17 Feb 2024 at 07:50, Kees Cook <keescook@chromium.org> wrote:
>
> *extreme thread[1] necromancy*
>
> On Mon, Aug 07, 2017 at 01:15:42PM -0700, Kees Cook wrote:
> > Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
> > broke AddressSanitizer. This is a partial revert of:
> >
> > commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
> > commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")
> >
> > The AddressSanitizer tool has hard-coded expectations about where
> > executable mappings are loaded. The motivation for changing the PIE
> > base in the above commits was to avoid the Stack-Clash CVEs that
> > allowed executable mappings to get too close to heap and stack. This
> > was mainly a problem on 32-bit, but the 64-bit bases were moved too,
> > in an effort to proactively protect those systems (proofs of concept
> > do exist that show 64-bit collisions, but other recent changes to fix
> > stack accounting and setuid behaviors will minimize the impact).
>
> I happened to be looking at this again today, and wondered where things
> stood. It seems like ASan's mappings are documented here:
> https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#64-bit
>
> This implies that it would be safe to move the ELF_ET_DYN_BASE from
> 0x555555554000 down to 0x200000000000, since the shadow map ends at
> 0x10007fff7fff. (Well, anything above there would work, I was just
> picking a "round" number above it. We could just as well use
> 0x100080000000, I think.)
>
> Is this correct? I'd like to open up some more room between mmap and
> stack...

Note that there is also TSAN and MSAN with their own mappings.
These are also different per-arch, e.g. TSAN/Linux/x86_64:
https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L48-L58
Search "linux/" in that file for other arches, e.g.:
https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L156-L165
https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L187-L196
https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L218-L227
https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L252-L263

And MSAN mappings:
https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/msan/msan.h#L44-L61

> Thanks!
>
> -Kees
>
> [1] https://lore.kernel.org/lkml/20170807201542.GA21271@beast/
> >
> > The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC
> > base), so only the 64-bit PIE base needs to be reverted to let x86 and
> > arm64 ASan binaries run again. Future changes to the 64-bit PIE base on
> > these architectures can be made optional once a more dynamic method for
> > dealing with AddressSanitizer is found. (e.g. always loading PIE into
> > the mmap region for marked binaries.)
> >
> > Reported-by: Kostya Serebryany <kcc@google.com>
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Kees Cook <keescook@chromium.org>
> > ---
> > arch/arm64/include/asm/elf.h | 4 ++--
> > arch/x86/include/asm/elf.h | 4 ++--
> > 2 files changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
> > index acae781f7359..3288c2b36731 100644
> > --- a/arch/arm64/include/asm/elf.h
> > +++ b/arch/arm64/include/asm/elf.h
> > @@ -114,10 +114,10 @@
> >
> > /*
> > * This is the base location for PIE (ET_DYN with INTERP) loads. On
> > - * 64-bit, this is raised to 4GB to leave the entire 32-bit address
> > + * 64-bit, this is above 4GB to leave the entire 32-bit address
> > * space open for things that want to use the area for 32-bit pointers.
> > */
> > -#define ELF_ET_DYN_BASE 0x100000000UL
> > +#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)
> >
> > #ifndef __ASSEMBLY__
> >
> > diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
> > index 1c18d83d3f09..9aeb91935ce0 100644
> > --- a/arch/x86/include/asm/elf.h
> > +++ b/arch/x86/include/asm/elf.h
> > @@ -247,11 +247,11 @@ extern int force_personality32;
> >
> > /*
> > * This is the base location for PIE (ET_DYN with INTERP) loads. On
> > - * 64-bit, this is raised to 4GB to leave the entire 32-bit address
> > + * 64-bit, this is above 4GB to leave the entire 32-bit address
> > * space open for things that want to use the area for 32-bit pointers.
> > */
> > #define ELF_ET_DYN_BASE (mmap_is_ia32() ? 0x000400000UL : \
> > - 0x100000000UL)
> > + (TASK_SIZE / 3 * 2))
> >
> > /* This yields a mask that user programs can use to figure out what
> > instruction set this CPU supports. This could be done in user space,
> > --
> > 2.7.4
> >
> >
> > --
> > Kees Cook
> > Pixel Security
>
> --
> Kees Cook
On Tue, Feb 20, 2024 at 08:47:09AM +0100, Dmitry Vyukov wrote:
> On Sat, 17 Feb 2024 at 07:50, Kees Cook <keescook@chromium.org> wrote:
> >
> > *extreme thread[1] necromancy*
> >
> > On Mon, Aug 07, 2017 at 01:15:42PM -0700, Kees Cook wrote:
> > > Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
> > > broke AddressSanitizer. This is a partial revert of:
> > >
> > > commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
> > > commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")
> > >
> > > The AddressSanitizer tool has hard-coded expectations about where
> > > executable mappings are loaded. The motivation for changing the PIE
> > > base in the above commits was to avoid the Stack-Clash CVEs that
> > > allowed executable mappings to get too close to heap and stack. This
> > > was mainly a problem on 32-bit, but the 64-bit bases were moved too,
> > > in an effort to proactively protect those systems (proofs of concept
> > > do exist that show 64-bit collisions, but other recent changes to fix
> > > stack accounting and setuid behaviors will minimize the impact).
> >
> > I happened to be looking at this again today, and wondered where things
> > stood. It seems like ASan's mappings are documented here:
> > https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#64-bit
> >
> > This implies that it would be safe to move the ELF_ET_DYN_BASE from
> > 0x555555554000 down to 0x200000000000, since the shadow map ends at
> > 0x10007fff7fff. (Well, anything above there would work, I was just
> > picking a "round" number above it. We could just as well use
> > 0x100080000000, I think.)
> >
> > Is this correct? I'd like to open up some more room between mmap and
> > stack...

Thanks for the details!

> Note that there is also TSAN and MSAN with their own mappings.
> These are also different per-arch, e.g. TSAN/Linux/x86_64:
> https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L48-L58
> Search "linux/" in that file for other arches, e.g.:
> https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L156-L165
> https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L187-L196
> https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L218-L227
> https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/tsan/rtl/tsan_platform.h#L252-L263

Depending on VA size, these are effectively all below 0x3800 0000 0000.

> And MSAN mappings:
> https://github.com/llvm/llvm-project/blob/d2a26a7bd5fc7cc5752337b7f4f999642feb37dc/compiler-rt/lib/msan/msan.h#L44-L61

These are all below 0x1000 0000 0000. So there probably isn't much
benefit in reducing the PIE program position below the current 0x55....

Okay, thanks!
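To illustrate the conclusion above, here is a small sketch comparing the per-tool ceilings as they are summarized in this thread (ASan shadow end 0x10007fff7fff, TSAN ranges roughly below 0x3800 0000 0000, MSAN below 0x1000 0000 0000) against the current 0x555555554000 base. The numbers are taken from the messages above rather than re-derived from compiler-rt, the `struct tool_ceiling` type is my own, and the printed headroom is simply how far the base could drop before reaching each tool's cited ceiling.

```c
#include <stdio.h>
#include <stdint.h>

/* Approximate per-tool ceilings as summarized in the reply above. */
struct tool_ceiling {
	const char *tool;
	uint64_t ceiling;
};

int main(void)
{
	const struct tool_ceiling tools[] = {
		{ "ASan shadow end", 0x10007fff7fffULL },
		{ "TSAN (approx.)",  0x380000000000ULL },
		{ "MSAN (approx.)",  0x100000000000ULL },
	};
	const uint64_t current_base = 0x555555554000ULL; /* current PIE base */

	for (size_t i = 0; i < sizeof(tools) / sizeof(tools[0]); i++) {
		/* Distance between the current PIE base and this tool's
		 * cited upper bound. */
		uint64_t headroom = current_base - tools[i].ceiling;

		printf("%-16s ceiling 0x%012llx, gap below current base: %llu GiB\n",
		       tools[i].tool,
		       (unsigned long long)tools[i].ceiling,
		       (unsigned long long)(headroom >> 30));
	}
	return 0;
}
```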
diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index acae781f7359..3288c2b36731 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -114,10 +114,10 @@

 /*
  * This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
-#define ELF_ET_DYN_BASE		0x100000000UL
+#define ELF_ET_DYN_BASE		(2 * TASK_SIZE_64 / 3)

 #ifndef __ASSEMBLY__

diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 1c18d83d3f09..9aeb91935ce0 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -247,11 +247,11 @@ extern int force_personality32;

 /*
  * This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
 #define ELF_ET_DYN_BASE		(mmap_is_ia32() ? 0x000400000UL : \
-						  0x100000000UL)
+						  (TASK_SIZE / 3 * 2))

 /* This yields a mask that user programs can use to figure out what
    instruction set this CPU supports. This could be done in user space,
Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
broke AddressSanitizer. This is a partial revert of:

commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")

The AddressSanitizer tool has hard-coded expectations about where
executable mappings are loaded. The motivation for changing the PIE
base in the above commits was to avoid the Stack-Clash CVEs that
allowed executable mappings to get too close to heap and stack. This
was mainly a problem on 32-bit, but the 64-bit bases were moved too,
in an effort to proactively protect those systems (proofs of concept
do exist that show 64-bit collisions, but other recent changes to fix
stack accounting and setuid behaviors will minimize the impact).

The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC
base), so only the 64-bit PIE base needs to be reverted to let x86 and
arm64 ASan binaries run again. Future changes to the 64-bit PIE base on
these architectures can be made optional once a more dynamic method for
dealing with AddressSanitizer is found. (e.g. always loading PIE into
the mmap region for marked binaries.)

Reported-by: Kostya Serebryany <kcc@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm64/include/asm/elf.h | 4 ++--
 arch/x86/include/asm/elf.h   | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
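For readers wondering where the 0x555555554000 value in the first sentence comes from, the following sketch evaluates the expression this patch restores. It assumes x86_64 with 4-level paging (user limit TASK_SIZE = 0x00007ffffffff000) and 4 KiB pages, and mimics the page rounding the ELF loader applies via ELF_PAGESTART; those assumptions are mine, not part of the patch.

```c
#include <stdio.h>
#include <stdint.h>

/* Assumptions for this sketch (not taken from the patch itself):
 * x86_64 with 4-level paging, where the user address space limit is
 * TASK_SIZE = 0x00007ffffffff000, and a 4 KiB page size. */
#define SKETCH_TASK_SIZE 0x00007ffffffff000ULL
#define SKETCH_PAGE_SIZE 0x1000ULL

int main(void)
{
	/* The expression being restored by this revert. */
	uint64_t base = SKETCH_TASK_SIZE / 3 * 2;

	/* binfmt_elf rounds the load address down to a page boundary,
	 * which yields the well-known value. */
	uint64_t aligned = base & ~(SKETCH_PAGE_SIZE - 1);

	printf("TASK_SIZE / 3 * 2 = 0x%llx\n", (unsigned long long)base);
	printf("page-aligned base = 0x%llx\n", (unsigned long long)aligned);
	/* Expected output:
	 *   TASK_SIZE / 3 * 2 = 0x555555554aaa
	 *   page-aligned base = 0x555555554000
	 */
	return 0;
}
```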