add the option of fortified string.h functions

Message ID 20170505103839.GB699@leverpostej
State New

Commit Message

Mark Rutland May 5, 2017, 10:38 a.m. UTC
On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
> On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> > On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:

> > > ... with an EFI stub fortify_panic() hacked in, I can build an arm64
> > > kernel with this applied. It dies at some point after exiting EFI
> > > boot services; I don't know whether it made it out of the stub and
> > > into the kernel proper.
> > 
> > Could start with #define __NO_FORTIFY above the #include sections there
> > instead (or -D__NO_FORTIFY as a compiler flag), which will skip
> > fortifying those for now.
> 
> Neat. Given there are a few files, doing the latter for the stub is the
> simplest option.
> 
> > I'm successfully using this on a non-EFI ARM64 3.18 LTS kernel, so it
> > should be close to working on other systems (but not necessarily with
> > messy drivers). The x86 EFI workaround works.
> 
> FWIW, I've been playing atop of next-20170504, with a tonne of other
> debug options enabled (including KASAN_INLINE).
> 
> From a quick look with a JTAG debugger, the CPU got out of the stub and
> into the kernel. It looks like it's dying initialising KASAN, where the
> vectors appear to have been corrupted.

From a walk up the call chain, I saw mm/kasan/kasan.c's memcpy was being
called recursively. Somehow the fortified memcpy() instrumentation
results in kasan's memcpy() calling itself rather than __memcpy().

The resulting stack overflow ends up clobbering the vectors (and
everything else), as this is happening early at boot when everything is
mapped RW.

That can be avoided with:

---->8----
---->8----

... though it's worrying that __memcpy() is overridden. I think we need
to be more careful with the way we instrument the string functions.

FWIW, with that, and the previous bits, I can boot a next-20170504
kernel with this applied.

However, I get a KASAN splat from the SLUB init code, even though that's
deliberately not instrumented by KASAN:

[    0.000000] ==================================================================
[    0.000000] BUG: KASAN: slab-out-of-bounds in kmem_cache_alloc+0x2ec/0x438
[    0.000000] Write of size 352 at addr ffff800936802000 by task swapper/0
[    0.000000] 
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.11.0-next-20170504-00002-g760cfdb-dirty #15
[    0.000000] Hardware name: ARM Juno development board (r1) (DT)
[    0.000000] Call trace:
[    0.000000] [<ffff2000080949c8>] dump_backtrace+0x0/0x538
[    0.000000] [<ffff2000080951a0>] show_stack+0x20/0x30
[    0.000000] [<ffff200008c125a0>] dump_stack+0x120/0x188
[    0.000000] [<ffff20000857ac04>] print_address_description+0x10c/0x380
[    0.000000] [<ffff20000857b354>] kasan_report+0x12c/0x3b8
[    0.000000] [<ffff200008579d54>] check_memory_region+0x144/0x1a0
[    0.000000] [<ffff20000857a1f4>] memset+0x2c/0x50
[    0.000000] [<ffff2000085730bc>] kmem_cache_alloc+0x2ec/0x438
[    0.000000] [<ffff20000a937528>] bootstrap+0x34/0x28c
[    0.000000] [<ffff20000a937804>] kmem_cache_init+0x84/0x118
[    0.000000] [<ffff20000a9014bc>] start_kernel+0x2f8/0x644
[    0.000000] [<ffff20000a9001e8>] __primary_switched+0x6c/0x74
[    0.000000] 
[    0.000000] Allocated by task 0:
[    0.000000] (stack is not available)
[    0.000000] 
[    0.000000] Freed by task 0:
[    0.000000] (stack is not available)
[    0.000000] 
[    0.000000] The buggy address belongs to the object at ffff800936802000
[    0.000000]  which belongs to the cache kmem_cache of size 352
[    0.000000] The buggy address is located 0 bytes inside of
[    0.000000]  352-byte region [ffff800936802000, ffff800936802160)
[    0.000000] The buggy address belongs to the page:
[    0.000000] page:ffff7e0024da0080 count:1 mapcount:0 mapping:          (null) index:0x0 compound_mapcount: 0
[    0.000000] flags: 0x1fffc00000008100(slab|head)
[    0.000000] raw: 1fffc00000008100 0000000000000000 0000000000000000 0000000180100010
[    0.000000] raw: dead000000000100 dead000000000200 ffff20000aa2c068 0000000000000000
[    0.000000] page dumped because: kasan: bad access detected
[    0.000000] 
[    0.000000] Memory state around the buggy address:
[    0.000000]  ffff800936801f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[    0.000000]  ffff800936801f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[    0.000000] >ffff800936802000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[    0.000000]                    ^
[    0.000000]  ffff800936802080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[    0.000000]  ffff800936802100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[    0.000000] ==================================================================

Thanks,
Mark.

Comments

Daniel Micay May 5, 2017, 5:38 p.m. UTC | #1
On Fri, 2017-05-05 at 11:38 +0100, Mark Rutland wrote:
> On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
> > On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> > > On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> > > > ... with an EFI stub fortify_panic() hacked in, I can build an
> > > > arm64
> > > > kernel with this applied. It dies at some point after exiting
> > > > EFI
> > > > boot services; I don't know whether it made it out of the stub
> > > > and
> > > > into the kernel proper.
> > > 
> > > Could start with #define __NO_FORTIFY above the #include sections
> > > there
> > > instead (or -D__NO_FORTIFY as a compiler flag), which will skip
> > > fortifying those for now.
> > 
> > Neat. Given there are a few files, doing the latter for the stub is
> > the
> > simplest option.
> > 
> > > I'm successfully using this on a non-EFI ARM64 3.18 LTS kernel, so
> > > it
> > > should be close to working on other systems (but not necessarily
> > > with
> > > messy drivers). The x86 EFI workaround works.
> > 
> > FWIW, I've been playing atop of next-20170504, with a tonne of other
> > debug options enabled (including KASAN_INLINE).
> > 
> > From a quick look with a JTAG debugger, the CPU got out of the stub
> > and
> > into the kernel. It looks like it's dying initialising KASAN, where
> > the
> > vectors appear to have been corrupted.
> 
> 
> ... though it's worrying that __memcpy() is overridden. I think we
> need
> to be more careful with the way we instrument the string functions.

I don't think there's any way for the fortify code to be intercepting
__memcpy. There's a memcpy function in arch/x86/boot/compressed/string.c
defined via __memcpy and that appears to be working.

A shot in the dark is that it might not happen if a __real_memcpy alias
via __RENAME is used instead of __builtin_memcpy, but I'm not sure how
or why this is breaking in the first place.

> FWIW, with that, and the previous bits, I can boot a next-20170504
> kernel with this applied.
> 
> However, I get a KASAN splat from the SLUB init code, even though
> that's
> deliberately not instrumented by KASAN:
> 
> [    0.000000]
> ==================================================================
> [    0.000000] BUG: KASAN: slab-out-of-bounds in
> kmem_cache_alloc+0x2ec/0x438
> [    0.000000] Write of size 352 at addr ffff800936802000 by task
> swapper/0
> [    0.000000] 
> [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.11.0-next-
> 20170504-00002-g760cfdb-dirty #15
> [    0.000000] Hardware name: ARM Juno development board (r1) (DT)
> [    0.000000] Call trace:
> [    0.000000] [<ffff2000080949c8>] dump_backtrace+0x0/0x538
> [    0.000000] [<ffff2000080951a0>] show_stack+0x20/0x30
> [    0.000000] [<ffff200008c125a0>] dump_stack+0x120/0x188
> [    0.000000] [<ffff20000857ac04>]
> print_address_description+0x10c/0x380
> [    0.000000] [<ffff20000857b354>] kasan_report+0x12c/0x3b8
> [    0.000000] [<ffff200008579d54>] check_memory_region+0x144/0x1a0
> [    0.000000] [<ffff20000857a1f4>] memset+0x2c/0x50
> [    0.000000] [<ffff2000085730bc>] kmem_cache_alloc+0x2ec/0x438
> [    0.000000] [<ffff20000a937528>] bootstrap+0x34/0x28c
> [    0.000000] [<ffff20000a937804>] kmem_cache_init+0x84/0x118
> [    0.000000] [<ffff20000a9014bc>] start_kernel+0x2f8/0x644
> [    0.000000] [<ffff20000a9001e8>] __primary_switched+0x6c/0x74
> [    0.000000] 
> [    0.000000] Allocated by task 0:
> [    0.000000] (stack is not available)
> [    0.000000] 
> [    0.000000] Freed by task 0:
> [    0.000000] (stack is not available)
> [    0.000000] 
> [    0.000000] The buggy address belongs to the object at
> ffff800936802000
> [    0.000000]  which belongs to the cache kmem_cache of size 352
> [    0.000000] The buggy address is located 0 bytes inside of
> [    0.000000]  352-byte region [ffff800936802000, ffff800936802160)
> [    0.000000] The buggy address belongs to the page:
> [    0.000000] page:ffff7e0024da0080 count:1 mapcount:0
> mapping:          (null) index:0x0 compound_mapcount: 0
> [    0.000000] flags: 0x1fffc00000008100(slab|head)
> [    0.000000] raw: 1fffc00000008100 0000000000000000 0000000000000000
> 0000000180100010
> [    0.000000] raw: dead000000000100 dead000000000200 ffff20000aa2c068
> 0000000000000000
> [    0.000000] page dumped because: kasan: bad access detected
> [    0.000000] 
> [    0.000000] Memory state around the buggy address:
> [    0.000000]  ffff800936801f00: ff ff ff ff ff ff ff ff ff ff ff ff
> ff ff ff ff
> [    0.000000]  ffff800936801f80: ff ff ff ff ff ff ff ff ff ff ff ff
> ff ff ff ff
> [    0.000000] >ffff800936802000: fc fc fc fc fc fc fc fc fc fc fc fc
> fc fc fc fc
> [    0.000000]                    ^
> [    0.000000]  ffff800936802080: fc fc fc fc fc fc fc fc fc fc fc fc
> fc fc fc fc
> [    0.000000]  ffff800936802100: fc fc fc fc fc fc fc fc fc fc fc fc
> fc fc fc fc
> [    0.000000]
> ==================================================================

I'm not sure about this either. I'd like to avoid needing !KASAN since
these are useful when paired together for finding bugs...

ASan is incompatible with glibc _FORTIFY_SOURCE, but this is much
different (no _chk functions) and it *should* just work already.

Mark Rutland May 8, 2017, 11:41 a.m. UTC | #2
On Fri, May 05, 2017 at 01:38:04PM -0400, Daniel Micay wrote:
> On Fri, 2017-05-05 at 11:38 +0100, Mark Rutland wrote:
> > On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
> > > On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> > > > On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> > > > > ... with an EFI stub fortify_panic() hacked in, I can build an
> > > > > arm64 kernel with this applied. It dies at some point after
> > > > > exiting EFI boot services; I don't know whether it made it out
> > > > > of the stub and into the kernel proper.
> > > > 
> > > > Could start with #define __NO_FORTIFY above the #include
> > > > sections there instead (or -D__NO_FORTIFY as a compiler flag),
> > > > which will skip fortifying those for now.
> > > 
> > > Neat. Given there are a few files, doing the latter for the stub
> > > is the simplest option.
> > > 
> > > > I'm successfully using this on a non-EFI ARM64 3.18 LTS kernel,
> > > > so it should be close to working on other systems (but not
> > > > necessarily with messy drivers). The x86 EFI workaround works.
> > > 
> > > FWIW, I've been playing atop of next-20170504, with a tonne of
> > > other debug options enabled (including KASAN_INLINE).
> > > 
> > > From a quick look with a JTAG debugger, the CPU got out of the
> > > stub and into the kernel. It looks like it's dying initialising
> > > KASAN, where the vectors appear to have been corrupted.
> > 
> > ... though it's worrying that __memcpy() is overridden. I think we
> > need to be more careful with the way we instrument the string
> > functions.
> 
> I don't think there's any way for the fortify code to be intercepting
> __memcpy. There's a memcpy function in arch/x86/boot/compressed/string.c
> defined via __memcpy and that appears to be working.

Just to check, are there additional patches to disable fortification of
the KASAN code? With that, things seem fine.

> A shot in the dark is that it might not happen if a __real_memcpy
> alias via __RENAME is used instead of __builtin_memcpy, but I'm not
> sure how or why this is breaking in the first place.

Using a __RENAME(__real_memcpy) alias and a call to that didn't help.

With the rename removed (i.e. just an extern __real_memcpy()), it called
__real_memcpy as expected.

I think there's some unintended interaction with <asm/string.h>:

---->8----
#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)

/*
 * For files that are not instrumented (e.g. mm/slub.c) we
 * should use not instrumented version of mem* functions.
 */

#define memcpy(dst, src, len) __memcpy(dst, src, len)
#define memmove(dst, src, len) __memmove(dst, src, len)
#define memset(s, c, n) __memset(s, c, n)
#endif
---->8----

If I *only* comment out the memcpy define above, KASAN's memcpy() calls
__memcpy() as we expect.

Looking at the assembly, I see memmove() and memset() have the same
issue, and messing with their <asm/string.h> definition helps.

Looking at the preprocessed source, the fortified memcpy ends up as:

extern inline __attribute__((always_inline)) __attribute__((no_instrument_function)) __attribute__((always_inline)) __attribute__((gnu_inline)) void *__memcpy(void *p, const void *q, __kernel_size_t size)
{
 size_t p_size = __builtin_object_size(p, 0);
 size_t q_size = __builtin_object_size(q, 0);
 if (__builtin_constant_p(size) && (p_size < size || q_size < size))
  __buffer_overflow();
 if (p_size < size || q_size < size)
  fortify_panic(__func__);
 return __builtin_memcpy(p, q, size);
}

... i.e. we override __memcpy() rather than memcpy().

In KASAN, we undef memcpy before providing KASAN's version, so it keeps
its intended name, ending up as:

void *memcpy(void *dest, const void *src, size_t len)
{
 check_memory_region((unsigned long)src, len, false, (unsigned long)__builtin_return_address(0));
 check_memory_region((unsigned long)dest, len, true, (unsigned long)__builtin_return_address(0));

 return __memcpy(dest, src, len);
}

... then __memcpy() gets inlined and the builtin stuff resolved, calling
memcpy().

This'll require some thought.

> > FWIW, with that, and the previous bits, I can boot a next-20170504
> > kernel with this applied.
> > 
> > However, I get a KASAN splat from the SLUB init code, even though
> > that's deliberately not instrumented by KASAN:

[...]

> I'm not sure about this either. I'd like to avoid needing !KASAN since
> these are useful when paired together for finding bugs...

Likewise! I'd like to have both enabled for my fuzzing config.

Thanks,
Mark.

Daniel Micay May 8, 2017, 4:08 p.m. UTC | #3
On Mon, 2017-05-08 at 12:41 +0100, Mark Rutland wrote:
> On Fri, May 05, 2017 at 01:38:04PM -0400, Daniel Micay wrote:
> > On Fri, 2017-05-05 at 11:38 +0100, Mark Rutland wrote:
> > > On Thu, May 04, 2017 at 07:09:17PM +0100, Mark Rutland wrote:
> > > > On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> > > > > On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> > > > > > ... with an EFI stub fortify_panic() hacked in, I can build
> > > > > > an
> > > > > > arm64 kernel with this applied. It dies at some point after
> > > > > > exiting EFI boot services; I don't know whether it made it
> > > > > > out
> > > > > > of the stub and into the kernel proper.
> > > > > 
> > > > > Could start with #define __NO_FORTIFY above the #include
> > > > > sections there instead (or -D__NO_FORTIFY as a compiler flag),
> > > > > which will skip fortifying those for now.
> > > > 
> > > > Neat. Given there are a few files, doing the latter for the stub
> > > > is the simplest option.
> > > > 
> > > > > I'm successfully using this on a non-EFI ARM64 3.18 LTS
> > > > > kernel,
> > > > > so it should be close to working on other systems (but not
> > > > > necessarily with messy drivers). The x86 EFI workaround works.
> > > > 
> > > > FWIW, I've been playing atop of next-20170504, with a tonne of
> > > > other debug options enabled (including KASAN_INLINE).
> > > > 
> > > > From a quick look with a JTAG debugger, the CPU got out of the
> > > > stub and into the kernel. It looks like it's dying initialising
> > > > KASAN, where the vectors appear to have been corrupted.
> > > 
> > > ... though it's worrying that __memcpy() is overridden. I think
> > > we
> > > need to be more careful with the way we instrument the string
> > > functions.
> > 
> > I don't think there's any way for the fortify code to be
> > intercepting
> > __memcpy. There's a memcpy function in
> > arch/x86/boot/compressed/string.c
> > defined via __memcpy and that appears to be working.
> 
> Just to check, are there additional patches to disable fortification
> of
> the KASAN code? With that, things seem fine.
> 
> > A shot in the dark is that it might not happen if a __real_memcpy
> > alias via __RENAME is used instead of __builtin_memcpy, but I'm not
> > sure how or why this is breaking in the first place.
> 
> Using a __RENAME(__real_memcpy) alias and a call to that didn't help.
> 
> With the rename removed (i.e. just an extern __real_memcpy()), it
> called
> __real_memcpy as expected.
> 
> I think there's some unintended interaction with <asm/string.h>:
> 
> ---->8----
> #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
> 
> /*
>  * For files that are not instrumented (e.g. mm/slub.c) we
>  * should use not instrumented version of mem* functions.
>  */
> 
> #define memcpy(dst, src, len) __memcpy(dst, src, len)
> #define memmove(dst, src, len) __memmove(dst, src, len)
> #define memset(s, c, n) __memset(s, c, n)
> #endif
> ---->8----
> 
> If I *only* comment out the memcpy define above, KASAN's memcpy()
> calls
> __memcpy() as we expect.
> 
> Looking at the assembly, I see memmove() and memset() have the same
> issue, and messing with their <asm/string.h> definition helps.
> 
> Looking at the preprocessed source, the fortified memcpy ends up as:
> 
> extern inline __attribute__((always_inline))
> __attribute__((no_instrument_function)) __attribute__((always_inline))
> __attribute__((gnu_inline)) void *__memcpy(void *p, const void *q,
> __kernel_size_t size)
> {
>  size_t p_size = __builtin_object_size(p, 0);
>  size_t q_size = __builtin_object_size(q, 0);
>  if (__builtin_constant_p(size) && (p_size < size || q_size < size))
>   __buffer_overflow();
>  if (p_size < size || q_size < size)
>   fortify_panic(__func__);
>  return __builtin_memcpy(p, q, size);
> }
> 
> ... i.e. we override __memcpy() rather than memcpy().
> 
> In KASAN, we undef memcpy before providing KASAN's version, so it
> keeps
> its intended name, ending up as:
> 
> void *memcpy(void *dest, const void *src, size_t len)
> {
>  check_memory_region((unsigned long)src, len, false, (unsigned
> long)__builtin_return_address(0));
>  check_memory_region((unsigned long)dest, len, true, (unsigned
> long)__builtin_return_address(0));
> 
>  return __memcpy(dest, src, len);
> }
> 
> ... then __memcpy() gets inlined and the builtin stuff resolved,
> calling
> memcpy().
> 
> This'll require some thought.
>
> > > FWIW, with that, and the previous bits, I can boot a next-20170504
> > > kernel with this applied.
> > > 
> > > However, I get a KASAN splat from the SLUB init code, even though
> > > that's deliberately not instrumented by KASAN:
> 
> [...]
> 
> > I'm not sure about this either. I'd like to avoid needing !KASAN
> > since
> > these are useful when paired together for finding bugs...
> 
> Likewise! I'd like to have both enabled for my fuzzing config.

Ah, it's happening because of that same #define issue. The wrapper
overrides __memcpy in uninstrumented files, but it then calls
__builtin_memcpy, which makes the call instrumented again.

An initial solution would be adding #define __NO_FORTIFY to that #if
block where memcpy and friends are redefined to disable KASan. It isn't
ideal, but it only impacts KASan builds and only where KASan is being
disabled. Alternatively, it could #define something for the fortify
wrappers to force them to use an alias for __memcpy instead of
__builtin_memcpy but I don't think it's worth the complexity for a tiny
bit of extra coverage in KASan builds. I'll take the easy path.
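An initial sketch of that easy path (illustrative only; the final patch would touch the per-arch <asm/string.h> block quoted earlier):

```c
#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)

/*
 * For files that are not instrumented (e.g. mm/slub.c) we
 * should use the uninstrumented versions of the mem* functions.
 * Fortify would route these back through __builtin_memcpy and
 * hence the instrumented memcpy(), so disable it here as well.
 */
#define __NO_FORTIFY

#define memcpy(dst, src, len) __memcpy(dst, src, len)
#define memmove(dst, src, len) __memmove(dst, src, len)
#define memset(s, c, n) __memset(s, c, n)
#endif
```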

Kees Cook May 9, 2017, 8:39 p.m. UTC | #4
On Fri, May 5, 2017 at 3:38 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> ---->8----
> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> index f742596..b5327f5 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -18,7 +18,8 @@ cflags-$(CONFIG_EFI_ARMSTUB)  += -I$(srctree)/scripts/dtc/libfdt
>
>  KBUILD_CFLAGS                  := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
>                                    $(call cc-option,-ffreestanding) \
> -                                  $(call cc-option,-fno-stack-protector)
> +                                  $(call cc-option,-fno-stack-protector) \
> +                                  -D__NO_FORTIFY
>
>  GCOV_PROFILE                   := n
>  KASAN_SANITIZE                 := n
> ---->8----

Can we split the compile-time from the runtime checks so the EFI stub is
still covered by the build-time checks? (Or was there a compile
failure I missed?)

-Kees

Daniel Micay May 9, 2017, 11:02 p.m. UTC | #5
On Tue, 2017-05-09 at 13:39 -0700, Kees Cook wrote:
> On Fri, May 5, 2017 at 3:38 AM, Mark Rutland <mark.rutland@arm.com>
> wrote:
> > ---->8----
> > diff --git a/drivers/firmware/efi/libstub/Makefile
> > b/drivers/firmware/efi/libstub/Makefile
> > index f742596..b5327f5 100644
> > --- a/drivers/firmware/efi/libstub/Makefile
> > +++ b/drivers/firmware/efi/libstub/Makefile
> > @@ -18,7 +18,8 @@ cflags-$(CONFIG_EFI_ARMSTUB)  +=
> > -I$(srctree)/scripts/dtc/libfdt
> > 
> >  KBUILD_CFLAGS                  := $(cflags-y)
> > -DDISABLE_BRANCH_PROFILING \
> >                                    $(call cc-option,-ffreestanding)
> > \
> > -                                  $(call cc-option,-fno-stack-
> > protector)
> > +                                  $(call cc-option,-fno-stack-
> > protector) \
> > +                                  -D__NO_FORTIFY
> > 
> >  GCOV_PROFILE                   := n
> >  KASAN_SANITIZE                 := n
> > ---->8----
> 
> Can we split the compile time from runtime checks so the efi stub is
> still covered by the build-time checks? (Or was there a compile
> failure I missed?)
> 
> -Kees

It might just need fortify_panic defined somewhere. It seems like the
place I defined it on x86 covers this but I might be wrong about that.

Mark Rutland May 10, 2017, 11:12 a.m. UTC | #6
On Tue, May 09, 2017 at 01:39:01PM -0700, Kees Cook wrote:
> On Fri, May 5, 2017 at 3:38 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> > ---->8----
> > diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> > index f742596..b5327f5 100644
> > --- a/drivers/firmware/efi/libstub/Makefile
> > +++ b/drivers/firmware/efi/libstub/Makefile
> > @@ -18,7 +18,8 @@ cflags-$(CONFIG_EFI_ARMSTUB)  += -I$(srctree)/scripts/dtc/libfdt
> >
> >  KBUILD_CFLAGS                  := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
> >                                    $(call cc-option,-ffreestanding) \
> > -                                  $(call cc-option,-fno-stack-protector)
> > +                                  $(call cc-option,-fno-stack-protector) \
> > +                                  -D__NO_FORTIFY
> >
> >  GCOV_PROFILE                   := n
> >  KASAN_SANITIZE                 := n
> > ---->8----
> 
> Can we split the compile time from runtime checks so the efi stub is
> still covered by the build-time checks? (Or was there a compile
> failure I missed?)

Without this, the EFI stub won't build for arm64, due to the lack of a
fortify_panic(). The arm64 __efistub_ symbol mangling prevents us using
the usual kernel version, which would be wrong to use anyway.

Thanks,
Mark.

Patch

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index f742596..b5327f5 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -18,7 +18,8 @@  cflags-$(CONFIG_EFI_ARMSTUB)  += -I$(srctree)/scripts/dtc/libfdt
 
 KBUILD_CFLAGS                  := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
                                   $(call cc-option,-ffreestanding) \
-                                  $(call cc-option,-fno-stack-protector)
+                                  $(call cc-option,-fno-stack-protector) \
+                                  -D__NO_FORTIFY
 
 GCOV_PROFILE                   := n
 KASAN_SANITIZE                 := n