| Field | Value |
|---|---|
| Message ID | 20170504154850.GE20461@leverpostej (mailing list archive) |
| State | New, archived |
On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> Hi,
>
> From a glance, in the arm64 vdso case, that's due to the definition of
> vdso_start as a char giving it a single byte size.
>
> We can/should probably use char[] for vdso_{start,end} on arm/arm64 as
> we do for other linker symbols (and x86 and tile do for
> vdso_{start,end}), i.e.

Yeah, I think that's the right approach, and this also applies to
features like -fsanitize=object-size in UBSan. I worked around it by
bypassing the function with __builtin_memcmp as I did for the other
cases I ran into, but they should all be fixed properly upstream.

> ---->8----
> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index 41b6e31..ae35f18 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -37,7 +37,7 @@
>  #include <asm/vdso.h>
>  #include <asm/vdso_datapage.h>
>
> -extern char vdso_start, vdso_end;
> +extern char vdso_start[], vdso_end[];
>  static unsigned long vdso_pages __ro_after_init;
>
>  /*
> @@ -125,14 +125,14 @@ static int __init vdso_init(void)
>  	struct page **vdso_pagelist;
>  	unsigned long pfn;
>
> -	if (memcmp(&vdso_start, "\177ELF", 4)) {
> +	if (memcmp(vdso_start, "\177ELF", 4)) {
>  		pr_err("vDSO is not a valid ELF object!\n");
>  		return -EINVAL;
>  	}
>
> -	vdso_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT;
> +	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
>  	pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n",
> -		vdso_pages + 1, vdso_pages, &vdso_start, 1L, vdso_data);
> +		vdso_pages + 1, vdso_pages, vdso_start, 1L, vdso_data);
>
>  	/* Allocate the vDSO pagelist, plus a page for the data. */
>  	vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *),
> @@ -145,7 +145,7 @@ static int __init vdso_init(void)
>
>  	/* Grab the vDSO code pages. */
> -	pfn = sym_to_pfn(&vdso_start);
> +	pfn = sym_to_pfn(vdso_start);
>
>  	for (i = 0; i < vdso_pages; i++)
>  		vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
> ---->8----
>
> With that fixed, I see we also need a fortify_panic() for the EFI
> stub.
>
> I'm not sure if the x86 EFI stub gets linked with the
> boot/compressed/misc.c version below, and/or whether it's safe for the
> EFI stub to call that.
>
> ... with an EFI stub fortify_panic() hacked in, I can build an arm64
> kernel with this applied. It dies at some point after exiting EFI boot
> services; i don't know whether it made it out of the stub and into the
> kernel proper.

Could start with #define __NO_FORTIFY above the #include sections there
instead (or -D__NO_FORTIFY as a compiler flag), which will skip
fortifying those for now.

I'm successfully using this on a non-EFI ARM64 3.18 LTS kernel, so it
should be close to working on other systems (but not necessarily with
messy drivers). The x86 EFI workaround works.

> > It isn't particularly bad, but there are likely some issues that
> > occur during regular use at runtime (none found so far).
>
> It might be worth seeing if anyone can throw syzkaller and friends at
> this.

It tends to find stack buffer overflows, etc. not detected by ASan, so
that'd be nice. Can expand coverage a bit to some heap allocations with
these, but I expect slab debugging and ASan already found most of what
these would uncover:

https://github.com/thestinger/linux-hardened/commit/6efe84cdb88f73e8b8c59b59a8ea46fa4b1bdab1.patch
https://github.com/thestinger/linux-hardened/commit/d342da362c5f852c1666dce461bc82521b6711e4.patch

Unfortunately, ksize means alloc_size on kmalloc is not 100% correct
since the extra space from size class rounding falls outside of what it
will claim to be the size of the allocation. C standard libraries with
_FORTIFY_SOURCE seem to ignore this problem for malloc_usable_size. It
doesn't have many uses though.
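[For reference, a minimal compile-only sketch of why the char vs. char[]
declaration matters to FORTIFY_SOURCE and -fsanitize=object-size. The
symbol names are hypothetical stand-ins for vdso_start; the point is the
value __builtin_object_size() reports in each case.]

/*
 * Sketch, not kernel code: with GCC/Clang, __builtin_object_size() on
 * the address of a lone extern char is 1, so a fortified 4-byte
 * memcmp() through it looks like an overflow. An incomplete array
 * declaration has unknown size ((size_t)-1), so the same access is
 * left alone.
 */
#include <stddef.h>

extern char start_as_char;    /* like the old: extern char vdso_start;   */
extern char start_as_array[]; /* like the fix: extern char vdso_start[]; */

size_t size_seen_for_char(void)
{
	return __builtin_object_size(&start_as_char, 0);  /* 1 */
}

size_t size_seen_for_array(void)
{
	return __builtin_object_size(start_as_array, 0);  /* (size_t)-1: unknown */
}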
On Thu, May 04, 2017 at 01:49:44PM -0400, Daniel Micay wrote:
> On Thu, 2017-05-04 at 16:48 +0100, Mark Rutland wrote:
> > Hi,
> >
> > From a glance, in the arm64 vdso case, that's due to the definition of
> > vdso_start as a char giving it a single byte size.
> >
> > We can/should probably use char[] for vdso_{start,end} on arm/arm64 as
> > we do for other linker symbols (and x86 and tile do for
> > vdso_{start,end}), i.e.
>
> Yeah, I think that's the right approach, and this also applies to
> features like -fsanitize=object-size in UBSan. I worked around it by
> bypassing the function with __builtin_memcmp as I did for the other
> cases I ran into, but they should all be fixed properly upstream.

Sure.

> > With that fixed, I see we also need a fortify_panic() for the EFI
> > stub.
> >
> > I'm not sure if the x86 EFI stub gets linked with the
> > boot/compressed/misc.c version below, and/or whether it's safe for
> > the EFI stub to call that.
> >
> > ... with an EFI stub fortify_panic() hacked in, I can build an arm64
> > kernel with this applied. It dies at some point after exiting EFI
> > boot services; i don't know whether it made it out of the stub and
> > into the kernel proper.
>
> Could start with #define __NO_FORTIFY above the #include sections there
> instead (or -D__NO_FORTIFY as a compiler flag), which will skip
> fortifying those for now.

Neat. Given there are a few files, doing the latter for the stub is the
simplest option.

> I'm successfully using this on a non-EFI ARM64 3.18 LTS kernel, so it
> should be close to working on other systems (but not necessarily with
> messy drivers). The x86 EFI workaround works.

FWIW, I've been playing atop of next-20170504, with a tonne of other
debug options enabled (including KASAN_INLINE).

From a quick look with a JTAG debugger, the CPU got out of the stub and
into the kernel. It looks like it's dying initialising KASAN, where the
vectors appear to have been corrupted. I have a rough idea of why that
might be.

> > > It isn't particularly bad, but there are likely some issues that
> > > occur during regular use at runtime (none found so far).
> >
> > It might be worth seeing if anyone can throw syzkaller and friends at
> > this.
>
> It tends to find stack buffer overflows, etc. not detected by ASan, so
> that'd be nice. Can expand coverage a bit to some heap allocations
> with these, but I expect slab debugging and ASan already found most of
> what these would uncover:
>
> https://github.com/thestinger/linux-hardened/commit/6efe84cdb88f73e8b8c59b59a8ea46fa4b1bdab1.patch
> https://github.com/thestinger/linux-hardened/commit/d342da362c5f852c1666dce461bc82521b6711e4.patch
>
> Unfortunately, ksize means alloc_size on kmalloc is not 100% correct
> since the extra space from size class rounding falls outside of what it
> will claim to be the size of the allocation. C standard libraries with
> _FORTIFY_SOURCE seem to ignore this problem for malloc_usable_size. It
> doesn't have many uses though.

Perhaps I've misunderstood, but does that matter?

If a caller is relying on accessing padding, I'd say that's a bug.

Thanks,
Mark.
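[A minimal sketch of the per-file opt-out discussed above, assuming the
FORTIFY_SOURCE patch's convention that the fortified wrappers in
<linux/string.h> are skipped when __NO_FORTIFY is defined. The define
must appear before any includes; passing -D__NO_FORTIFY in the stub's
cflags has the same effect for every file in the directory.]

/* Opt this compilation unit out of fortified string functions. */
#define __NO_FORTIFY

#include <linux/string.h>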
> > https://github.com/thestinger/linux-hardened/commit/6efe84cdb88f73e8b8c59b59a8ea46fa4b1bdab1.patch
> > https://github.com/thestinger/linux-hardened/commit/d342da362c5f852c1666dce461bc82521b6711e4.patch
> >
> > Unfortunately, ksize means alloc_size on kmalloc is not 100% correct
> > since the extra space from size class rounding falls outside of what
> > it will claim to be the size of the allocation. C standard libraries
> > with _FORTIFY_SOURCE seem to ignore this problem for
> > malloc_usable_size. It doesn't have many uses though.
>
> Perhaps I've misunderstood, but does that matter?
>
> If a caller is relying on accessing padding, I'd say that's a bug.

I think it's gross, but it's essentially what ksize provides: exposing
how much usable padding is available. If the size class rounding
padding is being used by slab debugging red zones, etc. ksize doesn't
expose it as part of the size.

It's definitely not widely used. I think the main use case is for
dynamic arrays, to take advantage of the space added to round to the
next size class. There are also likely some users of it that are not
tracking sizes themselves but rather relying on ksize, and then might
end up using that extra space.

I think the glibc authors decided to start considering it a bug to make
real use of malloc_usable_size, and the man page discourages it but
doesn't explicitly document that:

NOTES
       The value returned by malloc_usable_size() may be greater than the
       requested size of the allocation because of alignment and minimum
       size constraints. Although the excess bytes can be overwritten by
       the application without ill effects, this is not good programming
       practice: the number of excess bytes in an allocation depends on
       the underlying implementation.

       The main use of this function is for debugging and introspection.

It's mostly safe to use alloc_size like glibc... but not entirely if
using ksize to make use of extra padding is permitted, and it seems like
it is. I don't think it's particularly useful even for dynamic arrays,
but unfortunately it exists / is used.
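[A hedged illustration of the ksize()/alloc_size mismatch described
above. The function name is hypothetical and the sizes are illustrative;
the exact size class depends on the slab allocator and its config.]

#include <linux/slab.h>
#include <linux/string.h>

static void ksize_padding_example(void)
{
	/* Typically served from the 128-byte kmalloc size class. */
	char *p = kmalloc(100, GFP_KERNEL);

	if (!p)
		return;

	/*
	 * ksize() reports the usable size including the size-class
	 * padding (e.g. 128 here), while an alloc_size(1) attribute on
	 * kmalloc() would tell the compiler the object is only 100
	 * bytes. A caller that uses the ksize()-reported space, as
	 * below, would then trip a fortified memset()/memcpy() even
	 * though the write stays inside the allocation.
	 */
	memset(p, 0, ksize(p));

	kfree(p);
}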
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 41b6e31..ae35f18 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -37,7 +37,7 @@
 #include <asm/vdso.h>
 #include <asm/vdso_datapage.h>
 
-extern char vdso_start, vdso_end;
+extern char vdso_start[], vdso_end[];
 static unsigned long vdso_pages __ro_after_init;
 
 /*
@@ -125,14 +125,14 @@ static int __init vdso_init(void)
 	struct page **vdso_pagelist;
 	unsigned long pfn;
 
-	if (memcmp(&vdso_start, "\177ELF", 4)) {
+	if (memcmp(vdso_start, "\177ELF", 4)) {
 		pr_err("vDSO is not a valid ELF object!\n");
 		return -EINVAL;
 	}
 
-	vdso_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT;
+	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
 	pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n",
-		vdso_pages + 1, vdso_pages, &vdso_start, 1L, vdso_data);
+		vdso_pages + 1, vdso_pages, vdso_start, 1L, vdso_data);
 
 	/* Allocate the vDSO pagelist, plus a page for the data. */
 	vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *),
@@ -145,7 +145,7 @@ static int __init vdso_init(void)
 
 	/* Grab the vDSO code pages. */
-	pfn = sym_to_pfn(&vdso_start);
+	pfn = sym_to_pfn(vdso_start);
 
 	for (i = 0; i < vdso_pages; i++)
 		vdso_pagelist[i + 1] = pfn_to_page(pfn + i);