Message ID | 1520107232-14111-5-git-send-email-alex.popov@linux.com (mailing list archive)
---|---
State | New, archived
Headers | show
On 03/03/2018 12:00 PM, Alexander Popov wrote:
> @@ -128,6 +134,7 @@ static long syscall_trace_enter(struct pt_regs *regs)
>
>  	do_audit_syscall_entry(regs, arch);
>
> +	erase_kstack();
>  	return ret ?: regs->orig_ax;
>  }

This seems like an odd place to be clearing the stack. Why was it done here?
On Mon, Mar 5, 2018 at 11:40 AM, Dave Hansen <dave.hansen@linux.intel.com> wrote:
> On 03/03/2018 12:00 PM, Alexander Popov wrote:
>> @@ -128,6 +134,7 @@ static long syscall_trace_enter(struct pt_regs *regs)
>>
>>  	do_audit_syscall_entry(regs, arch);
>>
>> +	erase_kstack();
>>  	return ret ?: regs->orig_ax;
>>  }
>
> This seems like an odd place to be clearing the stack. Why was it done here?

Perhaps the commit log could be improved, but the idea is that the audit work (ptrace, seccomp, etc.) happens before the syscall code starts running, and it has therefore written to the stack (which was cleared on the last exit). This retains the cleared-stack state even in the face of ptrace-ish work happening before the syscall proper starts.

-Kees
This is the first I see of any of this; it was apparently not actually posted to lkml or anything like that.

Honestly, what I see just makes me go "this is security-masturbation". It doesn't actually seem to help *find* bugs at all. As such, it's another "paper over and forget" thing that just adds fairly high overhead when it's enabled.

I'm NAK'ing it sight-unseen (see above) just because I'm tired of these kinds of pointless things that don't actually strive to improve the kernel, just add more and more overhead for nebulous "things may happen", and that just make the code uglier.

Why wasn't it even posted to lkml? And why isn't the focus of security people on tools to _analyse_ and find problems?

Linus
On Mon, Mar 05, 2018 at 12:06:18PM -0800, Kees Cook wrote:
> On Mon, Mar 5, 2018 at 11:40 AM, Dave Hansen <dave.hansen@linux.intel.com> wrote:
> > On 03/03/2018 12:00 PM, Alexander Popov wrote:
> >> @@ -128,6 +134,7 @@ static long syscall_trace_enter(struct pt_regs *regs)
> >>
> >>  	do_audit_syscall_entry(regs, arch);
> >>
> >> +	erase_kstack();
> >>  	return ret ?: regs->orig_ax;
> >>  }
> >
> > This seems like an odd place to be clearing the stack. Why was it done here?
>
> Perhaps the commit log could be improved, but the idea is that the
> audit work (ptrace, seccomp, etc.) is happening before the syscall
> code starts running, and it has therefore written to the stack (that
> used to be cleared on last exit). This retains the clear stack state
> even in the face of ptrace-ish work happening before the syscall
> proper starts.

I'd suggest a code comment over a changelog twiddle. The changelog bit only helps now; the code comment helps us again in 6 months' time when we've forgotten everything again.
Hello Linus,

Thanks for your reply (despite some strong words).

On 05.03.2018 23:15, Linus Torvalds wrote:
> This is the first I see of any of this, it was apparently not actually
> posted to lkml or anything like that.

I described that below.

> Honestly, what I see just makes me go "this is security-masturbation".

Let me quote the cover letter of this patch series. STACKLEAK (initially developed by PaX Team):
 - reduces the information that can be revealed through kernel stack leak bugs;
 - blocks some uninitialized stack variable attacks (e.g. CVE-2017-17712, CVE-2010-2963);
 - introduces some runtime checks for kernel stack overflow detection, which block the Stack Clash attack against the kernel.

So it seems to be a useful feature.

> It doesn't actually seem to help *find* bugs at all. As such, it's
> another "paper over and forget" thing that just adds fairly high
> overhead when it's enabled.

The cover letter also contains information about the performance impact: it's 0.6% on compiling the kernel (with the Ubuntu config) and approximately 4% on a very intensive hackbench test.

> I'm NAK'ing it sight-unseen (see above) just because I'm tired of
> these kinds of pointless things that don't actually strive to improve
> on the kernel, just add more and more overhead for nebulous "things
> may happen", and that just make the code uglier.
>
> Why wasn't it even posted to lkml?

That's my mistake. I started working on that feature 9 months ago, just before Qualys published the Stack Clash attack (which is blocked by STACKLEAK). I sent the first WIP versions to a short list of people (and had a lot of feedback to work with), but later I unfortunately didn't adjust the list of recipients. That was not done intentionally.

> And why isn't the focus of security people on tools to _analyse_ and
> find problems?

You know, I like KASAN and kernel fuzzing as well:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=82f2341c94d270421f383641b7cd670e474db56b

Best regards,
Alexander
On Mon, Mar 5, 2018 at 12:15 PM, Linus Torvalds <torvalds@linux-foundation.org> wrote:
> This is the first I see of any of this, it was apparently not actually
> posted to lkml or anything like that.

This series was still RFC and narrowly focused (e.g. we recently had compiler folks reviewing it). I'd say we're now to the point where it's getting mature enough for larger review (I commented on the 0/n patch to this end earlier today).

> It doesn't actually seem to help *find* bugs at all. As such, it's
> another "paper over and forget" thing that just adds fairly high
> overhead when it's enabled.

This specific class of flaw ("uninitialized" stack contents being used or leaked) is already being looked for by tons of tools like KASan. There are teams of people working on it, and they still haven't found all the cases where these flaws appear. Luckily we don't have to pick one or the other: we can continue to look for bugs while defending against them ever happening in the first place.

We've already seen multiple cases of this just with the by-ref initialization plugin, where a stack content leak goes away because we asked the compiler to please initialize the memory for us when we forgot to do it ourselves. Getting the compiler to help us seems like the obviously correct thing to do, since we're using such a memory-safety-unfriendly language. :)

> I'm NAK'ing it sight-unseen (see above) just because I'm tired of
> these kinds of pointless things that don't actually strive to improve
> on the kernel, just add more and more overhead for nebulous "things
> may happen", and that just make the code uglier.

I hope you'll reconsider. In this case, I think it does improve the kernel, especially if we can gain more complete coverage through native compiler options (instead of just a plugin). Right now, for example, the kernel is littered with memset()s because the compiler can't be trusted to correctly zero-init padding, etc.

This is an endless source of bugs, and this patch series provides a comprehensive and fast way to keep the stack cleared. (See the benchmark I asked for that compared this 1% "clear only what was used" against the 30% to wipe the entire stack every time. The latter is obviously insane, and the former is quite clever and kills an entire bug class.)

-Kees
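The "clear only what was used" idea Kees describes can be illustrated with a small userspace simulation. This is a hypothetical sketch only, not the actual STACKLEAK implementation: `track_stack()`, `simulated_syscall()`, and `erase_used_stack()` are invented names, and the real feature tracks the lowest stack pointer via compiler-inserted instrumentation rather than explicit calls.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simulated kernel stack: lower indices stand for deeper addresses. */
#define STACK_SIZE 4096
static unsigned char sim_stack[STACK_SIZE];
static size_t lowest_sp = STACK_SIZE;   /* tracked lowest "stack pointer" */

/* Stand-in for the instrumentation inserted into function prologues. */
static void track_stack(size_t sp)
{
    if (sp < lowest_sp)
        lowest_sp = sp;
}

/* A syscall that dirties the stack down to depth 1024. */
static void simulated_syscall(void)
{
    track_stack(1024);
    memset(&sim_stack[1024], 0xAA, STACK_SIZE - 1024);
}

/* On syscall exit, wipe only the region that was actually used,
 * instead of the whole stack; returns the number of bytes cleared. */
static size_t erase_used_stack(void)
{
    size_t cleared = STACK_SIZE - lowest_sp;

    memset(&sim_stack[lowest_sp], 0, cleared);
    lowest_sp = STACK_SIZE;             /* reset for the next syscall */
    return cleared;
}
```

The cost of the wipe is proportional to how deep the syscall actually went, which is why the measured overhead is closer to 1% than to the 30% of wiping the entire stack every time.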
On Mon, Mar 5, 2018 at 1:02 PM, Kees Cook <keescook@chromium.org> wrote:
> This specific class of flaw ("uninitialized" stack contents being used
> or leaked) is already being looked for by tons of tools like KASan.
> There are teams of people working on it, and they still haven't found
> all the cases where these flaws appear.

Right. So let's do the *smart* thing, not the *stupid* thing. This "mindlessly clear stack after use" is stupid.

There are smart things we can do, and it's not just about "find the problems" like KASAN, but also "avoid undefined behavior".

I absolutely detest undefined compiler behavior. We should fix it. One of the biggest mistakes C ever did was to have "undefined" in the standard, and we already obviously limit that kind of broken behavior with -fwrapv and -fno-strict-aliasing. Both of those disable what I consider to be just bugs in the C standard that should not be allowed for system software - it's a class of "stupid compiler tricks to generate bad code", which by definition a compiler shouldn't do. A compiler should generate *good* code, not random bad code.

And yes:

> We've already seen multiple cases of this just with the by-ref
> initialization plugin, where a stack content leak goes away because we
> asked the compiler to please initialize the memory for us when we
> forgot to do it ourselves. Getting the compiler to help us seems like
> the obviously correct thing to do, since we're using such a
> memory-safety-unfriendly language. :)

This is more of the *smart* kind of behavior - I'm also perfectly willing to say that automatic variables should just always initialize to zero, exactly the same way static variables do. And it doesn't necessarily generate any worse code.

Why? Because it's the _intelligent_ thing to do - unlike some completely mindless idiotic "clear the stack" patch.

Now, I haven't actually seen the patches, I've only seen signs of them, but the signs I have seen very much seem to say "this is the mindless and *stupid* kind of crap that we should not do". Things like adding hundreds of lines of new asm code, just because.

So by all means, push the zero initialization of automatic variables. That's just good sense. But don't push this crap.

Linus
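The semantics Linus is contrasting can be seen in standard C: objects with static storage duration are guaranteed to start at zero, while reading an uninitialized automatic variable is undefined behavior. A minimal sketch of what "initialize automatics like statics" would mean - the explicit `= 0` below stands in for the zeroing the compiler would insert (and later delete wherever it can prove the variable is written first):

```c
#include <assert.h>

static int file_scope;        /* static storage: guaranteed zero-initialized */

int read_statics(void)
{
    static int counter;       /* also guaranteed to start at zero */
    return file_scope + counter;
}

int read_auto(void)
{
    /* Without this initializer, reading 'local' below would be undefined
     * behavior. Under the proposal, the compiler inserts the zeroing
     * itself, then removes it whenever the variable is provably
     * assigned before use, so well-written code pays nothing. */
    int local = 0;
    return local;
}
```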
On Mon, Mar 5, 2018 at 1:40 PM, Linus Torvalds <torvalds@linux-foundation.org> wrote:
> This is more of the *smart* kind of behavior - I'm also perfectly
> willing to say that automatic variables should just always initialize
> to zero, exactly the same way static variables do.

.. and perhaps even a build flag or plugin to make sure that the compiler doesn't remove "dead" writes to those automatic variables that got allocated on the stack?

Honestly, with clearing of automatic variables, what stack leaks really exist in practice that this all would help against?

Linus
On Mon, Mar 5, 2018 at 1:40 PM, Linus Torvalds <torvalds@linux-foundation.org> wrote:
> This "mindlessly clear stack after use" is stupid.

In defense of the series, it's hardly "mindless". :) The primary feature is that it has run-time tracking of stack depth to clear only the minimum needed portion of the stack.

> There are smart things we can do, and it's not just about "find the
> problems" like KASAN, but also "avoid undefined behavior".
>
> I absolutely detest undefined compiler behavior. We should fix it. One
> of the biggest mistakes C ever did was to have "undefined" in the
> standard, and we already obviously limit that kind of broken behavior
> with -fwrapv and -fno-strict-aliasing.

And -fno-delete-null-pointer-checks, and and and.... :P

What we've done, traditionally, has always been two-pronged: fix the kernel source and fix the compiler. Our kernel fixes have been short-term (fixing specific instances where we notice a problem), with the compiler fix becoming the global solution down the road once everyone has that version of the compiler. This leaves a defense gap for bugs we haven't found yet (which are actually present, whether _we_ know about them or not :P). The recent discussions on minimum compiler version underscore the fact that people move forward on compilers _very_ slowly.

I've been trying to add a third prong (with many of these kinds of defenses), where we can address the gap. The first two prongs remain: fix the specific cases as they're uncovered (e.g. by KASan), and fix the global problem with the compiler (I detailed[1] four specific features I wanted to see from compilers on this front last week). Then the added third prong is: provide wide coverage _now_ for those that don't have a fixed compiler (especially when no such fix even exists right now) to catch all the cases we haven't found yet.

> This is more of the *smart* kind of behavior - I'm also perfectly
> willing to say that automatic variables should just always initialize
> to zero, exactly the same way static variables do.
>
> And it doesn't necessarily generate any worse code.

I agree, though some performance-sensitive subsystems (e.g. networking) get very defensive about an always-on stack initialization[2]. No matter what happens with this kind of automatic initialization, I suspect it's going to have to stay a build-time option to let some people opt out of it.

> Honestly, with clearing of automatic variables, what stack leaks
> really exists in practice that this all would help against?

As we both know, we have very different ideas about what "in practice" means for security flaws. :) So, yes, while auto-init gets us coverage for a large portion of stack content leak bugs, it's still temporally different from clearing the stack on exit. For example, a stack read flaw with a negative index can read out the prior syscall's deeper stack contents. Stack-clearing also reduces the lifetime of stack contents (e.g. in the case of cross-thread reads from another process, the time for the race to read the stack is longer). While these are certainly more rare cases, they do exist, and I've seen much weirder attacks.

Another case is that this series provides actual stack probing to detect VLA abuse. This is less of an issue now with VMAP_STACK, and I've had VLA removal on the long-term goal list for the kernel for a while now, but the probing does work...

I would love to see (and am already pursuing) auto-init (see [1] again), but this series does provide additional coverage, and it does it today.

-Kees

[1] http://www.openwall.com/lists/kernel-hardening/2018/02/27/41
[2] Both these cases, and so many more, are solved with the byref initialization plugin, but have been NAKed by -net:
    https://lkml.org/lkml/2013/4/9/641
    https://lkml.org/lkml/2017/10/31/699
On Mon, Mar 5, 2018 at 4:56 PM, Kees Cook <keescook@chromium.org> wrote:
> On Mon, Mar 5, 2018 at 1:40 PM, Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
>> and we already obviously limit that kind of broken behavior
>> with -fwrapv and -fno-strict-aliasing.
>
> And -fno-delete-null-pointer-checks, and and and.... :P

Indeed.

> The recent discussions on minimum compiler version underscore the fact
> that people move forward on compilers _very_ slowly.

Yes. At the same time, things like that are kind of like a config option for hardening - it's not like everybody absolutely needs to have the compiler support, but people who want it can. Sure, it limits us in some ways (i.e. we can't just say "we _depend_ on automatic variables being zero"), but as long as it's not a huge maintenance burden, it probably doesn't much matter.

>> And it doesn't necessarily generate any worse code.
>
> I agree, though some performance-sensitive subsystems (e.g. networking)
> get very defensive about an always-on stack initialization[2]. No
> matter what happens with this kind of automatic initialization, I
> suspect it's going to have to stay a build-time option to let some
> people opt out of it.

I do think that having some way of marking "this variable really doesn't need zeroing" would be fine. Then people would *think* about the fact that they're passing an actual uninitialized piece of memory around, and people could be careful with them.

And the places that would actually want that should show up like a sore thumb in a profile. And if they don't, then clearly the zeroing can't be much of an issue, can it?

> Another case is that this series provides actual stack probing to
> detect VLA abuse. This is less of an issue now with VMAP_STACK, and
> I've had VLA removal on the long-term goal list for the kernel for a
> while now, but the probing does work...

I was actually hoping that the clang people would get rid of those, but they only seemed to care about VLAs in structures, not about them in general ;(

I detest VLAs; we really shouldn't use them. I'm sorry we have any.

Linus
* Kees Cook <keescook@chromium.org> wrote:

> In defense of the series, it's hardly "mindless". :) The primary
> feature is that it has run-time tracking of stack depth to clear only
> the minimum needed portion of the stack.

The stack-tracer has been able to do that for years, right?

> > And it doesn't necessarily generate any worse code.
>
> I agree, though some performance-sensitive subsystem (e.g. networking)
> get very defensive about an always-on stack initialization[2].
>
> [2] Both these cases, and so many more, are solved with the byref
> initialization plugin, but have been NAKed by -net:
> https://lkml.org/lkml/2013/4/9/641
> https://lkml.org/lkml/2017/10/31/699

> --- a/net/socket.c
> +++ b/net/socket.c
> @@ -2188,6 +2188,7 @@ static int ___sys_recvmsg(struct socket *sock, struct user_msghdr __user *msg,
>  	struct sockaddr __user *uaddr;
>  	int __user *uaddr_len = COMPAT_NAMELEN(msg);
>
> +	memset(&addr, 0, sizeof(addr));
>  	msg_sys->msg_name = &addr;
>
>  	if (MSG_CMSG_COMPAT & flags)

In defense of DaveM and Eric, that networking patch is just pure, utter garbage and I'll NACK it just as much: it adds an unconditional 128-byte memset() to a hot path!!

NACKed-by: Ingo Molnar <mingo@kernel.org>

The changelog is also infuriatingly misleading:

> Some protocols do not correctly wipe the contents of the on-stack
> struct sockaddr_storage sent down into recvmsg() (e.g. SCTP), and leak
> kernel stack contents to userspace. This wipes it unconditionally before
> per-protocol handlers run.
>
> Note that leaks like this are mitigated by building with
> CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL=y

It's just a scary, passive-aggressive lie about "some protocols" and ignores the desired case where all protocol handlers are correctly implemented. Did you *really* expect this patch to be applied?

The on-stack struct clearing GCC plugins (CONFIG_GCC_PLUGIN_STRUCTLEAK*=y) are a nice feature which fixes a bad oversight in the C standard, but this particular unconditional memset() patch is just garbage.

Also, this characterization of the patch review process of the networking subsystem:

> though some performance-sensitive subsystem (e.g. networking)
> get very defensive about an always-on stack initialization[2].

... is thus very unfair as well: they didn't NAK or resist the CONFIG_GCC_PLUGIN_STRUCTLEAK* compiler feature at all; they NAK-ed a poorly thought out, unconditional memset() which adds unconditional overhead - which it is their job to NAK!

Please refrain from using such unfair pressure tactics to get harebrained security patches upstream...

Thanks,

	Ingo
* Kees Cook <keescook@chromium.org> wrote:

> [...] We've already seen multiple cases of this just with the by-ref
> initialization plugin, where a stack content leak goes away because we asked the
> compiler to please initialize the memory for us when we forgot to do it
> ourselves. Getting the compiler to help us seems like the obviously correct
> thing to do, since we're using such a memory-safety-unfriendly language. :)

So the question is, are there any known classes of stack content leak that the following .config options:

  CONFIG_GCC_PLUGIN_STRUCTLEAK=y
  CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL=y

... do not handle, and for which leaks the best solution is such runtime stack clearing?

Because the GCC plugins clearing automatic local variables are _way_ superior:

 - it's probably a heck of a lot cheaper to clear structs than it is to clear the stack in every system call...

 - it's probably also safer statistically, because if there _is_ a reference to an uninitialized piece of memory in the kernel, it will be to zero, not to a user-controlled value...

Thanks,

	Ingo
There are cases that clearing the stack can cover, but I don't think it has high importance if all locals were being default-initialized to zero. The new stack will end up with gaps from alignment, variables that weren't fully used, etc., where data from an old stack is present. Unless there's something really important left over, like a private key, it shouldn't make issues like a buffer read overflow on the stack much worse.

There's an important missing feature which does get covered by this plugin: variable-length arrays can skip past the VMAP_STACK guard pages (as could alloca / large frames, if they were actually used). The ideal would probably be both GCC and Clang providing a working, safe implementation of stack probes to enable with VMAP_STACK, but they don't do that. I'm not sure it makes sense for the kernel to be working around that downstream, though. It's strange, because Clang has had a working implementation on Windows for ages, where the ABI requires it, but for some reason they never provided the option elsewhere.

Zeroing locals isn't very expensive, because the compiler has a chance to optimize it out when it can figure out it isn't needed, and they're almost always used. If there are cases it hurts, it might be possible to fix that by inlining an initialization function so the compiler can see the initialization. From my tests doing it in the entirety of the kernel and Android userspace via a Clang patch (which we use in production), SSP has a much more significant cost. Filling locals with a non-zero value is also a nice way to find bugs, and I think that will still be true with KMSan available via Clang. There might be cases where there's a large array that the compiler can't avoid zeroing and the call is pretty hot... but we didn't run into any substantial performance costs, although for our niche we only really care about UI latency, battery life, etc., and losing 1-2% throughput for some things is irrelevant.
The kernel could use -Werror=vla-larger-than=2048 as a stop-gap and work around all the errors by making sure the size can't be larger, reporting an error, or adding `BUG_ON(size > limit)` to prove it to the compiler if necessary. It warns if the VLA *can be* larger than the limit, unlike the frame-size warning the kernel uses, which only warns when the frame is *guaranteed* to be larger than the limit. It wouldn't be a guaranteed-to-work solution, since a function can have other VLAs, etc., but in practice it could probably be good enough. I've tried it, and I don't think it would be that much work. It might save someone from a vulnerability who isn't using VMAP_STACK, even if the issue gets fixed for that. Android kernels are likely mostly going to be compiled with Clang now for 4.4+, though, and it doesn't have a similar warning.

I mentioned last year that there were some real vulnerabilities caused by unbounded VLAs: http://openwall.com/lists/kernel-hardening/2017/05/12/33. It's fairly easy to slip up and accidentally use one, especially for people that work with other languages. Maybe the kernel should already be using -Werror=vla and overriding it for the few cases that exist, to stop letting people introduce more.
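The bounding trick described above - proving to the compiler that a VLA cannot exceed the limit - can be sketched in plain C. This is an illustrative example, not kernel code: `VLA_LIMIT` and `fill_bounded()` are invented names, and a real kernel user would likely report an error or `BUG_ON()` rather than silently clamp.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum { VLA_LIMIT = 2048 };

/* With -Wvla-larger-than=2048, a plain 'char buf[n]' warns because n
 * *can* exceed the limit. Clamping n first proves the bound, so the
 * warning (and the unbounded stack growth) goes away. */
size_t fill_bounded(size_t n)
{
    if (n == 0)
        return 0;             /* a zero-length VLA would be undefined */
    if (n > VLA_LIMIT)
        n = VLA_LIMIT;        /* kernel code might BUG_ON() here instead */

    char buf[n];              /* provably at most 2048 bytes */
    memset(buf, 0, n);
    return n;                 /* bytes actually usable */
}
```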
On Tue, Mar 6, 2018 at 4:30 AM, Linus Torvalds <torvalds@linux-foundation.org> wrote:
> I was actually hoping that the clang people would get rid of those,
> but they only seemed to care about VLA's in structures, not about them
> in general ;(
>
> I detest VLA's, we really shouldn't use them. I'm sorry we have any.

Then we need to fix our abysmal synchronous crypto API, which, AFAICT, is the primary user of on-stack VLAs in the kernel.
On Tue, Mar 6, 2018 at 7:16 AM, Daniel Micay <danielmicay@gmail.com> wrote:
> Zeroing locals isn't very expensive because the compiler has a chance
> to optimize it out when it can figure out it isn't needed and they're
> almost always used.

I thought I saw Kees with some stats, and it wasn't even noticeable. But maybe I mis-remember.

I definitely like the "zero local variables", and would actually prefer something even stronger than GCC_PLUGIN_STRUCTLEAK_BYREF_ALL (although I can't actually read gcc plugins, so I'm just going by the name). It's not necessarily just structs; it can very much be arrays and regular automatic variables too.

And yes, we might want to have some override switch for individual variables ("I really know what I'm doing and I promise I fill it myself, but the compiler doesn't understand me, look here:").

Linus
On Tue, Mar 06, 2018 at 10:56:58AM -0800, Linus Torvalds wrote:
> On Tue, Mar 6, 2018 at 7:16 AM, Daniel Micay <danielmicay@gmail.com> wrote:
> >
> > Zeroing locals isn't very expensive because the compiler has a chance
> > to optimize it out when it can figure out it isn't needed and they're
> > almost always used.
>
> I thought I saw Kees with some stats, and it wasn't even noticeable.
> But maybe I mis-remember.
>
> I definitely like the "zero local variables", and would actually
> prefer something even stronger than GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
> (although I can't actually read gcc plugins, so I'm just going by
> name).
>
> It's not necessarily just struct, it can very much be arrays and
> regular automatic variables too.
>
> And yes, we might want to have some override switch for individual
> variables ("I really know what I'm doing and I promise I fill it
> myself, but the compiler doesn't understand me, look here:").

Right; I have one case in perf where we very carefully pass a partially initialized structure around for performance raisins. So I would definitely like to have that attribute that allows us to override that BYREF_ALL thing.
On 6 March 2018 at 18:56, Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Tue, Mar 6, 2018 at 7:16 AM, Daniel Micay <danielmicay@gmail.com> wrote:
>>
>> Zeroing locals isn't very expensive because the compiler has a chance
>> to optimize it out when it can figure out it isn't needed and they're
>> almost always used.
>
> I thought I saw Kees with some stats, and it wasn't even noticeable.
> But maybe I mis-remember.
>
> I definitely like the "zero local variables", and would actually
> prefer something even stronger than GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
> (although I can't actually read gcc plugins, so I'm just going by
> name).
>
> It's not necessarily just struct, it can very much be arrays and
> regular automatic variables too.

The compiler usually does a pretty good job of detecting which scalar variables are never initialized by regular assignment. The reason that this is different for struct-type variables is that we often initialize them by passing a reference to a function, and if this function is in another compilation unit, the compiler doesn't know whether such a call amounts to an initialization or not.

We could easily extend this to scalar and array types, but we'd first need to see what the performance impact is, because I don't think it will be negligible.
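The by-reference pattern Ard describes can be made concrete. In this sketch the names are hypothetical; the point is that if `fill_conf()` lived in another compilation unit, the caller's compiler could not know that it writes `mode` but never touches `flags`, so it must either assume full initialization (missing the bug) or insert the zeroing itself (what a STRUCTLEAK-style pass does).

```c
#include <assert.h>

struct conf { int mode; int flags; };

/* Imagine this function is defined in a separate compilation unit:
 * at the call site, the compiler cannot see that it only partially
 * initializes *c. */
void fill_conf(struct conf *c)
{
    c->mode = 1;              /* 'flags' is deliberately left untouched */
}

int use_conf(void)
{
    struct conf c = { 0 };    /* the zeroing a by-ref init pass would insert */
    fill_conf(&c);
    return c.mode + c.flags;  /* without '= { 0 }', c.flags would be uninitialized */
}
```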
On Tue, Mar 6, 2018 at 11:07 AM, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> The compiler usually does a pretty good job of detecting which scalar
> variables are never initialized by regular assignment.

Sure, but "usually" is not really the same as always. Sometimes scalar types are initialized by passing a reference to them too.

> We could easily extend this to scalar and array types, but we'd first
> need to see what the performance impact is, because I don't think it
> will be negligible.

For scalar types, I suspect it will be entirely unnoticeable, because they are not only small, but it's rare that this kind of "initialize by passing a reference" happens in the first place.

For arrays, I agree. We very well may have arrays that we really want to do magic things about. But even then I'd rather have a "don't initialize this" flag for critical stuff that really *does* get initialized some other way. Then we can grep for those things and be more careful.

If somebody has big arrays on the stack, that's often a problem anyway. It may be common in non-kernel code, but kernel code is very special.

Linus
On Tue, Mar 6, 2018 at 8:16 PM, Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Tue, Mar 6, 2018 at 11:07 AM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>> The compiler usually does a pretty good job of detecting which scalar
>> variables are never initialized by regular assignment.
>
> Sure, but "usually" is not really the same as always. Sometimes scalar
> types are initialized by passing a reference to them too.
>
>> We could easily extend this to scalar and array types, but we'd first
>> need to see what the performance impact is, because I don't think it
>> will be negligible.
>
> For scalar types, I suspect it will be entirely unnoticeable, because
> they are not only small, but it's rare that this kind of "initialize
> by passing a reference" happens in the first place.

A lot of the scalar variables with actual bugs are missed by the gcc warnings, because it never allocates a stack slot for examples like:

int f(int c)
{
	int i;

	if (c)
		return i;	/* uninitialized return */

	asm volatile("" : "=r" (i));	/* gcc sees that 'i' escapes here */
	return 0;
}

int g(int c)
{
	int i;

	if (c)	/* gcc optimizes out the condition as nothing else sets i */
		i = 1;

	return i;
}

At the -O2 optimization level, these fail to produce a warning, and they won't ever leak stack data, but they are still undefined behavior and don't do what the author intended. Forcing gcc to allocate a stack slot and zero-initialize it should find many bugs by adding valid warnings, but it would also add lots of false positives, as well as prevent important optimizations in other places that are actually well-defined.

> For arrays, I agree. We very well may have arrays that we really want
> to do magic things about. But even then I'd rather have a "don't
> initialize this" flag for critical stuff that really *does* get
> initialized some other way. Then we can grep for those things and be
> more careful.
>
> If somebody has big arrays on the stack, that's often a problem
> anyway. It may be common in non-kernel code, but kernel code is very
> special.

I can think of a few cases that are important; this one is well-known:

int core_sys_select(int n, fd_set __user *inp, fd_set __user *outp,
		    fd_set __user *exp, struct timespec64 *end_time)
{
	....
	/* Allocate small arguments on the stack to save memory and be faster */
	long stack_fds[SELECT_STACK_ALLOC/sizeof(long)];

Another case I came across very recently with a similar optimization is:

int ib_process_cq_direct(struct ib_cq *cq, int budget)
{
	struct ib_wc wcs[IB_POLL_BATCH];

In both cases, the stack variables are chosen to be just under the CONFIG_FRAME_WARN limit to avoid a memory allocation in the fast path. If we add an explicit zero initialization, that optimization may turn out counterproductive, but a "don't initialize" flag would be sufficient to deal with them one at a time.

There is also the really scary code like:

#define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
	char __##name##_desc[sizeof(struct skcipher_request) + \
		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
	struct skcipher_request *name = (void *)__##name##_desc

that implements an alloca() through a dynamic array for storing a variable-sized structure on the stack. These are usually small, but the size is driver-specific and some can be surprisingly big, e.g. struct ccp_aes_req_ctx, struct hifn_request_context, or struct iproc_reqctx_s. If we can come up with a way to avoid those, we could actually enable -Wstack-usage=${CONFIG_FRAME_WARN} to warn for any functions with dynamic stack allocation.

	Arnd
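The SKCIPHER_REQUEST_ON_STACK pattern quoted above boils down to carving a variable-sized object out of a char VLA. A stripped-down, non-kernel sketch of the same trick (`struct request` and `reqsize()` are invented stand-ins for the crypto types; `_Alignas` plays the role CRYPTO_MINALIGN_ATTR plays in the real macro):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for struct skcipher_request: a fixed-size header... */
struct request { size_t ctx_size; };

/* ...followed by a driver-specific context whose size is only known at
 * run time (the role of crypto_skcipher_reqsize() in the real macro). */
static size_t reqsize(void) { return 128; }

size_t request_on_stack_demo(void)
{
    /* The macro's trick: a char VLA sized for header + context,
     * reinterpreted as a struct pointer - effectively an alloca().
     * The total stack usage depends on reqsize(), which is exactly
     * why the compiler cannot warn about the frame size. */
    _Alignas(struct request) char raw[sizeof(struct request) + reqsize()];
    struct request *req = (void *)raw;

    memset(raw, 0, sizeof(raw));
    req->ctx_size = reqsize();
    return sizeof(raw);       /* on-stack footprint, driver-dependent */
}
```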
On Tue, Mar 6, 2018 at 12:42 PM, Arnd Bergmann <arnd@arndb.de> wrote: > > Forcing gcc to allocate a stack slot and zero-initialize it should > find many bugs by adding valid warnings, but also add lots of > false positives as well as prevent important optimizations in other > places that are actually well-defined. Oh, no, the "force gcc to allocate a stack slot" would be absolutely insane. You should never do that. Anybody who does that should be shot. But you don't have to force any stack allocation. You should just initialize it to zero (*without* the stack allocation). Then the optimization passes will just remove the initialization in 99.9% of all cases. Only very occasionally - when gcc cannot see it being overwritten - would it remain. And those are exactly the cases where you *want* it to remain. Of course, it is possible that you can't just do that from a plugin. But I can almost guarantee that it would be trivial to do as a gcc switch if any gcc people wanted to do it. Just look at each scalar auto variable, and if it doesn't have an initializer, just add one to zero completely mindlessly. As the crusaders said: "Kill them all and let the optimizer sort them out". (Note: the "trivial to do" is about scalar values only. Non-scalar values have more complexities, like struct padding issues etc). >> If somebody has big arrays on the stack, that's often a problem >> anyway. It may be common in non-kernel code, but kernel code is very >> special. > > I can think of a few cases that are important, this one is well-known: > > int core_sys_select(int n, fd_set __user *inp, fd_set __user *outp, > fd_set __user *exp, struct timespec64 *end_time) > { > .... > /* Allocate small arguments on the stack to save memory and be faster */ > long stack_fds[SELECT_STACK_ALLOC/sizeof(long)]; Oh, I can well imagine things like this. But they will be a handful, they will be seen in profiles, and they'd be easy to mark. 
And then you can validate the ones you mark, instead of worrying about
all the ones you don't even know about.

> There is also the really scary code like:
>
> #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
>         char __##name##_desc[sizeof(struct skcipher_request) + \
>                 crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
>         struct skcipher_request *name = (void *)__##name##_desc

The crypto stuff does disgusting things. It used to do VLA's in
structures too, that has morphed into using these crazy XYZ_ON_STACK()
defines.

They eventually need to fix their own crap at some point. It shouldn't
be something we care about from a design standpoint.

               Linus
On Tue, Mar 6, 2018 at 10:01 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Tue, Mar 6, 2018 at 12:42 PM, Arnd Bergmann <arnd@arndb.de> wrote:
>>
>> Forcing gcc to allocate a stack slot and zero-initialize it should
>> find many bugs by adding valid warnings, but also add lots of
>> false positives as well as prevent important optimizations in other
>> places that are actually well-defined.
>
> Oh, no, the "force gcc to allocate a stack slot" would be absolutely insane.
>
> You should never do that. Anybody who does that should be shot.
>
> But you don't have to force any stack allocation. You should just
> initialize it to zero (*without* the stack allocation).
>
> Then the optimization passes will just remove the initialization in
> 99.9% of all cases. Only very occasionally - when gcc cannot see it
> being overwritten - would it remain. And those are exactly the cases
> where you *want* it to remain.

Right, there are two separate problems: the missing warnings
and the actual uninitialized use. Allocating the stack slots would
address both but only at an enormous cost. Assigning a value
would still have a cost, as it would prevent certain other optimizations,
and it wouldn't help find the missing initializations,
but the cost would obviously be much lower.

       Arnd
On Tue, Mar 6, 2018 at 1:21 PM, Arnd Bergmann <arnd@arndb.de> wrote:
>
> Right, there are two separate problems: the missing warnings
> and the actual uninitialized use. Allocating the stack slots would
> address both but only at an enormous cost. Assigning a value
> would still have a cost, as it would prevent certain other optimizations,
> and it wouldn't help find the missing initializations,
> but the cost would obviously be much lower.

What possible optimization would it prevent?

I cannot think of a single interesting optimization that would be
invalidated by a trivial initialization of 0 to a scalar.

Yes, it can obviously generate some extra code for the "pass pointer
to uninitialized scalar around to be initialized", where it can now
make that stack slot be initialized to zero - so it can generate extra
code. But disabling an optimization?

What did you have in mind?

               Linus
On Tue, 6 Mar 2018 13:01:20 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:

> Then the optimization passes will just remove the initialization in
> 99.9% of all cases. Only very occasionally - when gcc cannot see it
> being overwritten - would it remain. And those are exactly the cases
> where you *want* it to remain.

You mean have gcc fill in the variables that it thinks is used
uninitialized with zeros? As long as it still warns about it, because
that usually catches some real bugs where zeroing the variable doesn't
actually fix the bug.

I also tried the example Arnd posted with:

	int g(int c)
	{
		int i;
		if (c) /* gcc optimizes out the condition as nothing else sets i */
			i = 1;
		return i;
	}

And he's right. -O2 doesn't warn :-( I think that it should.

-- Steve
On Tue, Mar 6, 2018 at 1:36 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> You mean have gcc fill in the variables that it thinks is used
> uninitialized with zeros?

No. It should do it unconditionally.

Because "gcc thinks is used uninitialized" is simply not good enough.

The warning would remain for the case where you don't enable this
hardening feature, so it wouldn't go away.

The warning also wouldn't be *improved*. The warning is simply a
completely separate thing, and as your example shows is *entirely*
independent of the "make all locals be initialized to zero".

               Linus
On Tue, Mar 6, 2018 at 1:41 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> The warning would remain for the case where you don't enable this
> hardening feature, so it wouldn't go away.

Side note: if in ten years we'd have a minimum gcc version that we
could just unconditionally say "auto (scalars) initialize to zero",
then we'd just make that be the *semantics*, and the warning would
obviously simply not ever be an issue.

But that's no different from "static variables initialize to zero" and
nobody sane expects a warning from that. It's just what it is.

But that's at least ten years away. We tend to support disgustingly
old compiler versions.

               Linus
On Tue, Mar 6, 2018 at 10:36 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Tue, 6 Mar 2018 13:01:20 -0800
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>> Then the optimization passes will just remove the initialization in
>> 99.9% of all cases. Only very occasionally - when gcc cannot see it
>> being overwritten - would it remain. And those are exactly the cases
>> where you *want* it to remain.
>
> You mean have gcc fill in the variables that it thinks is used
> uninitialized with zeros? As long as it still warns about it, because
> that usually catches some real bugs where zeroing the variable doesn't
> actually fix the bug.
>
> I also tried the example Arnd posted with:
>
> 	int g(int c)
> 	{
> 		int i;
> 		if (c) /* gcc optimizes out the condition as nothing else sets i */
> 			i = 1;
> 		return i;
> 	}
>
> And he's right. -O2 doesn't warn :-( I think that it should.

See this bug that has been open since 2004 and the 31 bugs that have
been marked as duplicates since then:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=18501

I don't really understand it myself, but I do understand that the gcc
developers think this is a hard problem to solve.

       Arnd
On Tue, Mar 6, 2018 at 10:29 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Tue, Mar 6, 2018 at 1:21 PM, Arnd Bergmann <arnd@arndb.de> wrote:
>>
>> Right, there are two separate problems: the missing warnings
>> and the actual uninitialized use. Allocating the stack slots would
>> address both but only at an enormous cost. Assigning a value
>> would still have a cost, as it would prevent certain other optimizations,
>> and it wouldn't help find the missing initializations,
>> but the cost would obviously be much lower.
>
> What possible optimization would it prevent?
>
> I cannot think of a single interesting optimization that would be
> invalidated by a trivial initialization of 0 to a scalar.
>
> Yes, it can obviously generate some extra code for the "pass pointer
> to uninitialized scalar around to be initialized", where it can now
> make that stack slot be initialized to zero - so it can generate extra
> code. But disabling an optimization?
>
> What did you have in mind?

The type of optimization that causes my earlier example

	int g(int c)
	{
		int i;
		if (c) /* gcc optimizes out the condition as nothing else sets i */
			i = 1;
		return i;
	}

to be simplified to

	int g(int c)
	{
		return 1;
	}

Obviously the specific example is a bug and we'd be better off with
the zero-initialization, but it was clearly an intentional
optimization.

The bug report in [1] suggests that the optimization step that causes
the loss of information here is "sparse conditional constant
propagation" as described in [2]. I have to admit that I understand
nearly nothing of that paper, but my memory from earlier discussions
with gcc folks is that it will merge code paths for uninitialized
variables with the normal case if there is exactly one initialization,
or more generally try to eliminate any code path that leads to
undefined behavior.
By forcing an initialization, I would imagine that this early dead
code elimination fails even for the case that a later optimization
step can prove that an uninitialized variable use never occurs.

       Arnd

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=18501
[2] http://www.cs.wustl.edu/~cytron/531Pages/f11/Resources/Papers/cprop.pdf
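For contrast, here is a minimal sketch of the zero-initialized variant being proposed in this thread. This is illustration only, not kernel code: with the explicit `= 0` every path produces a defined value, so sparse conditional constant propagation has no undefined path left to fold away.

```c
#include <assert.h>

/* Arnd's g() with the zero-initialization added: the function is now
 * fully defined, so the compiler may not simplify it to 'return 1'. */
static int g_zeroinit(int c)
{
	int i = 0;	/* implicit under the proposed semantics */

	if (c)
		i = 1;
	return i;	/* provably 0 or 1, never garbage */
}
```

In the common case where every path already assigns `i`, the optimizer removes the `= 0` as a dead store, which is why the expected cost of the proposal is low.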
On Tue, Mar 6, 2018 at 1:47 PM, Arnd Bergmann <arnd@arndb.de> wrote:
>
> I don't really understand it myself, but I do understand that
> the gcc developers think this is a hard problem to solve.

Depending on the internal representation, things that look absolutely
trivially obvious to humans may not actually be all that trivial at
all.

In particular, the only sane modern IR for a compiler is based on SSA
(static single assignment), and in that form, the example code is
actually very obvious: the variable 'i' is assigned to exactly once,
and has the value 1, so *OBVIOUSLY* the value of 'i' is 1 at each use.

It's really so obvious that sparse does exactly the same. If you use
the sparse "test-linearize" program, you will see that that test
program results in

   g:
   .L0:
	<entry-point>
	ret.32      $1

because it is very obvious to the optimizer that 'i' is 1.

In fact, to get anything else, you almost _have_ to initialize 'i' to
"UNDEFINED" in the compiler. At that point the SSA representation says
'i' has two sources, one undefined and one '1', and all the SSA
optimizations and simplifications just DTRT automatically. But it
literally involves adding that initialization.

This is actually one reason I think the C "automatic variables are
undefined" behavior is wrong. It made sense historically, but in an
SSA world, it actually ends up not even helping. You might as well
just initialize the damn things to zero, avoiding an undefined case
and just be done with it, and make that part of the language
definition.

It would just be consistent with static variables anyway, so it would
actually clean up the language conceptually too.

Of course, within the confines of C _history_, that didn't make sense.
SSA became a thing two decades after C had been invented, and it took
even longer for it to become the de-facto standard IR for any sane
optimization.

Honestly, that whole "local variable might be used uninitialized" is
just a historical accident, and is not some fundamentally good
warning.
It's _only_ a valid warning due to a language mis-feature.

               Linus
On Tue, Mar 6, 2018 at 2:09 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Tue, Mar 6, 2018 at 10:29 PM, Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
>>
>> Yes, it can obviously generate some extra code for the "pass pointer
>> to uninitialized scalar around to be initialized", where it can now
>> make that stack slot be initialized to zero - so it can generate extra
>> code. But disabling an optimization?
>>
>> What did you have in mind?
>
> The type of optimization that causes my earlier example

That's not a valid "optimization".

It doesn't make correct code run faster. It just makes incorrect code
result in incorrect results.

See my previous email about _why_ it does it, but the point remains:
the "optimization" is not actually an optimization. It's just "garbage
in, garbage out". Speed doesn't matter for garbage.

In fact, see my explanation for _why_ that optimization happens to
understand why adding an initializer doesn't make it go slower, but
simply makes it well-defined and thus makes the optimization go away.

Really, only stupid compiler people think it's an "optimization" to
take advantage of undefined behavior. Sure, garbage code generation
can run faster, but that's not "optimization".

We want to very much avoid those kinds of people, and those kinds of
optimizations. It's very much why we disable strict aliasing etc -
because the whole optimization is purely based on taking advantage of
undefined behavior.

It's shit. Don't call it anything else.

Getting rid of that optimization is a *good* thing, not a bad thing.

               Linus
On Tue, 6 Mar 2018 13:47:11 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:

> But that's no different from "static variables initialize to zero" and
> nobody sane expects a warning from that. It's just what it is.

Yeah but I would argue that static variables are different. They are
either used in multiple functions or are used for the same function
that maintains its value for multiple calls to that function. Either
way the semantics of a static variable is that it has to be
initialized to something, because you don't know when the first one
sets it.

Local variables on the other hand are only in scope of one logical
function algorithm. I've done lots of stupid errors where I may
initialize a local variable in a loop and forget that the loop may
never execute. Sometimes the original value shouldn't be zero,
although most times it is. But that warning has saved me, especially
when the value isn't supposed to be zero.

-- Steve
On Tue, Mar 6, 2018 at 2:29 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> Yeah but I would argue that static variables are different.

Not really.

You're basically saying "I'd like my language to have intrinsic
undefined behavior, so that when I hit a bug that might trigger that
undefined behavior I get a warning".

Do such bugs happen? Sure. But _other_ bugs happen all the time, and
the ONLY real reason you are so adamant about _that_ particular
warning, is that "undefined behavior" is such a serious problem.

Think about that for a second.

Make the serious problem go away, and the warning MAKES NO SENSE.

Do you want a compiler to warn you when you write code like this:

	double area(double radius)
	{
		return radius*2*pi;
	}

just because it's obviously buggy shit? And do you *really* expect a
compiler warning for that kind of obviously buggy shit?

I bet you don't. And if you don't, why do you expect a compiler
warning for

	int g(int c)
	{
		int i = 0;
		if (c)
			i = 1;
		return i;
	}

which is *literally* what your example would have been had C just been
specified to avoid unnecessary undefined behavior?

Seriously. Look at that example, and tell me that gcc should warn
about it. I can imagine the warning ("warning: function 'g()' is
stupidly written, just use !!c"). But nobody sane really would expect
a compiler to warn about it. Once a compiler is that smart, you
wouldn't write code for it, you'd just ask it to generate code from a
description.

Guys, everybody agrees that C isn't a safe language. Do you think that
lack of safety is a _good_ thing?

Do you realize that most of the lack of safety is almost directly
about flexibility, simplicity, and good code generation?

But what if I told you that some of the lack of safety doesn't
actually add to flexibility, simplicity, _or_ good code generation?
Wouldn't you say "we don't want it to be unsafe" then?

I'm literally telling you that lack of variable initialization is
almost purely a bad thing.
C would be a safer language, with less undefined behavior, if it just
made the initialization of automatic variables be something you cannot
avoid.

For scalars in particular, there is basically no downside. Aggregate
types really are much less black-and-white.

               Linus
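The "less black-and-white" caveat about aggregates can be made concrete with a small sketch (illustrative only, with a made-up struct): an implicit initializer for an aggregate has to decide what happens to padding bytes, and member-wise zeroing is not the same operation as clearing the whole object, which matters for stack-content leaks of exactly the kind STACKLEAK targets.

```c
#include <assert.h>
#include <string.h>

/* A hypothetical struct with internal padding: on typical ABIs, three
 * padding bytes follow 'tag' so that 'value' is 4-byte aligned. */
struct sample {
	char tag;
	int  value;
};

static int padded_demo(void)
{
	struct sample a = { 0 };	/* members zeroed; padding bytes unspecified */
	struct sample b;

	memset(&b, 0, sizeof(b));	/* every byte zeroed, padding included */

	/* The visible members agree either way; only the padding differs. */
	return a.tag == b.tag && a.value == b.value;
}
```

This is why the thread treats scalar zero-initialization as essentially free but aggregate initialization as a separate, harder design question.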
On Tue, 6 Mar 2018 14:41:55 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:

> I'm literally telling you that lack of variable initialization is
> almost purely a bad thing. C would be a safer language, with less
> undefined behavior, if it just made the initialization of automatic
> variables be something you cannot avoid.

I get your point. Basically you are saying that the language should
have forced all local variables to be initialized to zero in the spec,
and the compiler is free to optimize that initialization out if the
variable is always initialized first. Just like it would optimize the
below.

	int g(int c)
	{
		int i = 0;

		if (c < 10)
			i = 1;
		else
			i = 2;
		return i;
	};

There's no reason to initialize i to zero because it will never return
zero. But what you are saying is to make that implicit by just
declaring 'i'.

Other languages do this, and you are right, I don't expect warnings to
appear in those. I guess I've grown so accustomed to this "undefined
behavior" it's part of what I just expect.

-- Steve
On Tue, Mar 6, 2018 at 2:52 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> I get your point. Basically you are saying that the language should
> have forced all local variables to be initialized to zero in the spec,
> and the compiler is free to optimize that initialization out if the
> variable is always initialized first.

Exactly.

> Just like it would optimize the below.
>
> 	int g(int c)
> 	{
> 		int i = 0;
>
> 		if (c < 10)
> 			i = 1;
> 		else
> 			i = 2;
> 		return i;
> 	};
>
> There's no reason to initialize i to zero because it will never return
> zero. But what you are saying is to make that implicit by just
> declaring 'i'.

Yes.

And at the same time, it is certainly true that a compiler cannot
*always* remove the unnecessary initialization. But with any half-way
modern compiler, all the _normal_ cases would be no-brainers, and the
case where the compiler couldn't do so really is code that almost
certainly wants that initialization anyway, because a _human_
generally wouldn't see that it's initialized either.

The obvious counter-example is literally

> 	int g(int c)
> 	{
> 		int i = 0;
>
> 		initialize_variable(&i);
>
> 		if (c < 10)

where a *human* can tell that "hey, that initialization to zero is
pointless, because 'i' is clearly being initialized by that
'initialize_variable()' function call".

But the compiler often cannot see that "obvious" initialization, and
would have to spend the extra instruction to actually do that zero
initialization.

So I'm not saying that it would always be entirely free to just have
the safe default.

But in the context of the kernel, I think that we can probably agree
that "oops, we've had exactly the kind of bug where a function
_didn't_ end up initializing the variable every time even though a
human obviously thought it did" actually has happened.

Now, as mentioned, aggregate types are different.
They are _less_ different for the kernel than they are for user
programs (because we have to be careful about aggregate types on the
stack anyway just for stack size reasons), but they do have more
issues with the implicit initializers.

For aggregate types, initializers are obviously more expensive to
begin with, and the "initialize by passing a reference to an external
function" is much more common too. So there's a double hit.

At the same time, aggregate types are obviously where we tend to have
most problems, and I really think we should strive to avoid even
medium-sized aggregate types in the kernel anyway.

Which is why I'm much more willing to take a hit on those kinds of
things (that I think should be rare), than add things like "let's just
clear the stack after every system call" kinds of things.

But I do agree that for aggregate types, we almost certainly do want
to have some way to flag "this is explicitly not initialized".

Instead, right now, we're in the reverse situation, where we have to
add explicit initializers. I think we'd be in a better situation if
the *default* semantics was the safe and well-defined behavior, and
we'd have to do extra things to get the undefined uninitialized
behavior for when we know better, and we really really care.

               Linus
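The initialize-by-reference counter-example above can be completed into a runnable sketch. `initialize_variable()` is Linus's hypothetical name; the body, the value 42, and the `i += 1` on the early path are invented here purely so the two paths are observable. The point being illustrated: when the only real initialization happens through a pointer (often in another translation unit), the compiler cannot prove the defensive `i = 0` dead, so the proposed default semantics would cost one extra store here.

```c
#include <assert.h>

/* Hypothetical out-of-line initializer: in real code this would live in
 * another translation unit, hiding the store from the optimizer. */
static void initialize_variable(int *p)
{
	*p = 42;
}

static int g(int c)
{
	int i = 0;		/* removable only if the store below is visible */

	initialize_variable(&i);
	if (c < 10)
		i += 1;		/* invented here to make both paths observable */
	return i;
}
```

Either way the behavior is well defined; the only question the thread debates is whether the occasionally unremovable store is an acceptable price.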
* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Tue, Mar 6, 2018 at 1:41 PM, Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > The warning would remain for the case where you don't enable this
> > hardening feature, so it wouldn't go away.
>
> Side note: if in ten years we'd have a minimum gcc version that we
> could just unconditionally say "auto (scalars) initialize to zero",
> then we'd just make that be the *semantics*, and the warning would
> obviously simply not ever be an issue.

Btw., I'd suggest we initialize aggregate types to zero as well, and
then work from there by marking exceptions via attributes.

From what I've seen over 90% of 'tricky' initialization sequences
either don't matter to performance, or are unnecessarily complicated.

I.e. let's eliminate VLAs and let's also make the object
initialization aspect of the C language reliably and broadly safe by
default (via a GCC plugin) with no exceptions, and allow an opt-in
mechanism for more fragile (but faster if coded correctly) constructs.

Is it possible to implement this "safe automatic variable
initialization" language feature via a GCC plugin robustly, while
still keeping code generation sane? (i.e. no forced allocation of
stack slots, etc.) It should be a superset of
CONFIG_GCC_PLUGIN_STRUCTLEAK=y.

Plugin support is present in GCC version 4.5 and higher, correct? So
if such a plugin is possible we could raise the minimum GCC version to
support it unconditionally.

I suspect a fair chunk of all kernel CVEs would go away if we fixed
the C language this way.

Thanks,

	Ingo
On 12 March 2018 at 08:22, Ingo Molnar <mingo@kernel.org> wrote:
>
> * Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>> On Tue, Mar 6, 2018 at 1:41 PM, Linus Torvalds
>> <torvalds@linux-foundation.org> wrote:
>> >
>> > The warning would remain for the case where you don't enable this
>> > hardening feature, so it wouldn't go away.
>>
>> Side note: if in ten years we'd have a minimum gcc version that we
>> could just unconditionally say "auto (scalars) initialize to zero",
>> then we'd just make that be the *semantics*, and the warning would
>> obviously simply not ever be an issue.
>
> Btw., I'd suggest we initialize aggregate types to zero as well, and then work
> from there by marking exceptions via attributes.
>

I'd argue that we need to move to struct assignment for constructors,
similar to how checkpatch recommends it over memcpy()/memset(). That
way, the compiler can tell that such a variable is being assigned, and
so it can warn in the usual way when it notices a code path that does
not involve an initialization. At the same time, it will warn about
not returning a value if there is a code path in the constructor that
results in no initialization being performed. It should also help the
compiler optimize by keeping such variables entirely in registers if
the address is never taken otherwise.

> From what I've seen over 90% of 'tricky' initialization sequences either don't
> matter to performance, or are unnecessarily complicated.
>
> I.e. let's eliminate VLAs and let's also make the object initialization aspect of
> the C language reliably and broadly safe by default (via a GCC plugin) with no
> exceptions, and allow an opt-in mechanism for more fragile (but faster if coded
> correctly) constructs.
>
> Is it possible to implement this "safe automatic variable initialization" language
> feature via a GCC plugin robustly, while still keeping code generation sane? (i.e.
> no forced allocation of stack slots, etc.)
> It should be a superset of
> CONFIG_GCC_PLUGIN_STRUCTLEAK=y.
>

I think that should be feasible, yes.

It would be worth trying whether the current code can be simplified,
though: it currently takes care not to add such an initialization if
it can already spot one, but I wonder whether just blindly adding one
at the start and letting the compiler optimize it away again is safer.

> Plugin support is present in GCC version 4.5 and higher, correct? So if such a
> plugin is possible we could raise the minimum GCC version to support it
> unconditionally.
>

I think that would be reasonable, yes, but we should check across
architectures as well.
* Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:

> > Is it possible to implement this "safe automatic variable initialization" language
> > feature via a GCC plugin robustly, while still keeping code generation sane? (i.e.
> > no forced allocation of stack slots, etc.) It should be a superset of
> > CONFIG_GCC_PLUGIN_STRUCTLEAK=y.
>
> I think that should be feasible, yes.
>
> It would be worth trying whether the current code can be simplified, though: it
> currently takes care not to add such an initialization if it can already spot
> one, but I wonder whether just blindly adding one at the start and letting the
> compiler optimize it away again is safer.

Absolutely - followed by:

 - a good look at the resulting code generation with a reasonably
   uptodate GCC

 - a look at the resulting code with older GCCs, to make sure there's
   no pathological code generation

... because IMHO in the end it is the practical effects that will make
(or break) any such attempt.

( BTW., initializing all automatic variables might even speed up
  certain optimization passes, FWIW. )

Thanks,

	Ingo
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 74f6eee..b4be776 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -46,6 +46,12 @@ __visible inline void enter_from_user_mode(void)
 static inline void enter_from_user_mode(void) {}
 #endif
 
+#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+asmlinkage void erase_kstack(void);
+#else
+static void erase_kstack(void) {}
+#endif
+
 static void do_audit_syscall_entry(struct pt_regs *regs, u32 arch)
 {
 #ifdef CONFIG_X86_64
@@ -128,6 +134,7 @@ static long syscall_trace_enter(struct pt_regs *regs)
 
 	do_audit_syscall_entry(regs, arch);
 
+	erase_kstack();
 	return ret ?: regs->orig_ax;
 }
Make STACKLEAK erase kernel stack after ptrace/seccomp/auditing not to
leave any sensitive information on the stack for the syscall code.

This code is modified from Brad Spengler/PaX Team's code in the last
public patch of grsecurity/PaX based on our understanding of the code.
Changes or omissions from the original code are ours and don't reflect
the original grsecurity/PaX code.

Signed-off-by: Alexander Popov <alex.popov@linux.com>
---
 arch/x86/entry/common.c | 7 +++++++
 1 file changed, 7 insertions(+)