
[v11,2/6] x86/entry: Add STACKLEAK erasing the kernel stack at the end of syscalls

Message ID 1523024546-6150-3-git-send-email-alex.popov@linux.com (mailing list archive)
State New, archived

Commit Message

Alexander Popov April 6, 2018, 2:22 p.m. UTC
The STACKLEAK feature erases the kernel stack before returning from
syscalls. That reduces the information which kernel stack leak bugs can
reveal and blocks some uninitialized stack variable attacks. Moreover,
STACKLEAK provides runtime checks for kernel stack overflow detection.

This commit introduces the architecture-specific code that fills the used
part of the kernel stack with a poison value before returning to
userspace. The full STACKLEAK feature also contains the gcc plugin,
which comes in a separate commit.
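
As a rough sketch (not part of this patch, and with a hypothetical function
name), the instrumentation emitted by that plugin is expected to keep the new
thread.lowest_stack field pointing at the deepest stack address used during
the current syscall, which is what the erase pass below relies on:

	/*
	 * Illustrative sketch only: the real calls are inserted by the gcc
	 * plugin from a separate commit; the function name and the exact
	 * bounds check are assumptions, not the plugin's actual output.
	 */
	#include <linux/sched.h>
	#include <linux/sched/task_stack.h>
	#include <asm/current.h>

	static __always_inline void stackleak_track_sketch(void)
	{
		unsigned long sp = (unsigned long)&sp;	/* approximate stack pointer */

		/* Remember the deepest (lowest) stack address touched so far */
		if (sp < current->thread.lowest_stack &&
		    sp >= (unsigned long)task_stack_page(current) + sizeof(unsigned long))
			current->thread.lowest_stack = sp;
	}

In the erase pass below, erase_kstack() starts from this recorded low-water
mark rather than scanning the whole THREAD_SIZE region on every return.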

The STACKLEAK feature is ported from grsecurity/PaX. More information at:
  https://grsecurity.net/
  https://pax.grsecurity.net/

This code is modified from Brad Spengler/PaX Team's code in the last
public patch of grsecurity/PaX based on our understanding of the code.
Changes or omissions from the original code are ours and don't reflect
the original grsecurity/PaX code.

Signed-off-by: Alexander Popov <alex.popov@linux.com>
---
 Documentation/x86/x86_64/mm.txt  |  2 ++
 arch/Kconfig                     | 27 ++++++++++++++++++++
 arch/x86/Kconfig                 |  1 +
 arch/x86/entry/Makefile          |  3 +++
 arch/x86/entry/calling.h         | 14 +++++++++++
 arch/x86/entry/entry_32.S        |  7 ++++++
 arch/x86/entry/entry_64.S        |  3 +++
 arch/x86/entry/entry_64_compat.S |  5 ++++
 arch/x86/entry/erase.c           | 54 ++++++++++++++++++++++++++++++++++++++++
 arch/x86/include/asm/processor.h |  4 +++
 arch/x86/kernel/process_32.c     |  5 ++++
 arch/x86/kernel/process_64.c     |  5 ++++
 include/linux/compiler.h         |  4 +++
 13 files changed, 134 insertions(+)
 create mode 100644 arch/x86/entry/erase.c

Comments

Kees Cook April 16, 2018, 6:29 p.m. UTC | #1
On Fri, Apr 6, 2018 at 7:22 AM, Alexander Popov <alex.popov@linux.com> wrote:
> This commit introduces the architecture-specific code that fills the used
> part of the kernel stack with a poison value before returning to
> userspace. The full STACKLEAK feature also contains the gcc plugin,
> which comes in a separate commit.

Thanks for sending this again! And thanks for the updated reasoning
for why this remains a valuable addition:

https://lkml.kernel.org/r/1523024546-6150-1-git-send-email-alex.popov@linux.com

I, too, remain convinced this is a good protection to have, even as we
slowly remove VLAs and try to improve the compiler's initialization of
stack variables.

Dave, Ingo, Linus: how does this look? With the assembly rewritten
into C, the entry changes are very small:

>  arch/x86/entry/entry_32.S        |  7 ++++++
>  arch/x86/entry/entry_64.S        |  3 +++
>  arch/x86/entry/entry_64_compat.S |  5 ++++
>  arch/x86/entry/erase.c           | 54 ++++++++++++++++++++++++++++++++++++++++

I'd really like to get people's Ack/Review. :)

Laura, can this C version work for arm64 as well?

Thanks,

-Kees
Laura Abbott April 18, 2018, 6:33 p.m. UTC | #2
On 04/16/2018 11:29 AM, Kees Cook wrote:
> On Fri, Apr 6, 2018 at 7:22 AM, Alexander Popov <alex.popov@linux.com> wrote:
>> This commit introduces the architecture-specific code that fills the used
>> part of the kernel stack with a poison value before returning to
>> userspace. The full STACKLEAK feature also contains the gcc plugin,
>> which comes in a separate commit.
> 
> Thanks for sending this again! And thanks for the updated reasoning
> for why this remains a valuable addition:
> 
> https://lkml.kernel.org/r/1523024546-6150-1-git-send-email-alex.popov@linux.com
> 
> I, too, remain convinced this is a good protection to have, even as we
> slowly remove VLAs and try to improve the compiler's initialization of
> stack variables.
> 
> Dave, Ingo, Linus: how does this look? With the assembly rewritten
> into C, the entry changes are very small:
> 
>>   arch/x86/entry/entry_32.S        |  7 ++++++
>>   arch/x86/entry/entry_64.S        |  3 +++
>>   arch/x86/entry/entry_64_compat.S |  5 ++++
>>   arch/x86/entry/erase.c           | 54 ++++++++++++++++++++++++++++++++++++++++
> 
> I'd really like to get people's Ack/Review. :)
> 
> Laura, can this C version work for arm64 as well?
> 
> Thanks,
> 
> -Kees
> 

I did a quick port and it seems to work on a minimal system
(passes the LKDTM tests). I'll clean it up and do a few more
tests before sending it out, and see about giving this series
another review.

Thanks,
Laura
Dave Hansen April 18, 2018, 6:50 p.m. UTC | #3
On 04/16/2018 11:29 AM, Kees Cook wrote:
> Dave, Ingo, Linus: how does this look? With the assembly rewritten
> into C, the entry changes are very small:

The assembly looks very nice to me now.  It is as minimally invasive as
it can get.  Definitely no objections from me on that part.
Kees Cook April 24, 2018, 1:03 a.m. UTC | #4
On Wed, Apr 18, 2018 at 11:50 AM, Dave Hansen
<dave.hansen@linux.intel.com> wrote:
> On 04/16/2018 11:29 AM, Kees Cook wrote:
>> Dave, Ingo, Linus: how does this look? With the assembly rewritten
>> into C, the entry changes are very small:
>
> The assembly looks very nice to me now.  It is as minimally invasive as
> it can get.  Definitely no objections from me on that part.

Can you give an Acked-by for the x86 parts? Or Ingo?

If this is workable, I'd like to carry it in -next to see if anything
else shakes out...

-Kees
Dave Hansen April 24, 2018, 4:23 a.m. UTC | #5
Hi Alexander,

You can add:

Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>

for this patch if you like.  I haven't taken a super close look at the
rest, but this is certainly minimally invasive from my point of view for
the entry code.  Thanks, again for reworking it.
Kees Cook April 30, 2018, 11:48 p.m. UTC | #6
On Mon, Apr 23, 2018 at 9:23 PM, Dave Hansen
<dave.hansen@linux.intel.com> wrote:
> Hi Alexander,
>
> You can add:
>
> Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
>
> for this patch if you like.  I haven't taken a super close look at the
> rest, but this is certainly minimally invasive from my point of view for
> the entry code.  Thanks, again for reworking it.

Thanks Dave!

Given this improvement and your review, I'm going to start carrying
this for linux-next. Linus, if you're still opposed to this even after
the changes here in v11, please let us know. I'd rather hash things
out now instead of during a NAK in the 4.18 merge window. :)

Thanks!

-Kees
Thomas Gleixner May 2, 2018, 8:42 a.m. UTC | #7
On Mon, 30 Apr 2018, Kees Cook wrote:

> On Mon, Apr 23, 2018 at 9:23 PM, Dave Hansen
> <dave.hansen@linux.intel.com> wrote:
> > Hi Alexander,
> >
> > You can add:
> >
> > Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
> >
> > for this patch if you like.  I haven't taken a super close look at the
> > rest, but this is certainly minimally invasive from my point of view for
> > the entry code.  Thanks, again for reworking it.
> 
> Thanks Dave!
> 
> Given this improvement and your review, I'm going to start carrying
> this for linux-next. Linus, if you're still opposed to this even after
> the changes here in v11, please let us know. I'd rather hash things
> out now instead of during a NAK in the 4.18 merge window. :)

Kees, can we please route that x86/entry stuff through tip to avoid
conflicts as there are other changes in that area on the horizon.

Thanks,

	tglx
Kees Cook May 2, 2018, 12:38 p.m. UTC | #8
On Wed, May 2, 2018 at 1:42 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
> On Mon, 30 Apr 2018, Kees Cook wrote:
>
>> On Mon, Apr 23, 2018 at 9:23 PM, Dave Hansen
>> <dave.hansen@linux.intel.com> wrote:
>> > Hi Alexander,
>> >
>> > You can add:
>> >
>> > Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
>> >
>> > for this patch if you like.  I haven't taken a super close look at the
>> > rest, but this is certainly minimally invasive from my point of view for
>> > the entry code.  Thanks, again for reworking it.
>>
>> Thanks Dave!
>>
>> Given this improvement and your review, I'm going to start carrying
>> this for linux-next. Linus, if you're still opposed to this even after
>> the changes here in v11, please let us know. I'd rather hash things
>> out now instead of during a NAK in the 4.18 merge window. :)
>
> Kees, can we please route that x86/entry stuff through tip to avoid
> conflicts as there are other changes in that area on the horizon.

Sure, let me figure out how best to split up the patches, since they
touch x86/entry, gcc-plugins, and lkdtm. Thanks!

-Kees
Thomas Gleixner May 2, 2018, 12:39 p.m. UTC | #9
On Wed, 2 May 2018, Kees Cook wrote:
> On Wed, May 2, 2018 at 1:42 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
> > On Mon, 30 Apr 2018, Kees Cook wrote:
> >
> >> On Mon, Apr 23, 2018 at 9:23 PM, Dave Hansen
> >> <dave.hansen@linux.intel.com> wrote:
> >> > Hi Alexander,
> >> >
> >> > You can add:
> >> >
> >> > Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
> >> >
> >> > for this patch if you like.  I haven't taken a super close look at the
> >> > rest, but this is certainly minimally invasive from my point of view for
> >> > the entry code.  Thanks, again for reworking it.
> >>
> >> Thanks Dave!
> >>
> >> Given this improvement and your review, I'm going to start carrying
> >> this for linux-next. Linus, if you're still opposed to this even after
> >> the changes here in v11, please let us know. I'd rather hash things
> >> out now instead of during a NAK in the 4.18 merge window. :)
> >
> > Kees, can we please route that x86/entry stuff through tip to avoid
> > conflicts as there are other changes in that area on the horizon.
> 
> Sure, let me figure out how best to split up the patches, since they
> touch x86/entry, gcc-plugins, and lkdtm. Thanks!

Are they independent or do they carry dependencies?

Thanks,

	tglx
Kees Cook May 2, 2018, 12:51 p.m. UTC | #10
On Wed, May 2, 2018 at 5:39 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
> On Wed, 2 May 2018, Kees Cook wrote:
>> On Wed, May 2, 2018 at 1:42 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
>> > On Mon, 30 Apr 2018, Kees Cook wrote:
>> >
>> >> On Mon, Apr 23, 2018 at 9:23 PM, Dave Hansen
>> >> <dave.hansen@linux.intel.com> wrote:
>> >> > Hi Alexander,
>> >> >
>> >> > You can add:
>> >> >
>> >> > Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
>> >> >
>> >> > for this patch if you like.  I haven't taken a super close look at the
>> >> > rest, but this is certainly minimally invasive from my point of view for
>> >> > the entry code.  Thanks, again for reworking it.
>> >>
>> >> Thanks Dave!
>> >>
>> >> Given this improvement and your review, I'm going to start carrying
>> >> this for linux-next. Linus, if you're still opposed to this even after
>> >> the changes here in v11, please let us know. I'd rather hash things
>> >> out now instead of during a NAK in the 4.18 merge window. :)
>> >
>> > Kees, can we please route that x86/entry stuff through tip to avoid
>> > conflicts as there are other changes in that area on the horizon.
>>
>> Sure, let me figure out how best to split up the patches, since they
>> touch x86/entry, gcc-plugins, and lkdtm. Thanks!
>
> Are they independent or do they carry dependencies?

They carry dependencies, as it interacts with the gcc plugin (and
lkdtm). As I don't have other plugin changes for 4.18 queued, you
could take the whole series for x86/entry if you want? Otherwise I can
try to split out the x86 change so it's more self-contained.

-Kees
Kees Cook May 2, 2018, 9:02 p.m. UTC | #11
On Wed, May 2, 2018 at 5:51 AM, Kees Cook <keescook@chromium.org> wrote:
> On Wed, May 2, 2018 at 5:39 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
>> On Wed, 2 May 2018, Kees Cook wrote:
>>> On Wed, May 2, 2018 at 1:42 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
>>> > On Mon, 30 Apr 2018, Kees Cook wrote:
>>> >
>>> >> On Mon, Apr 23, 2018 at 9:23 PM, Dave Hansen
>>> >> <dave.hansen@linux.intel.com> wrote:
>>> >> > Hi Alexander,
>>> >> >
>>> >> > You can add:
>>> >> >
>>> >> > Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
>>> >> >
>>> >> > for this patch if you like.  I haven't taken a super close look at the
>>> >> > rest, but this is certainly minimally invasive from my point of view for
>>> >> > the entry code.  Thanks, again for reworking it.
>>> >>
>>> >> Thanks Dave!
>>> >>
>>> >> Given this improvement and your review, I'm going to start carrying
>>> >> this for linux-next. Linus, if you're still opposed to this even after
>>> >> the changes here in v11, please let us know. I'd rather hash things
>>> >> out now instead of during a NAK in the 4.18 merge window. :)
>>> >
>>> > Kees, can we please route that x86/entry stuff through tip to avoid
>>> > conflicts as there are other changes in that area on the horizon.
>>>
>>> Sure, let me figure out how best to split up the patches, since they
>>> touch x86/entry, gcc-plugins, and lkdtm. Thanks!
>>
>> Are they independent or do they carry dependencies?
>
> They carry dependencies, as it interacts with the gcc plugin (and
> lkdtm). As I don't have other plugin changes for 4.18 queued, you
> could take the whole series for x86/entry if you want? Otherwise I can
> try to split out the x86 change so it's more self-contained.

The best way to do this would be to add the x86 entry changes without
CONFIG_GCC_PLUGIN_STACKLEAK, which leaves the results not
compile-testable. Alternatively, if you carried everything, it'd be
weird too, with arm64 coming (which has small changes to the plugin).

I think it'd be better for this to go via my tree with your Ack (where
I can carry the plugin, lkdtm, x86, and arm64 changes). How does that
sound?

-Kees
Thomas Gleixner May 6, 2018, 10:04 a.m. UTC | #12
On Wed, 2 May 2018, Kees Cook wrote:
> On Wed, May 2, 2018 at 5:51 AM, Kees Cook <keescook@chromium.org> wrote:
> I think it'd be better for this to go via my tree with your Ack (where
> I can carry the plugin, lkdtm, x86, and arm64 changes). How does that
> sound?

Works for me.

Acked-by: Thomas Gleixner <tglx@linutronix.de>

Patch

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index ea91cb6..21ee7c5 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -24,6 +24,7 @@  ffffffffa0000000 - [fixmap start]   (~1526 MB) module mapping space (variable)
 [fixmap start]   - ffffffffff5fffff kernel-internal fixmap range
 ffffffffff600000 - ffffffffff600fff (=4 kB) legacy vsyscall ABI
 ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
+STACKLEAK_POISON value in this last hole: ffffffffffff4111
 
 Virtual memory map with 5 level page tables:
 
@@ -50,6 +51,7 @@  ffffffffa0000000 - fffffffffeffffff (1520 MB) module mapping space
 [fixmap start]   - ffffffffff5fffff kernel-internal fixmap range
 ffffffffff600000 - ffffffffff600fff (=4 kB) legacy vsyscall ABI
 ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
+STACKLEAK_POISON value in this last hole: ffffffffffff4111
 
 Architecture defines a 64-bit virtual address. Implementations can support
 less. Currently supported are 48- and 57-bit virtual addresses. Bits 63
diff --git a/arch/Kconfig b/arch/Kconfig
index 76c0b54..368e2fb 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -401,6 +401,13 @@  config SECCOMP_FILTER
 
 	  See Documentation/prctl/seccomp_filter.txt for details.
 
+config HAVE_ARCH_STACKLEAK
+	bool
+	help
+	  An architecture should select this if it has the code which
+	  fills the used part of the kernel stack with the STACKLEAK_POISON
+	  value before returning from system calls.
+
 config HAVE_GCC_PLUGINS
 	bool
 	help
@@ -531,6 +538,26 @@  config GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
 	  in structures.  This reduces the performance hit of RANDSTRUCT
 	  at the cost of weakened randomization.
 
+config GCC_PLUGIN_STACKLEAK
+	bool "Erase the kernel stack before returning from syscalls"
+	depends on GCC_PLUGINS
+	depends on HAVE_ARCH_STACKLEAK
+	help
+	  This option makes the kernel erase the kernel stack before it
+	  returns from a system call. That reduces the information which
+	  kernel stack leak bugs can reveal and blocks some uninitialized
+	  stack variable attacks. This option also provides runtime checks
+	  for kernel stack overflow detection.
+
+	  The tradeoff is the performance impact: on a single CPU system kernel
+	  compilation sees a 1% slowdown, other systems and workloads may vary
+	  and you are advised to test this feature on your expected workload
+	  before deploying it.
+
+	  This plugin was ported from grsecurity/PaX. More information at:
+	   * https://grsecurity.net/
+	   * https://pax.grsecurity.net/
+
 config HAVE_CC_STACKPROTECTOR
 	bool
 	help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0fa71a7..e700879 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -119,6 +119,7 @@  config X86
 	select HAVE_ARCH_COMPAT_MMAP_BASES	if MMU && COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_THREAD_STRUCT_WHITELIST
+	select HAVE_ARCH_STACKLEAK
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD if X86_64
diff --git a/arch/x86/entry/Makefile b/arch/x86/entry/Makefile
index 06fc70c..abe4d92 100644
--- a/arch/x86/entry/Makefile
+++ b/arch/x86/entry/Makefile
@@ -15,3 +15,6 @@  obj-y				+= vsyscall/
 
 obj-$(CONFIG_IA32_EMULATION)	+= entry_64_compat.o syscall_32.o
 
+obj-$(CONFIG_GCC_PLUGIN_STACKLEAK) += erase.o
+KASAN_SANITIZE_erase.o		:= n
+
diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index be63330..a555712 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -327,8 +327,22 @@  For 32-bit we have the following conventions - kernel is built with
 
 #endif
 
+.macro ERASE_KSTACK_NOCLOBBER
+#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+	PUSH_AND_CLEAR_REGS
+	call erase_kstack
+	POP_REGS
+#endif
+.endm
+
 #endif /* CONFIG_X86_64 */
 
+.macro ERASE_KSTACK
+#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+	call erase_kstack
+#endif
+.endm
+
 /*
  * This does 'call enter_from_user_mode' unless we can avoid it based on
  * kernel config or using the static jump infrastructure.
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 6ad064c..733088e 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -46,6 +46,8 @@ 
 #include <asm/frame.h>
 #include <asm/nospec-branch.h>
 
+#include "calling.h"
+
 	.section .entry.text, "ax"
 
 /*
@@ -298,6 +300,7 @@  ENTRY(ret_from_fork)
 	/* When we fork, we trace the syscall return in the child, too. */
 	movl    %esp, %eax
 	call    syscall_return_slowpath
+	ERASE_KSTACK
 	jmp     restore_all
 
 	/* kernel thread */
@@ -458,6 +461,8 @@  ENTRY(entry_SYSENTER_32)
 	ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \
 		    "jmp .Lsyscall_32_done", X86_FEATURE_XENPV
 
+	ERASE_KSTACK
+
 /* Opportunistic SYSEXIT */
 	TRACE_IRQS_ON			/* User mode traces as IRQs on. */
 	movl	PT_EIP(%esp), %edx	/* pt_regs->ip */
@@ -544,6 +549,8 @@  ENTRY(entry_INT80_32)
 	call	do_int80_syscall_32
 .Lsyscall_32_done:
 
+	ERASE_KSTACK
+
 restore_all:
 	TRACE_IRQS_IRET
 .Lrestore_all_notrace:
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 18ed349..e267899 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -323,6 +323,8 @@  syscall_return_via_sysret:
 	 * We are on the trampoline stack.  All regs except RDI are live.
 	 * We can do future final exit work right here.
 	 */
+	ERASE_KSTACK_NOCLOBBER
+
 	SWITCH_TO_USER_CR3_STACK scratch_reg=%rdi
 
 	popq	%rdi
@@ -681,6 +683,7 @@  GLOBAL(swapgs_restore_regs_and_return_to_usermode)
 	 * We are on the trampoline stack.  All regs except RDI are live.
 	 * We can do future final exit work right here.
 	 */
+	ERASE_KSTACK_NOCLOBBER
 
 	SWITCH_TO_USER_CR3_STACK scratch_reg=%rdi
 
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 08425c4..03d03d4 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -258,6 +258,11 @@  GLOBAL(entry_SYSCALL_compat_after_hwframe)
 
 	/* Opportunistic SYSRET */
 sysret32_from_system_call:
+	/*
+	 * We are not going to return to the userspace from the trampoline
+	 * stack. So let's erase the thread stack right now.
+	 */
+	ERASE_KSTACK
 	TRACE_IRQS_ON			/* User mode traces as IRQs on. */
 	movq	RBX(%rsp), %rbx		/* pt_regs->rbx */
 	movq	RBP(%rsp), %rbp		/* pt_regs->rbp */
diff --git a/arch/x86/entry/erase.c b/arch/x86/entry/erase.c
new file mode 100644
index 0000000..4892335
--- /dev/null
+++ b/arch/x86/entry/erase.c
@@ -0,0 +1,54 @@ 
+#include <linux/bug.h>
+#include <linux/sched.h>
+#include <asm/current.h>
+#include <asm/linkage.h>
+#include <asm/processor.h>
+
+asmlinkage void erase_kstack(void)
+{
+	register unsigned long p = current->thread.lowest_stack;
+	register unsigned long boundary = p & ~(THREAD_SIZE - 1);
+	unsigned long poison = 0;
+	const unsigned long check_depth = STACKLEAK_POISON_CHECK_DEPTH /
+							sizeof(unsigned long);
+
+	/*
+	 * Let's search for the poison value in the stack.
+	 * Start from the lowest_stack and go to the bottom.
+	 */
+	while (p > boundary && poison <= check_depth) {
+		if (*(unsigned long *)p == STACKLEAK_POISON)
+			poison++;
+		else
+			poison = 0;
+
+		p -= sizeof(unsigned long);
+	}
+
+	/*
+	 * One long int at the bottom of the thread stack is reserved and
+	 * should not be poisoned (see CONFIG_SCHED_STACK_END_CHECK).
+	 */
+	if (p == boundary)
+		p += sizeof(unsigned long);
+
+	/*
+	 * So let's write the poison value to the kernel stack.
+	 * Start from the address in p and move up till the new boundary.
+	 */
+	if (on_thread_stack())
+		boundary = current_stack_pointer;
+	else
+		boundary = current_top_of_stack();
+
+	BUG_ON(boundary - p >= THREAD_SIZE);
+
+	while (p < boundary) {
+		*(unsigned long *)p = STACKLEAK_POISON;
+		p += sizeof(unsigned long);
+	}
+
+	/* Reset the lowest_stack value for the next syscall */
+	current->thread.lowest_stack = current_top_of_stack() - 256;
+}
+
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index b0ccd48..0c87813 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -494,6 +494,10 @@  struct thread_struct {
 
 	mm_segment_t		addr_limit;
 
+#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+	unsigned long		lowest_stack;
+#endif
+
 	unsigned int		sig_on_uaccess_err:1;
 	unsigned int		uaccess_err:1;	/* uaccess failed */
 
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 5224c60..1b0892e 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -136,6 +136,11 @@  int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
 	p->thread.sp0 = (unsigned long) (childregs+1);
 	memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));
 
+#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+	p->thread.lowest_stack = (unsigned long)task_stack_page(p) +
+						sizeof(unsigned long);
+#endif
+
 	if (unlikely(p->flags & PF_KTHREAD)) {
 		/* kernel thread */
 		memset(childregs, 0, sizeof(struct pt_regs));
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 9eb448c..82122af 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -281,6 +281,11 @@  int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
 	p->thread.sp = (unsigned long) fork_frame;
 	p->thread.io_bitmap_ptr = NULL;
 
+#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+	p->thread.lowest_stack = (unsigned long)task_stack_page(p) +
+						sizeof(unsigned long);
+#endif
+
 	savesegment(gs, p->thread.gsindex);
 	p->thread.gsbase = p->thread.gsindex ? 0 : me->thread.gsbase;
 	savesegment(fs, p->thread.fsindex);
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index ab4711c..341b6cf8 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -342,4 +342,8 @@  unsigned long read_word_at_a_time(const void *addr)
 	compiletime_assert(__native_word(t),				\
 		"Need native word sized stores/loads for atomicity.")
 
+/* Poison value points to the unused hole in the virtual memory map */
+#define STACKLEAK_POISON -0xBEEF
+#define STACKLEAK_POISON_CHECK_DEPTH 128
+
 #endif /* __LINUX_COMPILER_H */
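
As a side note (not part of the patch), the poison constant can be checked
against the mm.txt layout above: on x86_64, -0xBEEF interpreted as an unsigned
long is ffffffffffff4111, which falls inside the ffffffffffe00000 -
ffffffffffffffff unused hole, so a poison value that ends up dereferenced can
never be a valid kernel pointer. A minimal standalone check (assumes a 64-bit
unsigned long, as on x86_64 userspace):

	/*
	 * Standalone sanity check, not part of the patch: confirms that the
	 * STACKLEAK_POISON value lands in the unused hole documented in mm.txt.
	 */
	#include <stdio.h>

	int main(void)
	{
		unsigned long poison = (unsigned long)-0xBEEFL;

		/* Prints ffffffffffff4111: inside ffffffffffe00000 - ffffffffffffffff */
		printf("STACKLEAK_POISON as a pointer: %016lx\n", poison);
		return 0;
	}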