Patchwork Re: [PATCH RFC v3 1/1] gcc-plugins: Add stackleak feature erasing the kernel stack at the end of syscalls

Submitter Alexander Popov
Date Sept. 25, 2017, 4:17 p.m.
Message ID <7bf586cc-527c-eb3c-b86a-2adbbb7fdfda@linux.com>
Permalink /patch/9970197/
State New

Comments

Alexander Popov - Sept. 25, 2017, 4:17 p.m.
Hello Tycho,

On 21.09.2017 22:39, Alexander Popov wrote:
> On 21.09.2017 00:18, Tycho Andersen wrote:
>> +	/*
>> +	 * Check each byte, as we don't know the current stack alignment.
>> +	 */
> 
> Excuse me, what do you mean by the "current stack alignment"?

I guess I got it now. For x86 and x86_64, the stack alignment is controlled by
cc_stack_align in arch/x86/Makefile (-mpreferred-stack-boundary in the case of gcc).
The stack is 4-byte aligned for x86 and 8-byte aligned for x86_64.
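
(Side note: with gcc, -mpreferred-stack-boundary=N keeps the stack aligned to
2^N bytes, so those values correspond to N=2 and N=3. A trivial userspace
snippet, only for illustration and not kernel code, shows that a word-sized
local like the 'sp' in track_stack() then lands on a word boundary:)

/*
 * Userspace illustration only (not kernel code): with gcc,
 * -mpreferred-stack-boundary=N keeps the stack aligned to 2^N bytes
 * (N=2 -> 4 bytes on x86, N=3 -> 8 bytes on x86_64 in the kernel build),
 * so a word-sized local such as 'sp' below sits on a word boundary.
 */
#include <stdio.h>

int main(void)
{
	unsigned long sp = (unsigned long)&sp;	/* same trick as track_stack() */

	printf("&sp = %#lx, offset within word = %lu\n",
	       sp, sp % (unsigned long)sizeof(unsigned long));
	return 0;
}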

> The STACKLEAK_POISON position is always 8-byte aligned for x86_64 and 4-byte
> aligned for x86 (see the shr instruction in the asm implementation).

Eh, my statement is wrong. I ran a simple experiment: this change makes the
poison unaligned:

diff --git a/fs/exec.c b/fs/exec.c
index 56bdc19..893d2e4 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1961,7 +1961,7 @@ COMPAT_SYSCALL_DEFINE5(execveat, int, fd,
 #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
 void __used track_stack(void)
 {
-       unsigned long sp = (unsigned long)&sp;
+       unsigned long sp = (unsigned long)&sp + 1;

        if (sp < current->thread.lowest_stack &&
            sp >= (unsigned long)task_stack_page(current) +

So your idea to check each byte first should work fine.
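
To make sure we mean the same thing, here is a rough sketch of such a
byte-granular check (only an illustration, not your actual test; 'fill_start'
is a made-up parameter standing for the address where the poison fill begins,
and STACKLEAK_POISON is assumed to be the usual -0xBEEF value):

/*
 * Rough sketch, not the actual test: verify a stack range byte by byte
 * against the repeating STACKLEAK_POISON pattern.  'fill_start' is the
 * (hypothetical) address where the poison fill began; it fixes the phase
 * of the pattern, so neither 'low' nor 'high' has to be word-aligned.
 */
#include <stdbool.h>
#include <stddef.h>

#define STACKLEAK_POISON (-0xBEEF)	/* assumed poison value */

static bool range_is_poisoned(const void *low, const void *high,
			      const void *fill_start)
{
	const unsigned long poison = STACKLEAK_POISON;
	const unsigned char *pattern = (const unsigned char *)&poison;
	const unsigned char *p;

	for (p = low; p < (const unsigned char *)high; p++) {
		size_t phase = (size_t)(p - (const unsigned char *)fill_start)
			       % sizeof(poison);

		if (*p != pattern[phase])
			return false;
	}
	return true;
}

Deriving the expected byte from the offset within the fill keeps the check
independent of whether the range boundaries happen to be word-aligned.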

Would you allow me to prepare the next version of your test and include it in
the fourth version of the STACKLEAK patch? I'll show it to you before sending
it to the mailing list.

Best regards,
Alexander
Tycho Andersen - Sept. 25, 2017, 5:23 p.m.
Hi Alexander,

On Mon, Sep 25, 2017 at 07:17:32PM +0300, Alexander Popov wrote:
> Hello Tycho,
> 
> On 21.09.2017 22:39, Alexander Popov wrote:
> > On 21.09.2017 00:18, Tycho Andersen wrote:
> >> +	/*
> >> +	 * Check each byte, as we don't know the current stack alignment.
> >> +	 */
> > 
> > Excuse me, what do you mean by the "current stack alignment"?
> 
> I guess I got it now. For x86 and x86_64, the stack alignment is controlled by
> cc_stack_align in arch/x86/Makefile (-mpreferred-stack-boundary in the case of gcc).
> The stack is 4-byte aligned for x86 and 8-byte aligned for x86_64.
> 
> > The STACKLEAK_POISON position is always 8-byte aligned for x86_64 and 4-byte
> > aligned for x86 (see the shr instruction in the asm implementation).
> 
> Eh, my statement is wrong. I ran a simple experiment: this change makes the
> poison unaligned:
> 
> diff --git a/fs/exec.c b/fs/exec.c
> index 56bdc19..893d2e4 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -1961,7 +1961,7 @@ COMPAT_SYSCALL_DEFINE5(execveat, int, fd,
>  #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
>  void __used track_stack(void)
>  {
> -       unsigned long sp = (unsigned long)&sp;
> +       unsigned long sp = (unsigned long)&sp + 1;

Yep, this is the case I was talking about with alignment.

> 
>         if (sp < current->thread.lowest_stack &&
>             sp >= (unsigned long)task_stack_page(current) +
> 
> 
> So your idea to check each byte first should work fine.
> 
> Would you allow me to prepare the next version of your test and include it in
> the fourth version of the STACKLEAK patch? I'll show it to you before sending
> it to the mailing list.

Sure, that works for me.

Tycho

Patch

diff --git a/fs/exec.c b/fs/exec.c
index 56bdc19..893d2e4 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1961,7 +1961,7 @@  COMPAT_SYSCALL_DEFINE5(execveat, int, fd,
 #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
 void __used track_stack(void)
 {
-       unsigned long sp = (unsigned long)&sp;
+       unsigned long sp = (unsigned long)&sp + 1;

        if (sp < current->thread.lowest_stack &&
            sp >= (unsigned long)task_stack_page(current) +