
[BUG/PATCH] kernel RNG and its secrets

Message ID 20150318095345.GA12923@zoho.com (mailing list archive)
State Superseded

Commit Message

mancha March 18, 2015, 9:53 a.m. UTC
Hi.

The kernel RNG introduced memzero_explicit in d4c5efdb9777 to protect
memory cleansing against things like dead store optimization:

   void memzero_explicit(void *s, size_t count)
   {
           memset(s, 0, count);
           OPTIMIZER_HIDE_VAR(s);
   }

OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect crypto_memneq
against timing analysis, is defined when using gcc as:

   #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))

My tests with gcc 4.8.2 on x86 find it insufficient to prevent gcc from
optimizing out memset (i.e. secrets remain in memory).
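
The failure mode is roughly of the following shape (illustrative sketch
only, not the exact test; get_key() and use_key() are placeholders):

   static inline void wipe(void *s, size_t n)
   {
           memset(s, 0, n);
           OPTIMIZER_HIDE_VAR(s);   /* non-volatile asm, may be dropped */
   }

   void f(void)
   {
           char key[32];

           get_key(key);            /* placeholder */
           use_key(key);            /* placeholder */
           wipe(key, sizeof(key));  /* once wipe() is inlined, gcc may drop
                                       the memset since key is dead here */
   }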

Two things that do work:

   __asm__ __volatile__ ("" : "=r" (var) : "0" (var))

   and

   __asm__ __volatile__("": : :"memory")

The first is OPTIMIZER_HIDE_VAR plus a volatile qualifier and the second
is barrier() [as defined when using gcc].

I propose memzero_explicit use barrier().
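
With barrier() [as defined when using gcc], memzero_explicit would read
(sketch of the proposed change):

   void memzero_explicit(void *s, size_t count)
   {
           memset(s, 0, count);
           barrier();   /* __asm__ __volatile__("": : :"memory") */
   }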

For any attribution deemed necessary, please use "mancha security".
Please CC me on replies.

--mancha

PS CC'ing Herbert Xu in case this impacts crypto_memneq.

Comments

Daniel Borkmann March 18, 2015, 10:30 a.m. UTC | #1
[ Cc'ing Cesar ]

On 03/18/2015 10:53 AM, mancha wrote:
> Hi.
>
> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to protect
> memory cleansing against things like dead store optimization:
>
>     void memzero_explicit(void *s, size_t count)
>     {
>             memset(s, 0, count);
>             OPTIMIZER_HIDE_VAR(s);
>     }
>
> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect crypto_memneq
> against timing analysis, is defined when using gcc as:
>
>     #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))
>
> My tests with gcc 4.8.2 on x86 find it insufficient to prevent gcc from
> optimizing out memset (i.e. secrets remain in memory).

Could you elaborate on your test case?

memzero_explicit() is actually an EXPORT_SYMBOL(); are you saying
that gcc removes the call to memzero_explicit() entirely, inlines
it, and then eventually optimizes the memset() away?

Last time I looked, it emitted a call to memzero_explicit(), and
inside memzero_explicit() it did the memset() as it cannot make
any assumption from there. I'm using gcc (GCC) 4.8.3 20140911
(Red Hat 4.8.3-7).

> Two things that do work:
>
>     __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
>
>     and
>
>     __asm__ __volatile__("": : :"memory")
>
> The first is OPTIMIZER_HIDE_VAR plus a volatile qualifier and the second
> is barrier() [as defined when using gcc].
>
> I propose memzero_explicit use barrier().
>
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -616,7 +616,7 @@ EXPORT_SYMBOL(memset);
>   void memzero_explicit(void *s, size_t count)
>   {
>          memset(s, 0, count);
> -       OPTIMIZER_HIDE_VAR(s);
> +       barrier();
>   }
>   EXPORT_SYMBOL(memzero_explicit);
>
> For any attribution deemed necessary, please use "mancha security".
> Please CC me on replies.
>
> --mancha
>
> PS CC'ing Herbert Xu in case this impacts crypto_memneq.
>

Hannes Frederic Sowa March 18, 2015, 10:50 a.m. UTC | #2
On Wed, Mar 18, 2015, at 10:53, mancha wrote:
> Hi.
> 
> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to protect
> memory cleansing against things like dead store optimization:
> 
>    void memzero_explicit(void *s, size_t count)
>    {
>            memset(s, 0, count);
>            OPTIMIZER_HIDE_VAR(s);
>    }
> 
> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect crypto_memneq
> against timing analysis, is defined when using gcc as:
> 
>    #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))
> 
> My tests with gcc 4.8.2 on x86 find it insufficient to prevent gcc from
> optimizing out memset (i.e. secrets remain in memory).
> 
> Two things that do work:
> 
>    __asm__ __volatile__ ("" : "=r" (var) : "0" (var))

You are correct, volatile signature should be added to
OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc is
allowed to check if it is needed and may remove the asm statement.
Another option would be to just use var as an input variable - asm
blocks without output variables are always considered being volatile by
gcc.
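
For illustration, the two variants would look roughly like this
(untested sketch):

   /* (a) keep the in/out constraint, but make the asm volatile */
   #define OPTIMIZER_HIDE_VAR(var) \
           __asm__ __volatile__ ("" : "=r" (var) : "0" (var))

   /* (b) input only; asm without outputs is implicitly volatile */
   #define OPTIMIZER_HIDE_VAR(var) \
           __asm__ ("" : : "r" (var))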

Can you send a patch?

I don't think it is security critical, as Daniel pointed out, the call
will happen because the function is an external call to the crypto
functions, thus the compiler has to flush memory on return.

Bye,
Hannes
Daniel Borkmann March 18, 2015, 10:56 a.m. UTC | #3
On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
>
>
> On Wed, Mar 18, 2015, at 10:53, mancha wrote:
>> Hi.
>>
>> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to protect
>> memory cleansing against things like dead store optimization:
>>
>>     void memzero_explicit(void *s, size_t count)
>>     {
>>             memset(s, 0, count);
>>             OPTIMIZER_HIDE_VAR(s);
>>     }
>>
>> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect crypto_memneq
>> against timing analysis, is defined when using gcc as:
>>
>>     #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))
>>
>> My tests with gcc 4.8.2 on x86 find it insufficient to prevent gcc from
>> optimizing out memset (i.e. secrets remain in memory).
>>
>> Two things that do work:
>>
>>     __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
>
> You are correct, volatile signature should be added to
> OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc is
> allowed to check if it is needed and may remove the asm statement.
> Another option would be to just use var as an input variable - asm
> blocks without output variables are always considered being volatile by
> gcc.
>
> Can you send a patch?
>
> I don't think it is security critical, as Daniel pointed out, the call
> will happen because the function is an external call to the crypto
> functions, thus the compiler has to flush memory on return.

Just had a look.

$ gdb vmlinux
(gdb) disassemble memzero_explicit
Dump of assembler code for function memzero_explicit:
    0xffffffff813a18b0 <+0>:	push   %rbp
    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 <memset>
    0xffffffff813a18be <+14>:	pop    %rbp
    0xffffffff813a18bf <+15>:	retq
End of assembler dump.

(gdb) disassemble extract_entropy
[...]
    0xffffffff814a5000 <+304>:	sub    %r15,%rbx
    0xffffffff814a5003 <+307>:	jne    0xffffffff814a4f80 <extract_entropy+176>
    0xffffffff814a5009 <+313>:	mov    %r12,%rdi
    0xffffffff814a500c <+316>:	mov    $0xa,%esi
    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0 <memzero_explicit>
    0xffffffff814a5016 <+326>:	mov    -0x48(%rbp),%rax
[...]

I would be fine with __volatile__.

Thanks a lot mancha, could you send a patch?

Best,
Daniel
Stephan Mueller March 18, 2015, 11:09 a.m. UTC | #4
Am Mittwoch, 18. März 2015, 11:56:43 schrieb Daniel Borkmann:

Hi Daniel,

>On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
>> On Wed, Mar 18, 2015, at 10:53, mancha wrote:
>>> Hi.
>>> 
>>> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to
>>> protect
>>> 
>>> memory cleansing against things like dead store optimization:
>>>     void memzero_explicit(void *s, size_t count)
>>>     {
>>>     
>>>             memset(s, 0, count);
>>>             OPTIMIZER_HIDE_VAR(s);
>>>     
>>>     }
>>> 
>>> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect
>>> crypto_memneq>> 
>>> against timing analysis, is defined when using gcc as:
>>>     #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0"
>>>     (var))
>>> 
>>> My tests with gcc 4.8.2 on x86 find it insufficient to prevent gcc
>>> from optimizing out memset (i.e. secrets remain in memory).
>>> 
>>> Two things that do work:
>>>     __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
>> 
>> You are correct, volatile signature should be added to
>> OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc is
>> allowed to check if it is needed and may remove the asm statement.
>> Another option would be to just use var as an input variable - asm
>> blocks without output variables are always considered being volatile
>> by gcc.
>> 
>> Can you send a patch?
>> 
>> I don't think it is security critical, as Daniel pointed out, the
>> call
>> will happen because the function is an external call to the crypto
>> functions, thus the compiler has to flush memory on return.
>
>Just had a look.
>
>$ gdb vmlinux
>(gdb) disassemble memzero_explicit
>Dump of assembler code for function memzero_explicit:
>    0xffffffff813a18b0 <+0>:	push   %rbp
>    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
>    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
>    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
>    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 <memset>
>    0xffffffff813a18be <+14>:	pop    %rbp
>    0xffffffff813a18bf <+15>:	retq
>End of assembler dump.
>
>(gdb) disassemble extract_entropy
>[...]
>    0xffffffff814a5000 <+304>:	sub    %r15,%rbx
>    0xffffffff814a5003 <+307>:	jne    0xffffffff814a4f80
><extract_entropy+176> 0xffffffff814a5009 <+313>:	mov    %r12,%rdi
>    0xffffffff814a500c <+316>:	mov    $0xa,%esi
>    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0
><memzero_explicit> 0xffffffff814a5016 <+326>:	mov    -0x48(%rbp),%rax
>[...]
>
>I would be fine with __volatile__.

Are we sure that simply adding a __volatile__ works in any case? I just 
did a test with a simple user space app:

#include <stdio.h>
#include <string.h>

static inline void memset_secure(void *s, int c, size_t n)
{
        memset(s, c, n);
        //__asm__ __volatile__("": : :"memory");
        __asm__ __volatile__("" : "=r" (s) : "0" (s));
}

int main(int argc, char *argv[])
{
#define BUFLEN 20
        char buf[BUFLEN];

        snprintf(buf, (BUFLEN - 1), "teststring\n");
        printf("%s", buf);

        memset_secure(buf, 0, BUFLEN);
}

When using the discussed code of __asm__ __volatile__("" : "=r" (s) : 
"0" (s));  I do not find the code implementing memset(0) in objdump. 
Only when I enable the memory barrier, I see the following (when 
compiling with -O2):

objdump -d memset_secure:
...
0000000000400440 <main>:
...
  400469:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
  400470:       00 
  400471:       48 c7 44 24 08 00 00    movq   $0x0,0x8(%rsp)
  400478:       00 00 
  40047a:       c7 44 24 10 00 00 00    movl   $0x0,0x10(%rsp)
  400481:       00
...

>
>Thanks a lot mancha, could you send a patch?
>
>Best,
>Daniel


Ciao
Stephan
Hannes Frederic Sowa March 18, 2015, 12:02 p.m. UTC | #5
On Wed, Mar 18, 2015, at 12:09, Stephan Mueller wrote:
> Am Mittwoch, 18. März 2015, 11:56:43 schrieb Daniel Borkmann:
> >On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
> >> On Wed, Mar 18, 2015, at 10:53, mancha wrote:
> >>> Hi.
> >>> 
> >>> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to
> >>> protect
> >>> 
> >>> memory cleansing against things like dead store optimization:
> >>>     void memzero_explicit(void *s, size_t count)
> >>>     {
> >>>     
> >>>             memset(s, 0, count);
> >>>             OPTIMIZER_HIDE_VAR(s);
> >>>     
> >>>     }
> >>> 
> >>> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect
> >>> crypto_memneq>> 
> >>> against timing analysis, is defined when using gcc as:
> >>>     #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0"
> >>>     (var))
> >>> 
> >>> My tests with gcc 4.8.2 on x86 find it insufficient to prevent gcc
> >>> from optimizing out memset (i.e. secrets remain in memory).
> >>> 
> >>> Two things that do work:
> >>>     __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
> >> 
> >> You are correct, volatile signature should be added to
> >> OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc is
> >> allowed to check if it is needed and may remove the asm statement.
> >> Another option would be to just use var as an input variable - asm
> >> blocks without output variables are always considered being volatile
> >> by gcc.
> >> 
> >> Can you send a patch?
> >> 
> >> I don't think it is security critical, as Daniel pointed out, the
> >> call
> >> will happen because the function is an external call to the crypto
> >> functions, thus the compiler has to flush memory on return.
> >
> >Just had a look.
> >
> >$ gdb vmlinux
> >(gdb) disassemble memzero_explicit
> >Dump of assembler code for function memzero_explicit:
> >    0xffffffff813a18b0 <+0>:	push   %rbp
> >    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
> >    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
> >    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
> >    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 <memset>
> >    0xffffffff813a18be <+14>:	pop    %rbp
> >    0xffffffff813a18bf <+15>:	retq
> >End of assembler dump.
> >
> >(gdb) disassemble extract_entropy
> >[...]
> >    0xffffffff814a5000 <+304>:	sub    %r15,%rbx
> >    0xffffffff814a5003 <+307>:	jne    0xffffffff814a4f80
> ><extract_entropy+176> 0xffffffff814a5009 <+313>:	mov    %r12,%rdi
> >    0xffffffff814a500c <+316>:	mov    $0xa,%esi
> >    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0
> ><memzero_explicit> 0xffffffff814a5016 <+326>:	mov    -0x48(%rbp),%rax
> >[...]
> >
> >I would be fine with __volatile__.
> 
> Are we sure that simply adding a __volatile__ works in any case? I just 
> did a test with a simple user space app:
> 
> static inline void memset_secure(void *s, int c, size_t n)
> {
>         memset(s, c, n);
>         //__asm__ __volatile__("": : :"memory");
>         __asm__ __volatile__("" : "=r" (s) : "0" (s));
> }
> 

Good point, thanks!

Of course an input or output of s does not force the memory pointed to
by s being flushed.


My proposal would be to add a

#define OPTIMIZER_HIDE_MEM(ptr, len) __asm__ __volatile__ ("" : : "m"(
({ struct { u8 b[len]; } *p = (void *)ptr ; *p; }) )

and use this in the code function.

This is documented in gcc manual 6.43.2.5.
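
Spelled out and used in memzero_explicit, that would be roughly the
following (sketch only; assumes the macro is meant with balanced
parentheses):

   #define OPTIMIZER_HIDE_MEM(ptr, len) \
           __asm__ __volatile__ ("" : : "m"( \
                   ({ struct { u8 b[len]; } *p = (void *)ptr; *p; }) ))

   void memzero_explicit(void *s, size_t count)
   {
           memset(s, 0, count);
           OPTIMIZER_HIDE_MEM(s, count);
   }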

Bye,
Hannes


Stephan Mueller March 18, 2015, 12:14 p.m. UTC | #6
Am Mittwoch, 18. März 2015, 13:02:12 schrieb Hannes Frederic Sowa:

Hi Hannes,

>On Wed, Mar 18, 2015, at 12:09, Stephan Mueller wrote:
>> Am Mittwoch, 18. März 2015, 11:56:43 schrieb Daniel Borkmann:
>> >On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
>> >> On Wed, Mar 18, 2015, at 10:53, mancha wrote:
>> >>> Hi.
>> >>> 
>> >>> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to
>> >>> protect
>> >>> 
>> >>> memory cleansing against things like dead store optimization:
>> >>>     void memzero_explicit(void *s, size_t count)
>> >>>     {
>> >>>     
>> >>>             memset(s, 0, count);
>> >>>             OPTIMIZER_HIDE_VAR(s);
>> >>>     
>> >>>     }
>> >>> 
>> >>> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect
>> >>> crypto_memneq>>
>> >>> 
>> >>> against timing analysis, is defined when using gcc as:
>> >>>     #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) :
>> >>>     "0"
>> >>>     (var))
>> >>> 
>> >>> My tests with gcc 4.8.2 on x86 find it insufficient to prevent
>> >>> gcc
>> >>> from optimizing out memset (i.e. secrets remain in memory).
>> >>> 
>> >>> Two things that do work:
>> >>>     __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
>> >> 
>> >> You are correct, volatile signature should be added to
>> >> OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc is
>> >> allowed to check if it is needed and may remove the asm statement.
>> >> Another option would be to just use var as an input variable - asm
>> >> blocks without output variables are always considered being
>> >> volatile
>> >> by gcc.
>> >> 
>> >> Can you send a patch?
>> >> 
>> >> I don't think it is security critical, as Daniel pointed out, the
>> >> call
>> >> will happen because the function is an external call to the crypto
>> >> functions, thus the compiler has to flush memory on return.
>> >
>> >Just had a look.
>> >
>> >$ gdb vmlinux
>> >(gdb) disassemble memzero_explicit
>> >
>> >Dump of assembler code for function memzero_explicit:
>> >    0xffffffff813a18b0 <+0>:	push   %rbp
>> >    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
>> >    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
>> >    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
>> >    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 
<memset>
>> >    0xffffffff813a18be <+14>:	pop    %rbp
>> >    0xffffffff813a18bf <+15>:	retq
>> >
>> >End of assembler dump.
>> >
>> >(gdb) disassemble extract_entropy
>> >[...]
>> >
>> >    0xffffffff814a5000 <+304>:	sub    %r15,%rbx
>> >    0xffffffff814a5003 <+307>:	jne    0xffffffff814a4f80
>> >
>> ><extract_entropy+176> 0xffffffff814a5009 <+313>:	mov    %r12,%rdi
>> >
>> >    0xffffffff814a500c <+316>:	mov    $0xa,%esi
>> >    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0
>> >
>> ><memzero_explicit> 0xffffffff814a5016 <+326>:	mov   
>> >-0x48(%rbp),%rax
>> >[...]
>> >
>> >I would be fine with __volatile__.
>> 
>> Are we sure that simply adding a __volatile__ works in any case? I
>> just did a test with a simple user space app:
>> 
>> static inline void memset_secure(void *s, int c, size_t n)
>> {
>> 
>>         memset(s, c, n);
>>         //__asm__ __volatile__("": : :"memory");
>>         __asm__ __volatile__("" : "=r" (s) : "0" (s));
>> 
>> }
>
>Good point, thanks!
>
>Of course an input or output of s does not force the memory pointed to
>by s being flushed.
>
>
>My proposal would be to add a
>
>#define OPTIMIZER_HIDE_MEM(ptr, len) __asm__ __volatile__ ("" : : "m"(
>({ struct { u8 b[len]; } *p = (void *)ptr ; *p; }) )
>
>and use this in the code function.
>
>This is documented in gcc manual 6.43.2.5.

That one adds the zeroization instructions. But now there are many more 
of them than with the barrier.

  400469:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
  400470:       00 
  400471:       48 c7 44 24 08 00 00    movq   $0x0,0x8(%rsp)
  400478:       00 00 
  40047a:       c7 44 24 10 00 00 00    movl   $0x0,0x10(%rsp)
  400481:       00 
  400482:       48 c7 44 24 20 00 00    movq   $0x0,0x20(%rsp)
  400489:       00 00 
  40048b:       48 c7 44 24 28 00 00    movq   $0x0,0x28(%rsp)
  400492:       00 00 
  400494:       c7 44 24 30 00 00 00    movl   $0x0,0x30(%rsp)
  40049b:       00

Any ideas?
>
>Bye,
>Hannes


Ciao
Stephan
Hannes Frederic Sowa March 18, 2015, 12:19 p.m. UTC | #7
On Wed, Mar 18, 2015, at 13:14, Stephan Mueller wrote:
> Am Mittwoch, 18. März 2015, 13:02:12 schrieb Hannes Frederic Sowa:
> 
> Hi Hannes,
> 
> >On Wed, Mar 18, 2015, at 12:09, Stephan Mueller wrote:
> >> Am Mittwoch, 18. März 2015, 11:56:43 schrieb Daniel Borkmann:
> >> >On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
> >> >> On Wed, Mar 18, 2015, at 10:53, mancha wrote:
> >> >>> Hi.
> >> >>> 
> >> >>> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to
> >> >>> protect
> >> >>> 
> >> >>> memory cleansing against things like dead store optimization:
> >> >>>     void memzero_explicit(void *s, size_t count)
> >> >>>     {
> >> >>>     
> >> >>>             memset(s, 0, count);
> >> >>>             OPTIMIZER_HIDE_VAR(s);
> >> >>>     
> >> >>>     }
> >> >>> 
> >> >>> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect
> >> >>> crypto_memneq>>
> >> >>> 
> >> >>> against timing analysis, is defined when using gcc as:
> >> >>>     #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) :
> >> >>>     "0"
> >> >>>     (var))
> >> >>> 
> >> >>> My tests with gcc 4.8.2 on x86 find it insufficient to prevent
> >> >>> gcc
> >> >>> from optimizing out memset (i.e. secrets remain in memory).
> >> >>> 
> >> >>> Two things that do work:
> >> >>>     __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
> >> >> 
> >> >> You are correct, volatile signature should be added to
> >> >> OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc is
> >> >> allowed to check if it is needed and may remove the asm statement.
> >> >> Another option would be to just use var as an input variable - asm
> >> >> blocks without output variables are always considered being
> >> >> volatile
> >> >> by gcc.
> >> >> 
> >> >> Can you send a patch?
> >> >> 
> >> >> I don't think it is security critical, as Daniel pointed out, the
> >> >> call
> >> >> will happen because the function is an external call to the crypto
> >> >> functions, thus the compiler has to flush memory on return.
> >> >
> >> >Just had a look.
> >> >
> >> >$ gdb vmlinux
> >> >(gdb) disassemble memzero_explicit
> >> >
> >> >Dump of assembler code for function memzero_explicit:
> >> >    0xffffffff813a18b0 <+0>:	push   %rbp
> >> >    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
> >> >    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
> >> >    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
> >> >    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 
> <memset>
> >> >    0xffffffff813a18be <+14>:	pop    %rbp
> >> >    0xffffffff813a18bf <+15>:	retq
> >> >
> >> >End of assembler dump.
> >> >
> >> >(gdb) disassemble extract_entropy
> >> >[...]
> >> >
> >> >    0xffffffff814a5000 <+304>:	sub    %r15,%rbx
> >> >    0xffffffff814a5003 <+307>:	jne    0xffffffff814a4f80
> >> >
> >> ><extract_entropy+176> 0xffffffff814a5009 <+313>:	mov    %r12,%rdi
> >> >
> >> >    0xffffffff814a500c <+316>:	mov    $0xa,%esi
> >> >    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0
> >> >
> >> ><memzero_explicit> 0xffffffff814a5016 <+326>:	mov   
> >> >-0x48(%rbp),%rax
> >> >[...]
> >> >
> >> >I would be fine with __volatile__.
> >> 
> >> Are we sure that simply adding a __volatile__ works in any case? I
> >> just did a test with a simple user space app:
> >> 
> >> static inline void memset_secure(void *s, int c, size_t n)
> >> {
> >> 
> >>         memset(s, c, n);
> >>         //__asm__ __volatile__("": : :"memory");
> >>         __asm__ __volatile__("" : "=r" (s) : "0" (s));
> >> 
> >> }
> >
> >Good point, thanks!
> >
> >Of course an input or output of s does not force the memory pointed to
> >by s being flushed.
> >
> >
> >My proposal would be to add a
> >
> >#define OPTIMIZER_HIDE_MEM(ptr, len) __asm__ __volatile__ ("" : : "m"(
> >({ struct { u8 b[len]; } *p = (void *)ptr ; *p; }) )
> >
> >and use this in the code function.
> >
> >This is documented in gcc manual 6.43.2.5.
> 
> That one adds the zeroization instructuctions. But now there are much 
> more than with the barrier.
> 
>   400469:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
>   400470:       00 
>   400471:       48 c7 44 24 08 00 00    movq   $0x0,0x8(%rsp)
>   400478:       00 00 
>   40047a:       c7 44 24 10 00 00 00    movl   $0x0,0x10(%rsp)
>   400481:       00 
>   400482:       48 c7 44 24 20 00 00    movq   $0x0,0x20(%rsp)
>   400489:       00 00 
>   40048b:       48 c7 44 24 28 00 00    movq   $0x0,0x28(%rsp)
>   400492:       00 00 
>   400494:       c7 44 24 30 00 00 00    movl   $0x0,0x30(%rsp)
>   40049b:       00
> 
> Any ideas?

Hmm, correct definition of u8?

Which version of gcc do you use? I can't see any difference if I compile
your example at -O2.

Bye,
Hannes

Stephan Mueller March 18, 2015, 12:20 p.m. UTC | #8
Am Mittwoch, 18. März 2015, 13:19:07 schrieb Hannes Frederic Sowa:

Hi Hannes,

>On Wed, Mar 18, 2015, at 13:14, Stephan Mueller wrote:
>> Am Mittwoch, 18. März 2015, 13:02:12 schrieb Hannes Frederic Sowa:
>> 
>> Hi Hannes,
>> 
>> >On Wed, Mar 18, 2015, at 12:09, Stephan Mueller wrote:
>> >> Am Mittwoch, 18. März 2015, 11:56:43 schrieb Daniel Borkmann:
>> >> >On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
>> >> >> On Wed, Mar 18, 2015, at 10:53, mancha wrote:
>> >> >>> Hi.
>> >> >>> 
>> >> >>> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to
>> >> >>> protect
>> >> >>> 
>> >> >>> memory cleansing against things like dead store optimization:
>> >> >>>     void memzero_explicit(void *s, size_t count)
>> >> >>>     {
>> >> >>>     
>> >> >>>             memset(s, 0, count);
>> >> >>>             OPTIMIZER_HIDE_VAR(s);
>> >> >>>     
>> >> >>>     }
>> >> >>> 
>> >> >>> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect
>> >> >>> crypto_memneq>>
>> >> >>> 
>> >> >>> against timing analysis, is defined when using gcc as:
>> >> >>>     #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) :
>> >> >>>     "0"
>> >> >>>     (var))
>> >> >>> 
>> >> >>> My tests with gcc 4.8.2 on x86 find it insufficient to prevent
>> >> >>> gcc
>> >> >>> from optimizing out memset (i.e. secrets remain in memory).
>> >> >>> 
>> >> >>> Two things that do work:
>> >> >>>     __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
>> >> >> 
>> >> >> You are correct, volatile signature should be added to
>> >> >> OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc
>> >> >> is
>> >> >> allowed to check if it is needed and may remove the asm
>> >> >> statement.
>> >> >> Another option would be to just use var as an input variable -
>> >> >> asm
>> >> >> blocks without output variables are always considered being
>> >> >> volatile
>> >> >> by gcc.
>> >> >> 
>> >> >> Can you send a patch?
>> >> >> 
>> >> >> I don't think it is security critical, as Daniel pointed out,
>> >> >> the
>> >> >> call
>> >> >> will happen because the function is an external call to the
>> >> >> crypto
>> >> >> functions, thus the compiler has to flush memory on return.
>> >> >
>> >> >Just had a look.
>> >> >
>> >> >$ gdb vmlinux
>> >> >(gdb) disassemble memzero_explicit
>> >> >
>> >> >Dump of assembler code for function memzero_explicit:
>> >> >    0xffffffff813a18b0 <+0>:	push   %rbp
>> >> >    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
>> >> >    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
>> >> >    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
>> >> >    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120
>> 
>> <memset>
>> 
>> >> >    0xffffffff813a18be <+14>:	pop    %rbp
>> >> >    0xffffffff813a18bf <+15>:	retq
>> >> >
>> >> >End of assembler dump.
>> >> >
>> >> >(gdb) disassemble extract_entropy
>> >> >[...]
>> >> >
>> >> >    0xffffffff814a5000 <+304>:	sub    %r15,%rbx
>> >> >    0xffffffff814a5003 <+307>:	jne    0xffffffff814a4f80
>> >> >
>> >> ><extract_entropy+176> 0xffffffff814a5009 <+313>:	mov    %r12,%rdi
>> >> >
>> >> >    0xffffffff814a500c <+316>:	mov    $0xa,%esi
>> >> >    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0
>> >> >
>> >> ><memzero_explicit> 0xffffffff814a5016 <+326>:	mov
>> >> >-0x48(%rbp),%rax
>> >> >[...]
>> >> >
>> >> >I would be fine with __volatile__.
>> >> 
>> >> Are we sure that simply adding a __volatile__ works in any case? I
>> >> just did a test with a simple user space app:
>> >> 
>> >> static inline void memset_secure(void *s, int c, size_t n)
>> >> {
>> >> 
>> >>         memset(s, c, n);
>> >>         //__asm__ __volatile__("": : :"memory");
>> >>         __asm__ __volatile__("" : "=r" (s) : "0" (s));
>> >> 
>> >> }
>> >
>> >Good point, thanks!
>> >
>> >Of course an input or output of s does not force the memory pointed
>> >to
>> >by s being flushed.
>> >
>> >
>> >My proposal would be to add a
>> >
>> >#define OPTIMIZER_HIDE_MEM(ptr, len) __asm__ __volatile__ ("" : :
>> >"m"(
>> >({ struct { u8 b[len]; } *p = (void *)ptr ; *p; }) )
>> >
>> >and use this in the code function.
>> >
>> >This is documented in gcc manual 6.43.2.5.
>> 
>> That one adds the zeroization instructuctions. But now there are much
>> more than with the barrier.
>> 
>>   400469:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
>>   400470:       00
>>   400471:       48 c7 44 24 08 00 00    movq   $0x0,0x8(%rsp)
>>   400478:       00 00
>>   40047a:       c7 44 24 10 00 00 00    movl   $0x0,0x10(%rsp)
>>   400481:       00
>>   400482:       48 c7 44 24 20 00 00    movq   $0x0,0x20(%rsp)
>>   400489:       00 00
>>   40048b:       48 c7 44 24 28 00 00    movq   $0x0,0x28(%rsp)
>>   400492:       00 00
>>   400494:       c7 44 24 30 00 00 00    movl   $0x0,0x30(%rsp)
>>   40049b:       00
>> 
>> Any ideas?
>
>Hmm, correct definition of u8?

I use unsigned char
>
>Which version of gcc do you use? I can't see any difference if I
>compile your example at -O2.

gcc-Version 4.9.2 20150212 (Red Hat 4.9.2-6) (GCC)
>
>Bye,
>Hannes


Ciao
Stephan
Daniel Borkmann March 18, 2015, 12:42 p.m. UTC | #9
On 03/18/2015 01:20 PM, Stephan Mueller wrote:
> Am Mittwoch, 18. März 2015, 13:19:07 schrieb Hannes Frederic Sowa:
>
> Hi Hannes,
>
>> On Wed, Mar 18, 2015, at 13:14, Stephan Mueller wrote:
>>> Am Mittwoch, 18. März 2015, 13:02:12 schrieb Hannes Frederic Sowa:
>>>
>>> Hi Hannes,
>>>
>>>> On Wed, Mar 18, 2015, at 12:09, Stephan Mueller wrote:
>>>>> Am Mittwoch, 18. März 2015, 11:56:43 schrieb Daniel Borkmann:
>>>>>> On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
>>>>>>> On Wed, Mar 18, 2015, at 10:53, mancha wrote:
>>>>>>>> Hi.
>>>>>>>>
>>>>>>>> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to
>>>>>>>> protect
>>>>>>>>
>>>>>>>> memory cleansing against things like dead store optimization:
>>>>>>>>      void memzero_explicit(void *s, size_t count)
>>>>>>>>      {
>>>>>>>>
>>>>>>>>              memset(s, 0, count);
>>>>>>>>              OPTIMIZER_HIDE_VAR(s);
>>>>>>>>
>>>>>>>>      }
>>>>>>>>
>>>>>>>> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect
>>>>>>>> crypto_memneq>>
>>>>>>>>
>>>>>>>> against timing analysis, is defined when using gcc as:
>>>>>>>>      #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) :
>>>>>>>>      "0"
>>>>>>>>      (var))
>>>>>>>>
>>>>>>>> My tests with gcc 4.8.2 on x86 find it insufficient to prevent
>>>>>>>> gcc
>>>>>>>> from optimizing out memset (i.e. secrets remain in memory).
>>>>>>>>
>>>>>>>> Two things that do work:
>>>>>>>>      __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
>>>>>>>
>>>>>>> You are correct, volatile signature should be added to
>>>>>>> OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc
>>>>>>> is
>>>>>>> allowed to check if it is needed and may remove the asm
>>>>>>> statement.
>>>>>>> Another option would be to just use var as an input variable -
>>>>>>> asm
>>>>>>> blocks without output variables are always considered being
>>>>>>> volatile
>>>>>>> by gcc.
>>>>>>>
>>>>>>> Can you send a patch?
>>>>>>>
>>>>>>> I don't think it is security critical, as Daniel pointed out,
>>>>>>> the
>>>>>>> call
>>>>>>> will happen because the function is an external call to the
>>>>>>> crypto
>>>>>>> functions, thus the compiler has to flush memory on return.
>>>>>>
>>>>>> Just had a look.
>>>>>>
>>>>>> $ gdb vmlinux
>>>>>> (gdb) disassemble memzero_explicit
>>>>>>
>>>>>> Dump of assembler code for function memzero_explicit:
>>>>>>     0xffffffff813a18b0 <+0>:	push   %rbp
>>>>>>     0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
>>>>>>     0xffffffff813a18b4 <+4>:	xor    %esi,%esi
>>>>>>     0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
>>>>>>     0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120
>>>
>>> <memset>
>>>
>>>>>>     0xffffffff813a18be <+14>:	pop    %rbp
>>>>>>     0xffffffff813a18bf <+15>:	retq
>>>>>>
>>>>>> End of assembler dump.
>>>>>>
>>>>>> (gdb) disassemble extract_entropy
>>>>>> [...]
>>>>>>
>>>>>>     0xffffffff814a5000 <+304>:	sub    %r15,%rbx
>>>>>>     0xffffffff814a5003 <+307>:	jne    0xffffffff814a4f80
>>>>>>
>>>>>> <extract_entropy+176> 0xffffffff814a5009 <+313>:	mov    %r12,%rdi
>>>>>>
>>>>>>     0xffffffff814a500c <+316>:	mov    $0xa,%esi
>>>>>>     0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0
>>>>>>
>>>>>> <memzero_explicit> 0xffffffff814a5016 <+326>:	mov
>>>>>> -0x48(%rbp),%rax
>>>>>> [...]
>>>>>>
>>>>>> I would be fine with __volatile__.
>>>>>
>>>>> Are we sure that simply adding a __volatile__ works in any case? I
>>>>> just did a test with a simple user space app:
>>>>>
>>>>> static inline void memset_secure(void *s, int c, size_t n)
>>>>> {
>>>>>
>>>>>          memset(s, c, n);
>>>>>          //__asm__ __volatile__("": : :"memory");
>>>>>          __asm__ __volatile__("" : "=r" (s) : "0" (s));
>>>>>
>>>>> }
>>>>
>>>> Good point, thanks!
>>>>
>>>> Of course an input or output of s does not force the memory pointed
>>>> to
>>>> by s being flushed.
>>>>
>>>>
>>>> My proposal would be to add a
>>>>
>>>> #define OPTIMIZER_HIDE_MEM(ptr, len) __asm__ __volatile__ ("" : :
>>>> "m"(
>>>> ({ struct { u8 b[len]; } *p = (void *)ptr ; *p; }) )
>>>>
>>>> and use this in the code function.
>>>>
>>>> This is documented in gcc manual 6.43.2.5.
>>>
>>> That one adds the zeroization instructuctions. But now there are much
>>> more than with the barrier.
>>>
>>>    400469:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
>>>    400470:       00
>>>    400471:       48 c7 44 24 08 00 00    movq   $0x0,0x8(%rsp)
>>>    400478:       00 00
>>>    40047a:       c7 44 24 10 00 00 00    movl   $0x0,0x10(%rsp)
>>>    400481:       00
>>>    400482:       48 c7 44 24 20 00 00    movq   $0x0,0x20(%rsp)
>>>    400489:       00 00
>>>    40048b:       48 c7 44 24 28 00 00    movq   $0x0,0x28(%rsp)
>>>    400492:       00 00
>>>    400494:       c7 44 24 30 00 00 00    movl   $0x0,0x30(%rsp)
>>>    40049b:       00
>>>
>>> Any ideas?
>>
>> Hmm, correct definition of u8?
>
> I use unsigned char
>>
>> Which version of gcc do you use? I can't see any difference if I
>> compile your example at -O2.
>
> gcc-Version 4.9.2 20150212 (Red Hat 4.9.2-6) (GCC)

I can see the same with the gcc version I previously posted. So
it clears the 20 bytes from your example (movq, movq, movl) at
two locations, presumably buf[] and b[].

Best,
Daniel
mancha March 18, 2015, 12:58 p.m. UTC | #10
On Wed, Mar 18, 2015 at 01:02:12PM +0100, Hannes Frederic Sowa wrote:
> On Wed, Mar 18, 2015, at 12:09, Stephan Mueller wrote:
> > Am Mittwoch, 18. März 2015, 11:56:43 schrieb Daniel Borkmann:
> > >On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
> > >> On Wed, Mar 18, 2015, at 10:53, mancha wrote:
> > >>> Hi.
> > >>> 
> > >>> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to
> > >>> protect
> > >>> 
> > >>> memory cleansing against things like dead store optimization:
> > >>>     void memzero_explicit(void *s, size_t count)
> > >>>     {
> > >>>     
> > >>>             memset(s, 0, count);
> > >>>             OPTIMIZER_HIDE_VAR(s);
> > >>>     
> > >>>     }
> > >>> 
> > >>> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect
> > >>> crypto_memneq>> 
> > >>> against timing analysis, is defined when using gcc as:
> > >>>     #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0"
> > >>>     (var))
> > >>> 
> > >>> My tests with gcc 4.8.2 on x86 find it insufficient to prevent gcc
> > >>> from optimizing out memset (i.e. secrets remain in memory).
> > >>> 
> > >>> Two things that do work:
> > >>>     __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
> > >> 
> > >> You are correct, volatile signature should be added to
> > >> OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc is
> > >> allowed to check if it is needed and may remove the asm statement.
> > >> Another option would be to just use var as an input variable - asm
> > >> blocks without output variables are always considered being volatile
> > >> by gcc.
> > >> 
> > >> Can you send a patch?
> > >> 
> > >> I don't think it is security critical, as Daniel pointed out, the
> > >> call
> > >> will happen because the function is an external call to the crypto
> > >> functions, thus the compiler has to flush memory on return.
> > >
> > >Just had a look.
> > >
> > >$ gdb vmlinux
> > >(gdb) disassemble memzero_explicit
> > >Dump of assembler code for function memzero_explicit:
> > >    0xffffffff813a18b0 <+0>:	push   %rbp
> > >    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
> > >    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
> > >    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
> > >    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 <memset>
> > >    0xffffffff813a18be <+14>:	pop    %rbp
> > >    0xffffffff813a18bf <+15>:	retq
> > >End of assembler dump.
> > >
> > >(gdb) disassemble extract_entropy
> > >[...]
> > >    0xffffffff814a5000 <+304>:	sub    %r15,%rbx
> > >    0xffffffff814a5003 <+307>:	jne    0xffffffff814a4f80
> > ><extract_entropy+176> 0xffffffff814a5009 <+313>:	mov    %r12,%rdi
> > >    0xffffffff814a500c <+316>:	mov    $0xa,%esi
> > >    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0
> > ><memzero_explicit> 0xffffffff814a5016 <+326>:	mov    -0x48(%rbp),%rax
> > >[...]
> > >
> > >I would be fine with __volatile__.
> > 
> > Are we sure that simply adding a __volatile__ works in any case? I just 
> > did a test with a simple user space app:
> > 
> > static inline void memset_secure(void *s, int c, size_t n)
> > {
> >         memset(s, c, n);
> >         //__asm__ __volatile__("": : :"memory");
> >         __asm__ __volatile__("" : "=r" (s) : "0" (s));
> > }
> > 
> 
> Good point, thanks!
> 
> Of course an input or output of s does not force the memory pointed to
> by s being flushed.
> 
> 
> My proposal would be to add a
> 
> #define OPTIMIZER_HIDE_MEM(ptr, len) __asm__ __volatile__ ("" : : "m"(
> ({ struct { u8 b[len]; } *p = (void *)ptr ; *p; }) )
> 
> and use this in the code function.
> 
> This is documented in gcc manual 6.43.2.5.
> 
> Bye,
> Hannes
> 

Hi all.

Any reason to not use __asm__ __volatile__("": : :"memory") [aka 
barrier()]?

Or maybe __asm__ __volatile__("": :"r"(ptr) :"memory").

Cheers.

--mancha
Hannes Frederic Sowa March 18, 2015, 3:09 p.m. UTC | #11
On Wed, Mar 18, 2015, at 13:42, Daniel Borkmann wrote:
> On 03/18/2015 01:20 PM, Stephan Mueller wrote:
> > Am Mittwoch, 18. März 2015, 13:19:07 schrieb Hannes Frederic Sowa:
> >>>> My proposal would be to add a
> >>>>
> >>>> #define OPTIMIZER_HIDE_MEM(ptr, len) __asm__ __volatile__ ("" : :
> >>>> "m"(
> >>>> ({ struct { u8 b[len]; } *p = (void *)ptr ; *p; }) )
> >>>>
> >>>> and use this in the code function.
> >>>>
> >>>> This is documented in gcc manual 6.43.2.5.
> >>>
> >>> That one adds the zeroization instructuctions. But now there are much
> >>> more than with the barrier.
> >>>
> >>>    400469:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
> >>>    400470:       00
> >>>    400471:       48 c7 44 24 08 00 00    movq   $0x0,0x8(%rsp)
> >>>    400478:       00 00
> >>>    40047a:       c7 44 24 10 00 00 00    movl   $0x0,0x10(%rsp)
> >>>    400481:       00
> >>>    400482:       48 c7 44 24 20 00 00    movq   $0x0,0x20(%rsp)
> >>>    400489:       00 00
> >>>    40048b:       48 c7 44 24 28 00 00    movq   $0x0,0x28(%rsp)
> >>>    400492:       00 00
> >>>    400494:       c7 44 24 30 00 00 00    movl   $0x0,0x30(%rsp)
> >>>    40049b:       00
> >>>
> >>> Any ideas?
> >>
> >> Hmm, correct definition of u8?
> >
> > I use unsigned char
> >>
> >> Which version of gcc do you use? I can't see any difference if I
> >> compile your example at -O2.
> >
> > gcc-Version 4.9.2 20150212 (Red Hat 4.9.2-6) (GCC)

Well, it was an error on my side; I see the same behavior.

> 
> I can see the same with the gcc version I previously posted. So
> it clears the 20 bytes from your example (movq, movq, movl) at
> two locations, presumably buf[] and b[].

Yes, it looks like that. The reservation on the stack changes, too.

Seems like just using barrier() is the best and easiest option.

Thanks,
Hannes
Stephan Mueller March 18, 2015, 4:02 p.m. UTC | #12
Am Mittwoch, 18. März 2015, 16:09:34 schrieb Hannes Frederic Sowa:

Hi Hannes,

>On Wed, Mar 18, 2015, at 13:42, Daniel Borkmann wrote:
>> On 03/18/2015 01:20 PM, Stephan Mueller wrote:
>> > Am Mittwoch, 18. März 2015, 13:19:07 schrieb Hannes Frederic Sowa:
>> >>>> My proposal would be to add a
>> >>>> 
>> >>>> #define OPTIMIZER_HIDE_MEM(ptr, len) __asm__ __volatile__ ("" :
>> >>>> :
>> >>>> "m"(
>> >>>> ({ struct { u8 b[len]; } *p = (void *)ptr ; *p; }) )
>> >>>> 
>> >>>> and use this in the code function.
>> >>>> 
>> >>>> This is documented in gcc manual 6.43.2.5.
>> >>> 
>> >>> That one adds the zeroization instructuctions. But now there are
>> >>> much
>> >>> more than with the barrier.
>> >>> 
>> >>>    400469:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
>> >>>    400470:       00
>> >>>    400471:       48 c7 44 24 08 00 00    movq   $0x0,0x8(%rsp)
>> >>>    400478:       00 00
>> >>>    40047a:       c7 44 24 10 00 00 00    movl   $0x0,0x10(%rsp)
>> >>>    400481:       00
>> >>>    400482:       48 c7 44 24 20 00 00    movq   $0x0,0x20(%rsp)
>> >>>    400489:       00 00
>> >>>    40048b:       48 c7 44 24 28 00 00    movq   $0x0,0x28(%rsp)
>> >>>    400492:       00 00
>> >>>    400494:       c7 44 24 30 00 00 00    movl   $0x0,0x30(%rsp)
>> >>>    40049b:       00
>> >>> 
>> >>> Any ideas?
>> >> 
>> >> Hmm, correct definition of u8?
>> > 
>> > I use unsigned char
>> > 
>> >> Which version of gcc do you use? I can't see any difference if I
>> >> compile your example at -O2.
>> > 
>> > gcc-Version 4.9.2 20150212 (Red Hat 4.9.2-6) (GCC)
>
>Well, was an error on my side, I see the same behavior.
>
>> I can see the same with the gcc version I previously posted. So
>> it clears the 20 bytes from your example (movq, movq, movl) at
>> two locations, presumably buf[] and b[].
>
>Yes, it looks like that. The reservation on the stack changes, too.
>
>Seems like just using barrier() is the best and easiest option.

Would you prepare a patch for that?
>
>Thanks,
>Hannes


Ciao
Stephan
Theodore Ts'o March 18, 2015, 5:41 p.m. UTC | #13
Maybe we should add a kernel self-test that automatically checks
whether or not memzero_explicit() gets optimized away?  Otherwise we
might not notice when gcc, or how we implement barrier(), or whatever
else we end up using, ends up changing.

It should be something that is really fast, so it might be a good idea
to simply automatically run it as part of an __initcall()
unconditionally.  We can debate where the __initcall() lives, but I'd
prefer that it be run even if the crypto layer isn't configured for
some reason.  Hopefully such an self-test is small enough that the
kernel bloat people won't complain.  :-)

							 -Ted
Hannes Frederic Sowa March 18, 2015, 5:56 p.m. UTC | #14
On Wed, Mar 18, 2015, at 18:41, Theodore Ts'o wrote:
> Maybe we should add a kernel self-test that automatically checks
> whether or not memset_explicit() gets optimized away?  Otherwise we
> might not notice when gcc or how we implement barrier() or whatever
> else we end up using ends up changing.
> 
> It shold be something that is really fast, so it might be a good idea
> to simply automatically run it as part of an __initcall()
> unconditionally.  We can debate where the __initcall() lives, but I'd
> prefer that it be run even if the crypto layer isn't configured for
> some reason.  Hopefully such an self-test is small enough that the
> kernel bloat people won't complain.  :-)
> 
> 							 -Ted

Maybe a BUILD_BUGON: ;)

__label__ l1, l2;
char buffer[1024];
l1:
    memset(buffer, 0, 1024);
l2:
  BUILD_BUGON(&&l1 == &&l2);

Theodore Ts'o March 18, 2015, 5:58 p.m. UTC | #15
On Wed, Mar 18, 2015 at 06:56:19PM +0100, Hannes Frederic Sowa wrote:
> 
> Maybe a BUILD_BUGON: ;)

Even better!  :-)

				- Ted

> 
> __label__ l1, l2;
> char buffer[1024];
> l1:
>     memset(buffer, 0, 1024);
> l2:
>   BUILD_BUGON(&&l1 == &&l2);
> 
Stephan Mueller April 10, 2015, 1:25 p.m. UTC | #16
Am Mittwoch, 18. März 2015, 12:09:45 schrieb Stephan Mueller:

Hi,

>Am Mittwoch, 18. März 2015, 11:56:43 schrieb Daniel Borkmann:
>
>Hi Daniel,
>
>>On 03/18/2015 11:50 AM, Hannes Frederic Sowa wrote:
>>> On Wed, Mar 18, 2015, at 10:53, mancha wrote:
>>>> Hi.
>>>> 
>>>> The kernel RNG introduced memzero_explicit in d4c5efdb9777 to
>>>> protect
>>>> 
>>>> memory cleansing against things like dead store optimization:
>>>>     void memzero_explicit(void *s, size_t count)
>>>>     {
>>>>     
>>>>             memset(s, 0, count);
>>>>             OPTIMIZER_HIDE_VAR(s);
>>>>     
>>>>     }
>>>> 
>>>> OPTIMIZER_HIDE_VAR, introduced in fe8c8a126806 to protect
>>>> crypto_memneq>>
>>>> 
>>>> against timing analysis, is defined when using gcc as:
>>>>     #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0"
>>>>     (var))
>>>> 
>>>> My tests with gcc 4.8.2 on x86 find it insufficient to prevent gcc
>>>> from optimizing out memset (i.e. secrets remain in memory).
>>>> 
>>>> Two things that do work:
>>>>     __asm__ __volatile__ ("" : "=r" (var) : "0" (var))
>>> 
>>> You are correct, volatile signature should be added to
>>> OPTIMIZER_HIDE_VAR. Because we use an output variable "=r", gcc is
>>> allowed to check if it is needed and may remove the asm statement.
>>> Another option would be to just use var as an input variable - asm
>>> blocks without output variables are always considered being volatile
>>> by gcc.
>>> 
>>> Can you send a patch?
>>> 
>>> I don't think it is security critical, as Daniel pointed out, the
>>> call
>>> will happen because the function is an external call to the crypto
>>> functions, thus the compiler has to flush memory on return.
>>
>>Just had a look.
>>
>>$ gdb vmlinux
>>(gdb) disassemble memzero_explicit
>>
>>Dump of assembler code for function memzero_explicit:
>>    0xffffffff813a18b0 <+0>:	push   %rbp
>>    0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
>>    0xffffffff813a18b4 <+4>:	xor    %esi,%esi
>>    0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
>>    0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 <memset>
>>    0xffffffff813a18be <+14>:	pop    %rbp
>>    0xffffffff813a18bf <+15>:	retq
>>
>>End of assembler dump.
>>
>>(gdb) disassemble extract_entropy
>>[...]
>>
>>    0xffffffff814a5000 <+304>:	sub    %r15,%rbx
>>    0xffffffff814a5003 <+307>:	jne    0xffffffff814a4f80
>>
>><extract_entropy+176> 0xffffffff814a5009 <+313>:	mov    %r12,%rdi
>>
>>    0xffffffff814a500c <+316>:	mov    $0xa,%esi
>>    0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0
>>
>><memzero_explicit> 0xffffffff814a5016 <+326>:	mov    -0x48(%rbp),%rax
>>[...]
>>
>>I would be fine with __volatile__.
>
>Are we sure that simply adding a __volatile__ works in any case? I just
>did a test with a simple user space app:
>
>static inline void memset_secure(void *s, int c, size_t n)
>{
>        memset(s, c, n);
>        //__asm__ __volatile__("": : :"memory");
>        __asm__ __volatile__("" : "=r" (s) : "0" (s));
>}
>
>int main(int argc, char *argv[])
>{
>#define BUFLEN 20
>        char buf[BUFLEN];
>
>        snprintf(buf, (BUFLEN - 1), "teststring\n");
>        printf("%s", buf);
>
>        memset_secure(buf, 0, BUFLEN);
>}
>
>When using the discussed code of __asm__ __volatile__("" : "=r" (s) :
>"0" (s));  I do not find the code implementing memset(0) in objdump.
>Only when I enable the memory barrier, I see the following (when
>compiling with -O2):
>
>objdump -d memset_secure:
>...
>0000000000400440 <main>:
>...
>  400469:       48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
>  400470:       00
>  400471:       48 c7 44 24 08 00 00    movq   $0x0,0x8(%rsp)
>  400478:       00 00
>  40047a:       c7 44 24 10 00 00 00    movl   $0x0,0x10(%rsp)
>  400481:       00
>...

I would like to bring up that topic again as I did some more analyses:

For testing I used the following code:

static inline void memset_secure(void *s, int c, size_t n)
{
        memset(s, c, n);
	BARRIER
}

where BARRIER is defined as:

(1) __asm__ __volatile__("" : "=r" (s) : "0" (s));

(2) __asm__ __volatile__("": : :"memory");

(3) __asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");

I tested the code with gcc and clang, considering that there is effort 
underway to compile the kernel with clang too.

The following table marks an X when the aforementioned movq/movl code is 
present (or an invocation of memset@plt) in the object code (i.e. the code we 
want). Conversely, the table marks a - where the code is not present (i.e. the 
code we do not want):

 Compiler  | BARRIER (1) | BARRIER (2) | BARRIER (3)
-----------+-------------+-------------+------------
 gcc -O0   |      X      |      X      |      X
 gcc -O2   |      -      |      X      |      X
 gcc -O3   |      -      |      X      |      X
 clang -O0 |      X      |      X      |      X
 clang -O2 |      X      |      -      |      X
 clang -O3 |      -      |      -      |      X

As the kernel is compiled with -O2, clang folks would still be left uncovered 
with the current solution (i.e. BARRIER option (2)).

Thus, may I propose to update the patch to use option (3) instead as (i) it 
does not cost anything extra on gcc and (ii) it covers clang too?
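
I.e., something like this (sketch only):

   void memzero_explicit(void *s, size_t count)
   {
           memset(s, 0, count);
           __asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");
   }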

Ciao
Stephan 
Hannes Frederic Sowa April 10, 2015, 2 p.m. UTC | #17
On Fr, 2015-04-10 at 15:25 +0200, Stephan Mueller wrote:
> I would like to bring up that topic again as I did some more analyses:
> 
> For testing I used the following code:
> 
> static inline void memset_secure(void *s, int c, size_t n)
> {
>         memset(s, c, n);
> 	BARRIER
> }
> 
> where BARRIER is defined as:
> 
> (1) __asm__ __volatile__("" : "=r" (s) : "0" (s));
> 
> (2) __asm__ __volatile__("": : :"memory");
> 
> (3) __asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");

Hm, I wonder a little bit...

Could you quickly test if you replace (s) with (n) just for the fun of
it? I don't know if we should ask clang people about that, at least it
is their goal to be as highly compatible with gcc inline asm.

Thanks for looking into this!

Bye,
Hannes


Stephan Mueller April 10, 2015, 2:09 p.m. UTC | #18
Am Freitag, 10. April 2015, 16:00:03 schrieb Hannes Frederic Sowa:

Hi Hannes,

>On Fr, 2015-04-10 at 15:25 +0200, Stephan Mueller wrote:
>> I would like to bring up that topic again as I did some more analyses:
>> 
>> For testing I used the following code:
>> 
>> static inline void memset_secure(void *s, int c, size_t n)
>> {
>> 
>>         memset(s, c, n);
>> 	
>> 	BARRIER
>> 
>> }
>> 
>> where BARRIER is defined as:
>> 
>> (1) __asm__ __volatile__("" : "=r" (s) : "0" (s));
>> 
>> (2) __asm__ __volatile__("": : :"memory");
>> 
>> (3) __asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");
>
>Hm, I wonder a little bit...
>
>Could you quickly test if you replace (s) with (n) just for the fun of
>it? I don't know if we should ask clang people about that, at least it
>is their goal to be as highly compatible with gcc inline asm.

Using 

 __asm__ __volatile__("" : "=r" (n) : "0" (n) : "memory");

clang O2/3: no mov

gcc O2/3: mov present

==> not good


Using
 __asm__ __volatile__("" : "=r" (n) : "0" (n));

clang O2/3: no mov

gcc O2/3: no mov


==> not good


What do you expect that change shall do?

>
>Thanks for looking into this!
>
>Bye,
>Hannes


Ciao
Stephan
mancha April 10, 2015, 2:22 p.m. UTC | #19
On Fri, Apr 10, 2015 at 04:09:10PM +0200, Stephan Mueller wrote:
> Am Freitag, 10. April 2015, 16:00:03 schrieb Hannes Frederic Sowa:
> 
> Hi Hannes,
> 
> >On Fr, 2015-04-10 at 15:25 +0200, Stephan Mueller wrote:
> >> I would like to bring up that topic again as I did some more analyses:
> >> 
> >> For testing I used the following code:
> >> 
> >> static inline void memset_secure(void *s, int c, size_t n)
> >> {
> >> 
> >>         memset(s, c, n);
> >> 	
> >> 	BARRIER
> >> 
> >> }
> >> 
> >> where BARRIER is defined as:
> >> 
> >> (1) __asm__ __volatile__("" : "=r" (s) : "0" (s));
> >> 
> >> (2) __asm__ __volatile__("": : :"memory");
> >> 
> >> (3) __asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");
> >
> >Hm, I wonder a little bit...
> >
> >Could you quickly test if you replace (s) with (n) just for the fun of
> >it? I don't know if we should ask clang people about that, at least it
> >is their goal to be as highly compatible with gcc inline asm.
> 
> Using 
> 
>  __asm__ __volatile__("" : "=r" (n) : "0" (n) : "memory");
> 
> clang O2/3: no mov
> 
> gcc O2/3: mov present
> 
> ==> not good
> 
> 
> Using
>  __asm__ __volatile__("" : "=r" (n) : "0" (n));
> 
> clang O2/3: no mov
> 
> gcc O2/3: no mov
> 
> 
> ==> not good
> 
> 
> What do you expect that change shall do?
> 
> >
> >Thanks for looking into this!
> >
> >Bye,
> >Hannes
> 
> 
> Ciao
> Stephan

Thanks for the comprehensive testing! Clang 3.3 was giving me good
results; I didn't try newer versions.

I wonder what your tests give with an earlier suggestion of mine:

#define barrier(p) __asm__ __volatile__("": :"r"(p) :"memory")

void memzero_explicit(void *s, size_t count)
{
  memset(s, 0, count);
  barrier(s);
}

--mancha
Hannes Frederic Sowa April 10, 2015, 2:26 p.m. UTC | #20
On Fr, 2015-04-10 at 16:09 +0200, Stephan Mueller wrote:
> Am Freitag, 10. April 2015, 16:00:03 schrieb Hannes Frederic Sowa:
> 
> Hi Hannes,
> 
> >On Fr, 2015-04-10 at 15:25 +0200, Stephan Mueller wrote:
> >> I would like to bring up that topic again as I did some more analyses:
> >> 
> >> For testing I used the following code:
> >> 
> >> static inline void memset_secure(void *s, int c, size_t n)
> >> {
> >> 
> >>         memset(s, c, n);
> >> 	
> >> 	BARRIER
> >> 
> >> }
> >> 
> >> where BARRIER is defined as:
> >> 
> >> (1) __asm__ __volatile__("" : "=r" (s) : "0" (s));
> >> 
> >> (2) __asm__ __volatile__("": : :"memory");
> >> 
> >> (3) __asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");
> >
> >Hm, I wonder a little bit...
> >
> >Could you quickly test if you replace (s) with (n) just for the fun of
> >it? I don't know if we should ask clang people about that, at least it
> >is their goal to be as highly compatible with gcc inline asm.
> 
> Using 
> 
>  __asm__ __volatile__("" : "=r" (n) : "0" (n) : "memory");
> 
> clang O2/3: no mov
> 
> gcc O2/3: mov present
> 
> ==> not good

I suspected that a volatile asm without output args might be treated
differently, but this seems not to be the case.

I would contact the llvm/clang mailing list and ask. Maybe there is a
problem? It seems kind of strange to me...

Thanks,
Hannes


Stephan Mueller April 10, 2015, 2:33 p.m. UTC | #21
On Friday, 10 April 2015, 14:22:08, mancha security wrote:

Hi mancha,

>__asm__ __volatile__("": :"r"(p) :"memory")

gcc -O2/3: mov present

clang -O2/3: mov present

==> this approach would be good too (both compilers keep the memset).

Note, the assembly code does not seem to change whether I use this approach
or the one I initially tested.


Ciao
Stephan
Stephan Mueller April 10, 2015, 2:36 p.m. UTC | #22
On Friday, 10 April 2015, 16:26:00, Hannes Frederic Sowa wrote:

Hi Hannes,

>On Fr, 2015-04-10 at 16:09 +0200, Stephan Mueller wrote:
>> Am Freitag, 10. April 2015, 16:00:03 schrieb Hannes Frederic Sowa:
>> 
>> Hi Hannes,
>> 
>> >On Fr, 2015-04-10 at 15:25 +0200, Stephan Mueller wrote:
>> >> I would like to bring up that topic again as I did some more analyses:
>> >> 
>> >> For testing I used the following code:
>> >> 
>> >> static inline void memset_secure(void *s, int c, size_t n)
>> >> {
>> >> 
>> >>         memset(s, c, n);
>> >> 	
>> >> 	BARRIER
>> >> 
>> >> }
>> >> 
>> >> where BARRIER is defined as:
>> >> 
>> >> (1) __asm__ __volatile__("" : "=r" (s) : "0" (s));
>> >> 
>> >> (2) __asm__ __volatile__("": : :"memory");
>> >> 
>> >> (3) __asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");
>> >
>> >Hm, I wonder a little bit...
>> >
>> >Could you quickly test if you replace (s) with (n) just for the fun of
>> >it? I don't know if we should ask clang people about that, at least it
>> >is their goal to be as highly compatible with gcc inline asm.
>> 
>> Using
>> 
>>  __asm__ __volatile__("" : "=r" (n) : "0" (n) : "memory");
>> 
>> clang O2/3: no mov
>> 
>> gcc O2/3: mov present
>> 
>> ==> not good
>
>I suspected a problem in how volatile with non-present output args could
>be different, but this seems not to be the case.
>
>I would contact llvm/clang mailing list and ask. Maybe there is a
>problem? It seems kind of strange to me...

Do you really think this is a compiler issue? I would rather say it is a
question of how to interpret a bare "memory" clobber. In that case both gcc
and clang would be right, and we just need to use the code that works with
both.
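
To make my concern concrete (rough sketch only, not kernel code; use_key()
is just a stand-in for whatever consumes the secret): with a bare clobber
the asm never sees the address of the local buffer, so a compiler could
argue that the asm has no legal way to read it and that the preceding
stores are still dead. Once the pointer is an operand, that argument is
gone:

   #include <string.h>

   extern void use_key(char *key);   /* hypothetical consumer of the secret */

   void wipe_bare_clobber(void)
   {
           char key[32];

           use_key(key);
           memset(key, 0, sizeof(key));
           /* variant (2): no operand mentions key, so whether these
            * stores must survive depends on how the "memory" clobber
            * is interpreted */
           __asm__ __volatile__("" : : : "memory");
   }

   void wipe_with_operand(void)
   {
           char key[32];

           use_key(key);
           memset(key, 0, sizeof(key));
           /* key's address is an input operand, so the asm may read
            * the buffer and the memset() has to be materialized */
           __asm__ __volatile__("" : : "r" (key) : "memory");
   }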
>
>Thanks,
>Hannes
>
>


Ciao
Stephan
Hannes Frederic Sowa April 10, 2015, 2:45 p.m. UTC | #23
On Fr, 2015-04-10 at 16:36 +0200, Stephan Mueller wrote:
> Am Freitag, 10. April 2015, 16:26:00 schrieb Hannes Frederic Sowa:
> 
> Hi Hannes,
> 
> >On Fr, 2015-04-10 at 16:09 +0200, Stephan Mueller wrote:
> >> Am Freitag, 10. April 2015, 16:00:03 schrieb Hannes Frederic Sowa:
> >> 
> >> Hi Hannes,
> >> 
> >> >On Fr, 2015-04-10 at 15:25 +0200, Stephan Mueller wrote:
> >> >> I would like to bring up that topic again as I did some more analyses:
> >> >> 
> >> >> For testing I used the following code:
> >> >> 
> >> >> static inline void memset_secure(void *s, int c, size_t n)
> >> >> {
> >> >> 
> >> >>         memset(s, c, n);
> >> >> 	
> >> >> 	BARRIER
> >> >> 
> >> >> }
> >> >> 
> >> >> where BARRIER is defined as:
> >> >> 
> >> >> (1) __asm__ __volatile__("" : "=r" (s) : "0" (s));
> >> >> 
> >> >> (2) __asm__ __volatile__("": : :"memory");
> >> >> 
> >> >> (3) __asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");
> >> >
> >> >Hm, I wonder a little bit...
> >> >
> >> >Could you quickly test if you replace (s) with (n) just for the fun of
> >> >it? I don't know if we should ask clang people about that, at least it
> >> >is their goal to be as highly compatible with gcc inline asm.
> >> 
> >> Using
> >> 
> >>  __asm__ __volatile__("" : "=r" (n) : "0" (n) : "memory");
> >> 
> >> clang O2/3: no mov
> >> 
> >> gcc O2/3: mov present
> >> 
> >> ==> not good
> >
> >I suspected a problem in how volatile with non-present output args could
> >be different, but this seems not to be the case.
> >
> >I would contact llvm/clang mailing list and ask. Maybe there is a
> >problem? It seems kind of strange to me...
> 
> Do you really think this is a compiler issue? I would rather think it is how 
> to interpret the pure "memory" asm option. Thus, I would rather think that 
> both, gcc and clang are right and we just need to use the code that fits both.

Clang docs state that they want to be highly compatible with gcc inline
asm. Also, kernel code uses barrier() in other places, and in my opinion
the compiler cannot make any assumptions about memory and registers when
it encounters a volatile asm with a "memory" clobber. But somehow
clang/LLVM seems to, no?

Thanks,
Hannes


Daniel Borkmann April 10, 2015, 2:46 p.m. UTC | #24
On 04/10/2015 04:36 PM, Stephan Mueller wrote:
> Am Freitag, 10. April 2015, 16:26:00 schrieb Hannes Frederic Sowa:
...
>> I suspected a problem in how volatile with non-present output args could
>> be different, but this seems not to be the case.
>>
>> I would contact llvm/clang mailing list and ask. Maybe there is a
>> problem? It seems kind of strange to me...

+1

> Do you really think this is a compiler issue?

If clang/LLVM advertises "GCC compatibility", then this would
certainly be a different behavior.
Stephan Mueller April 10, 2015, 2:50 p.m. UTC | #25
On Friday, 10 April 2015, 16:46:04, Daniel Borkmann wrote:

Hi Daniel,

>On 04/10/2015 04:36 PM, Stephan Mueller wrote:
>> Am Freitag, 10. April 2015, 16:26:00 schrieb Hannes Frederic Sowa:
>...
>
>>> I suspected a problem in how volatile with non-present output args could
>>> be different, but this seems not to be the case.
>>> 
>>> I would contact llvm/clang mailing list and ask. Maybe there is a
>>> problem? It seems kind of strange to me...
>
>+1
>
>> Do you really think this is a compiler issue?
>
>If clang/LLVM advertises "GCC compatibility", then this would
>certainly be a different behavior.

As you wish. I will contact the clang folks. As the proposed fix is not super
urgent, I think we can leave it until I get word from clang.



Ciao
Stephan
Daniel Borkmann April 10, 2015, 2:54 p.m. UTC | #26
On 04/10/2015 04:50 PM, Stephan Mueller wrote:
> Am Freitag, 10. April 2015, 16:46:04 schrieb Daniel Borkmann:
>
> Hi Daniel,
>
>> On 04/10/2015 04:36 PM, Stephan Mueller wrote:
>>> Am Freitag, 10. April 2015, 16:26:00 schrieb Hannes Frederic Sowa:
>> ...
>>
>>>> I suspected a problem in how volatile with non-present output args could
>>>> be different, but this seems not to be the case.
>>>>
>>>> I would contact llvm/clang mailing list and ask. Maybe there is a
>>>> problem? It seems kind of strange to me...
>>
>> +1
>>
>>> Do you really think this is a compiler issue?
>>
>> If clang/LLVM advertises "GCC compatibility", then this would
>> certainly be a different behavior.
>
> As you wish. I will contact the clang folks. As the proposed fix is not super
> urgent, I think we can leave it until I get word from clang.

Okay, that would be good, please let us know.

I believe there's certainly effort in the direction of official kernel
clang/LLVM support, but it is not officially supported yet. If we could
get some clarification on the issue, even better.
mancha April 10, 2015, 8:09 p.m. UTC | #27
On Fri, Apr 10, 2015 at 04:33:17PM +0200, Stephan Mueller wrote:
> Am Freitag, 10. April 2015, 14:22:08 schrieb mancha security:
> 
> Hi mancha,
> 
> >__asm__ __volatile__("": :"r"(p) :"memory")
> 
> gcc -O2/3: mov present
> 
> clang -O2/3: mov present
> 
> ==> approach would be good too.
> 
> Note, the assembly code does not seem to change whether to use this approach 
> or the one I initially tested.
> 
> 
> Ciao
> Stephan

Hi Stephan.

Many thanks for confirmation.
Stephan Mueller April 27, 2015, 7:10 p.m. UTC | #28
On Friday, 10 April 2015, 16:50:22, Stephan Mueller wrote:

Hi Stephan,

>Am Freitag, 10. April 2015, 16:46:04 schrieb Daniel Borkmann:
>
>Hi Daniel,
>
>>On 04/10/2015 04:36 PM, Stephan Mueller wrote:
>>> Am Freitag, 10. April 2015, 16:26:00 schrieb Hannes Frederic Sowa:
>>...
>>
>>>> I suspected a problem in how volatile with non-present output args could
>>>> be different, but this seems not to be the case.
>>>> 
>>>> I would contact llvm/clang mailing list and ask. Maybe there is a
>>>> problem? It seems kind of strange to me...
>>
>>+1
>>
>>> Do you really think this is a compiler issue?
>>
>>If clang/LLVM advertises "GCC compatibility", then this would
>>certainly be a different behavior.
>
>As you wish. I will contact the clang folks. As the proposed fix is not super
>urgent, I think we can leave it until I get word from clang.

I posted the issue on the clang mailing list on April 10 -- no word so far. I
would interpret this as a sign that it is a non-issue for them.

Thus, I propose we update our memzero_explicit implementation to use

__asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");
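
i.e., roughly this change in lib/string.c (sketch only, not yet a formal
patch):

   void memzero_explicit(void *s, size_t count)
   {
           memset(s, 0, count);
           __asm__ __volatile__("" : "=r" (s) : "0" (s) : "memory");
   }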

Concerns?

Ciao
Stephan
Daniel Borkmann April 27, 2015, 8:34 p.m. UTC | #29
On 04/27/2015 09:10 PM, Stephan Mueller wrote:
...
> I posted the issue on the clang mailing list on April 10 -- no word so far. I
> would interpret this as a sign that it is a non-issue for them.

Hm. ;)

Here's a bug report on the topic, gcc vs llvm:

   https://llvm.org/bugs/show_bug.cgi?id=15495

Let's add a new barrier macro to linux/compiler{,-gcc}.h, e.g.

   #define barrier_data(ptr) __asm__ __volatile__("" : : "r" (ptr) : "memory")

or the version Mancha proposed. You could wrap that ...

   #define OPTIMIZER_HIDE(ptr)   barrier_data(ptr)

... and use that one for memzero_explicit() instead:

   void memzero_explicit(void *s, size_t count)
   {
      memset(s, 0, count);
      OPTIMIZER_HIDE(s);
   }

It certainly needs comments explaining in what situations to use
which OPTIMIZER_HIDE* variants, etc.
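Something along these lines perhaps (rough wording only, to be polished in
the actual patch):

   /*
    * OPTIMIZER_HIDE_VAR(var): make the compiler forget everything it
    * knows about the value of var. This does not, by itself, guarantee
    * that earlier stores to memory var points to survive.
    *
    * OPTIMIZER_HIDE(ptr) / barrier_data(ptr): additionally tell the
    * compiler that the (empty) asm may read the memory behind ptr, so
    * stores to that memory before the barrier must not be elided.
    * Use this for memory cleansing, e.g. memzero_explicit().
    */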

Do you want to send a patch?
Stephan Mueller April 27, 2015, 8:41 p.m. UTC | #30
On Monday, 27 April 2015, 22:34:30, Daniel Borkmann wrote:

Hi Daniel,

> On 04/27/2015 09:10 PM, Stephan Mueller wrote:
> ...
> 
> > I posted the issue on the clang mailing list on April 10 -- no word so
> > far. I would interpret this as a sign that it is a non-issue for them.
> 
> Hm. ;)
> 
> Here's a bug report on the topic, gcc vs llvm:
> 
>    https://llvm.org/bugs/show_bug.cgi?id=15495
> 
> Lets add a new barrier macro to linux/compiler{,-gcc}.h, f.e.
> 
>    #define barrier_data(ptr) __asm__ __volatile__("" : : "r" (ptr) :
> "memory")
> 
> or the version Mancha proposed. You could wrap that ...
> 
>    #define OPTIMIZER_HIDE(ptr)   barrier_data(ptr)
> 
> ... and use that one for memzero_explicit() instead:
> 
>    void memzero_explicit(void *s, size_t count)
>    {
>       memset(s, 0, count);
>       OPTIMIZER_HIDE(s);
>    }
> 
> It certainly needs comments explaining in what situations to use
> which OPTIMIZER_HIDE* variants, etc.
> 
> Do you want to send a patch?

It seems you already have the code in mind, so please go ahead and write
it. :-)
Daniel Borkmann April 27, 2015, 8:53 p.m. UTC | #31
On 04/27/2015 10:41 PM, Stephan Mueller wrote:
...
> It seems you already have the code in mind, so please go ahead and write
> it. :-)

Ok, sure. I'll cook something by tomorrow morning.

Cheers,
Daniel
diff mbox

Patch

--- a/lib/string.c
+++ b/lib/string.c
@@ -616,7 +616,7 @@  EXPORT_SYMBOL(memset);
 void memzero_explicit(void *s, size_t count)
 {
        memset(s, 0, count);
-       OPTIMIZER_HIDE_VAR(s);
+       barrier();
 }
 EXPORT_SYMBOL(memzero_explicit);