Message ID | 1453393448-13636-1-git-send-email-elicooper@gmx.com (mailing list archive) |
---|---|
State | Accepted |
Delegated to | Herbert Xu |
Hi Eli,

> This aligns the stack pointer in chacha20_4block_xor_ssse3 to 64 bytes.
> Fixes general protection faults and potential kernel panics.

I assumed 16-byte alignment according to the System V AMD64 ABI, but
this is obviously not true with -mpreferred-stack-boundary=3. The AVX2
version seems to be OK, and so is Poly1305.

Acked-by: Martin Willi <martin@strongswan.org>
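For context: -mpreferred-stack-boundary=3 tells GCC to keep the stack pointer only 2^3 = 8-byte aligned, whereas the userspace System V AMD64 ABI guarantees 16 bytes at function entry. The sketch below illustrates how that breaks the old code; the register values are hypothetical and this is not the actual kernel code path:

```asm
	# Suppose the caller enters with %rsp = 0x...fff8, i.e. only
	# 8-byte aligned, which -mpreferred-stack-boundary=3 permits.
	sub	$0x40,%rsp		# %rsp = 0x...ffb8: still only 8-byte aligned
	movdqa	%xmm0,0x00(%rsp)	# an alignment-checked 16-byte SSE store
					# through a misaligned pointer raises #GP
```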
On Fri, Jan 22, 2016 at 08:55:24AM +0100, Martin Willi wrote:
> Hi Eli,
>
> > This aligns the stack pointer in chacha20_4block_xor_ssse3 to 64 bytes.
> > Fixes general protection faults and potential kernel panics.
>
> I assumed 16-byte alignment according to the System V AMD64 ABI, but
> this is obviously not true with -mpreferred-stack-boundary=3. The AVX2
> version seems to be OK, and so is Poly1305.
>
> Acked-by: Martin Willi <martin@strongswan.org>

Patch applied. Thanks!
Can we queue this up for stable too, please?
On Mon, Jan 25, 2016 at 2:59 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> Patch applied. Thanks!
On Wed, Jan 27, 2016 at 01:40:00AM +0100, Jason A. Donenfeld wrote:
> Can we queue this up for stable too, please?
It'll go to stable automatically once Linus pulls it.
Cheers,
```diff
diff --git a/arch/x86/crypto/chacha20-ssse3-x86_64.S b/arch/x86/crypto/chacha20-ssse3-x86_64.S
index 712b130..3a33124 100644
--- a/arch/x86/crypto/chacha20-ssse3-x86_64.S
+++ b/arch/x86/crypto/chacha20-ssse3-x86_64.S
@@ -157,7 +157,9 @@ ENTRY(chacha20_4block_xor_ssse3)
 	# done with the slightly better performing SSSE3 byte shuffling,
 	# 7/12-bit word rotation uses traditional shift+OR.
 
-	sub		$0x40,%rsp
+	mov		%rsp,%r11
+	sub		$0x80,%rsp
+	and		$~63,%rsp
 
 	# x0..15[0-3] = s0..3[0..3]
 	movq		0x00(%rdi),%xmm1
@@ -620,6 +622,6 @@ ENTRY(chacha20_4block_xor_ssse3)
 	pxor		%xmm1,%xmm15
 	movdqu		%xmm15,0xf0(%rsi)
 
-	add		$0x40,%rsp
+	mov		%r11,%rsp
 	ret
 ENDPROC(chacha20_4block_xor_ssse3)
```
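The reserve-then-mask arithmetic deserves a note. Below is a commented sketch of the new prologue/epilogue, using a hypothetical entry value X for %rsp, showing why 0x80 bytes of slack suffice and why the old stack pointer has to be saved in %r11:

```asm
	mov	%rsp,%r11	# save X: the adjustment below is data-dependent,
				# so it cannot be undone with an "add $const"
	sub	$0x80,%rsp	# %rsp = X - 128: 64 bytes of scratch space plus
				# up to 63 bytes that the mask may discard
	and	$~63,%rsp	# round down to a multiple of 64; %rsp now lies
				# in [X - 191, X - 128], so the 64-byte buffer
				# at (%rsp) ends at or below X - 64 and never
				# touches the caller's frame
	# ... body uses 0x00(%rsp)..0x3f(%rsp) ...
	mov	%r11,%rsp	# restore the caller's exact stack pointer
```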
This aligns the stack pointer in chacha20_4block_xor_ssse3 to 64 bytes.
Fixes general protection faults and potential kernel panics.

Cc: stable@vger.kernel.org
Signed-off-by: Eli Cooper <elicooper@gmx.com>
---
 arch/x86/crypto/chacha20-ssse3-x86_64.S | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)