From: Salvatore Mesoraca
To: linux-kernel@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com, linux-crypto@vger.kernel.org,
    "David S. Miller", Herbert Xu, Kees Cook, Salvatore Mesoraca
Subject: [PATCH] crypto: ctr: avoid VLA use
Date: Wed, 14 Mar 2018 14:17:30 +0100
Message-Id: <1521033450-14447-1-git-send-email-s.mesoraca16@gmail.com>

All ciphers implemented in Linux have a block size less than or equal
to 16 bytes, and the most demanding hardware requires 16-byte alignment
for the block buffer.
We avoid 2 VLAs[1] by always allocating 16 bytes with 16-byte
alignment, unless the architecture supports efficient unaligned
accesses. We also check, at runtime, that our assumptions still hold,
falling back to dynamically allocating a new buffer in case something
changes in the future.

[1] https://lkml.org/lkml/2018/3/7/621
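As a minimal stand-alone illustration of this pattern (an editorial
user-space sketch, not part of the patch; the helper name
use_keystream and the local PTR_ALIGN macro are hypothetical stand-ins
for the kernel's kmalloc/GFP_ATOMIC and PTR_ALIGN):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* round p up to the next multiple of a (a must be a power of two) */
#define PTR_ALIGN(p, a) \
	((uint8_t *)(((uintptr_t)(p) + ((a) - 1)) & ~((uintptr_t)(a) - 1)))

static int use_keystream(size_t bsize, size_t alignmask)
{
	/* fixed-size, fixed-alignment stack buffer instead of a VLA */
	uint8_t tmp[16] __attribute__((aligned(16)));
	uint8_t *keystream, *tmp2 = NULL;

	if (bsize <= sizeof(tmp) && tmp == PTR_ALIGN(tmp, alignmask + 1)) {
		keystream = tmp;	/* expected fast path */
	} else {
		/* assumptions no longer hold: fall back to the heap */
		tmp2 = malloc(bsize + alignmask);
		if (!tmp2)
			return -1;
		keystream = PTR_ALIGN(tmp2, alignmask + 1);
	}

	memset(keystream, 0, bsize);	/* stand-in for the cipher work */

	free(tmp2);	/* NULL, and thus a no-op, on the fast path */
	return 0;
}

int main(void)
{
	/* e.g. AES: 16-byte blocks, alignmask 15 -> stack buffer is used */
	return use_keystream(16, 15);
}

Keeping the runtime check (the question raised in the Notes below)
means a future cipher with a larger block size or stricter alignment
degrades to a heap allocation instead of overflowing the stack buffer.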
Signed-off-by: Salvatore Mesoraca
---

Notes:
    Can we maybe skip the runtime check?

 crypto/ctr.c | 50 ++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 42 insertions(+), 8 deletions(-)

diff --git a/crypto/ctr.c b/crypto/ctr.c
index 854d924..f37adf0 100644
--- a/crypto/ctr.c
+++ b/crypto/ctr.c
@@ -35,6 +35,16 @@ struct crypto_rfc3686_req_ctx {
 	struct skcipher_request subreq CRYPTO_MINALIGN_ATTR;
 };
 
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+#define DECLARE_CIPHER_BUFFER(name) u8 name[16]
+#else
+#define DECLARE_CIPHER_BUFFER(name) u8 __aligned(16) name[16]
+#endif
+
+#define CHECK_CIPHER_BUFFER(name, size, align)			\
+	likely(size <= sizeof(name) &&				\
+	       name == PTR_ALIGN(((u8 *) name), align + 1))
+
 static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
 			     unsigned int keylen)
 {
@@ -52,22 +62,35 @@ static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
 	return err;
 }
 
-static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
-				   struct crypto_cipher *tfm)
+static int crypto_ctr_crypt_final(struct blkcipher_walk *walk,
+				  struct crypto_cipher *tfm)
 {
 	unsigned int bsize = crypto_cipher_blocksize(tfm);
 	unsigned long alignmask = crypto_cipher_alignmask(tfm);
 	u8 *ctrblk = walk->iv;
-	u8 tmp[bsize + alignmask];
-	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
 	u8 *src = walk->src.virt.addr;
 	u8 *dst = walk->dst.virt.addr;
 	unsigned int nbytes = walk->nbytes;
+	DECLARE_CIPHER_BUFFER(tmp);
+	u8 *keystream, *tmp2;
+
+	if (CHECK_CIPHER_BUFFER(tmp, bsize, alignmask))
+		keystream = tmp;
+	else {
+		tmp2 = kmalloc(bsize + alignmask, GFP_ATOMIC);
+		if (!tmp2)
+			return -ENOMEM;
+		keystream = PTR_ALIGN(tmp2 + 0, alignmask + 1);
+	}
 
 	crypto_cipher_encrypt_one(tfm, keystream, ctrblk);
 	crypto_xor_cpy(dst, keystream, src, nbytes);
 
 	crypto_inc(ctrblk, bsize);
+
+	if (unlikely(keystream != tmp))
+		kfree(tmp2);
+	return 0;
 }
 
 static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
@@ -106,8 +129,17 @@ static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
 	unsigned int nbytes = walk->nbytes;
 	u8 *ctrblk = walk->iv;
 	u8 *src = walk->src.virt.addr;
-	u8 tmp[bsize + alignmask];
-	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
+	DECLARE_CIPHER_BUFFER(tmp);
+	u8 *keystream, *tmp2;
+
+	if (CHECK_CIPHER_BUFFER(tmp, bsize, alignmask))
+		keystream = tmp;
+	else {
+		tmp2 = kmalloc(bsize + alignmask, GFP_ATOMIC);
+		if (!tmp2)
+			return -ENOMEM;
+		keystream = PTR_ALIGN(tmp2 + 0, alignmask + 1);
+	}
 
 	do {
 		/* create keystream */
@@ -120,6 +152,8 @@ static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
 		src += bsize;
 	} while ((nbytes -= bsize) >= bsize);
 
+	if (unlikely(keystream != tmp))
+		kfree(tmp2);
 	return nbytes;
 }
 
@@ -147,8 +181,8 @@ static int crypto_ctr_crypt(struct blkcipher_desc *desc,
 	}
 
 	if (walk.nbytes) {
-		crypto_ctr_crypt_final(&walk, child);
-		err = blkcipher_walk_done(desc, &walk, 0);
+		err = crypto_ctr_crypt_final(&walk, child);
+		err = blkcipher_walk_done(desc, &walk, err);
 	}
 
 	return err;