From patchwork Wed Sep 19 02:10:53 2018
X-Patchwork-Submitter: Kees Cook <keescook@chromium.org>
X-Patchwork-Id: 10605147
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Kees Cook <keescook@chromium.org>
To: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto, Linux Kernel Mailing List
Subject: [PATCH crypto-next 16/23] crypto: sahara - Remove VLA usage of skcipher
Date: Tue, 18 Sep 2018 19:10:53 -0700
Message-Id: <20180919021100.3380-17-keescook@chromium.org>
In-Reply-To: <20180919021100.3380-1-keescook@chromium.org>
References: <20180919021100.3380-1-keescook@chromium.org>

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/sahara.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
index e7540a5b8197..bbf166a97ad3 100644
--- a/drivers/crypto/sahara.c
+++ b/drivers/crypto/sahara.c
@@ -149,7 +149,7 @@ struct sahara_ctx {
 	/* AES-specific context */
 	int keylen;
 	u8 key[AES_KEYSIZE_128];
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 };
 
 struct sahara_aes_reqctx {
@@ -621,14 +621,14 @@ static int sahara_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 	/*
 	 * The requested key size is not supported by HW, do a fallback.
 	 */
-	crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
-	crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
+	crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
+	crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
 						 CRYPTO_TFM_REQ_MASK);
 
-	ret = crypto_skcipher_setkey(ctx->fallback, key, keylen);
+	ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
 
 	tfm->base.crt_flags &= ~CRYPTO_TFM_RES_MASK;
-	tfm->base.crt_flags |= crypto_skcipher_get_flags(ctx->fallback) &
+	tfm->base.crt_flags |= crypto_sync_skcipher_get_flags(ctx->fallback) &
 			       CRYPTO_TFM_RES_MASK;
 	return ret;
 }
@@ -666,9 +666,9 @@ static int sahara_aes_ecb_encrypt(struct ablkcipher_request *req)
 	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -688,9 +688,9 @@ static int sahara_aes_ecb_decrypt(struct ablkcipher_request *req)
 	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -710,9 +710,9 @@ static int sahara_aes_cbc_encrypt(struct ablkcipher_request *req)
 	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -732,9 +732,9 @@ static int sahara_aes_cbc_decrypt(struct ablkcipher_request *req)
 	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -752,8 +752,7 @@ static int sahara_aes_cra_init(struct crypto_tfm *tfm)
 	const char *name = crypto_tfm_alg_name(tfm);
 	struct sahara_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	ctx->fallback = crypto_alloc_skcipher(name, 0,
-					      CRYPTO_ALG_ASYNC |
+	ctx->fallback = crypto_alloc_sync_skcipher(name, 0,
 					      CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ctx->fallback)) {
 		pr_err("Error allocating fallback algo %s\n", name);
@@ -769,7 +768,7 @@ static void sahara_aes_cra_exit(struct crypto_tfm *tfm)
 {
 	struct sahara_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	crypto_free_skcipher(ctx->fallback);
+	crypto_free_sync_skcipher(ctx->fallback);
 }
 
 static u32 sahara_sha_init_hdr(struct sahara_dev *dev,
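
For readers less familiar with the API being converted, the sketch below shows
the general fallback pattern this series moves drivers to: allocate a
crypto_sync_skcipher at init time, build the request on the stack with
SYNC_SKCIPHER_REQUEST_ON_STACK() (fixed size, no VLA), and free the tfm at
exit. It is a minimal illustration only, not part of this patch: the
example_* names are hypothetical, while the crypto API calls are the same
ones used in the diff above.

/*
 * Illustrative sketch only (not from sahara.c): the sync-skcipher
 * fallback pattern. example_ctx / example_* are made-up names.
 */
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <crypto/skcipher.h>

struct example_ctx {
	struct crypto_sync_skcipher *fallback;
};

static int example_init_fallback(struct example_ctx *ctx, const char *name)
{
	/* A sync tfm is implicitly !ASYNC, so no CRYPTO_ALG_ASYNC mask. */
	ctx->fallback = crypto_alloc_sync_skcipher(name, 0,
						   CRYPTO_ALG_NEED_FALLBACK);
	return PTR_ERR_OR_ZERO(ctx->fallback);
}

static int example_do_fallback(struct example_ctx *ctx,
			       struct scatterlist *src,
			       struct scatterlist *dst,
			       unsigned int nbytes, u8 *iv)
{
	/* Fixed-size on-stack request: no VLA, unlike the old macro. */
	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
	int err;

	skcipher_request_set_sync_tfm(subreq, ctx->fallback);
	skcipher_request_set_callback(subreq, 0, NULL, NULL);
	skcipher_request_set_crypt(subreq, src, dst, nbytes, iv);
	err = crypto_skcipher_encrypt(subreq);
	skcipher_request_zero(subreq);

	return err;
}

static void example_exit_fallback(struct example_ctx *ctx)
{
	crypto_free_sync_skcipher(ctx->fallback);
}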