From patchwork Tue Jun 30 12:18:55 2020
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 01/13] crypto: amlogic-gxl - default to build as module
Date: Tue, 30 Jun 2020 14:18:55 +0200
Message-Id: <20200630121907.24274-2-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

The AmLogic GXL crypto accelerator driver is built into the kernel whenever
ARCH_MESON is set. However, given the single-image policy of arm64, its
defconfig enables all platforms by default, so ARCH_MESON is usually
enabled. This means the AmLogic driver causes the arm64 defconfig build to
pull in a huge chunk of the crypto stack as builtin code as well, which is
undesirable, so let's make the amlogic GXL driver default to 'm' instead.
Signed-off-by: Ard Biesheuvel
Tested-by: Corentin Labbe
---
 drivers/crypto/amlogic/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/amlogic/Kconfig b/drivers/crypto/amlogic/Kconfig
index cf9547602670..cf2c676a7093 100644
--- a/drivers/crypto/amlogic/Kconfig
+++ b/drivers/crypto/amlogic/Kconfig
@@ -1,7 +1,7 @@
 config CRYPTO_DEV_AMLOGIC_GXL
 	tristate "Support for amlogic cryptographic offloader"
 	depends on HAS_IOMEM
-	default y if ARCH_MESON
+	default m if ARCH_MESON
 	select CRYPTO_SKCIPHER
 	select CRYPTO_ENGINE
 	select CRYPTO_ECB
From patchwork Tue Jun 30 12:18:56 2020
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 02/13] crypto: amlogic-gxl - permit async skcipher as fallback
Date: Tue, 30 Jun 2020 14:18:56 +0200
Message-Id: <20200630121907.24274-3-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the amlogic-gxl driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD-based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens when the request was made from softirq
context while SIMD was already in use in the task context that it
interrupted), these implementations are disregarded, and either the
generic C version or another table-based version implemented in assembler
is selected instead.

Since falling back to synchronous AES is not only a performance issue but
potentially a security issue as well (table-based AES is not time
invariant), let's fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.

Signed-off-by: Ard Biesheuvel
Tested-by: Corentin Labbe
---
 drivers/crypto/amlogic/amlogic-gxl-cipher.c | 27 ++++++++++----------
 drivers/crypto/amlogic/amlogic-gxl.h        |  3 ++-
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/crypto/amlogic/amlogic-gxl-cipher.c b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
index 9819dd50fbad..5880b94dcb32 100644
--- a/drivers/crypto/amlogic/amlogic-gxl-cipher.c
+++ b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
@@ -64,22 +64,20 @@ static int meson_cipher_do_fallback(struct skcipher_request *areq)
 #ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 	struct meson_alg_template *algt;
-#endif
-	SYNC_SKCIPHER_REQUEST_ON_STACK(req, op->fallback_tfm);
-#ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG
 	algt = container_of(alg, struct meson_alg_template, alg.skcipher);
 	algt->stat_fb++;
 #endif
-	skcipher_request_set_sync_tfm(req, op->fallback_tfm);
-	skcipher_request_set_callback(req, areq->base.flags, NULL, NULL);
-	skcipher_request_set_crypt(req, areq->src, areq->dst,
+	skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
+	skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
+				      areq->base.complete, areq->base.data);
+	skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
 				   areq->cryptlen, areq->iv);
+
 	if (rctx->op_dir == MESON_DECRYPT)
-		err = crypto_skcipher_decrypt(req);
+		err = crypto_skcipher_decrypt(&rctx->fallback_req);
 	else
-		err = crypto_skcipher_encrypt(req);
-	skcipher_request_zero(req);
+		err = crypto_skcipher_encrypt(&rctx->fallback_req);
 	return err;
 }
@@ -321,15 +319,16 @@ int meson_cipher_init(struct crypto_tfm *tfm)
 	algt = container_of(alg, struct meson_alg_template, alg.skcipher);
 	op->mc = algt->mc;
 
-	sktfm->reqsize = sizeof(struct meson_cipher_req_ctx);
-
-	op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+	op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(op->fallback_tfm)) {
 		dev_err(op->mc->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
 			name, PTR_ERR(op->fallback_tfm));
 		return PTR_ERR(op->fallback_tfm);
 	}
 
+	sktfm->reqsize = sizeof(struct meson_cipher_req_ctx) +
+			 crypto_skcipher_reqsize(op->fallback_tfm);
+
 	op->enginectx.op.do_one_request = meson_handle_cipher_request;
 	op->enginectx.op.prepare_request = NULL;
 	op->enginectx.op.unprepare_request = NULL;
@@ -345,7 +344,7 @@ void meson_cipher_exit(struct crypto_tfm *tfm)
 		memzero_explicit(op->key, op->keylen);
 		kfree(op->key);
 	}
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 }
@@ -377,5 +376,5 @@ int meson_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	if (!op->key)
 		return -ENOMEM;
 
-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
diff --git a/drivers/crypto/amlogic/amlogic-gxl.h b/drivers/crypto/amlogic/amlogic-gxl.h
index b7f2de91ab76..dc0f142324a3 100644
--- a/drivers/crypto/amlogic/amlogic-gxl.h
+++ b/drivers/crypto/amlogic/amlogic-gxl.h
@@ -109,6 +109,7 @@ struct meson_dev {
 struct meson_cipher_req_ctx {
 	u32 op_dir;
 	int flow;
+	struct skcipher_request fallback_req;	// keep at the end
 };
 
 /*
@@ -126,7 +127,7 @@ struct meson_cipher_tfm_ctx {
 	u32 keylen;
 	u32 keymode;
 	struct meson_dev *mc;
-	struct crypto_sync_skcipher *fallback_tfm;
+	struct crypto_skcipher *fallback_tfm;
 };
 
 /*
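
The same conversion recurs in every driver touched by this series, so a
condensed sketch of the resulting pattern may help when reading the
remaining patches. All example_* names are hypothetical; the calls mirror
the amlogic hunks above, and this is an illustration rather than code from
any of the drivers:

#include <crypto/skcipher.h>
#include <crypto/internal/skcipher.h>

struct example_tfm_ctx {
	struct crypto_skcipher *fallback_tfm;	/* may now be asynchronous */
};

struct example_req_ctx {
	u32 op_dir;
	struct skcipher_request fallback_req;	/* keep at the end */
};

static int example_init(struct crypto_skcipher *tfm)
{
	struct example_tfm_ctx *op = crypto_skcipher_ctx(tfm);
	const char *name = crypto_tfm_alg_name(crypto_skcipher_tfm(tfm));

	/* No CRYPTO_ALG_ASYNC in the mask, so asynchronous (e.g. SIMD)
	 * implementations are now eligible to serve as the fallback. */
	op->fallback_tfm = crypto_alloc_skcipher(name, 0,
						 CRYPTO_ALG_NEED_FALLBACK);
	if (IS_ERR(op->fallback_tfm))
		return PTR_ERR(op->fallback_tfm);

	/* Reserve room for the driver's request context plus the
	 * fallback's own request context, which lives in the memory
	 * trailing fallback_req. */
	crypto_skcipher_set_reqsize(tfm, sizeof(struct example_req_ctx) +
				    crypto_skcipher_reqsize(op->fallback_tfm));
	return 0;
}

Because the fallback's context is carved out of the outer request's
trailing memory, no per-call stack allocation (SYNC_SKCIPHER_REQUEST_ON_STACK)
or zeroing (skcipher_request_zero) is needed any longer.
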
Miller" , Maxime Ripard , Chen-Yu Tsai , Tom Lendacky , Ayush Sawal , Vinay Kumar Yadav , Rohit Maheshwari , Shawn Guo , Sascha Hauer , Pengutronix Kernel Team , Fabio Estevam , NXP Linux Team , Jamie Iles , Eric Biggers , Tero Kristo , Matthias Brugger Subject: [PATCH v3 03/13] crypto: omap-aes - permit asynchronous skcipher as fallback Date: Tue, 30 Jun 2020 14:18:57 +0200 Message-Id: <20200630121907.24274-4-ardb@kernel.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200630121907.24274-1-ardb@kernel.org> References: <20200630121907.24274-1-ardb@kernel.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Even though the omap-aes driver implements asynchronous versions of ecb(aes), cbc(aes) and ctr(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens in cases where the request was made from softirq context, while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue, but potentially a security issue as well (due to the fact that table based AES is not time invariant), let's fix this, by allocating an ordinary skcipher as the fallback, and invoke it with the completion routine that was given to the outer request. Signed-off-by: Ard Biesheuvel --- drivers/crypto/omap-aes.c | 35 ++++++++++---------- drivers/crypto/omap-aes.h | 3 +- 2 files changed, 19 insertions(+), 19 deletions(-) diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c index b5aff20c5900..25154b74dcc6 100644 --- a/drivers/crypto/omap-aes.c +++ b/drivers/crypto/omap-aes.c @@ -548,20 +548,18 @@ static int omap_aes_crypt(struct skcipher_request *req, unsigned long mode) !!(mode & FLAGS_CBC)); if (req->cryptlen < aes_fallback_sz) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, NULL, - NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); if (mode & FLAGS_ENCRYPT) - ret = crypto_skcipher_encrypt(subreq); + ret = crypto_skcipher_encrypt(&rctx->fallback_req); else - ret = crypto_skcipher_decrypt(subreq); - - skcipher_request_zero(subreq); + ret = crypto_skcipher_decrypt(&rctx->fallback_req); return ret; } dd = omap_aes_find_dev(rctx); @@ -590,11 +588,11 @@ static int omap_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, memcpy(ctx->key, key, keylen); ctx->keylen = keylen; - crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & + crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); + ret = crypto_skcipher_setkey(ctx->fallback, key, keylen); if (!ret) return 0; @@ -640,15 +638,16 
@@ -640,15 +638,16 @@ static int omap_aes_init_tfm(struct crypto_skcipher *tfm)
 {
 	const char *name = crypto_tfm_alg_name(&tfm->base);
 	struct omap_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_sync_skcipher *blk;
+	struct crypto_skcipher *blk;
 
-	blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+	blk = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(blk))
 		return PTR_ERR(blk);
 
 	ctx->fallback = blk;
 
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct omap_aes_reqctx));
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct omap_aes_reqctx) +
+					 crypto_skcipher_reqsize(blk));
 
 	ctx->enginectx.op.prepare_request = omap_aes_prepare_req;
 	ctx->enginectx.op.unprepare_request = NULL;
@@ -662,7 +661,7 @@ static void omap_aes_exit_tfm(struct crypto_skcipher *tfm)
 	struct omap_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	if (ctx->fallback)
-		crypto_free_sync_skcipher(ctx->fallback);
+		crypto_free_skcipher(ctx->fallback);
 
 	ctx->fallback = NULL;
 }
diff --git a/drivers/crypto/omap-aes.h b/drivers/crypto/omap-aes.h
index 2d111bf906e1..23d073e87bb8 100644
--- a/drivers/crypto/omap-aes.h
+++ b/drivers/crypto/omap-aes.h
@@ -97,7 +97,7 @@ struct omap_aes_ctx {
 	int		keylen;
 	u32		key[AES_KEYSIZE_256 / sizeof(u32)];
 	u8		nonce[4];
-	struct crypto_sync_skcipher	*fallback;
+	struct crypto_skcipher	*fallback;
 };
 
 struct omap_aes_gcm_ctx {
@@ -110,6 +110,7 @@ struct omap_aes_reqctx {
 	unsigned long mode;
 	u8 iv[AES_BLOCK_SIZE];
 	u32 auth_tag[AES_BLOCK_SIZE / sizeof(u32)];
+	struct skcipher_request fallback_req;	// keep at the end
 };
 
 #define OMAP_AES_QUEUE_LENGTH	1
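
The core of the fix in each of these patches is how the outer request's
completion is forwarded. A hypothetical dispatch helper mirroring the
omap-aes hunk above, reusing the example_* types sketched after patch 2
(an assumption for illustration, not omap code):

static int example_crypt(struct skcipher_request *req, bool encrypt)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct example_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
	struct example_req_ctx *rctx = skcipher_request_ctx(req);

	skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm);
	/* Install the caller's own completion routine and context on the
	 * fallback request, so the caller is notified exactly once,
	 * whether the fallback finishes inline or asynchronously. */
	skcipher_request_set_callback(&rctx->fallback_req, req->base.flags,
				      req->base.complete, req->base.data);
	skcipher_request_set_crypt(&rctx->fallback_req, req->src, req->dst,
				   req->cryptlen, req->iv);

	return encrypt ? crypto_skcipher_encrypt(&rctx->fallback_req)
		       : crypto_skcipher_decrypt(&rctx->fallback_req);
}

With the caller's callback forwarded like this, encrypt/decrypt may now
legitimately return -EINPROGRESS, which skcipher users are already
required to handle.
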
From patchwork Tue Jun 30 12:18:58 2020
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 04/13] crypto: sun4i - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:18:58 +0200
Message-Id: <20200630121907.24274-5-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the sun4i driver implements asynchronous versions of ecb(aes)
and cbc(aes), the fallbacks it allocates are required to be synchronous.
Given that SIMD-based software implementations are usually asynchronous
as well, even though they rarely complete asynchronously (this typically
only happens when the request was made from softirq context while SIMD
was already in use in the task context that it interrupted), these
implementations are disregarded, and either the generic C version or
another table-based version implemented in assembler is selected instead.

Since falling back to synchronous AES is not only a performance issue but
potentially a security issue as well (table-based AES is not time
invariant), let's fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.

Signed-off-by: Ard Biesheuvel
Tested-by: Corentin Labbe
---
 drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c | 46 ++++++++++----------
 drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h        |  3 +-
 2 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c
index 7f22d305178e..b72de8939497 100644
--- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c
+++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c
@@ -122,19 +122,17 @@ static int noinline_for_stack sun4i_ss_cipher_poll_fallback(struct skcipher_requ
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
 	struct sun4i_tfm_ctx *op = crypto_skcipher_ctx(tfm);
 	struct sun4i_cipher_req_ctx *ctx = skcipher_request_ctx(areq);
-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, op->fallback_tfm);
 	int err;
 
-	skcipher_request_set_sync_tfm(subreq, op->fallback_tfm);
-	skcipher_request_set_callback(subreq, areq->base.flags, NULL,
-				      NULL);
-	skcipher_request_set_crypt(subreq, areq->src, areq->dst,
+	skcipher_request_set_tfm(&ctx->fallback_req, op->fallback_tfm);
+	skcipher_request_set_callback(&ctx->fallback_req, areq->base.flags,
+				      areq->base.complete, areq->base.data);
+	skcipher_request_set_crypt(&ctx->fallback_req, areq->src, areq->dst,
 				   areq->cryptlen, areq->iv);
 	if (ctx->mode & SS_DECRYPTION)
-		err = crypto_skcipher_decrypt(subreq);
+		err = crypto_skcipher_decrypt(&ctx->fallback_req);
 	else
-		err = crypto_skcipher_encrypt(subreq);
-	skcipher_request_zero(subreq);
+		err = crypto_skcipher_encrypt(&ctx->fallback_req);
 
 	return err;
 }
@@ -494,23 +492,25 @@ int sun4i_ss_cipher_init(struct crypto_tfm *tfm)
 				    alg.crypto.base);
 	op->ss = algt->ss;
 
-	crypto_skcipher_set_reqsize(__crypto_skcipher_cast(tfm),
-				    sizeof(struct sun4i_cipher_req_ctx));
-
-	op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+	op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(op->fallback_tfm)) {
 		dev_err(op->ss->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
 			name, PTR_ERR(op->fallback_tfm));
 		return PTR_ERR(op->fallback_tfm);
 	}
 
+	crypto_skcipher_set_reqsize(__crypto_skcipher_cast(tfm),
+				    sizeof(struct sun4i_cipher_req_ctx) +
+				    crypto_skcipher_reqsize(op->fallback_tfm));
+
+
 	err = pm_runtime_get_sync(op->ss->dev);
 	if (err < 0)
 		goto error_pm;
 
 	return 0;
 error_pm:
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	return err;
 }
@@ -518,7 +518,7 @@ void sun4i_ss_cipher_exit(struct crypto_tfm *tfm)
 {
 	struct sun4i_tfm_ctx *op = crypto_tfm_ctx(tfm);
 
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	pm_runtime_put(op->ss->dev);
 }
@@ -546,10 +546,10 @@ int sun4i_ss_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	op->keylen = keylen;
 	memcpy(op->key, key, keylen);
 
-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
 
 /* check and set the DES key, prepare the mode to be used */
@@ -566,10 +566,10 @@ int sun4i_ss_des_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	op->keylen = keylen;
 	memcpy(op->key, key, keylen);
 
-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
 
 /* check and set the 3DES key, prepare the mode to be used */
@@ -586,9 +586,9 @@ int sun4i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	op->keylen = keylen;
 	memcpy(op->key, key, keylen);
 
-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h
index 2b4c6333eb67..163962f9e284 100644
--- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h
+++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h
@@ -170,11 +170,12 @@ struct sun4i_tfm_ctx {
 	u32 keylen;
 	u32 keymode;
 	struct sun4i_ss_ctx *ss;
-	struct crypto_sync_skcipher *fallback_tfm;
+	struct crypto_skcipher *fallback_tfm;
 };
 
 struct sun4i_cipher_req_ctx {
 	u32 mode;
+	struct skcipher_request fallback_req;	// keep at the end
 };
 
 struct sun4i_req_ctx {
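
The keying side of the pattern, condensed from the three sun4i setkey
hunks above into a hypothetical helper (example_* names are assumptions,
reusing the types sketched after patch 2):

static int example_setkey(struct crypto_skcipher *tfm, const u8 *key,
			  unsigned int keylen)
{
	struct example_tfm_ctx *op = crypto_skcipher_ctx(tfm);

	/* Propagate the request-scoped flags (CRYPTO_TFM_REQ_MASK, e.g.
	 * weak-key policy) from the outer tfm to the fallback before
	 * keying it, exactly as the sync variants did. */
	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
	crypto_skcipher_set_flags(op->fallback_tfm,
				  tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);

	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
}

Only the crypto_sync_skcipher_* wrappers change to their plain
crypto_skcipher_* counterparts; the flag handling itself is unchanged.
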
From patchwork Tue Jun 30 12:18:59 2020
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 05/13] crypto: sun8i-ce - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:18:59 +0200
Message-Id: <20200630121907.24274-6-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the sun8i-ce driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD-based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens when the request was made from softirq
context while SIMD was already in use in the task context that it
interrupted), these implementations are disregarded, and either the
generic C version or another table-based version implemented in assembler
is selected instead.
Since falling back to synchronous AES is not only a performance issue but
potentially a security issue as well (table-based AES is not time
invariant), let's fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.

Signed-off-by: Ard Biesheuvel
Acked-by: Corentin Labbe
---
 drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c | 41 ++++++++++----------
 drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h        |  3 +-
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
index a6abb701bfc6..82c99da24dfd 100644
--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
@@ -58,23 +58,20 @@ static int sun8i_ce_cipher_fallback(struct skcipher_request *areq)
 #ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 	struct sun8i_ce_alg_template *algt;
-#endif
-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, op->fallback_tfm);
-#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
 	algt = container_of(alg, struct sun8i_ce_alg_template, alg.skcipher);
 	algt->stat_fb++;
 #endif
 
-	skcipher_request_set_sync_tfm(subreq, op->fallback_tfm);
-	skcipher_request_set_callback(subreq, areq->base.flags, NULL, NULL);
-	skcipher_request_set_crypt(subreq, areq->src, areq->dst,
+	skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
+	skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
+				      areq->base.complete, areq->base.data);
+	skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
 				   areq->cryptlen, areq->iv);
 	if (rctx->op_dir & CE_DECRYPTION)
-		err = crypto_skcipher_decrypt(subreq);
+		err = crypto_skcipher_decrypt(&rctx->fallback_req);
 	else
-		err = crypto_skcipher_encrypt(subreq);
-	skcipher_request_zero(subreq);
+		err = crypto_skcipher_encrypt(&rctx->fallback_req);
 	return err;
 }
@@ -335,18 +332,20 @@ int sun8i_ce_cipher_init(struct crypto_tfm *tfm)
 	algt = container_of(alg, struct sun8i_ce_alg_template, alg.skcipher);
 	op->ce = algt->ce;
 
-	sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx);
-
-	op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+	op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(op->fallback_tfm)) {
 		dev_err(op->ce->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
 			name, PTR_ERR(op->fallback_tfm));
 		return PTR_ERR(op->fallback_tfm);
 	}
 
+	sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx) +
+			 crypto_skcipher_reqsize(op->fallback_tfm);
+
+
 	dev_info(op->ce->dev, "Fallback for %s is %s\n",
 		 crypto_tfm_alg_driver_name(&sktfm->base),
-		 crypto_tfm_alg_driver_name(crypto_skcipher_tfm(&op->fallback_tfm->base)));
+		 crypto_tfm_alg_driver_name(crypto_skcipher_tfm(op->fallback_tfm)));
 
 	op->enginectx.op.do_one_request = sun8i_ce_handle_cipher_request;
 	op->enginectx.op.prepare_request = NULL;
@@ -358,7 +357,7 @@ int sun8i_ce_cipher_init(struct crypto_tfm *tfm)
 	return 0;
 error_pm:
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	return err;
 }
@@ -370,7 +369,7 @@ void sun8i_ce_cipher_exit(struct crypto_tfm *tfm)
 		memzero_explicit(op->key, op->keylen);
 		kfree(op->key);
 	}
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	pm_runtime_put_sync_suspend(op->ce->dev);
 }
@@ -400,10 +399,10 @@ int sun8i_ce_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	if (!op->key)
 		return -ENOMEM;
 
-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
 
 int sun8i_ce_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
@@ -425,8 +424,8 @@ int sun8i_ce_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	if (!op->key)
 		return -ENOMEM;
 
-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
index 0e9eac397e1b..4ac0f91e2800 100644
--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
+++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
@@ -187,6 +187,7 @@ struct sun8i_ce_dev {
 struct sun8i_cipher_req_ctx {
 	u32 op_dir;
 	int flow;
+	struct skcipher_request fallback_req;	// keep at the end
 };
 
 /*
@@ -202,7 +203,7 @@ struct sun8i_cipher_tfm_ctx {
 	u32 *key;
 	u32 keylen;
 	struct sun8i_ce_dev *ce;
-	struct crypto_sync_skcipher *fallback_tfm;
+	struct crypto_skcipher *fallback_tfm;
 };
 
 /*
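
The recurring "// keep at the end" comment is load-bearing. A rough
picture of the request layout implied by the reqsize arithmetic above
(an illustration, not code from any driver):

/*
 * After crypto_skcipher_set_reqsize(tfm, sizeof(req_ctx) +
 * crypto_skcipher_reqsize(fallback_tfm)), each outer request looks like:
 *
 *   struct skcipher_request (outer)
 *   +--------------------------------------------+
 *   | driver req ctx (op_dir, flow, ...)         |
 *   | struct skcipher_request fallback_req       |  embedded sub-request
 *   +--------------------------------------------+
 *   | crypto_skcipher_reqsize(fallback_tfm)      |  the fallback's own
 *   | bytes of trailing context                  |  per-request state
 *   +--------------------------------------------+
 *
 * The fallback transform stores its per-request state in the memory that
 * immediately follows fallback_req, which is why fallback_req must be
 * the last member of the driver's request context: any field placed
 * after it would be clobbered by the fallback's context data.
 */
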
From patchwork Tue Jun 30 12:19:00 2020
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 06/13] crypto: sun8i-ss - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:19:00 +0200
Message-Id: <20200630121907.24274-7-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the sun8i-ss driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD-based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens when the request was made from softirq
context while SIMD was already in use in the task context that it
interrupted), these implementations are disregarded, and either the
generic C version or another table-based version implemented in assembler
is selected instead.

Since falling back to synchronous AES is not only a performance issue but
potentially a security issue as well (table-based AES is not time
invariant), let's fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.
Signed-off-by: Ard Biesheuvel
Acked-by: Corentin Labbe
Tested-by: Corentin Labbe
---
 drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c | 39 ++++++++++----------
 drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h        |  3 +-
 2 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
index c89cb2ee2496..7a131675a41c 100644
--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
@@ -73,7 +73,6 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq)
 	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
 	int err;
 
-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, op->fallback_tfm);
 #ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 	struct sun8i_ss_alg_template *algt;
@@ -81,15 +80,15 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq)
 	algt = container_of(alg, struct sun8i_ss_alg_template, alg.skcipher);
 	algt->stat_fb++;
 #endif
-	skcipher_request_set_sync_tfm(subreq, op->fallback_tfm);
-	skcipher_request_set_callback(subreq, areq->base.flags, NULL, NULL);
-	skcipher_request_set_crypt(subreq, areq->src, areq->dst,
+	skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
+	skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
+				      areq->base.complete, areq->base.data);
+	skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
 				   areq->cryptlen, areq->iv);
 	if (rctx->op_dir & SS_DECRYPTION)
-		err = crypto_skcipher_decrypt(subreq);
+		err = crypto_skcipher_decrypt(&rctx->fallback_req);
 	else
-		err = crypto_skcipher_encrypt(subreq);
-	skcipher_request_zero(subreq);
+		err = crypto_skcipher_encrypt(&rctx->fallback_req);
 	return err;
 }
@@ -334,18 +333,20 @@ int sun8i_ss_cipher_init(struct crypto_tfm *tfm)
 	algt = container_of(alg, struct sun8i_ss_alg_template, alg.skcipher);
 	op->ss = algt->ss;
 
-	sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx);
-
-	op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+	op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(op->fallback_tfm)) {
 		dev_err(op->ss->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
 			name, PTR_ERR(op->fallback_tfm));
 		return PTR_ERR(op->fallback_tfm);
 	}
 
+	sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx) +
+			 crypto_skcipher_reqsize(op->fallback_tfm);
+
+
 	dev_info(op->ss->dev, "Fallback for %s is %s\n",
 		 crypto_tfm_alg_driver_name(&sktfm->base),
-		 crypto_tfm_alg_driver_name(crypto_skcipher_tfm(&op->fallback_tfm->base)));
+		 crypto_tfm_alg_driver_name(crypto_skcipher_tfm(op->fallback_tfm)));
 
 	op->enginectx.op.do_one_request = sun8i_ss_handle_cipher_request;
 	op->enginectx.op.prepare_request = NULL;
@@ -359,7 +360,7 @@ int sun8i_ss_cipher_init(struct crypto_tfm *tfm)
 	return 0;
 error_pm:
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	return err;
 }
@@ -371,7 +372,7 @@ void sun8i_ss_cipher_exit(struct crypto_tfm *tfm)
 		memzero_explicit(op->key, op->keylen);
 		kfree(op->key);
 	}
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	pm_runtime_put_sync(op->ss->dev);
 }
@@ -401,10 +402,10 @@ int sun8i_ss_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	if (!op->key)
 		return -ENOMEM;
 
-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
 
 int sun8i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
@@ -427,8 +428,8 @@ int sun8i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	if (!op->key)
 		return -ENOMEM;
 
-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
index 29c44f279112..42658b134228 100644
--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
@@ -159,6 +159,7 @@ struct sun8i_cipher_req_ctx {
 	unsigned int ivlen;
 	unsigned int keylen;
 	void *biv;
+	struct skcipher_request fallback_req;	// keep at the end
 };
 
 /*
@@ -174,7 +175,7 @@ struct sun8i_cipher_tfm_ctx {
 	u32 *key;
 	u32 keylen;
 	struct sun8i_ss_dev *ss;
-	struct crypto_sync_skcipher *fallback_tfm;
+	struct crypto_skcipher *fallback_tfm;
 };
 
 /*
From patchwork Tue Jun 30 12:19:01 2020
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 07/13] crypto: ccp - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:19:01 +0200
Message-Id: <20200630121907.24274-8-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the ccp driver implements an asynchronous version of
xts(aes), the fallback it allocates is required to be synchronous. Given
that SIMD-based software implementations are usually asynchronous as
well, even though they rarely complete asynchronously (this typically
only happens when the request was made from softirq context while SIMD
was already in use in the task context that it interrupted), these
implementations are disregarded, and either the generic C version or
another table-based version implemented in assembler is selected instead.

Since falling back to synchronous AES is not only a performance issue but
potentially a security issue as well (table-based AES is not time
invariant), let's fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.

Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/ccp/ccp-crypto-aes-xts.c | 33 ++++++++++----------
 drivers/crypto/ccp/ccp-crypto.h         |  4 ++-
 2 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes-xts.c b/drivers/crypto/ccp/ccp-crypto-aes-xts.c
index 04b2517df955..959168a7ac59 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-xts.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-xts.c
@@ -98,7 +98,7 @@ static int ccp_aes_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	ctx->u.aes.key_len = key_len / 2;
 	sg_init_one(&ctx->u.aes.key_sg, ctx->u.aes.key, key_len);
 
-	return crypto_sync_skcipher_setkey(ctx->u.aes.tfm_skcipher, key, key_len);
+	return crypto_skcipher_setkey(ctx->u.aes.tfm_skcipher, key, key_len);
 }
 
 static int ccp_aes_xts_crypt(struct skcipher_request *req,
@@ -145,20 +145,19 @@ static int ccp_aes_xts_crypt(struct skcipher_request *req,
 	    (ctx->u.aes.key_len != AES_KEYSIZE_256))
 		fallback = 1;
 	if (fallback) {
-		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq,
-					       ctx->u.aes.tfm_skcipher);
-
 		/* Use the fallback to process the request for any
 		 * unsupported unit sizes or key sizes
 		 */
-		skcipher_request_set_sync_tfm(subreq, ctx->u.aes.tfm_skcipher);
-		skcipher_request_set_callback(subreq, req->base.flags,
-					      NULL, NULL);
-		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->cryptlen, req->iv);
-		ret = encrypt ? crypto_skcipher_encrypt(subreq) :
-				crypto_skcipher_decrypt(subreq);
-		skcipher_request_zero(subreq);
+		skcipher_request_set_tfm(&rctx->fallback_req,
+					 ctx->u.aes.tfm_skcipher);
+		skcipher_request_set_callback(&rctx->fallback_req,
+					      req->base.flags,
+					      req->base.complete,
+					      req->base.data);
+		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+					   req->dst, req->cryptlen, req->iv);
+		ret = encrypt ? crypto_skcipher_encrypt(&rctx->fallback_req) :
+				crypto_skcipher_decrypt(&rctx->fallback_req);
 		return ret;
 	}
@@ -198,13 +197,12 @@ static int ccp_aes_xts_init_tfm(struct crypto_skcipher *tfm)
 {
 	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_sync_skcipher *fallback_tfm;
+	struct crypto_skcipher *fallback_tfm;
 
 	ctx->complete = ccp_aes_xts_complete;
 	ctx->u.aes.key_len = 0;
 
-	fallback_tfm = crypto_alloc_sync_skcipher("xts(aes)", 0,
-						  CRYPTO_ALG_ASYNC |
+	fallback_tfm = crypto_alloc_skcipher("xts(aes)", 0,
 					     CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(fallback_tfm)) {
 		pr_warn("could not load fallback driver xts(aes)\n");
 		return PTR_ERR(fallback_tfm);
 	}
 	ctx->u.aes.tfm_skcipher = fallback_tfm;
 
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx));
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx) +
+					 crypto_skcipher_reqsize(fallback_tfm));
 
 	return 0;
 }
@@ -221,7 +220,7 @@ static void ccp_aes_xts_exit_tfm(struct crypto_skcipher *tfm)
 {
 	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	crypto_free_sync_skcipher(ctx->u.aes.tfm_skcipher);
+	crypto_free_skcipher(ctx->u.aes.tfm_skcipher);
 }
 
 static int ccp_register_aes_xts_alg(struct list_head *head,
diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
index 90a009e6b5c1..aed3d2192d01 100644
--- a/drivers/crypto/ccp/ccp-crypto.h
+++ b/drivers/crypto/ccp/ccp-crypto.h
@@ -89,7 +89,7 @@ static inline struct ccp_crypto_ahash_alg *
 /***** AES related defines *****/
 struct ccp_aes_ctx {
 	/* Fallback cipher for XTS with unsupported unit sizes */
-	struct crypto_sync_skcipher *tfm_skcipher;
+	struct crypto_skcipher *tfm_skcipher;
 
 	enum ccp_engine engine;
 	enum ccp_aes_type type;
@@ -121,6 +121,8 @@ struct ccp_aes_req_ctx {
 	u8 rfc3686_iv[AES_BLOCK_SIZE];
 
 	struct ccp_cmd cmd;
+
+	struct skcipher_request fallback_req;	// keep at the end
 };
 
 struct ccp_aes_cmac_req_ctx {
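
The essential enabler in this patch is the change to the allocation mask.
A minimal sketch, condensed from the init_tfm hunk above (the wrapper
function name is hypothetical):

#include <crypto/skcipher.h>

static struct crypto_skcipher *example_alloc_xts_fallback(void)
{
	/* The old code put CRYPTO_ALG_ASYNC in the mask (implicitly, via
	 * crypto_alloc_sync_skcipher), which with a zero type demands an
	 * implementation whose ASYNC flag is clear, i.e. a synchronous
	 * one.  Dropping CRYPTO_ALG_ASYNC from the mask lets asynchronous
	 * (e.g. SIMD-backed) xts(aes) implementations qualify as well. */
	return crypto_alloc_skcipher("xts(aes)", 0, CRYPTO_ALG_NEED_FALLBACK);
}
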
From patchwork Tue Jun 30 12:19:02 2020
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 08/13] crypto: chelsio - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:19:02 +0200
Message-Id: <20200630121907.24274-9-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the chelsio driver implements asynchronous versions of
cbc(aes) and xts(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD-based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens when the request was made from softirq
context while SIMD was already in use in the task context that it
interrupted), these implementations are disregarded, and either the
generic C version or another table-based version implemented in assembler
is selected instead.

Since falling back to synchronous AES is not only a performance issue but
potentially a security issue as well (table-based AES is not time
invariant), let's fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.

Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/chelsio/chcr_algo.c   | 57 ++++++++------------
 drivers/crypto/chelsio/chcr_crypto.h |  3 +-
 2 files changed, 25 insertions(+), 35 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 4c2553672b6f..a6625b90fb1a 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -690,26 +690,22 @@ static int chcr_sg_ent_in_wr(struct scatterlist *src,
 	return min(srclen, dstlen);
 }
 
-static int chcr_cipher_fallback(struct crypto_sync_skcipher *cipher,
-				u32 flags,
-				struct scatterlist *src,
-				struct scatterlist *dst,
-				unsigned int nbytes,
+static int chcr_cipher_fallback(struct crypto_skcipher *cipher,
+				struct skcipher_request *req,
 				u8 *iv,
 				unsigned short op_type)
 {
+	struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req);
 	int err;
 
-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, cipher);
-
-	skcipher_request_set_sync_tfm(subreq, cipher);
-	skcipher_request_set_callback(subreq, flags, NULL, NULL);
-	skcipher_request_set_crypt(subreq, src, dst,
-				   nbytes, iv);
+	skcipher_request_set_tfm(&reqctx->fallback_req, cipher);
+	skcipher_request_set_callback(&reqctx->fallback_req, req->base.flags,
+				      req->base.complete, req->base.data);
+	skcipher_request_set_crypt(&reqctx->fallback_req, req->src, req->dst,
+				   req->cryptlen, iv);
 
-	err = op_type ? crypto_skcipher_decrypt(subreq) :
-		crypto_skcipher_encrypt(subreq);
-	skcipher_request_zero(subreq);
+	err = op_type ? crypto_skcipher_decrypt(&reqctx->fallback_req) :
+		crypto_skcipher_encrypt(&reqctx->fallback_req);
 
 	return err;
crypto_skcipher_decrypt(subreq) : - crypto_skcipher_encrypt(subreq); - skcipher_request_zero(subreq); + err = op_type ? crypto_skcipher_decrypt(&reqctx->fallback_req) : + crypto_skcipher_encrypt(&reqctx->fallback_req); return err; @@ -924,11 +920,11 @@ static int chcr_cipher_fallback_setkey(struct crypto_skcipher *cipher, { struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(cipher)); - crypto_sync_skcipher_clear_flags(ablkctx->sw_cipher, + crypto_skcipher_clear_flags(ablkctx->sw_cipher, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(ablkctx->sw_cipher, + crypto_skcipher_set_flags(ablkctx->sw_cipher, cipher->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(ablkctx->sw_cipher, key, keylen); + return crypto_skcipher_setkey(ablkctx->sw_cipher, key, keylen); } static int chcr_aes_cbc_setkey(struct crypto_skcipher *cipher, @@ -1206,13 +1202,8 @@ static int chcr_handle_cipher_resp(struct skcipher_request *req, req); memcpy(req->iv, reqctx->init_iv, IV); atomic_inc(&adap->chcr_stats.fallback); - err = chcr_cipher_fallback(ablkctx->sw_cipher, - req->base.flags, - req->src, - req->dst, - req->cryptlen, - req->iv, - reqctx->op); + err = chcr_cipher_fallback(ablkctx->sw_cipher, req, req->iv, + reqctx->op); goto complete; } @@ -1341,11 +1332,7 @@ static int process_cipher(struct skcipher_request *req, chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, req); fallback: atomic_inc(&adap->chcr_stats.fallback); - err = chcr_cipher_fallback(ablkctx->sw_cipher, - req->base.flags, - req->src, - req->dst, - req->cryptlen, + err = chcr_cipher_fallback(ablkctx->sw_cipher, req, subtype == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686 ? reqctx->iv : req->iv, @@ -1486,14 +1473,15 @@ static int chcr_init_tfm(struct crypto_skcipher *tfm) struct chcr_context *ctx = crypto_skcipher_ctx(tfm); struct ablk_ctx *ablkctx = ABLK_CTX(ctx); - ablkctx->sw_cipher = crypto_alloc_sync_skcipher(alg->base.cra_name, 0, + ablkctx->sw_cipher = crypto_alloc_skcipher(alg->base.cra_name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ablkctx->sw_cipher)) { pr_err("failed to allocate fallback for %s\n", alg->base.cra_name); return PTR_ERR(ablkctx->sw_cipher); } init_completion(&ctx->cbc_aes_aio_done); - crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx) + + crypto_skcipher_reqsize(ablkctx->sw_cipher)); return chcr_device_init(ctx); } @@ -1507,13 +1495,14 @@ static int chcr_rfc3686_init(struct crypto_skcipher *tfm) /*RFC3686 initialises IV counter value to 1, rfc3686(ctr(aes)) * cannot be used as fallback in chcr_handle_cipher_response */ - ablkctx->sw_cipher = crypto_alloc_sync_skcipher("ctr(aes)", 0, + ablkctx->sw_cipher = crypto_alloc_skcipher("ctr(aes)", 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ablkctx->sw_cipher)) { pr_err("failed to allocate fallback for %s\n", alg->base.cra_name); return PTR_ERR(ablkctx->sw_cipher); } - crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx) + + crypto_skcipher_reqsize(ablkctx->sw_cipher)); return chcr_device_init(ctx); } @@ -1523,7 +1512,7 @@ static void chcr_exit_tfm(struct crypto_skcipher *tfm) struct chcr_context *ctx = crypto_skcipher_ctx(tfm); struct ablk_ctx *ablkctx = ABLK_CTX(ctx); - crypto_free_sync_skcipher(ablkctx->sw_cipher); + crypto_free_skcipher(ablkctx->sw_cipher); } static int get_alg_config(struct algo_param *params, diff --git a/drivers/crypto/chelsio/chcr_crypto.h 
b/drivers/crypto/chelsio/chcr_crypto.h index b3fdbdc25acb..55a6631cdbee 100644 --- a/drivers/crypto/chelsio/chcr_crypto.h +++ b/drivers/crypto/chelsio/chcr_crypto.h @@ -171,7 +171,7 @@ static inline struct chcr_context *h_ctx(struct crypto_ahash *tfm) } struct ablk_ctx { - struct crypto_sync_skcipher *sw_cipher; + struct crypto_skcipher *sw_cipher; __be32 key_ctx_hdr; unsigned int enckey_len; unsigned char ciph_mode; @@ -305,6 +305,7 @@ struct chcr_skcipher_req_ctx { u8 init_iv[CHCR_MAX_CRYPTO_IV_LEN]; u16 txqidx; u16 rxqidx; + struct skcipher_request fallback_req; // keep at the end }; struct chcr_alg_template {

From patchwork Tue Jun 30 12:19:03 2020
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 09/13] crypto: mxs-dcp - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:19:03 +0200
Message-Id: <20200630121907.24274-10-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the mxs-dcp driver implements asynchronous versions of ecb(aes) and cbc(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD-based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens when the request was made from softirq context while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table-based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue but potentially a security issue as well (table-based AES is not time invariant), let's fix this by allocating an ordinary skcipher as the fallback, and invoking it with the completion routine that was given to the outer request.

Signed-off-by: Ard Biesheuvel Reviewed-by: Horia Geantă --- drivers/crypto/mxs-dcp.c | 33 ++++++++++---------- 1 file changed, 17 insertions(+), 16 deletions(-) diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c index d84530293036..909a7eb748e3 100644 --- a/drivers/crypto/mxs-dcp.c +++ b/drivers/crypto/mxs-dcp.c @@ -97,7 +97,7 @@ struct dcp_async_ctx { unsigned int hot:1; /* Crypto-specific context */ - struct crypto_sync_skcipher *fallback; + struct crypto_skcipher *fallback; unsigned int key_len; uint8_t key[AES_KEYSIZE_128]; }; @@ -105,6 +105,7 @@ struct dcp_async_ctx { struct dcp_aes_req_ctx { unsigned int enc:1; unsigned int ecb:1; + struct skcipher_request fallback_req; // keep at the end }; struct dcp_sha_req_ctx { @@ -426,21 +427,20 @@ static int dcp_chan_thread_aes(void *data) static int mxs_dcp_block_fallback(struct skcipher_request *req, int enc) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req); struct dcp_async_ctx *ctx = crypto_skcipher_ctx(tfm); - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); int ret; - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, req->base.flags, + req->base.complete, req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, req->dst, req->cryptlen, req->iv); if (enc) - ret = crypto_skcipher_encrypt(subreq); + ret = crypto_skcipher_encrypt(&rctx->fallback_req); else - ret = crypto_skcipher_decrypt(subreq); - - skcipher_request_zero(subreq); + ret = crypto_skcipher_decrypt(&rctx->fallback_req); return ret; } @@ -510,24 +510,25 @@ static int mxs_dcp_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, * but is supported by in-kernel
software implementation, we use * software fallback. */ - crypto_sync_skcipher_clear_flags(actx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(actx->fallback, + crypto_skcipher_clear_flags(actx->fallback, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(actx->fallback, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(actx->fallback, key, len); + return crypto_skcipher_setkey(actx->fallback, key, len); } static int mxs_dcp_aes_fallback_init_tfm(struct crypto_skcipher *tfm) { const char *name = crypto_tfm_alg_name(crypto_skcipher_tfm(tfm)); struct dcp_async_ctx *actx = crypto_skcipher_ctx(tfm); - struct crypto_sync_skcipher *blk; + struct crypto_skcipher *blk; - blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); + blk = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(blk)) return PTR_ERR(blk); actx->fallback = blk; - crypto_skcipher_set_reqsize(tfm, sizeof(struct dcp_aes_req_ctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct dcp_aes_req_ctx) + + crypto_skcipher_reqsize(blk)); return 0; } @@ -535,7 +536,7 @@ static void mxs_dcp_aes_fallback_exit_tfm(struct crypto_skcipher *tfm) { struct dcp_async_ctx *actx = crypto_skcipher_ctx(tfm); - crypto_free_sync_skcipher(actx->fallback); + crypto_free_skcipher(actx->fallback); } /*

From patchwork Tue Jun 30 12:19:04 2020
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 10/13] crypto: picoxcell - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:19:04 +0200
Message-Id: <20200630121907.24274-11-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the picoxcell driver implements asynchronous versions of ecb(aes) and cbc(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD-based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens when the request was made from softirq context while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table-based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue but potentially a security issue as well (table-based AES is not time invariant), let's fix this by allocating an ordinary skcipher as the fallback, and invoking it with the completion routine that was given to the outer request.

Signed-off-by: Ard Biesheuvel Reviewed-by: Jamie Iles --- drivers/crypto/picoxcell_crypto.c | 38 +++++++++++--------- 1 file changed, 22 insertions(+), 16 deletions(-) diff --git a/drivers/crypto/picoxcell_crypto.c b/drivers/crypto/picoxcell_crypto.c index 7384e91c8b32..13503e16ce1d 100644 --- a/drivers/crypto/picoxcell_crypto.c +++ b/drivers/crypto/picoxcell_crypto.c @@ -86,6 +86,7 @@ struct spacc_req { dma_addr_t src_addr, dst_addr; struct spacc_ddt *src_ddt, *dst_ddt; void (*complete)(struct spacc_req *req); + struct skcipher_request fallback_req; // keep at the end }; struct spacc_aead { @@ -158,7 +159,7 @@ struct spacc_ablk_ctx { * The fallback cipher. If the operation can't be done in hardware, * fallback to a software version. */ - struct crypto_sync_skcipher *sw_cipher; + struct crypto_skcipher *sw_cipher; }; /* AEAD cipher context. */ @@ -792,13 +793,13 @@ static int spacc_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, * Set the fallback transform to use the same request flags as * the hardware transform.
*/ - crypto_sync_skcipher_clear_flags(ctx->sw_cipher, + crypto_skcipher_clear_flags(ctx->sw_cipher, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(ctx->sw_cipher, + crypto_skcipher_set_flags(ctx->sw_cipher, cipher->base.crt_flags & CRYPTO_TFM_REQ_MASK); - err = crypto_sync_skcipher_setkey(ctx->sw_cipher, key, len); + err = crypto_skcipher_setkey(ctx->sw_cipher, key, len); if (err) goto sw_setkey_failed; } @@ -900,7 +901,7 @@ static int spacc_ablk_do_fallback(struct skcipher_request *req, struct crypto_tfm *old_tfm = crypto_skcipher_tfm(crypto_skcipher_reqtfm(req)); struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(old_tfm); - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher); + struct spacc_req *dev_req = skcipher_request_ctx(req); int err; /* @@ -908,13 +909,13 @@ static int spacc_ablk_do_fallback(struct skcipher_request *req, * the ciphering has completed, put the old transform back into the * request. */ - skcipher_request_set_sync_tfm(subreq, ctx->sw_cipher); - skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, + skcipher_request_set_tfm(&dev_req->fallback_req, ctx->sw_cipher); + skcipher_request_set_callback(&dev_req->fallback_req, req->base.flags, + req->base.complete, req->base.data); + skcipher_request_set_crypt(&dev_req->fallback_req, req->src, req->dst, req->cryptlen, req->iv); - err = is_encrypt ? crypto_skcipher_encrypt(subreq) : - crypto_skcipher_decrypt(subreq); - skcipher_request_zero(subreq); + err = is_encrypt ? crypto_skcipher_encrypt(&dev_req->fallback_req) : + crypto_skcipher_decrypt(&dev_req->fallback_req); return err; } @@ -1007,19 +1008,24 @@ static int spacc_ablk_init_tfm(struct crypto_skcipher *tfm) ctx->generic.flags = spacc_alg->type; ctx->generic.engine = engine; if (alg->base.cra_flags & CRYPTO_ALG_NEED_FALLBACK) { - ctx->sw_cipher = crypto_alloc_sync_skcipher( - alg->base.cra_name, 0, CRYPTO_ALG_NEED_FALLBACK); + ctx->sw_cipher = crypto_alloc_skcipher(alg->base.cra_name, 0, + CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ctx->sw_cipher)) { dev_warn(engine->dev, "failed to allocate fallback for %s\n", alg->base.cra_name); return PTR_ERR(ctx->sw_cipher); } + crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req) + + crypto_skcipher_reqsize(ctx->sw_cipher)); + } else { + /* take the size without the fallback skcipher_request at the end */ + crypto_skcipher_set_reqsize(tfm, offsetof(struct spacc_req, + fallback_req)); } + ctx->generic.key_offs = spacc_alg->key_offs; ctx->generic.iv_offs = spacc_alg->iv_offs; - crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req)); - return 0; } @@ -1027,7 +1033,7 @@ static void spacc_ablk_exit_tfm(struct crypto_skcipher *tfm) { struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(tfm); - crypto_free_sync_skcipher(ctx->sw_cipher); + crypto_free_skcipher(ctx->sw_cipher); } static int spacc_ablk_encrypt(struct skcipher_request *req)

From patchwork Tue Jun 30 12:19:05 2020
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 11/13] crypto: qce - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:19:05 +0200
Message-Id: <20200630121907.24274-12-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the qce driver implements asynchronous versions of ecb(aes), cbc(aes) and xts(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD-based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens when the request was made from softirq context while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table-based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue but potentially a security issue as well (table-based AES is not time invariant), let's fix this by allocating an ordinary skcipher as the fallback, and invoking it with the completion routine that was given to the outer request. While at it, remove the pointless memset() from qce_skcipher_init(), and remove the unnecessary call to it from qce_skcipher_init_fallback().
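The skcipher-fallback conversions in this series all follow one pattern, which can be condensed into a short, self-contained sketch. The mydrv_* names and both structs below are invented for illustration; only the reqsize arithmetic and the forwarding of the outer request's completion mirror the diffs in this series.

#include <crypto/skcipher.h>
#include <linux/err.h>

struct mydrv_ctx {
	struct crypto_skcipher *fallback;
};

struct mydrv_req_ctx {
	/* driver-private per-request state would live here */
	struct skcipher_request fallback_req;	/* keep at the end */
};

static int mydrv_init_tfm(struct crypto_skcipher *tfm)
{
	struct mydrv_ctx *ctx = crypto_skcipher_ctx(tfm);

	/* no CRYPTO_ALG_ASYNC in the mask: an async fallback is fine now */
	ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(&tfm->base),
					      0, CRYPTO_ALG_NEED_FALLBACK);
	if (IS_ERR(ctx->fallback))
		return PTR_ERR(ctx->fallback);

	/* reserve room for the fallback's request context behind our own */
	crypto_skcipher_set_reqsize(tfm, sizeof(struct mydrv_req_ctx) +
					 crypto_skcipher_reqsize(ctx->fallback));
	return 0;
}

static int mydrv_fallback_encrypt(struct skcipher_request *req)
{
	struct mydrv_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
	struct mydrv_req_ctx *rctx = skcipher_request_ctx(req);

	skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
	/* forward the outer completion; no on-stack request, no waiting */
	skcipher_request_set_callback(&rctx->fallback_req, req->base.flags,
				      req->base.complete, req->base.data);
	skcipher_request_set_crypt(&rctx->fallback_req, req->src, req->dst,
				   req->cryptlen, req->iv);
	return crypto_skcipher_encrypt(&rctx->fallback_req);
}

static void mydrv_exit_tfm(struct crypto_skcipher *tfm)
{
	struct mydrv_ctx *ctx = crypto_skcipher_ctx(tfm);

	crypto_free_skcipher(ctx->fallback);
}

Because the completion callback is forwarded rather than replaced, an -EINPROGRESS or -EBUSY return from the fallback propagates to the original caller exactly as it would from the hardware path, which is what makes an asynchronous fallback safe here.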
Signed-off-by: Ard Biesheuvel --- drivers/crypto/qce/cipher.h | 3 +- drivers/crypto/qce/skcipher.c | 42 ++++++++++---------- 2 files changed, 24 insertions(+), 21 deletions(-) diff --git a/drivers/crypto/qce/cipher.h b/drivers/crypto/qce/cipher.h index 7770660bc853..cffa9fc628ff 100644 --- a/drivers/crypto/qce/cipher.h +++ b/drivers/crypto/qce/cipher.h @@ -14,7 +14,7 @@ struct qce_cipher_ctx { u8 enc_key[QCE_MAX_KEY_SIZE]; unsigned int enc_keylen; - struct crypto_sync_skcipher *fallback; + struct crypto_skcipher *fallback; }; /** @@ -43,6 +43,7 @@ struct qce_cipher_reqctx { struct sg_table src_tbl; struct scatterlist *src_sg; unsigned int cryptlen; + struct skcipher_request fallback_req; // keep at the end }; static inline struct qce_alg_template *to_cipher_tmpl(struct crypto_skcipher *tfm) diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c index 9412433f3b21..a8147381b774 100644 --- a/drivers/crypto/qce/skcipher.c +++ b/drivers/crypto/qce/skcipher.c @@ -178,7 +178,7 @@ static int qce_skcipher_setkey(struct crypto_skcipher *ablk, const u8 *key, break; } - ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); + ret = crypto_skcipher_setkey(ctx->fallback, key, keylen); if (!ret) ctx->enc_keylen = keylen; return ret; @@ -235,16 +235,15 @@ static int qce_skcipher_crypt(struct skcipher_request *req, int encrypt) req->cryptlen <= aes_sw_max_len) || (IS_XTS(rctx->flags) && req->cryptlen > QCE_SECTOR_SIZE && req->cryptlen % QCE_SECTOR_SIZE))) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - ret = encrypt ? crypto_skcipher_encrypt(subreq) : - crypto_skcipher_decrypt(subreq); - skcipher_request_zero(subreq); + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + ret = encrypt ? 
crypto_skcipher_encrypt(&rctx->fallback_req) : + crypto_skcipher_decrypt(&rctx->fallback_req); return ret; } @@ -263,10 +262,9 @@ static int qce_skcipher_decrypt(struct skcipher_request *req) static int qce_skcipher_init(struct crypto_skcipher *tfm) { - struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); - - memset(ctx, 0, sizeof(*ctx)); - crypto_skcipher_set_reqsize(tfm, sizeof(struct qce_cipher_reqctx)); + /* take the size without the fallback skcipher_request at the end */ + crypto_skcipher_set_reqsize(tfm, offsetof(struct qce_cipher_reqctx, + fallback_req)); return 0; } @@ -274,17 +272,21 @@ static int qce_skcipher_init_fallback(struct crypto_skcipher *tfm) { struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); - qce_skcipher_init(tfm); - ctx->fallback = crypto_alloc_sync_skcipher(crypto_tfm_alg_name(&tfm->base), - 0, CRYPTO_ALG_NEED_FALLBACK); - return PTR_ERR_OR_ZERO(ctx->fallback); + ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(&tfm->base), + 0, CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(ctx->fallback)) + return PTR_ERR(ctx->fallback); + + crypto_skcipher_set_reqsize(tfm, sizeof(struct qce_cipher_reqctx) + + crypto_skcipher_reqsize(ctx->fallback)); + return 0; } static void qce_skcipher_exit(struct crypto_skcipher *tfm) { struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); - crypto_free_sync_skcipher(ctx->fallback); + crypto_free_skcipher(ctx->fallback); } struct qce_skcipher_def {

From patchwork Tue Jun 30 12:19:06 2020
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 12/13] crypto: sahara - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:19:06 +0200
Message-Id: <20200630121907.24274-13-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

Even though the sahara driver implements asynchronous versions of ecb(aes) and cbc(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD-based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens when the request was made from softirq context while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table-based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue but potentially a security issue as well (table-based AES is not time invariant), let's fix this by allocating an ordinary skcipher as the fallback, and invoking it with the completion routine that was given to the outer request.

Signed-off-by: Ard Biesheuvel Reviewed-by: Horia Geantă --- drivers/crypto/sahara.c | 96 +++++++++----------- 1 file changed, 45 insertions(+), 51 deletions(-) diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c index 466e30bd529c..0c8cb23ae708 100644 --- a/drivers/crypto/sahara.c +++ b/drivers/crypto/sahara.c @@ -146,11 +146,12 @@ struct sahara_ctx { /* AES-specific context */ int keylen; u8 key[AES_KEYSIZE_128]; - struct crypto_sync_skcipher *fallback; + struct crypto_skcipher *fallback; }; struct sahara_aes_reqctx { unsigned long mode; + struct skcipher_request fallback_req; // keep at the end }; /* @@ -617,10 +618,10 @@ static int sahara_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, /* * The requested key size is not supported by HW, do a fallback.
*/ - crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & + crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); + return crypto_skcipher_setkey(ctx->fallback, key, keylen); } static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode) @@ -651,21 +652,19 @@ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode) static int sahara_aes_ecb_encrypt(struct skcipher_request *req) { + struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req); struct sahara_ctx *ctx = crypto_skcipher_ctx( crypto_skcipher_reqtfm(req)); - int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - err = crypto_skcipher_encrypt(subreq); - skcipher_request_zero(subreq); - return err; + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + return crypto_skcipher_encrypt(&rctx->fallback_req); } return sahara_aes_crypt(req, FLAGS_ENCRYPT); @@ -673,21 +672,19 @@ static int sahara_aes_ecb_encrypt(struct skcipher_request *req) static int sahara_aes_ecb_decrypt(struct skcipher_request *req) { + struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req); struct sahara_ctx *ctx = crypto_skcipher_ctx( crypto_skcipher_reqtfm(req)); - int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - err = crypto_skcipher_decrypt(subreq); - skcipher_request_zero(subreq); - return err; + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + return crypto_skcipher_decrypt(&rctx->fallback_req); } return sahara_aes_crypt(req, 0); @@ -695,21 +692,19 @@ static int sahara_aes_ecb_decrypt(struct skcipher_request *req) static int sahara_aes_cbc_encrypt(struct skcipher_request *req) { + struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req); struct sahara_ctx *ctx = crypto_skcipher_ctx( crypto_skcipher_reqtfm(req)); - int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - err = crypto_skcipher_encrypt(subreq); - skcipher_request_zero(subreq); - return err; + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + 
req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + return crypto_skcipher_decrypt(&rctx->fallback_req); } return sahara_aes_crypt(req, FLAGS_CBC); @@ -742,14 +735,15 @@ static int sahara_aes_init_tfm(struct crypto_skcipher *tfm) { const char *name = crypto_tfm_alg_name(&tfm->base); struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm); - ctx->fallback = crypto_alloc_sync_skcipher(name, 0, + ctx->fallback = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ctx->fallback)) { pr_err("Error allocating fallback algo %s\n", name); return PTR_ERR(ctx->fallback); } - crypto_skcipher_set_reqsize(tfm, sizeof(struct sahara_aes_reqctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct sahara_aes_reqctx) + + crypto_skcipher_reqsize(ctx->fallback)); return 0; } @@ -758,7 +752,7 @@ static void sahara_aes_exit_tfm(struct crypto_skcipher *tfm) { struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm); - crypto_free_sync_skcipher(ctx->fallback); + crypto_free_skcipher(ctx->fallback); } static u32 sahara_sha_init_hdr(struct sahara_dev *dev,

From patchwork Tue Jun 30 12:19:07 2020
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 13/13] crypto: mediatek - use AES library for GCM key derivation
Date: Tue, 30 Jun 2020 14:19:07 +0200
Message-Id: <20200630121907.24274-14-ardb@kernel.org>
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>

The Mediatek accelerator driver calls into a dynamically allocated skcipher of the ctr(aes) variety to perform GCM key derivation, which involves AES encryption of a single block consisting of NUL bytes. There is no point in using the skcipher API for this, so use the AES library interface instead.

Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 3 +- drivers/crypto/mediatek/mtk-aes.c | 63 +++----------------- 2 files changed, 9 insertions(+), 57 deletions(-) diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 802b9ada4e9e..c8c3ebb248f8 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -756,10 +756,9 @@ config CRYPTO_DEV_ZYNQMP_AES config CRYPTO_DEV_MEDIATEK tristate "MediaTek's EIP97 Cryptographic Engine driver" depends on (ARM && ARCH_MEDIATEK) || COMPILE_TEST - select CRYPTO_AES + select CRYPTO_LIB_AES select CRYPTO_AEAD select CRYPTO_SKCIPHER - select CRYPTO_CTR select CRYPTO_SHA1 select CRYPTO_SHA256 select CRYPTO_SHA512 diff --git a/drivers/crypto/mediatek/mtk-aes.c b/drivers/crypto/mediatek/mtk-aes.c index 78d660d963e2..4ad3571ab6af 100644 --- a/drivers/crypto/mediatek/mtk-aes.c +++ b/drivers/crypto/mediatek/mtk-aes.c @@ -137,8 +137,6 @@ struct mtk_aes_gcm_ctx { u32 authsize; size_t textlen; - - struct crypto_skcipher *ctr; }; struct mtk_aes_drv { @@ -996,17 +994,8 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key, u32 keylen) { struct mtk_aes_base_ctx *ctx = crypto_aead_ctx(aead); - struct mtk_aes_gcm_ctx *gctx = mtk_aes_gcm_ctx_cast(ctx); - struct crypto_skcipher *ctr = gctx->ctr; - struct { - u32 hash[4]; - u8 iv[8]; - - struct crypto_wait wait; - - struct scatterlist sg[1]; - struct skcipher_request req; - } *data; + u8 hash[AES_BLOCK_SIZE] __aligned(4) = {}; + struct crypto_aes_ctx aes_ctx; int err; switch (keylen) { @@ -1026,39 +1015,18 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key, ctx->keylen = SIZE_IN_WORDS(keylen); - /* Same as crypto_gcm_setkey() from crypto/gcm.c */ - crypto_skcipher_clear_flags(ctr, CRYPTO_TFM_REQ_MASK); -
crypto_skcipher_set_flags(ctr, crypto_aead_get_flags(aead) & - CRYPTO_TFM_REQ_MASK); - err = crypto_skcipher_setkey(ctr, key, keylen); + err = aes_expandkey(&aes_ctx, key, keylen); if (err) return err; - data = kzalloc(sizeof(*data) + crypto_skcipher_reqsize(ctr), - GFP_KERNEL); - if (!data) - return -ENOMEM; - - crypto_init_wait(&data->wait); - sg_init_one(data->sg, &data->hash, AES_BLOCK_SIZE); - skcipher_request_set_tfm(&data->req, ctr); - skcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP | - CRYPTO_TFM_REQ_MAY_BACKLOG, - crypto_req_done, &data->wait); - skcipher_request_set_crypt(&data->req, data->sg, data->sg, - AES_BLOCK_SIZE, data->iv); - - err = crypto_wait_req(crypto_skcipher_encrypt(&data->req), - &data->wait); - if (err) - goto out; + aes_encrypt(&aes_ctx, hash, hash); + memzero_explicit(&aes_ctx, sizeof(aes_ctx)); mtk_aes_write_state_le(ctx->key, (const u32 *)key, keylen); - mtk_aes_write_state_be(ctx->key + ctx->keylen, data->hash, + mtk_aes_write_state_be(ctx->key + ctx->keylen, (const u32 *)hash, AES_BLOCK_SIZE); -out: - kzfree(data); - return err; + + return 0; } static int mtk_aes_gcm_setauthsize(struct crypto_aead *aead, @@ -1095,32 +1063,17 @@ static int mtk_aes_gcm_init(struct crypto_aead *aead) { struct mtk_aes_gcm_ctx *ctx = crypto_aead_ctx(aead); - ctx->ctr = crypto_alloc_skcipher("ctr(aes)", 0, - CRYPTO_ALG_ASYNC); - if (IS_ERR(ctx->ctr)) { - pr_err("Error allocating ctr(aes)\n"); - return PTR_ERR(ctx->ctr); - } - crypto_aead_set_reqsize(aead, sizeof(struct mtk_aes_reqctx)); ctx->base.start = mtk_aes_gcm_start; return 0; } -static void mtk_aes_gcm_exit(struct crypto_aead *aead) -{ - struct mtk_aes_gcm_ctx *ctx = crypto_aead_ctx(aead); - - crypto_free_skcipher(ctx->ctr); -} - static struct aead_alg aes_gcm_alg = { .setkey = mtk_aes_gcm_setkey, .setauthsize = mtk_aes_gcm_setauthsize, .encrypt = mtk_aes_gcm_encrypt, .decrypt = mtk_aes_gcm_decrypt, .init = mtk_aes_gcm_init, - .exit = mtk_aes_gcm_exit, .ivsize = GCM_AES_IV_SIZE, .maxauthsize = AES_BLOCK_SIZE,
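The key-derivation step that this last patch moves from a ctr(aes) skcipher to the AES library computes the GCM hash key H = E_K(0^128). A minimal standalone sketch of just that step follows; derive_gcm_hash_key is an invented name, while aes_expandkey(), aes_encrypt() and memzero_explicit() are the calls the diff itself uses.

#include <crypto/aes.h>
#include <linux/string.h>

static int derive_gcm_hash_key(const u8 *key, unsigned int keylen,
			       u8 hash[AES_BLOCK_SIZE])
{
	struct crypto_aes_ctx aes_ctx;
	int err;

	err = aes_expandkey(&aes_ctx, key, keylen);
	if (err)
		return err;

	/* H is the encryption of the all-zeroes block under the key */
	memset(hash, 0, AES_BLOCK_SIZE);
	aes_encrypt(&aes_ctx, hash, hash);

	/* don't leave the expanded round keys lying around on the stack */
	memzero_explicit(&aes_ctx, sizeof(aes_ctx));
	return 0;
}

Doing this synchronously with the library interface avoids the dynamically allocated request, the scatterlist, and the crypto_wait_req() round trip that the skcipher-based version needed for a single block of output.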