From patchwork Sat Jun 27 08:36:11 2020
X-Patchwork-Id: 11629395
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
    linux-amlogic@lists.infradead.org, Ard Biesheuvel, Corentin Labbe,
    Herbert Xu, "David S. Miller", Maxime Ripard, Chen-Yu Tsai,
    Tom Lendacky, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
    Shawn Guo, Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam,
    NXP Linux Team, Jamie Iles, Eric Biggers, Tero Kristo, Matthias Brugger
Subject: [PATCH v2 01/13] crypto: amlogic-gxl - default to build as module
Date: Sat, 27 Jun 2020 10:36:11 +0200
Message-Id: <20200627083623.2428333-2-ardb@kernel.org>
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>

The AmLogic GXL crypto accelerator driver is built into the kernel if
ARCH_MESON is set. However, given the single image policy of arm64, its
defconfig enables all platforms by default, and so ARCH_MESON is usually
enabled. This means that the AmLogic driver causes the arm64 defconfig
build to pull in a huge chunk of the crypto stack as a builtin as well,
which is undesirable, so let's make the AmLogic GXL driver default to
'm' instead.
Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/amlogic/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/amlogic/Kconfig b/drivers/crypto/amlogic/Kconfig
index cf9547602670..cf2c676a7093 100644
--- a/drivers/crypto/amlogic/Kconfig
+++ b/drivers/crypto/amlogic/Kconfig
@@ -1,7 +1,7 @@
 config CRYPTO_DEV_AMLOGIC_GXL
 	tristate "Support for amlogic cryptographic offloader"
 	depends on HAS_IOMEM
-	default y if ARCH_MESON
+	default m if ARCH_MESON
 	select CRYPTO_SKCIPHER
 	select CRYPTO_ENGINE
 	select CRYPTO_ECB
Miller" , Maxime Ripard , Chen-Yu Tsai , Tom Lendacky , Ayush Sawal , Vinay Kumar Yadav , Rohit Maheshwari , Shawn Guo , Sascha Hauer , Pengutronix Kernel Team , Fabio Estevam , NXP Linux Team , Jamie Iles , Eric Biggers , Tero Kristo , Matthias Brugger Subject: [PATCH v2 02/13] crypto: amlogic-gxl - permit async skcipher as fallback Date: Sat, 27 Jun 2020 10:36:12 +0200 Message-Id: <20200627083623.2428333-3-ardb@kernel.org> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org> References: <20200627083623.2428333-1-ardb@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Even though the amlogic-gxl driver implements asynchronous versions of ecb(aes) and cbc(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens in cases where the request was made from softirq context, while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue, but potentially a security issue as well (due to the fact that table based AES is not time invariant), let's fix this, by allocating an ordinary skcipher as the fallback, and invoke it with the completion routine that was given to the outer request. Signed-off-by: Ard Biesheuvel --- drivers/crypto/amlogic/amlogic-gxl-cipher.c | 27 ++++++++++---------- drivers/crypto/amlogic/amlogic-gxl.h | 3 ++- 2 files changed, 15 insertions(+), 15 deletions(-) diff --git a/drivers/crypto/amlogic/amlogic-gxl-cipher.c b/drivers/crypto/amlogic/amlogic-gxl-cipher.c index 9819dd50fbad..5880b94dcb32 100644 --- a/drivers/crypto/amlogic/amlogic-gxl-cipher.c +++ b/drivers/crypto/amlogic/amlogic-gxl-cipher.c @@ -64,22 +64,20 @@ static int meson_cipher_do_fallback(struct skcipher_request *areq) #ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct meson_alg_template *algt; -#endif - SYNC_SKCIPHER_REQUEST_ON_STACK(req, op->fallback_tfm); -#ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG algt = container_of(alg, struct meson_alg_template, alg.skcipher); algt->stat_fb++; #endif - skcipher_request_set_sync_tfm(req, op->fallback_tfm); - skcipher_request_set_callback(req, areq->base.flags, NULL, NULL); - skcipher_request_set_crypt(req, areq->src, areq->dst, + skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm); + skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags, + areq->base.complete, areq->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst, areq->cryptlen, areq->iv); + if (rctx->op_dir == MESON_DECRYPT) - err = crypto_skcipher_decrypt(req); + err = crypto_skcipher_decrypt(&rctx->fallback_req); else - err = crypto_skcipher_encrypt(req); - skcipher_request_zero(req); + err = crypto_skcipher_encrypt(&rctx->fallback_req); return err; } @@ -321,15 +319,16 @@ int meson_cipher_init(struct crypto_tfm *tfm) algt = container_of(alg, struct meson_alg_template, alg.skcipher); op->mc = algt->mc; - sktfm->reqsize = sizeof(struct meson_cipher_req_ctx); - - op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); + op->fallback_tfm = crypto_alloc_skcipher(name, 0, 
CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(op->fallback_tfm)) { dev_err(op->mc->dev, "ERROR: Cannot allocate fallback for %s %ld\n", name, PTR_ERR(op->fallback_tfm)); return PTR_ERR(op->fallback_tfm); } + sktfm->reqsize = sizeof(struct meson_cipher_req_ctx) + + crypto_skcipher_reqsize(op->fallback_tfm); + op->enginectx.op.do_one_request = meson_handle_cipher_request; op->enginectx.op.prepare_request = NULL; op->enginectx.op.unprepare_request = NULL; @@ -345,7 +344,7 @@ void meson_cipher_exit(struct crypto_tfm *tfm) memzero_explicit(op->key, op->keylen); kfree(op->key); } - crypto_free_sync_skcipher(op->fallback_tfm); + crypto_free_skcipher(op->fallback_tfm); } int meson_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, @@ -377,5 +376,5 @@ int meson_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, if (!op->key) return -ENOMEM; - return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen); + return crypto_skcipher_setkey(op->fallback_tfm, key, keylen); } diff --git a/drivers/crypto/amlogic/amlogic-gxl.h b/drivers/crypto/amlogic/amlogic-gxl.h index b7f2de91ab76..dc0f142324a3 100644 --- a/drivers/crypto/amlogic/amlogic-gxl.h +++ b/drivers/crypto/amlogic/amlogic-gxl.h @@ -109,6 +109,7 @@ struct meson_dev { struct meson_cipher_req_ctx { u32 op_dir; int flow; + struct skcipher_request fallback_req; // keep at the end }; /* @@ -126,7 +127,7 @@ struct meson_cipher_tfm_ctx { u32 keylen; u32 keymode; struct meson_dev *mc; - struct crypto_sync_skcipher *fallback_tfm; + struct crypto_skcipher *fallback_tfm; }; /* From patchwork Sat Jun 27 08:36:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11629403 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DE51414E3 for ; Sat, 27 Jun 2020 08:36:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C8615212CC for ; Sat, 27 Jun 2020 08:36:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247010; bh=7g45QWyrYUg0An4TDCghUfvpEHV1X2Se+dPPjGObVdQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=Zn4zvkkvlitmjdDoP8aJF4HIcRUhrtetMt0wbhEv3Yj/vfqc2e+c5d3/N0RA9N/Ep qrxQZnnEnnRBL+3+T79W5oOG5l6168Ye3ru2SaLHSn0CZFxIAyG5KCL95unnYwp2yY rkSSTYVCyPnvjFWI4a8k8s8ppTaUsL7f29FbnXFY= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726337AbgF0Igu (ORCPT ); Sat, 27 Jun 2020 04:36:50 -0400 Received: from mail.kernel.org ([198.145.29.99]:60884 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726086AbgF0Igt (ORCPT ); Sat, 27 Jun 2020 04:36:49 -0400 Received: from dogfood.home (lfbn-nic-1-188-42.w2-15.abo.wanadoo.fr [2.15.37.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 1071F208C7; Sat, 27 Jun 2020 08:36:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247009; bh=7g45QWyrYUg0An4TDCghUfvpEHV1X2Se+dPPjGObVdQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=m/ajGB074NYPLnbD44erR6c0f/YJu/AftljLUMwznn2XTvzylJ57vf9p77ozT+oMH LnD5P55AI7sAnQiCyMDNQOkzTrkruxcyc0jQAiYMgNHRmip/V38nDze6evPOeL1pLE IE0r1GnBuZ0T4ZicvAT81K/AIf0R2ONMInASwX58= From: Ard Biesheuvel To: 
Subject: [PATCH v2 03/13] crypto: omap-aes - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:13 +0200
Message-Id: <20200627083623.2428333-4-ardb@kernel.org>
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>

Even though the omap-aes driver implements asynchronous versions of
ecb(aes), cbc(aes) and ctr(aes), the fallbacks it allocates are required
to be synchronous. Given that SIMD based software implementations are
usually asynchronous as well, even though they rarely complete
asynchronously (this typically only happens in cases where the request
was made from softirq context, while SIMD was already in use in the task
context that it interrupted), these implementations are disregarded, and
either the generic C version or another table based version implemented
in assembler is selected instead.

Since falling back to synchronous AES is not only a performance issue,
but potentially a security issue as well (due to the fact that table
based AES is not time invariant), let's fix this by allocating an
ordinary skcipher as the fallback, and invoking it with the completion
routine that was given to the outer request.
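The key, and the CRYPTO_TFM_REQ_* flags of the outer transform, also have to
be mirrored onto the fallback, as the omap-aes hunks below do. A minimal
sketch of that part, reusing the hypothetical xxx_tfm_ctx from the earlier
sketch (an illustration only, not the omap-aes code):

	static int xxx_setkey(struct crypto_skcipher *tfm, const u8 *key,
			      unsigned int keylen)
	{
		struct xxx_tfm_ctx *op = crypto_skcipher_ctx(tfm);

		/* ...store/program the key for the hardware path here... */

		/* propagate the outer request flags to the fallback transform */
		crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
		crypto_skcipher_set_flags(op->fallback_tfm,
					  crypto_skcipher_get_flags(tfm) &
					  CRYPTO_TFM_REQ_MASK);

		/* the fallback must accept the same key as the hardware */
		return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
	}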
Signed-off-by: Ard Biesheuvel --- drivers/crypto/omap-aes.c | 35 ++++++++++---------- drivers/crypto/omap-aes.h | 3 +- 2 files changed, 19 insertions(+), 19 deletions(-) diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c index b5aff20c5900..25154b74dcc6 100644 --- a/drivers/crypto/omap-aes.c +++ b/drivers/crypto/omap-aes.c @@ -548,20 +548,18 @@ static int omap_aes_crypt(struct skcipher_request *req, unsigned long mode) !!(mode & FLAGS_CBC)); if (req->cryptlen < aes_fallback_sz) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, NULL, - NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); if (mode & FLAGS_ENCRYPT) - ret = crypto_skcipher_encrypt(subreq); + ret = crypto_skcipher_encrypt(&rctx->fallback_req); else - ret = crypto_skcipher_decrypt(subreq); - - skcipher_request_zero(subreq); + ret = crypto_skcipher_decrypt(&rctx->fallback_req); return ret; } dd = omap_aes_find_dev(rctx); @@ -590,11 +588,11 @@ static int omap_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, memcpy(ctx->key, key, keylen); ctx->keylen = keylen; - crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & + crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); + ret = crypto_skcipher_setkey(ctx->fallback, key, keylen); if (!ret) return 0; @@ -640,15 +638,16 @@ static int omap_aes_init_tfm(struct crypto_skcipher *tfm) { const char *name = crypto_tfm_alg_name(&tfm->base); struct omap_aes_ctx *ctx = crypto_skcipher_ctx(tfm); - struct crypto_sync_skcipher *blk; + struct crypto_skcipher *blk; - blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); + blk = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(blk)) return PTR_ERR(blk); ctx->fallback = blk; - crypto_skcipher_set_reqsize(tfm, sizeof(struct omap_aes_reqctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct omap_aes_reqctx) + + crypto_skcipher_reqsize(blk)); ctx->enginectx.op.prepare_request = omap_aes_prepare_req; ctx->enginectx.op.unprepare_request = NULL; @@ -662,7 +661,7 @@ static void omap_aes_exit_tfm(struct crypto_skcipher *tfm) struct omap_aes_ctx *ctx = crypto_skcipher_ctx(tfm); if (ctx->fallback) - crypto_free_sync_skcipher(ctx->fallback); + crypto_free_skcipher(ctx->fallback); ctx->fallback = NULL; } diff --git a/drivers/crypto/omap-aes.h b/drivers/crypto/omap-aes.h index 2d111bf906e1..23d073e87bb8 100644 --- a/drivers/crypto/omap-aes.h +++ b/drivers/crypto/omap-aes.h @@ -97,7 +97,7 @@ struct omap_aes_ctx { int keylen; u32 key[AES_KEYSIZE_256 / sizeof(u32)]; u8 nonce[4]; - struct crypto_sync_skcipher *fallback; + struct crypto_skcipher *fallback; }; struct omap_aes_gcm_ctx { @@ -110,6 +110,7 @@ struct omap_aes_reqctx { unsigned long mode; u8 iv[AES_BLOCK_SIZE]; u32 auth_tag[AES_BLOCK_SIZE / sizeof(u32)]; + struct skcipher_request fallback_req; // keep at the end }; #define OMAP_AES_QUEUE_LENGTH 1 From patchwork Sat Jun 27 08:36:14 
2020
X-Patchwork-Id: 11629407
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 04/13] crypto: sun4i - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:14 +0200
Message-Id: <20200627083623.2428333-5-ardb@kernel.org>
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>

Even though the sun4i driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens in cases where the request was made from
softirq context, while SIMD was already in use in the task context that
it interrupted), these implementations are disregarded, and either the
generic C version or another table based version implemented in
assembler is selected instead.
Since falling back to synchronous AES is not only a performance issue, but potentially a security issue as well (due to the fact that table based AES is not time invariant), let's fix this, by allocating an ordinary skcipher as the fallback, and invoke it with the completion routine that was given to the outer request. Signed-off-by: Ard Biesheuvel --- drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c | 46 ++++++++++---------- drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h | 3 +- 2 files changed, 25 insertions(+), 24 deletions(-) diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c index 7f22d305178e..b72de8939497 100644 --- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c +++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c @@ -122,19 +122,17 @@ static int noinline_for_stack sun4i_ss_cipher_poll_fallback(struct skcipher_requ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq); struct sun4i_tfm_ctx *op = crypto_skcipher_ctx(tfm); struct sun4i_cipher_req_ctx *ctx = skcipher_request_ctx(areq); - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, op->fallback_tfm); int err; - skcipher_request_set_sync_tfm(subreq, op->fallback_tfm); - skcipher_request_set_callback(subreq, areq->base.flags, NULL, - NULL); - skcipher_request_set_crypt(subreq, areq->src, areq->dst, + skcipher_request_set_tfm(&ctx->fallback_req, op->fallback_tfm); + skcipher_request_set_callback(&ctx->fallback_req, areq->base.flags, + areq->base.complete, areq->base.data); + skcipher_request_set_crypt(&ctx->fallback_req, areq->src, areq->dst, areq->cryptlen, areq->iv); if (ctx->mode & SS_DECRYPTION) - err = crypto_skcipher_decrypt(subreq); + err = crypto_skcipher_decrypt(&ctx->fallback_req); else - err = crypto_skcipher_encrypt(subreq); - skcipher_request_zero(subreq); + err = crypto_skcipher_encrypt(&ctx->fallback_req); return err; } @@ -494,23 +492,25 @@ int sun4i_ss_cipher_init(struct crypto_tfm *tfm) alg.crypto.base); op->ss = algt->ss; - crypto_skcipher_set_reqsize(__crypto_skcipher_cast(tfm), - sizeof(struct sun4i_cipher_req_ctx)); - - op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); + op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(op->fallback_tfm)) { dev_err(op->ss->dev, "ERROR: Cannot allocate fallback for %s %ld\n", name, PTR_ERR(op->fallback_tfm)); return PTR_ERR(op->fallback_tfm); } + crypto_skcipher_set_reqsize(__crypto_skcipher_cast(tfm), + sizeof(struct sun4i_cipher_req_ctx) + + crypto_skcipher_reqsize(op->fallback_tfm)); + + err = pm_runtime_get_sync(op->ss->dev); if (err < 0) goto error_pm; return 0; error_pm: - crypto_free_sync_skcipher(op->fallback_tfm); + crypto_free_skcipher(op->fallback_tfm); return err; } @@ -518,7 +518,7 @@ void sun4i_ss_cipher_exit(struct crypto_tfm *tfm) { struct sun4i_tfm_ctx *op = crypto_tfm_ctx(tfm); - crypto_free_sync_skcipher(op->fallback_tfm); + crypto_free_skcipher(op->fallback_tfm); pm_runtime_put(op->ss->dev); } @@ -546,10 +546,10 @@ int sun4i_ss_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, op->keylen = keylen; memcpy(op->key, key, keylen); - crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); + crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(op->fallback_tfm, key, 
keylen); + return crypto_skcipher_setkey(op->fallback_tfm, key, keylen); } /* check and set the DES key, prepare the mode to be used */ @@ -566,10 +566,10 @@ int sun4i_ss_des_setkey(struct crypto_skcipher *tfm, const u8 *key, op->keylen = keylen; memcpy(op->key, key, keylen); - crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); + crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen); + return crypto_skcipher_setkey(op->fallback_tfm, key, keylen); } /* check and set the 3DES key, prepare the mode to be used */ @@ -586,9 +586,9 @@ int sun4i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key, op->keylen = keylen; memcpy(op->key, key, keylen); - crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); + crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen); + return crypto_skcipher_setkey(op->fallback_tfm, key, keylen); } diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h index 2b4c6333eb67..163962f9e284 100644 --- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h +++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h @@ -170,11 +170,12 @@ struct sun4i_tfm_ctx { u32 keylen; u32 keymode; struct sun4i_ss_ctx *ss; - struct crypto_sync_skcipher *fallback_tfm; + struct crypto_skcipher *fallback_tfm; }; struct sun4i_cipher_req_ctx { u32 mode; + struct skcipher_request fallback_req; // keep at the end }; struct sun4i_req_ctx { From patchwork Sat Jun 27 08:36:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11629411 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A290214E3 for ; Sat, 27 Jun 2020 08:36:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8BEE52088E for ; Sat, 27 Jun 2020 08:36:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247019; bh=P1hf02tXhdwPbGM+6yMTQXpLfUnkB8fw+ZcUm1FA84g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=rIO/onyfy6ivZWRbtqYhZ2ju075f5lGXAfrRb9eyRj3o6yzCVSQGHFVRrdpi4IT2D KNWEu0joFkgnN8HxgM97WDO3maZ9WWi+k8zoyz5o4h2QeeyYEatsvj4tDRXA6VTQw0 XjhMFLTlDd339s3b6uLIvU6ybjA/fPUpPrlTI8qc= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726378AbgF0Ig6 (ORCPT ); Sat, 27 Jun 2020 04:36:58 -0400 Received: from mail.kernel.org ([198.145.29.99]:32854 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726086AbgF0Ig6 (ORCPT ); Sat, 27 Jun 2020 04:36:58 -0400 Received: from dogfood.home (lfbn-nic-1-188-42.w2-15.abo.wanadoo.fr [2.15.37.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id EDE7420B80; Sat, 27 Jun 
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 05/13] crypto: sun8i-ce - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:15 +0200
Message-Id: <20200627083623.2428333-6-ardb@kernel.org>
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>

Even though the sun8i-ce driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens in cases where the request was made from
softirq context, while SIMD was already in use in the task context that
it interrupted), these implementations are disregarded, and either the
generic C version or another table based version implemented in
assembler is selected instead.

Since falling back to synchronous AES is not only a performance issue,
but potentially a security issue as well (due to the fact that table
based AES is not time invariant), let's fix this by allocating an
ordinary skcipher as the fallback, and invoking it with the completion
routine that was given to the outer request.
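The init/exit side of the conversion is what makes the embedded fallback
request possible: the fallback transform is allocated first, and the request
size advertised for the outer algorithm is then grown by the fallback's own
request size, as the sun8i-ce hunks below do. A minimal sketch under the same
hypothetical xxx_* naming as before:

	static int xxx_init_tfm(struct crypto_skcipher *tfm)
	{
		struct xxx_tfm_ctx *op = crypto_skcipher_ctx(tfm);
		const char *name = crypto_tfm_alg_name(crypto_skcipher_tfm(tfm));

		op->fallback_tfm = crypto_alloc_skcipher(name, 0,
							 CRYPTO_ALG_NEED_FALLBACK);
		if (IS_ERR(op->fallback_tfm))
			return PTR_ERR(op->fallback_tfm);

		/*
		 * Reserve room behind our own request context for the
		 * fallback's request; this is why fallback_req has to be
		 * the last member of struct xxx_req_ctx.
		 */
		crypto_skcipher_set_reqsize(tfm, sizeof(struct xxx_req_ctx) +
						 crypto_skcipher_reqsize(op->fallback_tfm));
		return 0;
	}

	static void xxx_exit_tfm(struct crypto_skcipher *tfm)
	{
		struct xxx_tfm_ctx *op = crypto_skcipher_ctx(tfm);

		crypto_free_skcipher(op->fallback_tfm);
	}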
Signed-off-by: Ard Biesheuvel --- drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c | 41 ++++++++++---------- drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h | 3 +- 2 files changed, 22 insertions(+), 22 deletions(-) diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c index a6abb701bfc6..82c99da24dfd 100644 --- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c +++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c @@ -58,23 +58,20 @@ static int sun8i_ce_cipher_fallback(struct skcipher_request *areq) #ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct sun8i_ce_alg_template *algt; -#endif - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, op->fallback_tfm); -#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG algt = container_of(alg, struct sun8i_ce_alg_template, alg.skcipher); algt->stat_fb++; #endif - skcipher_request_set_sync_tfm(subreq, op->fallback_tfm); - skcipher_request_set_callback(subreq, areq->base.flags, NULL, NULL); - skcipher_request_set_crypt(subreq, areq->src, areq->dst, + skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm); + skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags, + areq->base.complete, areq->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst, areq->cryptlen, areq->iv); if (rctx->op_dir & CE_DECRYPTION) - err = crypto_skcipher_decrypt(subreq); + err = crypto_skcipher_decrypt(&rctx->fallback_req); else - err = crypto_skcipher_encrypt(subreq); - skcipher_request_zero(subreq); + err = crypto_skcipher_encrypt(&rctx->fallback_req); return err; } @@ -335,18 +332,20 @@ int sun8i_ce_cipher_init(struct crypto_tfm *tfm) algt = container_of(alg, struct sun8i_ce_alg_template, alg.skcipher); op->ce = algt->ce; - sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx); - - op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); + op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(op->fallback_tfm)) { dev_err(op->ce->dev, "ERROR: Cannot allocate fallback for %s %ld\n", name, PTR_ERR(op->fallback_tfm)); return PTR_ERR(op->fallback_tfm); } + sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx) + + crypto_skcipher_reqsize(op->fallback_tfm); + + dev_info(op->ce->dev, "Fallback for %s is %s\n", crypto_tfm_alg_driver_name(&sktfm->base), - crypto_tfm_alg_driver_name(crypto_skcipher_tfm(&op->fallback_tfm->base))); + crypto_tfm_alg_driver_name(crypto_skcipher_tfm(op->fallback_tfm))); op->enginectx.op.do_one_request = sun8i_ce_handle_cipher_request; op->enginectx.op.prepare_request = NULL; @@ -358,7 +357,7 @@ int sun8i_ce_cipher_init(struct crypto_tfm *tfm) return 0; error_pm: - crypto_free_sync_skcipher(op->fallback_tfm); + crypto_free_skcipher(op->fallback_tfm); return err; } @@ -370,7 +369,7 @@ void sun8i_ce_cipher_exit(struct crypto_tfm *tfm) memzero_explicit(op->key, op->keylen); kfree(op->key); } - crypto_free_sync_skcipher(op->fallback_tfm); + crypto_free_skcipher(op->fallback_tfm); pm_runtime_put_sync_suspend(op->ce->dev); } @@ -400,10 +399,10 @@ int sun8i_ce_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, if (!op->key) return -ENOMEM; - crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); + crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & 
CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen); + return crypto_skcipher_setkey(op->fallback_tfm, key, keylen); } int sun8i_ce_des3_setkey(struct crypto_skcipher *tfm, const u8 *key, @@ -425,8 +424,8 @@ int sun8i_ce_des3_setkey(struct crypto_skcipher *tfm, const u8 *key, if (!op->key) return -ENOMEM; - crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); + crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen); + return crypto_skcipher_setkey(op->fallback_tfm, key, keylen); } diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h index 0e9eac397e1b..4ac0f91e2800 100644 --- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h +++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h @@ -187,6 +187,7 @@ struct sun8i_ce_dev { struct sun8i_cipher_req_ctx { u32 op_dir; int flow; + struct skcipher_request fallback_req; // keep at the end }; /* @@ -202,7 +203,7 @@ struct sun8i_cipher_tfm_ctx { u32 *key; u32 keylen; struct sun8i_ce_dev *ce; - struct crypto_sync_skcipher *fallback_tfm; + struct crypto_skcipher *fallback_tfm; }; /* From patchwork Sat Jun 27 08:36:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11629413 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 42FD4912 for ; Sat, 27 Jun 2020 08:37:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 267472088E for ; Sat, 27 Jun 2020 08:37:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247024; bh=tpcze3XSHyZPREkPUbsCkWhR8rfSX5Y6bvgG9Oth1A8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=GEcg8Iz+Bw3ydvaJKZM1Cud2OHhAm6YhC7W0eLYg20jZlj+EhkV8MGGk4XpuaZZ+1 kzDJWlLtVCKLE8K0E5ttts6Rl4+VgU86V3SxTQbDNb2J4Lws5QeVqY5d7WbmRfNE+M ZWTwFYKFqTBavfTzV0jhYI6k9KW0tNLXYzOV4EjA= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726386AbgF0IhD (ORCPT ); Sat, 27 Jun 2020 04:37:03 -0400 Received: from mail.kernel.org ([198.145.29.99]:32946 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726086AbgF0IhD (ORCPT ); Sat, 27 Jun 2020 04:37:03 -0400 Received: from dogfood.home (lfbn-nic-1-188-42.w2-15.abo.wanadoo.fr [2.15.37.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 66332207FC; Sat, 27 Jun 2020 08:36:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247022; bh=tpcze3XSHyZPREkPUbsCkWhR8rfSX5Y6bvgG9Oth1A8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kYgIR6EUldgyN0nplR86Hw7Wyi+99IiWT3r73jl6mkFEXhDt6yKIde8iIPeIJBxJZ yuMYQm9s/zAp9B9+QCmCOCkk6J6wNQ1krG7e0TADMxXXPGg3VdGanMLGEVwV/YC3Hr qgFgiAaYD+IlZBl31yJJau7nAJi+mgBeKdkAogCQ= From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org, linux-amlogic@lists.infradead.org, 
Ard Biesheuvel , Corentin Labbe , Herbert Xu , "David S. Miller" , Maxime Ripard , Chen-Yu Tsai , Tom Lendacky , Ayush Sawal , Vinay Kumar Yadav , Rohit Maheshwari , Shawn Guo , Sascha Hauer , Pengutronix Kernel Team , Fabio Estevam , NXP Linux Team , Jamie Iles , Eric Biggers , Tero Kristo , Matthias Brugger Subject: [PATCH v2 06/13] crypto: sun8i-ss - permit asynchronous skcipher as fallback Date: Sat, 27 Jun 2020 10:36:16 +0200 Message-Id: <20200627083623.2428333-7-ardb@kernel.org> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org> References: <20200627083623.2428333-1-ardb@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Even though the sun8i-ss driver implements asynchronous versions of ecb(aes) and cbc(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens in cases where the request was made from softirq context, while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue, but potentially a security issue as well (due to the fact that table based AES is not time invariant), let's fix this, by allocating an ordinary skcipher as the fallback, and invoke it with the completion routine that was given to the outer request. Signed-off-by: Ard Biesheuvel --- drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c | 39 ++++++++++---------- drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h | 3 +- 2 files changed, 22 insertions(+), 20 deletions(-) diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c index c89cb2ee2496..7a131675a41c 100644 --- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c +++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c @@ -73,7 +73,6 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq) struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq); int err; - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, op->fallback_tfm); #ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct sun8i_ss_alg_template *algt; @@ -81,15 +80,15 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq) algt = container_of(alg, struct sun8i_ss_alg_template, alg.skcipher); algt->stat_fb++; #endif - skcipher_request_set_sync_tfm(subreq, op->fallback_tfm); - skcipher_request_set_callback(subreq, areq->base.flags, NULL, NULL); - skcipher_request_set_crypt(subreq, areq->src, areq->dst, + skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm); + skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags, + areq->base.complete, areq->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst, areq->cryptlen, areq->iv); if (rctx->op_dir & SS_DECRYPTION) - err = crypto_skcipher_decrypt(subreq); + err = crypto_skcipher_decrypt(&rctx->fallback_req); else - err = crypto_skcipher_encrypt(subreq); - skcipher_request_zero(subreq); + err = crypto_skcipher_encrypt(&rctx->fallback_req); return err; } @@ -334,18 +333,20 @@ int sun8i_ss_cipher_init(struct crypto_tfm *tfm) algt = container_of(alg, struct 
sun8i_ss_alg_template, alg.skcipher); op->ss = algt->ss; - sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx); - - op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); + op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(op->fallback_tfm)) { dev_err(op->ss->dev, "ERROR: Cannot allocate fallback for %s %ld\n", name, PTR_ERR(op->fallback_tfm)); return PTR_ERR(op->fallback_tfm); } + sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx) + + crypto_skcipher_reqsize(op->fallback_tfm); + + dev_info(op->ss->dev, "Fallback for %s is %s\n", crypto_tfm_alg_driver_name(&sktfm->base), - crypto_tfm_alg_driver_name(crypto_skcipher_tfm(&op->fallback_tfm->base))); + crypto_tfm_alg_driver_name(crypto_skcipher_tfm(op->fallback_tfm))); op->enginectx.op.do_one_request = sun8i_ss_handle_cipher_request; op->enginectx.op.prepare_request = NULL; @@ -359,7 +360,7 @@ int sun8i_ss_cipher_init(struct crypto_tfm *tfm) return 0; error_pm: - crypto_free_sync_skcipher(op->fallback_tfm); + crypto_free_skcipher(op->fallback_tfm); return err; } @@ -371,7 +372,7 @@ void sun8i_ss_cipher_exit(struct crypto_tfm *tfm) memzero_explicit(op->key, op->keylen); kfree(op->key); } - crypto_free_sync_skcipher(op->fallback_tfm); + crypto_free_skcipher(op->fallback_tfm); pm_runtime_put_sync(op->ss->dev); } @@ -401,10 +402,10 @@ int sun8i_ss_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, if (!op->key) return -ENOMEM; - crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); + crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen); + return crypto_skcipher_setkey(op->fallback_tfm, key, keylen); } int sun8i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key, @@ -427,8 +428,8 @@ int sun8i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key, if (!op->key) return -ENOMEM; - crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); + crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen); + return crypto_skcipher_setkey(op->fallback_tfm, key, keylen); } diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h index 29c44f279112..42658b134228 100644 --- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h +++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h @@ -159,6 +159,7 @@ struct sun8i_cipher_req_ctx { unsigned int ivlen; unsigned int keylen; void *biv; + struct skcipher_request fallback_req; // keep at the end }; /* @@ -174,7 +175,7 @@ struct sun8i_cipher_tfm_ctx { u32 *key; u32 keylen; struct sun8i_ss_dev *ss; - struct crypto_sync_skcipher *fallback_tfm; + struct crypto_skcipher *fallback_tfm; }; /* From patchwork Sat Jun 27 08:36:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11629419 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by 
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 07/13] crypto: ccp - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:17 +0200
Message-Id: <20200627083623.2428333-8-ardb@kernel.org>
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>

Even though the ccp driver implements an asynchronous version of
xts(aes), the fallback it allocates is required to be synchronous.
Given that SIMD based software implementations are usually asynchronous
as well, even though they rarely complete asynchronously (this typically
only happens in cases where the request was made from softirq context,
while SIMD was already in use in the task context that it interrupted),
these implementations are disregarded, and either the generic C version
or another table based version implemented in assembler is selected
instead.

Since falling back to synchronous AES is not only a performance issue,
but potentially a security issue as well (due to the fact that table
based AES is not time invariant), let's fix this by allocating an
ordinary skcipher as the fallback, and invoking it with the completion
routine that was given to the outer request.
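Note that "asynchronous" only changes what callers have to be prepared for:
a return value of -EINPROGRESS (or -EBUSY) with completion signalled through
the request's callback. A caller that needs synchronous semantics can still
get them with the generic wait helpers; the fragment below is ordinary
crypto API usage (tfm, src, dst, nbytes and iv are assumed to exist) and is
not part of this patch:

	DECLARE_CRYPTO_WAIT(wait);
	struct skcipher_request *req = skcipher_request_alloc(tfm, GFP_KERNEL);
	int err;

	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
					   CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, src, dst, nbytes, iv);

	/* blocks until the (possibly asynchronous) request has completed */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	skcipher_request_free(req);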
Signed-off-by: Ard Biesheuvel --- drivers/crypto/ccp/ccp-crypto-aes-xts.c | 33 ++++++++++---------- drivers/crypto/ccp/ccp-crypto.h | 4 ++- 2 files changed, 19 insertions(+), 18 deletions(-) diff --git a/drivers/crypto/ccp/ccp-crypto-aes-xts.c b/drivers/crypto/ccp/ccp-crypto-aes-xts.c index 04b2517df955..959168a7ac59 100644 --- a/drivers/crypto/ccp/ccp-crypto-aes-xts.c +++ b/drivers/crypto/ccp/ccp-crypto-aes-xts.c @@ -98,7 +98,7 @@ static int ccp_aes_xts_setkey(struct crypto_skcipher *tfm, const u8 *key, ctx->u.aes.key_len = key_len / 2; sg_init_one(&ctx->u.aes.key_sg, ctx->u.aes.key, key_len); - return crypto_sync_skcipher_setkey(ctx->u.aes.tfm_skcipher, key, key_len); + return crypto_skcipher_setkey(ctx->u.aes.tfm_skcipher, key, key_len); } static int ccp_aes_xts_crypt(struct skcipher_request *req, @@ -145,20 +145,19 @@ static int ccp_aes_xts_crypt(struct skcipher_request *req, (ctx->u.aes.key_len != AES_KEYSIZE_256)) fallback = 1; if (fallback) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, - ctx->u.aes.tfm_skcipher); - /* Use the fallback to process the request for any * unsupported unit sizes or key sizes */ - skcipher_request_set_sync_tfm(subreq, ctx->u.aes.tfm_skcipher); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - ret = encrypt ? crypto_skcipher_encrypt(subreq) : - crypto_skcipher_decrypt(subreq); - skcipher_request_zero(subreq); + skcipher_request_set_tfm(&rctx->fallback_req, + ctx->u.aes.tfm_skcipher); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + ret = encrypt ? crypto_skcipher_encrypt(&rctx->fallback_req) : + crypto_skcipher_decrypt(&rctx->fallback_req); return ret; } @@ -198,13 +197,12 @@ static int ccp_aes_xts_decrypt(struct skcipher_request *req) static int ccp_aes_xts_init_tfm(struct crypto_skcipher *tfm) { struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm); - struct crypto_sync_skcipher *fallback_tfm; + struct crypto_skcipher *fallback_tfm; ctx->complete = ccp_aes_xts_complete; ctx->u.aes.key_len = 0; - fallback_tfm = crypto_alloc_sync_skcipher("xts(aes)", 0, - CRYPTO_ALG_ASYNC | + fallback_tfm = crypto_alloc_skcipher("xts(aes)", 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(fallback_tfm)) { pr_warn("could not load fallback driver xts(aes)\n"); @@ -212,7 +210,8 @@ static int ccp_aes_xts_init_tfm(struct crypto_skcipher *tfm) } ctx->u.aes.tfm_skcipher = fallback_tfm; - crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx) + + crypto_skcipher_reqsize(fallback_tfm)); return 0; } @@ -221,7 +220,7 @@ static void ccp_aes_xts_exit_tfm(struct crypto_skcipher *tfm) { struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm); - crypto_free_sync_skcipher(ctx->u.aes.tfm_skcipher); + crypto_free_skcipher(ctx->u.aes.tfm_skcipher); } static int ccp_register_aes_xts_alg(struct list_head *head, diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h index 90a009e6b5c1..aed3d2192d01 100644 --- a/drivers/crypto/ccp/ccp-crypto.h +++ b/drivers/crypto/ccp/ccp-crypto.h @@ -89,7 +89,7 @@ static inline struct ccp_crypto_ahash_alg * /***** AES related defines *****/ struct ccp_aes_ctx { /* Fallback cipher for XTS with unsupported unit sizes */ - struct crypto_sync_skcipher *tfm_skcipher; + struct crypto_skcipher *tfm_skcipher; enum ccp_engine engine; 
enum ccp_aes_type type; @@ -121,6 +121,8 @@ struct ccp_aes_req_ctx { u8 rfc3686_iv[AES_BLOCK_SIZE]; struct ccp_cmd cmd; + + struct skcipher_request fallback_req; // keep at the end }; struct ccp_aes_cmac_req_ctx {
From patchwork Sat Jun 27 08:36:18 2020
X-Patchwork-Id: 11629421
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 08/13] crypto: chelsio - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:18 +0200
Message-Id: <20200627083623.2428333-9-ardb@kernel.org>
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>

Even though the chelsio driver implements asynchronous versions of
cbc(aes) and xts(aes), the fallbacks it allocates are required to be
synchronous.
Given that SIMD based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens in cases where the request was made from softirq context, while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue, but potentially a security issue as well (due to the fact that table based AES is not time invariant), let's fix this, by allocating an ordinary skcipher as the fallback, and invoke it with the completion routine that was given to the outer request. Signed-off-by: Ard Biesheuvel --- drivers/crypto/chelsio/chcr_algo.c | 57 ++++++++------------ drivers/crypto/chelsio/chcr_crypto.h | 3 +- 2 files changed, 25 insertions(+), 35 deletions(-) diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c index 4c2553672b6f..a6625b90fb1a 100644 --- a/drivers/crypto/chelsio/chcr_algo.c +++ b/drivers/crypto/chelsio/chcr_algo.c @@ -690,26 +690,22 @@ static int chcr_sg_ent_in_wr(struct scatterlist *src, return min(srclen, dstlen); } -static int chcr_cipher_fallback(struct crypto_sync_skcipher *cipher, - u32 flags, - struct scatterlist *src, - struct scatterlist *dst, - unsigned int nbytes, +static int chcr_cipher_fallback(struct crypto_skcipher *cipher, + struct skcipher_request *req, u8 *iv, unsigned short op_type) { + struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req); int err; - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, cipher); - - skcipher_request_set_sync_tfm(subreq, cipher); - skcipher_request_set_callback(subreq, flags, NULL, NULL); - skcipher_request_set_crypt(subreq, src, dst, - nbytes, iv); + skcipher_request_set_tfm(&reqctx->fallback_req, cipher); + skcipher_request_set_callback(&reqctx->fallback_req, req->base.flags, + req->base.complete, req->base.data); + skcipher_request_set_crypt(&reqctx->fallback_req, req->src, req->dst, + req->cryptlen, iv); - err = op_type ? crypto_skcipher_decrypt(subreq) : - crypto_skcipher_encrypt(subreq); - skcipher_request_zero(subreq); + err = op_type ? 
crypto_skcipher_decrypt(&reqctx->fallback_req) : + crypto_skcipher_encrypt(&reqctx->fallback_req); return err; @@ -924,11 +920,11 @@ static int chcr_cipher_fallback_setkey(struct crypto_skcipher *cipher, { struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(cipher)); - crypto_sync_skcipher_clear_flags(ablkctx->sw_cipher, + crypto_skcipher_clear_flags(ablkctx->sw_cipher, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(ablkctx->sw_cipher, + crypto_skcipher_set_flags(ablkctx->sw_cipher, cipher->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(ablkctx->sw_cipher, key, keylen); + return crypto_skcipher_setkey(ablkctx->sw_cipher, key, keylen); } static int chcr_aes_cbc_setkey(struct crypto_skcipher *cipher, @@ -1206,13 +1202,8 @@ static int chcr_handle_cipher_resp(struct skcipher_request *req, req); memcpy(req->iv, reqctx->init_iv, IV); atomic_inc(&adap->chcr_stats.fallback); - err = chcr_cipher_fallback(ablkctx->sw_cipher, - req->base.flags, - req->src, - req->dst, - req->cryptlen, - req->iv, - reqctx->op); + err = chcr_cipher_fallback(ablkctx->sw_cipher, req, req->iv, + reqctx->op); goto complete; } @@ -1341,11 +1332,7 @@ static int process_cipher(struct skcipher_request *req, chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, req); fallback: atomic_inc(&adap->chcr_stats.fallback); - err = chcr_cipher_fallback(ablkctx->sw_cipher, - req->base.flags, - req->src, - req->dst, - req->cryptlen, + err = chcr_cipher_fallback(ablkctx->sw_cipher, req, subtype == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686 ? reqctx->iv : req->iv, @@ -1486,14 +1473,15 @@ static int chcr_init_tfm(struct crypto_skcipher *tfm) struct chcr_context *ctx = crypto_skcipher_ctx(tfm); struct ablk_ctx *ablkctx = ABLK_CTX(ctx); - ablkctx->sw_cipher = crypto_alloc_sync_skcipher(alg->base.cra_name, 0, + ablkctx->sw_cipher = crypto_alloc_skcipher(alg->base.cra_name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ablkctx->sw_cipher)) { pr_err("failed to allocate fallback for %s\n", alg->base.cra_name); return PTR_ERR(ablkctx->sw_cipher); } init_completion(&ctx->cbc_aes_aio_done); - crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx) + + crypto_skcipher_reqsize(ablkctx->sw_cipher)); return chcr_device_init(ctx); } @@ -1507,13 +1495,14 @@ static int chcr_rfc3686_init(struct crypto_skcipher *tfm) /*RFC3686 initialises IV counter value to 1, rfc3686(ctr(aes)) * cannot be used as fallback in chcr_handle_cipher_response */ - ablkctx->sw_cipher = crypto_alloc_sync_skcipher("ctr(aes)", 0, + ablkctx->sw_cipher = crypto_alloc_skcipher("ctr(aes)", 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ablkctx->sw_cipher)) { pr_err("failed to allocate fallback for %s\n", alg->base.cra_name); return PTR_ERR(ablkctx->sw_cipher); } - crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx) + + crypto_skcipher_reqsize(ablkctx->sw_cipher)); return chcr_device_init(ctx); } @@ -1523,7 +1512,7 @@ static void chcr_exit_tfm(struct crypto_skcipher *tfm) struct chcr_context *ctx = crypto_skcipher_ctx(tfm); struct ablk_ctx *ablkctx = ABLK_CTX(ctx); - crypto_free_sync_skcipher(ablkctx->sw_cipher); + crypto_free_skcipher(ablkctx->sw_cipher); } static int get_alg_config(struct algo_param *params, diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h index b3fdbdc25acb..55a6631cdbee 100644 --- a/drivers/crypto/chelsio/chcr_crypto.h +++ 
b/drivers/crypto/chelsio/chcr_crypto.h @@ -171,7 +171,7 @@ static inline struct chcr_context *h_ctx(struct crypto_ahash *tfm) } struct ablk_ctx { - struct crypto_sync_skcipher *sw_cipher; + struct crypto_skcipher *sw_cipher; __be32 key_ctx_hdr; unsigned int enckey_len; unsigned char ciph_mode; @@ -305,6 +305,7 @@ struct chcr_skcipher_req_ctx { u8 init_iv[CHCR_MAX_CRYPTO_IV_LEN]; u16 txqidx; u16 rxqidx; + struct skcipher_request fallback_req; // keep at the end }; struct chcr_alg_template { From patchwork Sat Jun 27 08:36:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11629425 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 59DC214E3 for ; Sat, 27 Jun 2020 08:37:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 429EB2137B for ; Sat, 27 Jun 2020 08:37:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247037; bh=mJ44RN1Ax7UO5kUEhNQNsOxr9UMAVZrtpGUzPdPjV5w=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=YwaBg07qHVOGmj0HM9d6bokiQI5ITVnegPq3r6zq3C9kpWCGPKIa4nzfYh6gvad4l H3wrjq0WziK9Vd223Vm8gWGuEwbWvRz0Iz83bFGdu5vivhJUl9/ThU0tk1BO6+8YKB A9+Z8jD1VU108QbWBqep8pwiLoyq6yNa/yKJXrGE= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726422AbgF0IhQ (ORCPT ); Sat, 27 Jun 2020 04:37:16 -0400 Received: from mail.kernel.org ([198.145.29.99]:33214 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726086AbgF0IhQ (ORCPT ); Sat, 27 Jun 2020 04:37:16 -0400 Received: from dogfood.home (lfbn-nic-1-188-42.w2-15.abo.wanadoo.fr [2.15.37.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id B253C208C7; Sat, 27 Jun 2020 08:37:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247035; bh=mJ44RN1Ax7UO5kUEhNQNsOxr9UMAVZrtpGUzPdPjV5w=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JJ8PGniKDwza/T069KYGviPChm/sq/A0O1/cXJD7D4b45EdSD5aBEtWIl1FxD/07T ay4A+7qicQGgq6mgO6mnvOk215t5E7vDxLo7DICzkND1FDakZfQMKX0CFjy2hI42UA IMkKpfDde2o0HFaHQTGSVI+ZbzwsivB1CLA9EFwY= From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org, linux-amlogic@lists.infradead.org, Ard Biesheuvel , Corentin Labbe , Herbert Xu , "David S. 
Miller" , Maxime Ripard , Chen-Yu Tsai , Tom Lendacky , Ayush Sawal , Vinay Kumar Yadav , Rohit Maheshwari , Shawn Guo , Sascha Hauer , Pengutronix Kernel Team , Fabio Estevam , NXP Linux Team , Jamie Iles , Eric Biggers , Tero Kristo , Matthias Brugger Subject: [PATCH v2 09/13] crypto: mxs-dcp - permit asynchronous skcipher as fallback Date: Sat, 27 Jun 2020 10:36:19 +0200 Message-Id: <20200627083623.2428333-10-ardb@kernel.org> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org> References: <20200627083623.2428333-1-ardb@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Even though the mxs-dcp driver implements asynchronous versions of ecb(aes) and cbc(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens in cases where the request was made from softirq context, while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue, but potentially a security issue as well (due to the fact that table based AES is not time invariant), let's fix this, by allocating an ordinary skcipher as the fallback, and invoke it with the completion routine that was given to the outer request. Signed-off-by: Ard Biesheuvel --- drivers/crypto/mxs-dcp.c | 33 ++++++++++---------- 1 file changed, 17 insertions(+), 16 deletions(-) diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c index d84530293036..909a7eb748e3 100644 --- a/drivers/crypto/mxs-dcp.c +++ b/drivers/crypto/mxs-dcp.c @@ -97,7 +97,7 @@ struct dcp_async_ctx { unsigned int hot:1; /* Crypto-specific context */ - struct crypto_sync_skcipher *fallback; + struct crypto_skcipher *fallback; unsigned int key_len; uint8_t key[AES_KEYSIZE_128]; }; @@ -105,6 +105,7 @@ struct dcp_async_ctx { struct dcp_aes_req_ctx { unsigned int enc:1; unsigned int ecb:1; + struct skcipher_request fallback_req; // keep at the end }; struct dcp_sha_req_ctx { @@ -426,21 +427,20 @@ static int dcp_chan_thread_aes(void *data) static int mxs_dcp_block_fallback(struct skcipher_request *req, int enc) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req); struct dcp_async_ctx *ctx = crypto_skcipher_ctx(tfm); - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); int ret; - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, req->base.flags, + req->base.complete, req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, req->dst, req->cryptlen, req->iv); if (enc) - ret = crypto_skcipher_encrypt(subreq); + ret = crypto_skcipher_encrypt(&rctx->fallback_req); else - ret = crypto_skcipher_decrypt(subreq); - - skcipher_request_zero(subreq); + ret = crypto_skcipher_decrypt(&rctx->fallback_req); return ret; } @@ -510,24 +510,25 @@ static int mxs_dcp_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, * but is supported by in-kernel software 
implementation, we use * software fallback. */ - crypto_sync_skcipher_clear_flags(actx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(actx->fallback, + crypto_skcipher_clear_flags(actx->fallback, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(actx->fallback, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(actx->fallback, key, len); + return crypto_skcipher_setkey(actx->fallback, key, len); } static int mxs_dcp_aes_fallback_init_tfm(struct crypto_skcipher *tfm) { const char *name = crypto_tfm_alg_name(crypto_skcipher_tfm(tfm)); struct dcp_async_ctx *actx = crypto_skcipher_ctx(tfm); - struct crypto_sync_skcipher *blk; + struct crypto_skcipher *blk; - blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); + blk = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(blk)) return PTR_ERR(blk); actx->fallback = blk; - crypto_skcipher_set_reqsize(tfm, sizeof(struct dcp_aes_req_ctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct dcp_aes_req_ctx) + + crypto_skcipher_reqsize(blk)); return 0; } @@ -535,7 +536,7 @@ static void mxs_dcp_aes_fallback_exit_tfm(struct crypto_skcipher *tfm) { struct dcp_async_ctx *actx = crypto_skcipher_ctx(tfm); - crypto_free_sync_skcipher(actx->fallback); + crypto_free_skcipher(actx->fallback); } /* From patchwork Sat Jun 27 08:36:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11629431 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D0E84912 for ; Sat, 27 Jun 2020 08:37:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BA1CE2145D for ; Sat, 27 Jun 2020 08:37:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247041; bh=EGar2rBAa9p0EmyWbgXPPJUEijUxQLN5bxRPFqvMCTY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=weeGumOmZPQ2/uqW1TZ/LpXQ9MA/6e/bvxMVRNidlJj33iBf0zpEk0QTGhOnBXvkd hUfsEY9pRCthQLS1CSkeogXnSSmFC/sS6pFjqNejDu8i5sR6+ZHrGt08ZOOn+kZrLn oulg6PZINegztMdAXt2AthGWTgA9qfAAnFCuwwQo= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726424AbgF0IhV (ORCPT ); Sat, 27 Jun 2020 04:37:21 -0400 Received: from mail.kernel.org ([198.145.29.99]:33300 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726086AbgF0IhU (ORCPT ); Sat, 27 Jun 2020 04:37:20 -0400 Received: from dogfood.home (lfbn-nic-1-188-42.w2-15.abo.wanadoo.fr [2.15.37.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 26F03207FC; Sat, 27 Jun 2020 08:37:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247040; bh=EGar2rBAa9p0EmyWbgXPPJUEijUxQLN5bxRPFqvMCTY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=O43pL46JHTcyFNkA22Y4nrjz3lRUgG/5t9y/PrLc9W2Tqn8PSGrYu/bnqJzG3qDcl nQGyjZS/YFlGgazwQq6VSynO+h/ICt0LjNQzOt7gEtCwS6v4U695FstrLSsd5I2/DY 0yrids79ui3vTeIfzDqlwifzLaPp5993qdI+eQ90= From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org, linux-amlogic@lists.infradead.org, Ard Biesheuvel , Corentin Labbe , Herbert Xu , "David S. 
Miller" , Maxime Ripard , Chen-Yu Tsai , Tom Lendacky , Ayush Sawal , Vinay Kumar Yadav , Rohit Maheshwari , Shawn Guo , Sascha Hauer , Pengutronix Kernel Team , Fabio Estevam , NXP Linux Team , Jamie Iles , Eric Biggers , Tero Kristo , Matthias Brugger Subject: [PATCH v2 10/13] crypto: picoxcell - permit asynchronous skcipher as fallback Date: Sat, 27 Jun 2020 10:36:20 +0200 Message-Id: <20200627083623.2428333-11-ardb@kernel.org> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org> References: <20200627083623.2428333-1-ardb@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Even though the picoxcell driver implements asynchronous versions of ecb(aes) and cbc(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens in cases where the request was made from softirq context, while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue, but potentially a security issue as well (due to the fact that table based AES is not time invariant), let's fix this, by allocating an ordinary skcipher as the fallback, and invoke it with the completion routine that was given to the outer request. Signed-off-by: Ard Biesheuvel --- drivers/crypto/picoxcell_crypto.c | 34 +++++++++++--------- 1 file changed, 19 insertions(+), 15 deletions(-) diff --git a/drivers/crypto/picoxcell_crypto.c b/drivers/crypto/picoxcell_crypto.c index 7384e91c8b32..eea75c7cbdf2 100644 --- a/drivers/crypto/picoxcell_crypto.c +++ b/drivers/crypto/picoxcell_crypto.c @@ -86,6 +86,7 @@ struct spacc_req { dma_addr_t src_addr, dst_addr; struct spacc_ddt *src_ddt, *dst_ddt; void (*complete)(struct spacc_req *req); + struct skcipher_request fallback_req; // keep at the end }; struct spacc_aead { @@ -158,7 +159,7 @@ struct spacc_ablk_ctx { * The fallback cipher. If the operation can't be done in hardware, * fallback to a software version. */ - struct crypto_sync_skcipher *sw_cipher; + struct crypto_skcipher *sw_cipher; }; /* AEAD cipher context. */ @@ -792,13 +793,13 @@ static int spacc_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, * Set the fallback transform to use the same request flags as * the hardware transform. 
*/ - crypto_sync_skcipher_clear_flags(ctx->sw_cipher, + crypto_skcipher_clear_flags(ctx->sw_cipher, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(ctx->sw_cipher, + crypto_skcipher_set_flags(ctx->sw_cipher, cipher->base.crt_flags & CRYPTO_TFM_REQ_MASK); - err = crypto_sync_skcipher_setkey(ctx->sw_cipher, key, len); + err = crypto_skcipher_setkey(ctx->sw_cipher, key, len); if (err) goto sw_setkey_failed; } @@ -900,7 +901,7 @@ static int spacc_ablk_do_fallback(struct skcipher_request *req, struct crypto_tfm *old_tfm = crypto_skcipher_tfm(crypto_skcipher_reqtfm(req)); struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(old_tfm); - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher); + struct spacc_req *dev_req = skcipher_request_ctx(req); int err; /* @@ -908,13 +909,13 @@ static int spacc_ablk_do_fallback(struct skcipher_request *req, * the ciphering has completed, put the old transform back into the * request. */ - skcipher_request_set_sync_tfm(subreq, ctx->sw_cipher); - skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, + skcipher_request_set_tfm(&dev_req->fallback_req, ctx->sw_cipher); + skcipher_request_set_callback(&dev_req->fallback_req, req->base.flags, + req->base.complete, req->base.data); + skcipher_request_set_crypt(&dev_req->fallback_req, req->src, req->dst, req->cryptlen, req->iv); - err = is_encrypt ? crypto_skcipher_encrypt(subreq) : - crypto_skcipher_decrypt(subreq); - skcipher_request_zero(subreq); + err = is_encrypt ? crypto_skcipher_encrypt(&dev_req->fallback_req) : + crypto_skcipher_decrypt(&dev_req->fallback_req); return err; } @@ -1007,19 +1008,22 @@ static int spacc_ablk_init_tfm(struct crypto_skcipher *tfm) ctx->generic.flags = spacc_alg->type; ctx->generic.engine = engine; if (alg->base.cra_flags & CRYPTO_ALG_NEED_FALLBACK) { - ctx->sw_cipher = crypto_alloc_sync_skcipher( + ctx->sw_cipher = crypto_alloc_skcipher( alg->base.cra_name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ctx->sw_cipher)) { dev_warn(engine->dev, "failed to allocate fallback for %s\n", alg->base.cra_name); return PTR_ERR(ctx->sw_cipher); } + crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req) + + crypto_skcipher_reqsize(ctx->sw_cipher)); + } else { + crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req)); } + ctx->generic.key_offs = spacc_alg->key_offs; ctx->generic.iv_offs = spacc_alg->iv_offs; - crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req)); - return 0; } @@ -1027,7 +1031,7 @@ static void spacc_ablk_exit_tfm(struct crypto_skcipher *tfm) { struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(tfm); - crypto_free_sync_skcipher(ctx->sw_cipher); + crypto_free_skcipher(ctx->sw_cipher); } static int spacc_ablk_encrypt(struct skcipher_request *req) From patchwork Sat Jun 27 08:36:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11629433 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 38174912 for ; Sat, 27 Jun 2020 08:37:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1FACE20DD4 for ; Sat, 27 Jun 2020 08:37:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247046; bh=SZYJbEe8mCowaOS/Sy7cfSLq4/DGTZWmD8C72wZYJ5s=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=g3IkVu+htEUxOwjfaO5RBfuhnlxvmLs5RP3QLWfwtPsdlgiOmgbEcW3HmD0hMM7PC 77VpeHz2nqwhN6XOEm78I+Da+LTycSxWkVh173zW7c/eEoTguuOQF0MQ7JndykWR6z eSaWs7PT4vRi5Wz6NYuXX4um8d0RPJ0IGgZoRXjA= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726436AbgF0IhZ (ORCPT ); Sat, 27 Jun 2020 04:37:25 -0400 Received: from mail.kernel.org ([198.145.29.99]:33388 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726086AbgF0IhZ (ORCPT ); Sat, 27 Jun 2020 04:37:25 -0400 Received: from dogfood.home (lfbn-nic-1-188-42.w2-15.abo.wanadoo.fr [2.15.37.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 915F82088E; Sat, 27 Jun 2020 08:37:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247044; bh=SZYJbEe8mCowaOS/Sy7cfSLq4/DGTZWmD8C72wZYJ5s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Bh0tF48/Qe5q5cXU4io8BawjUbNBLGrYJCtI8EFMlOj/mpvtvs4b9vxH2eufj5xfe wE6ZPLkwV8tLLl7/4OxD7gTRr+bliN0kRrWnhxPT79p+b8cWdwyWeGXNQyQOEU4Xyx WGkWT79LYToF19D84UiTwTCasT2xakeRJILuW1AA= From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org, linux-amlogic@lists.infradead.org, Ard Biesheuvel , Corentin Labbe , Herbert Xu , "David S. Miller" , Maxime Ripard , Chen-Yu Tsai , Tom Lendacky , Ayush Sawal , Vinay Kumar Yadav , Rohit Maheshwari , Shawn Guo , Sascha Hauer , Pengutronix Kernel Team , Fabio Estevam , NXP Linux Team , Jamie Iles , Eric Biggers , Tero Kristo , Matthias Brugger Subject: [PATCH v2 11/13] crypto: qce - permit asynchronous skcipher as fallback Date: Sat, 27 Jun 2020 10:36:21 +0200 Message-Id: <20200627083623.2428333-12-ardb@kernel.org> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org> References: <20200627083623.2428333-1-ardb@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Even though the qce driver implements asynchronous versions of ecb(aes), cbc(aes)and xts(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens in cases where the request was made from softirq context, while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue, but potentially a security issue as well (due to the fact that table based AES is not time invariant), let's fix this, by allocating an ordinary skcipher as the fallback, and invoke it with the completion routine that was given to the outer request. 
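For illustration, the fallback handling applied by this and the other driver patches in the series, reduced to a minimal sketch; the my_* names are hypothetical and are not taken from the qce driver or any other driver touched here.

/*
 * Sketch of the async-fallback pattern: allocate an ordinary skcipher,
 * embed its request at the end of the driver's request context, and
 * forward the outer request's completion routine to it.
 */
#include <crypto/internal/skcipher.h>
#include <linux/err.h>

struct my_ctx {
	struct crypto_skcipher *fallback;	/* ordinary, possibly async, skcipher */
};

struct my_reqctx {
	/* driver-specific per-request state would live here */
	struct skcipher_request fallback_req;	/* keep at the end */
};

static int my_init_tfm(struct crypto_skcipher *tfm)
{
	struct my_ctx *ctx = crypto_skcipher_ctx(tfm);

	ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(&tfm->base),
					      0, CRYPTO_ALG_NEED_FALLBACK);
	if (IS_ERR(ctx->fallback))
		return PTR_ERR(ctx->fallback);

	/* reserve room behind our reqctx for the fallback's own reqctx */
	crypto_skcipher_set_reqsize(tfm, sizeof(struct my_reqctx) +
					 crypto_skcipher_reqsize(ctx->fallback));
	return 0;
}

static int my_fallback_crypt(struct skcipher_request *req, int enc)
{
	struct my_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
	struct my_reqctx *rctx = skcipher_request_ctx(req);

	skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
	/* forward the outer completion so the fallback may complete asynchronously */
	skcipher_request_set_callback(&rctx->fallback_req, req->base.flags,
				      req->base.complete, req->base.data);
	skcipher_request_set_crypt(&rctx->fallback_req, req->src, req->dst,
				   req->cryptlen, req->iv);

	return enc ? crypto_skcipher_encrypt(&rctx->fallback_req) :
		     crypto_skcipher_decrypt(&rctx->fallback_req);
}

Keeping fallback_req as the last member matters: crypto_skcipher_set_reqsize() reserves the fallback's own request context directly behind it, so the outer and inner requests share a single allocation.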
Signed-off-by: Ard Biesheuvel --- drivers/crypto/qce/cipher.h | 3 ++- drivers/crypto/qce/skcipher.c | 27 ++++++++++---------- 2 files changed, 16 insertions(+), 14 deletions(-) diff --git a/drivers/crypto/qce/cipher.h b/drivers/crypto/qce/cipher.h index 7770660bc853..cffa9fc628ff 100644 --- a/drivers/crypto/qce/cipher.h +++ b/drivers/crypto/qce/cipher.h @@ -14,7 +14,7 @@ struct qce_cipher_ctx { u8 enc_key[QCE_MAX_KEY_SIZE]; unsigned int enc_keylen; - struct crypto_sync_skcipher *fallback; + struct crypto_skcipher *fallback; }; /** @@ -43,6 +43,7 @@ struct qce_cipher_reqctx { struct sg_table src_tbl; struct scatterlist *src_sg; unsigned int cryptlen; + struct skcipher_request fallback_req; // keep at the end }; static inline struct qce_alg_template *to_cipher_tmpl(struct crypto_skcipher *tfm) diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c index 9412433f3b21..265afae29901 100644 --- a/drivers/crypto/qce/skcipher.c +++ b/drivers/crypto/qce/skcipher.c @@ -178,7 +178,7 @@ static int qce_skcipher_setkey(struct crypto_skcipher *ablk, const u8 *key, break; } - ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); + ret = crypto_skcipher_setkey(ctx->fallback, key, keylen); if (!ret) ctx->enc_keylen = keylen; return ret; @@ -235,16 +235,15 @@ static int qce_skcipher_crypt(struct skcipher_request *req, int encrypt) req->cryptlen <= aes_sw_max_len) || (IS_XTS(rctx->flags) && req->cryptlen > QCE_SECTOR_SIZE && req->cryptlen % QCE_SECTOR_SIZE))) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - ret = encrypt ? crypto_skcipher_encrypt(subreq) : - crypto_skcipher_decrypt(subreq); - skcipher_request_zero(subreq); + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + ret = encrypt ? 
crypto_skcipher_encrypt(&rctx->fallback_req) : + crypto_skcipher_decrypt(&rctx->fallback_req); return ret; } @@ -275,8 +274,10 @@ static int qce_skcipher_init_fallback(struct crypto_skcipher *tfm) struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); qce_skcipher_init(tfm); - ctx->fallback = crypto_alloc_sync_skcipher(crypto_tfm_alg_name(&tfm->base), + ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(&tfm->base), 0, CRYPTO_ALG_NEED_FALLBACK); + crypto_skcipher_set_reqsize(tfm, sizeof(struct qce_cipher_reqctx) + + crypto_skcipher_reqsize(ctx->fallback)); return PTR_ERR_OR_ZERO(ctx->fallback); } @@ -284,7 +285,7 @@ static void qce_skcipher_exit(struct crypto_skcipher *tfm) { struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); - crypto_free_sync_skcipher(ctx->fallback); + crypto_free_skcipher(ctx->fallback); } struct qce_skcipher_def { From patchwork Sat Jun 27 08:36:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11629439 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1013714E3 for ; Sat, 27 Jun 2020 08:37:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ED6C020DD4 for ; Sat, 27 Jun 2020 08:37:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247051; bh=1mKbpa4I4Sh07AGQPoHxLj1RNd0nCGZ8qWt4DEk3uDw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=jZFW1ZiKTo1MtwKdlkebR9FRO6t51VaZjokyd3s2gMOIhRfq3PrZqxYstulY/r0Pu X9+ZSOAnDB8YEBjmefM1HY/y1Ejc1/8P1Mp7J3toD1rRTesoojZEaNNqVg8hOgigWZ AzSTH6fL4iohdxaPTLH0BB8fBHm1Lo7MdsyC95zQ= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726389AbgF0Iha (ORCPT ); Sat, 27 Jun 2020 04:37:30 -0400 Received: from mail.kernel.org ([198.145.29.99]:33532 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726086AbgF0Iha (ORCPT ); Sat, 27 Jun 2020 04:37:30 -0400 Received: from dogfood.home (lfbn-nic-1-188-42.w2-15.abo.wanadoo.fr [2.15.37.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 0B600207FC; Sat, 27 Jun 2020 08:37:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247049; bh=1mKbpa4I4Sh07AGQPoHxLj1RNd0nCGZ8qWt4DEk3uDw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HciL4sMmgPCTWYfKn8WHC8Cev4c1ptta+YmLvQ1IjcYaB/cqNTB6GUa0rVz+wVnw/ n+tjZSG2qpgKjR00nqqKhrUGl7HDg6Y418OH41PBjrMwBnx+0EbPbWNToCoBEH5FU8 qvFtVw0AnHDBYYYAt+IPmOWLuFUvtFM7L6nBxWaM= From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org, linux-amlogic@lists.infradead.org, Ard Biesheuvel , Corentin Labbe , Herbert Xu , "David S. 
Miller" , Maxime Ripard , Chen-Yu Tsai , Tom Lendacky , Ayush Sawal , Vinay Kumar Yadav , Rohit Maheshwari , Shawn Guo , Sascha Hauer , Pengutronix Kernel Team , Fabio Estevam , NXP Linux Team , Jamie Iles , Eric Biggers , Tero Kristo , Matthias Brugger Subject: [PATCH v2 12/13] crypto: sahara - permit asynchronous skcipher as fallback Date: Sat, 27 Jun 2020 10:36:22 +0200 Message-Id: <20200627083623.2428333-13-ardb@kernel.org> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org> References: <20200627083623.2428333-1-ardb@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Even though the sahara driver implements asynchronous versions of ecb(aes) and cbc(aes), the fallbacks it allocates are required to be synchronous. Given that SIMD based software implementations are usually asynchronous as well, even though they rarely complete asynchronously (this typically only happens in cases where the request was made from softirq context, while SIMD was already in use in the task context that it interrupted), these implementations are disregarded, and either the generic C version or another table based version implemented in assembler is selected instead. Since falling back to synchronous AES is not only a performance issue, but potentially a security issue as well (due to the fact that table based AES is not time invariant), let's fix this, by allocating an ordinary skcipher as the fallback, and invoke it with the completion routine that was given to the outer request. Signed-off-by: Ard Biesheuvel --- drivers/crypto/sahara.c | 96 +++++++++----------- 1 file changed, 45 insertions(+), 51 deletions(-) diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c index 466e30bd529c..0c8cb23ae708 100644 --- a/drivers/crypto/sahara.c +++ b/drivers/crypto/sahara.c @@ -146,11 +146,12 @@ struct sahara_ctx { /* AES-specific context */ int keylen; u8 key[AES_KEYSIZE_128]; - struct crypto_sync_skcipher *fallback; + struct crypto_skcipher *fallback; }; struct sahara_aes_reqctx { unsigned long mode; + struct skcipher_request fallback_req; // keep at the end }; /* @@ -617,10 +618,10 @@ static int sahara_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, /* * The requested key size is not supported by HW, do a fallback. 
*/ - crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & + crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - return crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); + return crypto_skcipher_setkey(ctx->fallback, key, keylen); } static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode) @@ -651,21 +652,19 @@ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode) static int sahara_aes_ecb_encrypt(struct skcipher_request *req) { + struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req); struct sahara_ctx *ctx = crypto_skcipher_ctx( crypto_skcipher_reqtfm(req)); - int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - err = crypto_skcipher_encrypt(subreq); - skcipher_request_zero(subreq); - return err; + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + return crypto_skcipher_encrypt(&rctx->fallback_req); } return sahara_aes_crypt(req, FLAGS_ENCRYPT); @@ -673,21 +672,19 @@ static int sahara_aes_ecb_encrypt(struct skcipher_request *req) static int sahara_aes_ecb_decrypt(struct skcipher_request *req) { + struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req); struct sahara_ctx *ctx = crypto_skcipher_ctx( crypto_skcipher_reqtfm(req)); - int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - err = crypto_skcipher_decrypt(subreq); - skcipher_request_zero(subreq); - return err; + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + return crypto_skcipher_decrypt(&rctx->fallback_req); } return sahara_aes_crypt(req, 0); @@ -695,21 +692,19 @@ static int sahara_aes_ecb_decrypt(struct skcipher_request *req) static int sahara_aes_cbc_encrypt(struct skcipher_request *req) { + struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req); struct sahara_ctx *ctx = crypto_skcipher_ctx( crypto_skcipher_reqtfm(req)); - int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - err = crypto_skcipher_encrypt(subreq); - skcipher_request_zero(subreq); - return err; + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + 
req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + return crypto_skcipher_encrypt(&rctx->fallback_req); } return sahara_aes_crypt(req, FLAGS_ENCRYPT | FLAGS_CBC); @@ -717,21 +712,19 @@ static int sahara_aes_cbc_encrypt(struct skcipher_request *req) static int sahara_aes_cbc_decrypt(struct skcipher_request *req) { + struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req); struct sahara_ctx *ctx = crypto_skcipher_ctx( crypto_skcipher_reqtfm(req)); - int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - - skcipher_request_set_sync_tfm(subreq, ctx->fallback); - skcipher_request_set_callback(subreq, req->base.flags, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, - req->cryptlen, req->iv); - err = crypto_skcipher_decrypt(subreq); - skcipher_request_zero(subreq); - return err; + skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback); + skcipher_request_set_callback(&rctx->fallback_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, req->src, + req->dst, req->cryptlen, req->iv); + return crypto_skcipher_decrypt(&rctx->fallback_req); } return sahara_aes_crypt(req, FLAGS_CBC); @@ -742,14 +735,15 @@ static int sahara_aes_init_tfm(struct crypto_skcipher *tfm) const char *name = crypto_tfm_alg_name(&tfm->base); struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm); - ctx->fallback = crypto_alloc_sync_skcipher(name, 0, + ctx->fallback = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ctx->fallback)) { pr_err("Error allocating fallback algo %s\n", name); return PTR_ERR(ctx->fallback); } - crypto_skcipher_set_reqsize(tfm, sizeof(struct sahara_aes_reqctx)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct sahara_aes_reqctx) + + crypto_skcipher_reqsize(ctx->fallback)); return 0; } @@ -758,7 +752,7 @@ static void sahara_aes_exit_tfm(struct crypto_skcipher *tfm) { struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm); - crypto_free_sync_skcipher(ctx->fallback); + crypto_free_skcipher(ctx->fallback); } static u32 sahara_sha_init_hdr(struct sahara_dev *dev, From patchwork Sat Jun 27 08:36:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11629443 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EA44D14E3 for ; Sat, 27 Jun 2020 08:37:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D2B1620B80 for ; Sat, 27 Jun 2020 08:37:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247054; bh=zjbLkDlwCXNIbtATEiNU0StbDpQpMVzWqLK75QxxG/c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=v+JkBTV1cPjiyHU57IlgnEBnAP9MFN0vVncYHZ5erF3+Ssury6APr9hXrexbcaq5t dBSbinD4wH0EHKTyagB2IHhFEYlrZHCmn1bIH2iQqY24qMtZrzfpeOhovgJ4PjBWYU i2n+xx3fiyEqhqVeFxT1sf/sSpDwBCnvei82fo5I= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726443AbgF0Ihe (ORCPT ); Sat, 27 Jun 2020 04:37:34 -0400 Received: from mail.kernel.org ([198.145.29.99]:33612 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726086AbgF0Ihd (ORCPT ); Sat, 27 Jun 2020 04:37:33 -0400 Received: 
from dogfood.home (lfbn-nic-1-188-42.w2-15.abo.wanadoo.fr [2.15.37.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 7A1202088E; Sat, 27 Jun 2020 08:37:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1593247053; bh=zjbLkDlwCXNIbtATEiNU0StbDpQpMVzWqLK75QxxG/c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tChlkpS2IZp2KSfyxBkAGl1w2bjXvp7Og3a2jpgn2f0cYCmTNe5yV02fWcVAsYGqd 8d6NLDuF1hX8C0c4eBP9z4LwQWazC9hHpHLzU9a6mJ7nNfhyRzhXi6Hpq0fW4/mChO C8VEqFA/cNoH6xF2wul2e05fVmZQcU8aCN1kIT5E= From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org, linux-amlogic@lists.infradead.org, Ard Biesheuvel , Corentin Labbe , Herbert Xu , "David S. Miller" , Maxime Ripard , Chen-Yu Tsai , Tom Lendacky , Ayush Sawal , Vinay Kumar Yadav , Rohit Maheshwari , Shawn Guo , Sascha Hauer , Pengutronix Kernel Team , Fabio Estevam , NXP Linux Team , Jamie Iles , Eric Biggers , Tero Kristo , Matthias Brugger Subject: [PATCH v2 13/13] crypto: mediatek - use AES library for GCM key derivation Date: Sat, 27 Jun 2020 10:36:23 +0200 Message-Id: <20200627083623.2428333-14-ardb@kernel.org> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org> References: <20200627083623.2428333-1-ardb@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The Mediatek accelerator driver calls into a dynamically allocated skcipher of the ctr(aes) variety to perform GCM key derivation, which involves AES encryption of a single block consisting of NUL bytes. There is no point in using the skcipher API for this, so use the AES library interface instead. 
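For illustration, a minimal sketch of the key derivation this patch switches to, i.e. computing the GCM hash key H = E_K(0^128) with the AES library; gcm_derive_hash_key() is a hypothetical helper name, not a function added by the patch.

/*
 * Derive the GCM hash key by encrypting a single all-zero block with the
 * AES library interface, instead of round-tripping through a ctr(aes)
 * skcipher request.
 */
#include <crypto/aes.h>
#include <linux/string.h>

static int gcm_derive_hash_key(const u8 *key, unsigned int keylen,
			       u8 hash[AES_BLOCK_SIZE])
{
	struct crypto_aes_ctx aes_ctx;
	int err;

	err = aes_expandkey(&aes_ctx, key, keylen);
	if (err)
		return err;

	memset(hash, 0, AES_BLOCK_SIZE);
	aes_encrypt(&aes_ctx, hash, hash);	/* encrypt the all-zero block in place */

	memzero_explicit(&aes_ctx, sizeof(aes_ctx));	/* don't leave the key schedule around */
	return 0;
}

This avoids allocating a skcipher transform and waiting for a one-block request just to encrypt sixteen zero bytes.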
Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 3 +- drivers/crypto/mediatek/mtk-aes.c | 63 +++----------------- 2 files changed, 9 insertions(+), 57 deletions(-) diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 802b9ada4e9e..c8c3ebb248f8 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -756,10 +756,9 @@ config CRYPTO_DEV_ZYNQMP_AES config CRYPTO_DEV_MEDIATEK tristate "MediaTek's EIP97 Cryptographic Engine driver" depends on (ARM && ARCH_MEDIATEK) || COMPILE_TEST - select CRYPTO_AES + select CRYPTO_LIB_AES select CRYPTO_AEAD select CRYPTO_SKCIPHER - select CRYPTO_CTR select CRYPTO_SHA1 select CRYPTO_SHA256 select CRYPTO_SHA512 diff --git a/drivers/crypto/mediatek/mtk-aes.c b/drivers/crypto/mediatek/mtk-aes.c index 78d660d963e2..4ad3571ab6af 100644 --- a/drivers/crypto/mediatek/mtk-aes.c +++ b/drivers/crypto/mediatek/mtk-aes.c @@ -137,8 +137,6 @@ struct mtk_aes_gcm_ctx { u32 authsize; size_t textlen; - - struct crypto_skcipher *ctr; }; struct mtk_aes_drv { @@ -996,17 +994,8 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key, u32 keylen) { struct mtk_aes_base_ctx *ctx = crypto_aead_ctx(aead); - struct mtk_aes_gcm_ctx *gctx = mtk_aes_gcm_ctx_cast(ctx); - struct crypto_skcipher *ctr = gctx->ctr; - struct { - u32 hash[4]; - u8 iv[8]; - - struct crypto_wait wait; - - struct scatterlist sg[1]; - struct skcipher_request req; - } *data; + u8 hash[AES_BLOCK_SIZE] __aligned(4) = {}; + struct crypto_aes_ctx aes_ctx; int err; switch (keylen) { @@ -1026,39 +1015,18 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key, ctx->keylen = SIZE_IN_WORDS(keylen); - /* Same as crypto_gcm_setkey() from crypto/gcm.c */ - crypto_skcipher_clear_flags(ctr, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(ctr, crypto_aead_get_flags(aead) & - CRYPTO_TFM_REQ_MASK); - err = crypto_skcipher_setkey(ctr, key, keylen); + err = aes_expandkey(&aes_ctx, key, keylen); if (err) return err; - data = kzalloc(sizeof(*data) + crypto_skcipher_reqsize(ctr), - GFP_KERNEL); - if (!data) - return -ENOMEM; - - crypto_init_wait(&data->wait); - sg_init_one(data->sg, &data->hash, AES_BLOCK_SIZE); - skcipher_request_set_tfm(&data->req, ctr); - skcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP | - CRYPTO_TFM_REQ_MAY_BACKLOG, - crypto_req_done, &data->wait); - skcipher_request_set_crypt(&data->req, data->sg, data->sg, - AES_BLOCK_SIZE, data->iv); - - err = crypto_wait_req(crypto_skcipher_encrypt(&data->req), - &data->wait); - if (err) - goto out; + aes_encrypt(&aes_ctx, hash, hash); + memzero_explicit(&aes_ctx, sizeof(aes_ctx)); mtk_aes_write_state_le(ctx->key, (const u32 *)key, keylen); - mtk_aes_write_state_be(ctx->key + ctx->keylen, data->hash, + mtk_aes_write_state_be(ctx->key + ctx->keylen, (const u32 *)hash, AES_BLOCK_SIZE); -out: - kzfree(data); - return err; + + return 0; } static int mtk_aes_gcm_setauthsize(struct crypto_aead *aead, @@ -1095,32 +1063,17 @@ static int mtk_aes_gcm_init(struct crypto_aead *aead) { struct mtk_aes_gcm_ctx *ctx = crypto_aead_ctx(aead); - ctx->ctr = crypto_alloc_skcipher("ctr(aes)", 0, - CRYPTO_ALG_ASYNC); - if (IS_ERR(ctx->ctr)) { - pr_err("Error allocating ctr(aes)\n"); - return PTR_ERR(ctx->ctr); - } - crypto_aead_set_reqsize(aead, sizeof(struct mtk_aes_reqctx)); ctx->base.start = mtk_aes_gcm_start; return 0; } -static void mtk_aes_gcm_exit(struct crypto_aead *aead) -{ - struct mtk_aes_gcm_ctx *ctx = crypto_aead_ctx(aead); - - crypto_free_skcipher(ctx->ctr); -} - static struct aead_alg aes_gcm_alg = { 
 	.setkey		= mtk_aes_gcm_setkey,
 	.setauthsize	= mtk_aes_gcm_setauthsize,
 	.encrypt	= mtk_aes_gcm_encrypt,
 	.decrypt	= mtk_aes_gcm_decrypt,
 	.init		= mtk_aes_gcm_init,
-	.exit		= mtk_aes_gcm_exit,
 	.ivsize		= GCM_AES_IV_SIZE,
 	.maxauthsize	= AES_BLOCK_SIZE,