From patchwork Tue Jul 28 07:18:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688503 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BEEFD138A for ; Tue, 28 Jul 2020 07:18:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AE7C820792 for ; Tue, 28 Jul 2020 07:18:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727898AbgG1HSm (ORCPT ); Tue, 28 Jul 2020 03:18:42 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54726 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727888AbgG1HSm (ORCPT ); Tue, 28 Jul 2020 03:18:42 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jsl-0006Ho-Gf; Tue, 28 Jul 2020 17:18:40 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:18:39 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:18:39 +1000 Subject: [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Crypto skcipher algorithms in general allow chaining to break large operations into smaller ones based on multiples of the chunk size. However, some algorithms don't support chaining while others (such as cts) only support chaining for the leading blocks. This patch adds the necessary API support for these algorithms. In particular, a new request flag CRYPTO_TFM_REQ_MORE is added to allow chaining for algorithms such as cts that cannot otherwise be chained. A new algorithm attribute final_chunksize has also been added to indicate how many blocks at the end of a request that cannot be chained and therefore must be withheld if chaining is attempted. This attribute can also be used to indicate that no chaining is allowed. Its value should be set to -1 in that case. Signed-off-by: Herbert Xu --- include/crypto/skcipher.h | 24 ++++++++++++++++++++++++ include/linux/crypto.h | 1 + 2 files changed, 25 insertions(+) diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h index 5663f71198b37..fb90c3e1c26ba 100644 --- a/include/crypto/skcipher.h +++ b/include/crypto/skcipher.h @@ -97,6 +97,9 @@ struct crypto_sync_skcipher { * @walksize: Equal to the chunk size except in cases where the algorithm is * considerably more efficient if it can operate on multiple chunks * in parallel. Should be a multiple of chunksize. + * @final_chunksize: Number of bytes that must be processed together + * at the end. If set to -1 then chaining is not + * possible. * @base: Definition of a generic crypto algorithm. * * All fields except @ivsize are mandatory and must be filled. 
@@ -114,6 +117,7 @@ struct skcipher_alg { unsigned int ivsize; unsigned int chunksize; unsigned int walksize; + int final_chunksize; struct crypto_alg base; }; @@ -279,6 +283,11 @@ static inline unsigned int crypto_skcipher_alg_chunksize( return alg->chunksize; } +static inline int crypto_skcipher_alg_final_chunksize(struct skcipher_alg *alg) +{ + return alg->final_chunksize; +} + /** * crypto_skcipher_chunksize() - obtain chunk size * @tfm: cipher handle @@ -296,6 +305,21 @@ static inline unsigned int crypto_skcipher_chunksize( return crypto_skcipher_alg_chunksize(crypto_skcipher_alg(tfm)); } +/** + * crypto_skcipher_final_chunksize() - obtain number of final bytes + * @tfm: cipher handle + * + * For algorithms such as CTS the final chunks cannot be chained. + * This returns the number of final bytes that must be withheld + * when chaining. + * + * Return: number of final bytes + */ +static inline int crypto_skcipher_final_chunksize(struct crypto_skcipher *tfm) +{ + return crypto_skcipher_alg_final_chunksize(crypto_skcipher_alg(tfm)); +} + static inline unsigned int crypto_sync_skcipher_blocksize( struct crypto_sync_skcipher *tfm) { diff --git a/include/linux/crypto.h b/include/linux/crypto.h index ef90e07c9635c..2e624a1d4f832 100644 --- a/include/linux/crypto.h +++ b/include/linux/crypto.h @@ -141,6 +141,7 @@ #define CRYPTO_TFM_REQ_FORBID_WEAK_KEYS 0x00000100 #define CRYPTO_TFM_REQ_MAY_SLEEP 0x00000200 #define CRYPTO_TFM_REQ_MAY_BACKLOG 0x00000400 +#define CRYPTO_TFM_REQ_MORE 0x00000800 /* * Miscellaneous stuff. From patchwork Tue Jul 28 07:18:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688505 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6F4306C1 for ; Tue, 28 Jul 2020 07:18:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 60F9120792 for ; Tue, 28 Jul 2020 07:18:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727902AbgG1HSo (ORCPT ); Tue, 28 Jul 2020 03:18:44 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54730 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727858AbgG1HSo (ORCPT ); Tue, 28 Jul 2020 03:18:44 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jsn-0006Hw-QM; Tue, 28 Jul 2020 17:18:42 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:18:41 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:18:41 +1000 Subject: [v3 PATCH 2/31] crypto: algif_skcipher - Add support for final_chunksize References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands algif_skcipher assumes all algorithms support chaining. This patch teaches it about the new final_chunksize attribute which can be used to disable chaining on a given algorithm. It can also be used to support chaining on algorithms such as cts that cannot otherwise do chaining. 
For that case algif_skcipher will also now set the request flag CRYPTO_TFM_REQ_MORE when needed. Signed-off-by: Herbert Xu --- crypto/algif_skcipher.c | 28 ++++++++++++++++++++-------- 1 file changed, 20 insertions(+), 8 deletions(-) diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c index a51ba22fef58f..1d50f042dd319 100644 --- a/crypto/algif_skcipher.c +++ b/crypto/algif_skcipher.c @@ -57,12 +57,15 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg, struct af_alg_ctx *ctx = ask->private; struct crypto_skcipher *tfm = pask->private; unsigned int bs = crypto_skcipher_chunksize(tfm); + unsigned int rflags = CRYPTO_TFM_REQ_MAY_SLEEP; + int fc = crypto_skcipher_final_chunksize(tfm); + unsigned int min = bs + (fc > 0 ? fc : 0); struct af_alg_async_req *areq; int err = 0; size_t len = 0; - if (!ctx->init || (ctx->more && ctx->used < bs)) { - err = af_alg_wait_for_data(sk, flags, bs); + if (!ctx->init || (ctx->more && ctx->used < min)) { + err = af_alg_wait_for_data(sk, flags, min); if (err) return err; } @@ -78,13 +81,23 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg, if (err) goto free; + err = -EINVAL; + /* * If more buffers are to be expected to be processed, process only - * full block size buffers. + * full block size buffers and withhold data according to the final + * chunk size. */ - if (ctx->more || len < ctx->used) + if (ctx->more || len < ctx->used) { + if (fc < 0) + goto free; + + len -= fc; len -= len % bs; + rflags |= CRYPTO_TFM_REQ_MORE; + } + /* * Create a per request TX SGL for this request which tracks the * SG entries from the global TX SGL. @@ -116,8 +129,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg, areq->outlen = len; skcipher_request_set_callback(&areq->cra_u.skcipher_req, - CRYPTO_TFM_REQ_MAY_SLEEP, - af_alg_async_cb, areq); + rflags, af_alg_async_cb, areq); err = ctx->enc ? crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) : crypto_skcipher_decrypt(&areq->cra_u.skcipher_req); @@ -129,9 +141,9 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg, sock_put(sk); } else { /* Synchronous operation */ + rflags |= CRYPTO_TFM_REQ_MAY_BACKLOG; skcipher_request_set_callback(&areq->cra_u.skcipher_req, - CRYPTO_TFM_REQ_MAY_SLEEP | - CRYPTO_TFM_REQ_MAY_BACKLOG, + rflags, crypto_req_done, &ctx->wait); err = crypto_wait_req(ctx->enc ? 
crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) : From patchwork Tue Jul 28 07:18:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688507 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E80BF6C1 for ; Tue, 28 Jul 2020 07:18:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D782720792 for ; Tue, 28 Jul 2020 07:18:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727905AbgG1HSr (ORCPT ); Tue, 28 Jul 2020 03:18:47 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54742 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727858AbgG1HSr (ORCPT ); Tue, 28 Jul 2020 03:18:47 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jsq-0006I8-1l; Tue, 28 Jul 2020 17:18:45 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:18:44 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:18:44 +1000 Subject: [v3 PATCH 3/31] crypto: cts - Add support for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands cts cannot do chaining. That is, it always performs the cipher-text stealing at the end of a request. This patch adds support for chaining when the CRYPTO_TFM_REQ_MORE flag is set. It also sets final_chunksize so that data can be withheld by the caller to enable correct processing at the true end of a request.
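To make the intended calling convention concrete, here is a minimal caller-side sketch of how final_chunksize and CRYPTO_TFM_REQ_MORE fit together for cts(cbc(aes)). It is illustrative only and not part of the patch: the function name, the linear in-place buffer, the reuse of one IV buffer across segments and the synchronous wait are assumptions; real users (such as algif_skcipher in the previous patch) deal with scatterlists and partial data more carefully.

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

/* Illustrative sketch only: encrypt a linear buffer in place in two steps,
 * withholding the final chunk so the ciphertext stealing only happens on
 * the last, unflagged request. */
static int cts_encrypt_in_two_steps(struct crypto_skcipher *tfm,
				    struct skcipher_request *req,
				    u8 *buf, unsigned int total, u8 *iv)
{
	unsigned int bs = crypto_skcipher_chunksize(tfm);
	int fc = crypto_skcipher_final_chunksize(tfm);	/* 2 * bs for cts */
	unsigned int head = 0;
	DECLARE_CRYPTO_WAIT(wait);
	struct scatterlist sg;
	int err;

	if (fc >= 0 && total > bs + fc) {
		/* Withhold the final chunk and keep the head block-aligned. */
		head = total - fc;
		head -= head % bs;
	}

	if (head) {
		sg_init_one(&sg, buf, head);
		skcipher_request_set_callback(req,
					      CRYPTO_TFM_REQ_MAY_SLEEP |
					      CRYPTO_TFM_REQ_MORE,
					      crypto_req_done, &wait);
		skcipher_request_set_crypt(req, &sg, &sg, head, iv);
		err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
		if (err)
			return err;
	}

	/* Final (or only) request: no CRYPTO_TFM_REQ_MORE, stealing happens here. */
	sg_init_one(&sg, buf + head, total - head);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, total - head, iv);
	return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
}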
Signed-off-by: Herbert Xu --- crypto/cts.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/crypto/cts.c b/crypto/cts.c index 3766d47ebcc01..67990146c9b06 100644 --- a/crypto/cts.c +++ b/crypto/cts.c @@ -100,7 +100,7 @@ static int cts_cbc_encrypt(struct skcipher_request *req) struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req); struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct skcipher_request *subreq = &rctx->subreq; - int bsize = crypto_skcipher_blocksize(tfm); + int bsize = crypto_skcipher_chunksize(tfm); u8 d[MAX_CIPHER_BLOCKSIZE * 2] __aligned(__alignof__(u32)); struct scatterlist *sg; unsigned int offset; @@ -146,7 +146,7 @@ static int crypto_cts_encrypt(struct skcipher_request *req) struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req); struct crypto_cts_ctx *ctx = crypto_skcipher_ctx(tfm); struct skcipher_request *subreq = &rctx->subreq; - int bsize = crypto_skcipher_blocksize(tfm); + int bsize = crypto_skcipher_chunksize(tfm); unsigned int nbytes = req->cryptlen; unsigned int offset; @@ -155,7 +155,7 @@ static int crypto_cts_encrypt(struct skcipher_request *req) if (nbytes < bsize) return -EINVAL; - if (nbytes == bsize) { + if (nbytes == bsize || req->base.flags & CRYPTO_TFM_REQ_MORE) { skcipher_request_set_callback(subreq, req->base.flags, req->base.complete, req->base.data); @@ -181,7 +181,7 @@ static int cts_cbc_decrypt(struct skcipher_request *req) struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req); struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct skcipher_request *subreq = &rctx->subreq; - int bsize = crypto_skcipher_blocksize(tfm); + int bsize = crypto_skcipher_chunksize(tfm); u8 d[MAX_CIPHER_BLOCKSIZE * 2] __aligned(__alignof__(u32)); struct scatterlist *sg; unsigned int offset; @@ -240,7 +240,7 @@ static int crypto_cts_decrypt(struct skcipher_request *req) struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req); struct crypto_cts_ctx *ctx = crypto_skcipher_ctx(tfm); struct skcipher_request *subreq = &rctx->subreq; - int bsize = crypto_skcipher_blocksize(tfm); + int bsize = crypto_skcipher_chunksize(tfm); unsigned int nbytes = req->cryptlen; unsigned int offset; u8 *space; @@ -250,7 +250,7 @@ static int crypto_cts_decrypt(struct skcipher_request *req) if (nbytes < bsize) return -EINVAL; - if (nbytes == bsize) { + if (nbytes == bsize || req->base.flags & CRYPTO_TFM_REQ_MORE) { skcipher_request_set_callback(subreq, req->base.flags, req->base.complete, req->base.data); @@ -297,7 +297,7 @@ static int crypto_cts_init_tfm(struct crypto_skcipher *tfm) ctx->child = cipher; align = crypto_skcipher_alignmask(tfm); - bsize = crypto_skcipher_blocksize(cipher); + bsize = crypto_skcipher_chunksize(cipher); reqsize = ALIGN(sizeof(struct crypto_cts_reqctx) + crypto_skcipher_reqsize(cipher), crypto_tfm_ctx_alignment()) + @@ -359,11 +359,12 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb) goto err_free_inst; inst->alg.base.cra_priority = alg->base.cra_priority; - inst->alg.base.cra_blocksize = alg->base.cra_blocksize; + inst->alg.base.cra_blocksize = 1; inst->alg.base.cra_alignmask = alg->base.cra_alignmask; inst->alg.ivsize = alg->base.cra_blocksize; - inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg); + inst->alg.chunksize = alg->base.cra_blocksize; + inst->alg.final_chunksize = inst->alg.chunksize * 2; inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg); inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg); From patchwork Tue Jul 28 
07:18:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688509 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 21ADA6C1 for ; Tue, 28 Jul 2020 07:18:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 074C220792 for ; Tue, 28 Jul 2020 07:18:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727776AbgG1HSt (ORCPT ); Tue, 28 Jul 2020 03:18:49 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54750 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727072AbgG1HSt (ORCPT ); Tue, 28 Jul 2020 03:18:49 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jss-0006IS-GM; Tue, 28 Jul 2020 17:18:47 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:18:46 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:18:46 +1000 Subject: [v3 PATCH 4/31] crypto: arm64/aes-glue - Add support for chaining CTS References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands cts cannot do chaining. That is, it always performs the cipher-text stealing at the end of a request. This patch adds support for chaining when the CRYPTO_TFM_REQ_MORE flag is set. It also sets the final_chunksize so that data can be withheld by the caller to enable correct processing at the true end of a request.
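For reference, the convention an implementation follows under this series can be condensed into the check below. This is an illustrative sketch, not the driver's actual code: non-final segments carry CRYPTO_TFM_REQ_MORE, must be a whole number of blocks and are processed as plain CBC; the stealing is deferred to the unflagged final request.

#include <crypto/skcipher.h>

/* Illustrative only: validity check for one segment of a chained CTS request. */
static bool cts_segment_valid(struct skcipher_request *req)
{
	unsigned int bs = crypto_skcipher_chunksize(crypto_skcipher_reqtfm(req));

	if (req->cryptlen < bs)
		return false;
	if (req->base.flags & CRYPTO_TFM_REQ_MORE)
		return req->cryptlen % bs == 0;	/* no partial blocks before the end */
	return true;
}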
Signed-off-by: Herbert Xu --- arch/arm64/crypto/aes-glue.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index 395bbf64b2abb..f63feb00e354d 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -283,11 +283,15 @@ static int cts_cbc_encrypt(struct skcipher_request *req) skcipher_request_set_callback(&subreq, skcipher_request_flags(req), NULL, NULL); - if (req->cryptlen <= AES_BLOCK_SIZE) { - if (req->cryptlen < AES_BLOCK_SIZE) + if (req->cryptlen < AES_BLOCK_SIZE) + return -EINVAL; + + if (req->base.flags & CRYPTO_TFM_REQ_MORE) { + if (req->cryptlen & (AES_BLOCK_SIZE - 1)) return -EINVAL; + cbc_blocks += 2; + } else if (req->cryptlen == AES_BLOCK_SIZE) cbc_blocks = 1; - } if (cbc_blocks > 0) { skcipher_request_set_crypt(&subreq, req->src, req->dst, @@ -299,7 +303,8 @@ static int cts_cbc_encrypt(struct skcipher_request *req) if (err) return err; - if (req->cryptlen == AES_BLOCK_SIZE) + if (req->cryptlen == AES_BLOCK_SIZE || + req->base.flags & CRYPTO_TFM_REQ_MORE) return 0; dst = src = scatterwalk_ffwd(sg_src, req->src, subreq.cryptlen); @@ -738,13 +743,15 @@ static struct skcipher_alg aes_algs[] = { { .cra_driver_name = "__cts-cbc-aes-" MODE, .cra_priority = PRIO, .cra_flags = CRYPTO_ALG_INTERNAL, - .cra_blocksize = AES_BLOCK_SIZE, + .cra_blocksize = 1, .cra_ctxsize = sizeof(struct crypto_aes_ctx), .cra_module = THIS_MODULE, }, .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .final_chunksize = 2 * AES_BLOCK_SIZE, .walksize = 2 * AES_BLOCK_SIZE, .setkey = skcipher_aes_setkey, .encrypt = cts_cbc_encrypt, From patchwork Tue Jul 28 07:18:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688511 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0464D138A for ; Tue, 28 Jul 2020 07:18:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EADC5207E8 for ; Tue, 28 Jul 2020 07:18:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727906AbgG1HSw (ORCPT ); Tue, 28 Jul 2020 03:18:52 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54758 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727904AbgG1HSw (ORCPT ); Tue, 28 Jul 2020 03:18:52 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jsu-0006JL-QP; Tue, 28 Jul 2020 17:18:49 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:18:48 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:18:48 +1000 Subject: [v3 PATCH 5/31] crypto: nitrox - Add support for chaining CTS References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands cts cannot do chaining. That is, it always performs the cipher-text stealing at the end of a request. This patch adds support for chaining when the CRYPTO_TM_REQ_MORE flag is set. 
It also sets the final_chunksize so that data can be withheld by the caller to enable correct processing at the true end of a request. Signed-off-by: Herbert Xu --- drivers/crypto/cavium/nitrox/nitrox_skcipher.c | 124 ++++++++++++++++++++++--- 1 file changed, 113 insertions(+), 11 deletions(-) diff --git a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c index a553ac65f3249..7a159a5da30a0 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c +++ b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c @@ -20,6 +20,16 @@ struct nitrox_cipher { enum flexi_cipher value; }; +struct nitrox_crypto_cts_ctx { + struct nitrox_crypto_ctx base; + union { + u8 *u8p; + u64 ctx_handle; + struct flexi_crypto_context *fctx; + } cbc; + struct crypto_ctx_hdr *cbchdr; +}; + /** * supported cipher list */ @@ -105,6 +115,18 @@ static void nitrox_cbc_cipher_callback(void *arg, int err) nitrox_skcipher_callback(arg, err); } +static void nitrox_cts_cipher_callback(void *arg, int err) +{ + struct skcipher_request *skreq = arg; + + if (skreq->base.flags & CRYPTO_TFM_REQ_MORE) { + nitrox_cbc_cipher_callback(arg, err); + return; + } + + nitrox_skcipher_callback(arg, err); +} + static int nitrox_skcipher_init(struct crypto_skcipher *tfm) { struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(tfm); @@ -162,6 +184,42 @@ static void nitrox_skcipher_exit(struct crypto_skcipher *tfm) nctx->ndev = NULL; } +static int nitrox_cts_init(struct crypto_skcipher *tfm) +{ + struct nitrox_crypto_cts_ctx *ctsctx = crypto_skcipher_ctx(tfm); + struct nitrox_crypto_ctx *nctx = &ctsctx->base; + struct crypto_ctx_hdr *chdr; + int err; + + err = nitrox_skcipher_init(tfm); + if (err) + return err; + + chdr = crypto_alloc_context(nctx->ndev); + if (!chdr) { + nitrox_skcipher_exit(tfm); + return -ENOMEM; + } + + ctsctx->cbchdr = chdr; + ctsctx->cbc.u8p = chdr->vaddr; + ctsctx->cbc.u8p += sizeof(struct ctx_hdr); + nctx->callback = nitrox_cts_cipher_callback; + return 0; +} + +static void nitrox_cts_exit(struct crypto_skcipher *tfm) +{ + struct nitrox_crypto_cts_ctx *ctsctx = crypto_skcipher_ctx(tfm); + struct flexi_crypto_context *fctx = ctsctx->cbc.fctx; + + memset(&fctx->crypto, 0, sizeof(struct crypto_keys)); + memset(&fctx->auth, 0, sizeof(struct auth_keys)); + crypto_free_context(ctsctx->cbchdr); + + nitrox_skcipher_exit(tfm); +} + static inline int nitrox_skcipher_setkey(struct crypto_skcipher *cipher, int aes_keylen, const u8 *key, unsigned int keylen) @@ -244,7 +302,8 @@ static int alloc_dst_sglist(struct skcipher_request *skreq, int ivsize) return 0; } -static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc) +static int nitrox_skcipher_crypt_handle(struct skcipher_request *skreq, + bool enc, u64 handle) { struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq); struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(cipher); @@ -269,7 +328,7 @@ static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc) creq->gph.param2 = cpu_to_be16(ivsize); creq->gph.param3 = 0; - creq->ctx_handle = nctx->u.ctx_handle; + creq->ctx_handle = handle; creq->ctrl.s.ctxl = sizeof(struct flexi_crypto_context); ret = alloc_src_sglist(skreq, ivsize); @@ -287,7 +346,16 @@ static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc) skreq); } -static int nitrox_cbc_decrypt(struct skcipher_request *skreq) +static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc) +{ + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq); + struct 
nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(cipher); + + return nitrox_skcipher_crypt_handle(skreq, enc, nctx->u.ctx_handle); +} + +static int nitrox_cbc_decrypt_handle(struct skcipher_request *skreq, + u64 handle) { struct nitrox_kcrypt_request *nkreq = skcipher_request_ctx(skreq); struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq); @@ -297,14 +365,46 @@ static int nitrox_cbc_decrypt(struct skcipher_request *skreq) unsigned int start = skreq->cryptlen - ivsize; if (skreq->src != skreq->dst) - return nitrox_skcipher_crypt(skreq, false); + return nitrox_skcipher_crypt_handle(skreq, false, handle); nkreq->iv_out = kmalloc(ivsize, flags); if (!nkreq->iv_out) return -ENOMEM; scatterwalk_map_and_copy(nkreq->iv_out, skreq->src, start, ivsize, 0); - return nitrox_skcipher_crypt(skreq, false); + return nitrox_skcipher_crypt_handle(skreq, false, handle); +} + +static int nitrox_cbc_decrypt(struct skcipher_request *skreq) +{ + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq); + struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(cipher); + + return nitrox_cbc_decrypt_handle(skreq, nctx->u.ctx_handle); +} + +static int nitrox_cts_encrypt(struct skcipher_request *skreq) +{ + if (skreq->base.flags & CRYPTO_TFM_REQ_MORE) { + struct nitrox_crypto_cts_ctx *ctsctx; + + ctsctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(skreq)); + return nitrox_skcipher_crypt_handle(skreq, true, + ctsctx->cbc.ctx_handle); + } + + return nitrox_skcipher_crypt(skreq, true); +} + +static int nitrox_cts_decrypt(struct skcipher_request *skreq) +{ + struct nitrox_crypto_cts_ctx *ctsctx; + + if (!(skreq->base.flags & CRYPTO_TFM_REQ_MORE)) + return nitrox_skcipher_crypt(skreq, false); + + ctsctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(skreq)); + return nitrox_cbc_decrypt_handle(skreq, ctsctx->cbc.ctx_handle); } static int nitrox_aes_encrypt(struct skcipher_request *skreq) @@ -484,19 +584,21 @@ static struct skcipher_alg nitrox_skciphers[] = { { .cra_driver_name = "n5_cts(cbc(aes))", .cra_priority = PRIO, .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct nitrox_crypto_cts_ctx), .cra_alignmask = 0, .cra_module = THIS_MODULE, }, .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .final_chunksize = 2 * AES_BLOCK_SIZE, .setkey = nitrox_aes_setkey, - .encrypt = nitrox_aes_encrypt, - .decrypt = nitrox_aes_decrypt, - .init = nitrox_skcipher_init, - .exit = nitrox_skcipher_exit, + .encrypt = nitrox_cts_encrypt, + .decrypt = nitrox_cts_decrypt, + .init = nitrox_cts_init, + .exit = nitrox_cts_exit, }, { .base = { .cra_name = "cbc(des3_ede)", From patchwork Tue Jul 28 07:18:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688513 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 73D5E138A for ; Tue, 28 Jul 2020 07:18:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 60119207E8 for ; Tue, 28 Jul 2020 07:18:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727907AbgG1HSy (ORCPT ); Tue, 28 Jul 2020 03:18:54 -0400 
Received: from helcar.hmeau.com ([216.24.177.18]:54764 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727904AbgG1HSy (ORCPT ); Tue, 28 Jul 2020 03:18:54 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jsx-0006Jt-2z; Tue, 28 Jul 2020 17:18:52 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:18:51 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:18:51 +1000 Subject: [v3 PATCH 6/31] crypto: ccree - Add support for chaining CTS References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands cts cannot do chaining. That is, it always performs the cipher-text stealing at the end of a request. This patch adds support for chaining when the CRYPTO_TM_REQ_MORE flag is set. It also sets the final_chunksize so that data can be withheld by the caller to enable correct processing at the true end of a request. Signed-off-by: Herbert Xu --- drivers/crypto/ccree/cc_cipher.c | 72 +++++++++++++++++++++++++-------------- 1 file changed, 47 insertions(+), 25 deletions(-) diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c index beeb283c3c949..83567b60d6908 100644 --- a/drivers/crypto/ccree/cc_cipher.c +++ b/drivers/crypto/ccree/cc_cipher.c @@ -61,9 +61,9 @@ struct cc_cipher_ctx { static void cc_cipher_complete(struct device *dev, void *cc_req, int err); -static inline enum cc_key_type cc_key_type(struct crypto_tfm *tfm) +static inline enum cc_key_type cc_key_type(struct crypto_skcipher *tfm) { - struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm); return ctx_p->key_type; } @@ -105,12 +105,26 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size) return -EINVAL; } -static int validate_data_size(struct cc_cipher_ctx *ctx_p, +static inline int req_cipher_mode(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm); + int cipher_mode = ctx_p->cipher_mode; + + if (cipher_mode == DRV_CIPHER_CBC_CTS && + req->base.flags & CRYPTO_TFM_REQ_MORE) + cipher_mode = DRV_CIPHER_CBC; + + return cipher_mode; +} + +static int validate_data_size(struct skcipher_request *req, + struct cc_cipher_ctx *ctx_p, unsigned int size) { switch (ctx_p->flow_mode) { case S_DIN_to_AES: - switch (ctx_p->cipher_mode) { + switch (req_cipher_mode(req)) { case DRV_CIPHER_XTS: case DRV_CIPHER_CBC_CTS: if (size >= AES_BLOCK_SIZE) @@ -508,17 +522,18 @@ static int cc_out_setup_mode(struct cc_cipher_ctx *ctx_p) } } -static void cc_setup_readiv_desc(struct crypto_tfm *tfm, +static void cc_setup_readiv_desc(struct skcipher_request *req, struct cipher_req_ctx *req_ctx, unsigned int ivsize, struct cc_hw_desc desc[], unsigned int *seq_size) { - struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm); struct device *dev = drvdata_to_dev(ctx_p->drvdata); - int cipher_mode = ctx_p->cipher_mode; int flow_mode = cc_out_setup_mode(ctx_p); int direction = req_ctx->gen_ctx.op_type; dma_addr_t iv_dma_addr = req_ctx->gen_ctx.iv_dma_addr; + int cipher_mode = req_cipher_mode(req); if 
(ctx_p->key_type == CC_POLICY_PROTECTED_KEY) return; @@ -565,15 +580,16 @@ static void cc_setup_readiv_desc(struct crypto_tfm *tfm, } -static void cc_setup_state_desc(struct crypto_tfm *tfm, +static void cc_setup_state_desc(struct skcipher_request *req, struct cipher_req_ctx *req_ctx, unsigned int ivsize, unsigned int nbytes, struct cc_hw_desc desc[], unsigned int *seq_size) { - struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm); struct device *dev = drvdata_to_dev(ctx_p->drvdata); - int cipher_mode = ctx_p->cipher_mode; + int cipher_mode = req_cipher_mode(req); int flow_mode = ctx_p->flow_mode; int direction = req_ctx->gen_ctx.op_type; dma_addr_t iv_dma_addr = req_ctx->gen_ctx.iv_dma_addr; @@ -610,15 +626,16 @@ static void cc_setup_state_desc(struct crypto_tfm *tfm, } -static void cc_setup_xex_state_desc(struct crypto_tfm *tfm, +static void cc_setup_xex_state_desc(struct skcipher_request *req, struct cipher_req_ctx *req_ctx, unsigned int ivsize, unsigned int nbytes, struct cc_hw_desc desc[], unsigned int *seq_size) { - struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm); struct device *dev = drvdata_to_dev(ctx_p->drvdata); - int cipher_mode = ctx_p->cipher_mode; + int cipher_mode = req_cipher_mode(req); int flow_mode = ctx_p->flow_mode; int direction = req_ctx->gen_ctx.op_type; dma_addr_t key_dma_addr = ctx_p->user.key_dma_addr; @@ -628,8 +645,8 @@ static void cc_setup_xex_state_desc(struct crypto_tfm *tfm, unsigned int key_offset = key_len; struct cc_crypto_alg *cc_alg = - container_of(tfm->__crt_alg, struct cc_crypto_alg, - skcipher_alg.base); + container_of(crypto_skcipher_alg(tfm), struct cc_crypto_alg, + skcipher_alg); if (cc_alg->data_unit) du_size = cc_alg->data_unit; @@ -697,14 +714,15 @@ static int cc_out_flow_mode(struct cc_cipher_ctx *ctx_p) } } -static void cc_setup_key_desc(struct crypto_tfm *tfm, +static void cc_setup_key_desc(struct skcipher_request *req, struct cipher_req_ctx *req_ctx, unsigned int nbytes, struct cc_hw_desc desc[], unsigned int *seq_size) { - struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm); struct device *dev = drvdata_to_dev(ctx_p->drvdata); - int cipher_mode = ctx_p->cipher_mode; + int cipher_mode = req_cipher_mode(req); int flow_mode = ctx_p->flow_mode; int direction = req_ctx->gen_ctx.op_type; dma_addr_t key_dma_addr = ctx_p->user.key_dma_addr; @@ -912,7 +930,7 @@ static int cc_cipher_process(struct skcipher_request *req, /* STAT_PHASE_0: Init and sanity checks */ - if (validate_data_size(ctx_p, nbytes)) { + if (validate_data_size(req, ctx_p, nbytes)) { dev_dbg(dev, "Unsupported data size %d.\n", nbytes); rc = -EINVAL; goto exit_process; @@ -969,17 +987,17 @@ static int cc_cipher_process(struct skcipher_request *req, /* STAT_PHASE_2: Create sequence */ /* Setup state (IV) */ - cc_setup_state_desc(tfm, req_ctx, ivsize, nbytes, desc, &seq_len); + cc_setup_state_desc(req, req_ctx, ivsize, nbytes, desc, &seq_len); /* Setup MLLI line, if needed */ cc_setup_mlli_desc(tfm, req_ctx, dst, src, nbytes, req, desc, &seq_len); /* Setup key */ - cc_setup_key_desc(tfm, req_ctx, nbytes, desc, &seq_len); + cc_setup_key_desc(req, req_ctx, nbytes, desc, &seq_len); /* Setup state (IV and XEX key) */ - 
cc_setup_xex_state_desc(tfm, req_ctx, ivsize, nbytes, desc, &seq_len); + cc_setup_xex_state_desc(req, req_ctx, ivsize, nbytes, desc, &seq_len); /* Data processing */ cc_setup_flow_desc(tfm, req_ctx, dst, src, nbytes, desc, &seq_len); /* Read next IV */ - cc_setup_readiv_desc(tfm, req_ctx, ivsize, desc, &seq_len); + cc_setup_readiv_desc(req, req_ctx, ivsize, desc, &seq_len); /* STAT_PHASE_3: Lock HW and push sequence */ @@ -1113,7 +1131,7 @@ static const struct cc_alg_template skcipher_algs[] = { { .name = "cts(cbc(paes))", .driver_name = "cts-cbc-paes-ccree", - .blocksize = AES_BLOCK_SIZE, + .blocksize = 1, .template_skcipher = { .setkey = cc_cipher_sethkey, .encrypt = cc_cipher_encrypt, @@ -1121,6 +1139,8 @@ static const struct cc_alg_template skcipher_algs[] = { .min_keysize = CC_HW_KEY_SIZE, .max_keysize = CC_HW_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .final_chunksize = 2 * AES_BLOCK_SIZE, }, .cipher_mode = DRV_CIPHER_CBC_CTS, .flow_mode = S_DIN_to_AES, @@ -1238,7 +1258,7 @@ static const struct cc_alg_template skcipher_algs[] = { { .name = "cts(cbc(aes))", .driver_name = "cts-cbc-aes-ccree", - .blocksize = AES_BLOCK_SIZE, + .blocksize = 1, .template_skcipher = { .setkey = cc_cipher_setkey, .encrypt = cc_cipher_encrypt, @@ -1246,6 +1266,8 @@ static const struct cc_alg_template skcipher_algs[] = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .final_chunksize = 2 * AES_BLOCK_SIZE, }, .cipher_mode = DRV_CIPHER_CBC_CTS, .flow_mode = S_DIN_to_AES, From patchwork Tue Jul 28 07:18:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688515 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1F73F6C1 for ; Tue, 28 Jul 2020 07:18:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0890A21744 for ; Tue, 28 Jul 2020 07:18:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726990AbgG1HS4 (ORCPT ); Tue, 28 Jul 2020 03:18:56 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54774 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727904AbgG1HS4 (ORCPT ); Tue, 28 Jul 2020 03:18:56 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jsz-0006Ki-GN; Tue, 28 Jul 2020 17:18:54 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:18:53 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:18:53 +1000 Subject: [v3 PATCH 7/31] crypto: skcipher - Add alg reqsize field References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it is the reqsize field only exists on the tfm object which means that in order to set it you must provide an init function for the tfm, even if the size is actually static. This patch adds a reqsize field to the skcipher alg object which allows it to be set without having an init function. 
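As an illustration of what this saves, consider an algorithm whose only reason for having an init hook is to publish its per-request context size. The names below are hypothetical, not taken from any in-tree driver.

#include <crypto/internal/skcipher.h>

/* Hypothetical per-request context, for illustration only. */
struct example_reqctx {
	u32 counter;
	bool init;
};

/* Previously a tfm init hook was needed even though the size is static: */
static int example_init_tfm(struct crypto_skcipher *tfm)
{
	crypto_skcipher_set_reqsize(tfm, sizeof(struct example_reqctx));
	return 0;
}

/*
 * With the new field the hook can be dropped and the algorithm simply
 * declares, in its struct skcipher_alg definition:
 *
 *	.reqsize = sizeof(struct example_reqctx),
 */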
Signed-off-by: Herbert Xu --- crypto/skcipher.c | 4 +++- include/crypto/skcipher.h | 2 ++ 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/crypto/skcipher.c b/crypto/skcipher.c index 467af525848a1..3bfa06fd25600 100644 --- a/crypto/skcipher.c +++ b/crypto/skcipher.c @@ -668,6 +668,8 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm) struct crypto_skcipher *skcipher = __crypto_skcipher_cast(tfm); struct skcipher_alg *alg = crypto_skcipher_alg(skcipher); + skcipher->reqsize = alg->reqsize; + skcipher_set_needkey(skcipher); if (alg->exit) @@ -797,7 +799,7 @@ static int skcipher_prepare_alg(struct skcipher_alg *alg) struct crypto_alg *base = &alg->base; if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 || - alg->walksize > PAGE_SIZE / 8) + alg->walksize > PAGE_SIZE / 8 || alg->reqsize > PAGE_SIZE / 8) return -EINVAL; if (!alg->chunksize) diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h index fb90c3e1c26ba..c46ea1c157b29 100644 --- a/include/crypto/skcipher.h +++ b/include/crypto/skcipher.h @@ -100,6 +100,7 @@ struct crypto_sync_skcipher { * @final_chunksize: Number of bytes that must be processed together * at the end. If set to -1 then chaining is not * possible. + * @reqsize: Size of the request data structure. * @base: Definition of a generic crypto algorithm. * * All fields except @ivsize are mandatory and must be filled. @@ -118,6 +119,7 @@ struct skcipher_alg { unsigned int chunksize; unsigned int walksize; int final_chunksize; + unsigned int reqsize; struct crypto_alg base; }; From patchwork Tue Jul 28 07:18:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688517 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5BCF7138A for ; Tue, 28 Jul 2020 07:19:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4D9D820792 for ; Tue, 28 Jul 2020 07:19:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727877AbgG1HS7 (ORCPT ); Tue, 28 Jul 2020 03:18:59 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54782 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727904AbgG1HS6 (ORCPT ); Tue, 28 Jul 2020 03:18:58 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jt1-0006LB-SV; Tue, 28 Jul 2020 17:18:56 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:18:55 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:18:55 +1000 Subject: [v3 PATCH 8/31] crypto: skcipher - Initialise requests to zero References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch initialises skcipher requests to zero. This allows algorithms to distinguish between the first operation versus subsequent ones. 
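A short sketch of why zeroed requests matter for chaining (illustrative names, not from any in-tree algorithm): per-request state can carry an "initialised" flag that is reliably false the first time a request is used, and is kept set only while the caller indicates that more data follows.

#include <crypto/internal/skcipher.h>
#include <linux/string.h>

/* Illustrative per-request state. */
struct example_reqctx {
	u32 stream_state[16];
	bool init;
};

static void example_prepare(struct skcipher_request *req)
{
	struct example_reqctx *rctx = skcipher_request_ctx(req);

	/*
	 * Requests are now zero-initialised, so rctx->init is reliably
	 * false on the first operation of a chain.
	 */
	if (!rctx->init) {
		/* Stand-in for deriving the real state from the key and IV. */
		memset(rctx->stream_state, 0, sizeof(rctx->stream_state));
	}

	/* Carry the state forward only if more data will follow. */
	rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;
}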
Signed-off-by: Herbert Xu --- include/crypto/skcipher.h | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h index c46ea1c157b29..6db5f83d6e482 100644 --- a/include/crypto/skcipher.h +++ b/include/crypto/skcipher.h @@ -129,13 +129,14 @@ struct skcipher_alg { * This performs a type-check against the "tfm" argument to make sure * all users have the correct skcipher tfm for doing on-stack requests. */ -#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, tfm) \ - char __##name##_desc[sizeof(struct skcipher_request) + \ - MAX_SYNC_SKCIPHER_REQSIZE + \ - (!(sizeof((struct crypto_sync_skcipher *)1 == \ - (typeof(tfm))1))) \ - ] CRYPTO_MINALIGN_ATTR; \ - struct skcipher_request *name = (void *)__##name##_desc +#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, sync) \ + struct { \ + struct skcipher_request req; \ + char ext[MAX_SYNC_SKCIPHER_REQSIZE]; \ + } __##name##_desc = { \ + .req.base.tfm = crypto_skcipher_tfm(&sync->base), \ + }; \ + struct skcipher_request *name = &__##name##_desc.req /** * DOC: Symmetric Key Cipher API @@ -519,8 +520,7 @@ static inline struct skcipher_request *skcipher_request_alloc( { struct skcipher_request *req; - req = kmalloc(sizeof(struct skcipher_request) + - crypto_skcipher_reqsize(tfm), gfp); + req = kzalloc(sizeof(*req) + crypto_skcipher_reqsize(tfm), gfp); if (likely(req)) skcipher_request_set_tfm(req, tfm); From patchwork Tue Jul 28 07:18:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688519 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 102D4138A for ; Tue, 28 Jul 2020 07:19:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EB3DE20792 for ; Tue, 28 Jul 2020 07:19:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727923AbgG1HTB (ORCPT ); Tue, 28 Jul 2020 03:19:01 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54790 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727904AbgG1HTB (ORCPT ); Tue, 28 Jul 2020 03:19:01 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jt4-0006Lq-5D; Tue, 28 Jul 2020 17:18:59 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:18:58 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:18:58 +1000 Subject: [v3 PATCH 9/31] crypto: cryptd - Add support for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch makes cryptd pass along the CRYPTO_TFM_REQ_MORE flag to its child skcipher as well as inheriting the final chunk size from it. 
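The forwarding idiom looks roughly like the sketch below; it illustrates the pattern used in the diff that follows rather than quoting it, and the function name is made up. Only the chaining hint is inherited from the caller's flags; everything else is chosen by the wrapper itself.

#include <crypto/skcipher.h>

/* Illustrative sketch of forwarding CRYPTO_TFM_REQ_MORE to a sub-request. */
static void example_fill_subrequest(struct skcipher_request *req,
				    struct skcipher_request *subreq)
{
	unsigned int flags = req->base.flags;

	flags &= CRYPTO_TFM_REQ_MORE;		/* keep only the chaining hint */
	flags |= CRYPTO_TFM_REQ_MAY_SLEEP;	/* the worker runs in process context */

	skcipher_request_set_callback(subreq, flags, NULL, NULL);
	skcipher_request_set_crypt(subreq, req->src, req->dst,
				   req->cryptlen, req->iv);
}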
Signed-off-by: Herbert Xu --- crypto/cryptd.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/crypto/cryptd.c b/crypto/cryptd.c index a1bea0f4baa88..510c23b320082 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c @@ -261,13 +261,16 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base, struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); struct crypto_sync_skcipher *child = ctx->child; SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child); + unsigned int flags = req->base.flags; if (unlikely(err == -EINPROGRESS)) goto out; + flags &= CRYPTO_TFM_REQ_MORE; + flags |= CRYPTO_TFM_REQ_MAY_SLEEP; + skcipher_request_set_sync_tfm(subreq, child); - skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, - NULL, NULL); + skcipher_request_set_callback(subreq, flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, req->iv); @@ -289,13 +292,16 @@ static void cryptd_skcipher_decrypt(struct crypto_async_request *base, struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); struct crypto_sync_skcipher *child = ctx->child; SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child); + unsigned int flags = req->base.flags; if (unlikely(err == -EINPROGRESS)) goto out; + flags &= CRYPTO_TFM_REQ_MORE; + flags |= CRYPTO_TFM_REQ_MAY_SLEEP; + skcipher_request_set_sync_tfm(subreq, child); - skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, - NULL, NULL); + skcipher_request_set_callback(subreq, flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, req->iv); @@ -400,6 +406,7 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl, (alg->base.cra_flags & CRYPTO_ALG_INTERNAL); inst->alg.ivsize = crypto_skcipher_alg_ivsize(alg); inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg); + inst->alg.final_chunksize = crypto_skcipher_alg_final_chunksize(alg); inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg); inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg); From patchwork Tue Jul 28 07:19:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688521 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 94FEC138C for ; Tue, 28 Jul 2020 07:19:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 86937207F5 for ; Tue, 28 Jul 2020 07:19:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727929AbgG1HTD (ORCPT ); Tue, 28 Jul 2020 03:19:03 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54798 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727924AbgG1HTD (ORCPT ); Tue, 28 Jul 2020 03:19:03 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jt6-0006MN-FB; Tue, 28 Jul 2020 17:19:01 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:00 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:00 +1000 Subject: [v3 PATCH 10/31] crypto: chacha-generic - Add support for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: 
linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands chacha cannot do chaining. That is, it has to handle each request as a whole. This patch adds support for chaining when the CRYPTO_TFM_REQ_MORE flag is set. Signed-off-by: Herbert Xu --- crypto/chacha_generic.c | 43 +++++++++++++++++++++++++-------------- include/crypto/internal/chacha.h | 8 ++++++- 2 files changed, 35 insertions(+), 16 deletions(-) diff --git a/crypto/chacha_generic.c b/crypto/chacha_generic.c index 8beea79ab1178..f74ac54d7aa5b 100644 --- a/crypto/chacha_generic.c +++ b/crypto/chacha_generic.c @@ -6,22 +6,21 @@ * Copyright (C) 2018 Google LLC */ -#include -#include #include -#include +#include #include +#include -static int chacha_stream_xor(struct skcipher_request *req, - const struct chacha_ctx *ctx, const u8 *iv) +static int chacha_stream_xor(struct skcipher_request *req, int nrounds) { + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct skcipher_walk walk; - u32 state[16]; + u32 *state = rctx->state; int err; - err = skcipher_walk_virt(&walk, req, false); + rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE; - chacha_init_generic(state, ctx->key, iv); + err = skcipher_walk_virt(&walk, req, false); while (walk.nbytes > 0) { unsigned int nbytes = walk.nbytes; @@ -30,7 +29,7 @@ static int chacha_stream_xor(struct skcipher_request *req, nbytes = round_down(nbytes, CHACHA_BLOCK_SIZE); chacha_crypt_generic(state, walk.dst.virt.addr, - walk.src.virt.addr, nbytes, ctx->nrounds); + walk.src.virt.addr, nbytes, nrounds); err = skcipher_walk_done(&walk, walk.nbytes - nbytes); } @@ -40,30 +39,41 @@ static int chacha_stream_xor(struct skcipher_request *req, static int crypto_chacha_crypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - return chacha_stream_xor(req, ctx, req->iv); + if (!rctx->init) + chacha_init_generic(rctx->state, ctx->key, req->iv); + + return chacha_stream_xor(req, ctx->nrounds); } static int crypto_xchacha_crypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - struct chacha_ctx subctx; - u32 state[16]; + int nrounds = ctx->nrounds; + u32 *state = rctx->state; u8 real_iv[16]; + u32 key[8]; + + if (rctx->init) + goto skip_init; /* Compute the subkey given the original key and first 128 nonce bits */ chacha_init_generic(state, ctx->key, req->iv); - hchacha_block_generic(state, subctx.key, ctx->nrounds); - subctx.nrounds = ctx->nrounds; + hchacha_block_generic(state, key, nrounds); /* Build the real IV */ memcpy(&real_iv[0], req->iv + 24, 8); /* stream position */ memcpy(&real_iv[8], req->iv + 16, 8); /* remaining 64 nonce bits */ + chacha_init_generic(state, key, real_iv); + +skip_init: /* Generate the stream and XOR it with the data */ - return chacha_stream_xor(req, &subctx, real_iv); + return chacha_stream_xor(req, nrounds); } static struct skcipher_alg algs[] = { @@ -79,6 +89,7 @@ static struct skcipher_alg algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = CHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha20_setkey, .encrypt = crypto_chacha_crypt, .decrypt = crypto_chacha_crypt, @@ -94,6 +105,7 @@ static struct skcipher_alg algs[] = { .max_keysize = CHACHA_KEY_SIZE, 
.ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha20_setkey, .encrypt = crypto_xchacha_crypt, .decrypt = crypto_xchacha_crypt, @@ -109,6 +121,7 @@ static struct skcipher_alg algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha12_setkey, .encrypt = crypto_xchacha_crypt, .decrypt = crypto_xchacha_crypt, diff --git a/include/crypto/internal/chacha.h b/include/crypto/internal/chacha.h index b085dc1ac1516..149ff90fa4afd 100644 --- a/include/crypto/internal/chacha.h +++ b/include/crypto/internal/chacha.h @@ -3,15 +3,21 @@ #ifndef _CRYPTO_INTERNAL_CHACHA_H #define _CRYPTO_INTERNAL_CHACHA_H +#include #include #include -#include +#include struct chacha_ctx { u32 key[8]; int nrounds; }; +struct chacha_reqctx { + u32 state[16]; + bool init; +}; + static inline int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keysize, int nrounds) { From patchwork Tue Jul 28 07:19:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688523 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B3EDA138A for ; Tue, 28 Jul 2020 07:19:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A606820792 for ; Tue, 28 Jul 2020 07:19:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727849AbgG1HTG (ORCPT ); Tue, 28 Jul 2020 03:19:06 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54804 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTF (ORCPT ); Tue, 28 Jul 2020 03:19:05 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jt8-0006Mx-Tq; Tue, 28 Jul 2020 17:19:04 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:02 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:02 +1000 Subject: [v3 PATCH 11/31] crypto: arm/chacha - Add support for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands chacha cannot do chaining. That is, it has to handle each request as a whole. This patch adds support for chaining when the CRYPTO_TFM_REQ_MORE flag is set. 
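To illustrate what the chaining support buys a caller, here is a hedged sketch (the function name, the linear buffer, the synchronous wait and a length of at least two blocks are assumptions, not part of the patch) of encrypting one long chacha20 stream in two segments. The keystream position carries across because the state now lives in the request context, so the same request object must be reused and every segment except the last must be a whole number of ChaCha blocks.

#include <crypto/chacha.h>
#include <crypto/skcipher.h>
#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Illustrative only: one chacha20 stream, encrypted in place in two parts. */
static int chacha_encrypt_in_two_parts(struct crypto_skcipher *tfm,
					u8 *buf, unsigned int len, u8 *iv)
{
	unsigned int part = round_down(len / 2, CHACHA_BLOCK_SIZE);
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);	/* zero-initialised */
	if (!req)
		return -ENOMEM;

	/* Segment 1: a whole number of blocks, marked "more to come". */
	sg_init_one(&sg, buf, part);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
				      CRYPTO_TFM_REQ_MORE,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, part, iv);
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	/* Segment 2: the remainder; same request, no CRYPTO_TFM_REQ_MORE. */
	if (!err) {
		sg_init_one(&sg, buf + part, len - part);
		skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
					      crypto_req_done, &wait);
		skcipher_request_set_crypt(req, &sg, &sg, len - part, iv);
		err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	}

	skcipher_request_free(req);
	return err;
}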
Signed-off-by: Herbert Xu --- arch/arm/crypto/chacha-glue.c | 48 ++++++++++++++++++++++++++++-------------- 1 file changed, 32 insertions(+), 16 deletions(-) diff --git a/arch/arm/crypto/chacha-glue.c b/arch/arm/crypto/chacha-glue.c index 59da6c0b63b62..7f753fc54137a 100644 --- a/arch/arm/crypto/chacha-glue.c +++ b/arch/arm/crypto/chacha-glue.c @@ -7,10 +7,8 @@ * Copyright (C) 2015 Martin Willi */ -#include #include #include -#include #include #include #include @@ -106,16 +104,16 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes, EXPORT_SYMBOL(chacha_crypt_arch); static int chacha_stream_xor(struct skcipher_request *req, - const struct chacha_ctx *ctx, const u8 *iv, - bool neon) + int nrounds, bool neon) { + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct skcipher_walk walk; - u32 state[16]; + u32 *state = rctx->state; int err; - err = skcipher_walk_virt(&walk, req, false); + rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE; - chacha_init_generic(state, ctx->key, iv); + err = skcipher_walk_virt(&walk, req, false); while (walk.nbytes > 0) { unsigned int nbytes = walk.nbytes; @@ -125,12 +123,12 @@ static int chacha_stream_xor(struct skcipher_request *req, if (!IS_ENABLED(CONFIG_KERNEL_MODE_NEON) || !neon) { chacha_doarm(walk.dst.virt.addr, walk.src.virt.addr, - nbytes, state, ctx->nrounds); + nbytes, state, nrounds); state[12] += DIV_ROUND_UP(nbytes, CHACHA_BLOCK_SIZE); } else { kernel_neon_begin(); chacha_doneon(state, walk.dst.virt.addr, - walk.src.virt.addr, nbytes, ctx->nrounds); + walk.src.virt.addr, nbytes, nrounds); kernel_neon_end(); } err = skcipher_walk_done(&walk, walk.nbytes - nbytes); @@ -142,9 +140,13 @@ static int chacha_stream_xor(struct skcipher_request *req, static int do_chacha(struct skcipher_request *req, bool neon) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - return chacha_stream_xor(req, ctx, req->iv, neon); + if (!rctx->init) + chacha_init_generic(rctx->state, ctx->key, req->iv); + + return chacha_stream_xor(req, ctx->nrounds, neon); } static int chacha_arm(struct skcipher_request *req) @@ -160,25 +162,33 @@ static int chacha_neon(struct skcipher_request *req) static int do_xchacha(struct skcipher_request *req, bool neon) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - struct chacha_ctx subctx; - u32 state[16]; + int nrounds = ctx->nrounds; + u32 *state = rctx->state; u8 real_iv[16]; + u32 key[8]; + + if (rctx->init) + goto skip_init; chacha_init_generic(state, ctx->key, req->iv); if (!IS_ENABLED(CONFIG_KERNEL_MODE_NEON) || !neon) { - hchacha_block_arm(state, subctx.key, ctx->nrounds); + hchacha_block_arm(state, key, nrounds); } else { kernel_neon_begin(); - hchacha_block_neon(state, subctx.key, ctx->nrounds); + hchacha_block_neon(state, key, nrounds); kernel_neon_end(); } - subctx.nrounds = ctx->nrounds; memcpy(&real_iv[0], req->iv + 24, 8); memcpy(&real_iv[8], req->iv + 16, 8); - return chacha_stream_xor(req, &subctx, real_iv, neon); + + chacha_init_generic(state, key, real_iv); + +skip_init: + return chacha_stream_xor(req, nrounds, neon); } static int xchacha_arm(struct skcipher_request *req) @@ -204,6 +214,7 @@ static struct skcipher_alg arm_algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = CHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct 
chacha_reqctx), .setkey = chacha20_setkey, .encrypt = chacha_arm, .decrypt = chacha_arm, @@ -219,6 +230,7 @@ static struct skcipher_alg arm_algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha20_setkey, .encrypt = xchacha_arm, .decrypt = xchacha_arm, @@ -234,6 +246,7 @@ static struct skcipher_alg arm_algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha12_setkey, .encrypt = xchacha_arm, .decrypt = xchacha_arm, @@ -254,6 +267,7 @@ static struct skcipher_alg neon_algs[] = { .ivsize = CHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, .walksize = 4 * CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha20_setkey, .encrypt = chacha_neon, .decrypt = chacha_neon, @@ -270,6 +284,7 @@ static struct skcipher_alg neon_algs[] = { .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, .walksize = 4 * CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha20_setkey, .encrypt = xchacha_neon, .decrypt = xchacha_neon, @@ -286,6 +301,7 @@ static struct skcipher_alg neon_algs[] = { .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, .walksize = 4 * CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha12_setkey, .encrypt = xchacha_neon, .decrypt = xchacha_neon, From patchwork Tue Jul 28 07:19:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688525 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 514906C1 for ; Tue, 28 Jul 2020 07:19:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 42D22207F5 for ; Tue, 28 Jul 2020 07:19:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726314AbgG1HTI (ORCPT ); Tue, 28 Jul 2020 03:19:08 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54812 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTI (ORCPT ); Tue, 28 Jul 2020 03:19:08 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtB-0006Np-A3; Tue, 28 Jul 2020 17:19:06 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:05 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:05 +1000 Subject: [v3 PATCH 12/31] crypto: arm64/chacha - Add support for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands chacha cannot do chaining. That is, it has to handle each request as a whole. This patch adds support for chaining when the CRYPTO_TFM_REQ_MORE flag is set. 
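One practical consequence worth spelling out: because the block counter carried in rctx->state counts whole 64-byte ChaCha blocks, every non-final part of a chain should be cut on a multiple of the declared chunksize. For example, a 150-byte message would be submitted as a 128-byte part with CRYPTO_TFM_REQ_MORE set followed by a final 22-byte part, not as 100 + 50. A trivial helper sketch (not part of the patch) for computing such a split:

/*
 * Illustrative only: length of the non-final, chainable portion of a
 * buffer; the remainder goes into the final request of the chain.
 */
static unsigned int chacha_chainable_len(unsigned int total)
{
	return round_down(total, CHACHA_BLOCK_SIZE);	/* 64-byte blocks */
}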
Signed-off-by: Herbert Xu --- arch/arm64/crypto/chacha-neon-glue.c | 43 ++++++++++++++++++++++------------- 1 file changed, 28 insertions(+), 15 deletions(-) diff --git a/arch/arm64/crypto/chacha-neon-glue.c b/arch/arm64/crypto/chacha-neon-glue.c index af2bbca38e70f..d82c574ddcc00 100644 --- a/arch/arm64/crypto/chacha-neon-glue.c +++ b/arch/arm64/crypto/chacha-neon-glue.c @@ -19,10 +19,8 @@ * (at your option) any later version. */ -#include #include #include -#include #include #include #include @@ -101,16 +99,16 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes, } EXPORT_SYMBOL(chacha_crypt_arch); -static int chacha_neon_stream_xor(struct skcipher_request *req, - const struct chacha_ctx *ctx, const u8 *iv) +static int chacha_neon_stream_xor(struct skcipher_request *req, int nrounds) { + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct skcipher_walk walk; - u32 state[16]; + u32 *state = rctx->state; int err; - err = skcipher_walk_virt(&walk, req, false); + rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE; - chacha_init_generic(state, ctx->key, iv); + err = skcipher_walk_virt(&walk, req, false); while (walk.nbytes > 0) { unsigned int nbytes = walk.nbytes; @@ -122,11 +120,11 @@ static int chacha_neon_stream_xor(struct skcipher_request *req, !crypto_simd_usable()) { chacha_crypt_generic(state, walk.dst.virt.addr, walk.src.virt.addr, nbytes, - ctx->nrounds); + nrounds); } else { kernel_neon_begin(); chacha_doneon(state, walk.dst.virt.addr, - walk.src.virt.addr, nbytes, ctx->nrounds); + walk.src.virt.addr, nbytes, nrounds); kernel_neon_end(); } err = skcipher_walk_done(&walk, walk.nbytes - nbytes); @@ -138,26 +136,38 @@ static int chacha_neon_stream_xor(struct skcipher_request *req, static int chacha_neon(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - return chacha_neon_stream_xor(req, ctx, req->iv); + if (!rctx->init) + chacha_init_generic(rctx->state, ctx->key, req->iv); + + return chacha_neon_stream_xor(req, ctx->nrounds); } static int xchacha_neon(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - struct chacha_ctx subctx; - u32 state[16]; + int nrounds = ctx->nrounds; + u32 *state = rctx->state; u8 real_iv[16]; + u32 key[8]; + + if (rctx->init) + goto skip_init; chacha_init_generic(state, ctx->key, req->iv); - hchacha_block_arch(state, subctx.key, ctx->nrounds); - subctx.nrounds = ctx->nrounds; + hchacha_block_arch(state, key, nrounds); memcpy(&real_iv[0], req->iv + 24, 8); memcpy(&real_iv[8], req->iv + 16, 8); - return chacha_neon_stream_xor(req, &subctx, real_iv); + + chacha_init_generic(state, key, real_iv); + +skip_init: + return chacha_neon_stream_xor(req, nrounds); } static struct skcipher_alg algs[] = { @@ -174,6 +184,7 @@ static struct skcipher_alg algs[] = { .ivsize = CHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, .walksize = 5 * CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha20_setkey, .encrypt = chacha_neon, .decrypt = chacha_neon, @@ -190,6 +201,7 @@ static struct skcipher_alg algs[] = { .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, .walksize = 5 * CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha20_setkey, .encrypt = xchacha_neon, .decrypt = xchacha_neon, @@ 
-206,6 +218,7 @@ static struct skcipher_alg algs[] = { .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, .walksize = 5 * CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha12_setkey, .encrypt = xchacha_neon, .decrypt = xchacha_neon, From patchwork Tue Jul 28 07:19:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688527 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B63A56C1 for ; Tue, 28 Jul 2020 07:19:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A97C920829 for ; Tue, 28 Jul 2020 07:19:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727936AbgG1HTL (ORCPT ); Tue, 28 Jul 2020 03:19:11 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54822 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTK (ORCPT ); Tue, 28 Jul 2020 03:19:10 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtD-0006OE-Pm; Tue, 28 Jul 2020 17:19:08 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:07 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:07 +1000 Subject: [v3 PATCH 13/31] crypto: mips/chacha - Add support for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands chacha cannot do chaining. That is, it has to handle each request as a whole. This patch adds support for chaining when the CRYPTO_TFM_REQ_MORE flag is set. 
Signed-off-by: Herbert Xu --- arch/mips/crypto/chacha-glue.c | 41 +++++++++++++++++++++++++++-------------- 1 file changed, 27 insertions(+), 14 deletions(-) diff --git a/arch/mips/crypto/chacha-glue.c b/arch/mips/crypto/chacha-glue.c index d1fd23e6ef844..658412bfdea29 100644 --- a/arch/mips/crypto/chacha-glue.c +++ b/arch/mips/crypto/chacha-glue.c @@ -7,9 +7,7 @@ */ #include -#include #include -#include #include #include @@ -26,16 +24,16 @@ void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv) } EXPORT_SYMBOL(chacha_init_arch); -static int chacha_mips_stream_xor(struct skcipher_request *req, - const struct chacha_ctx *ctx, const u8 *iv) +static int chacha_mips_stream_xor(struct skcipher_request *req, int nrounds) { + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct skcipher_walk walk; - u32 state[16]; + u32 *state = rctx->state; int err; - err = skcipher_walk_virt(&walk, req, false); + rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE; - chacha_init_generic(state, ctx->key, iv); + err = skcipher_walk_virt(&walk, req, false); while (walk.nbytes > 0) { unsigned int nbytes = walk.nbytes; @@ -44,7 +42,7 @@ static int chacha_mips_stream_xor(struct skcipher_request *req, nbytes = round_down(nbytes, walk.stride); chacha_crypt(state, walk.dst.virt.addr, walk.src.virt.addr, - nbytes, ctx->nrounds); + nbytes, nrounds); err = skcipher_walk_done(&walk, walk.nbytes - nbytes); } @@ -54,27 +52,39 @@ static int chacha_mips_stream_xor(struct skcipher_request *req, static int chacha_mips(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - return chacha_mips_stream_xor(req, ctx, req->iv); + if (!rctx->init) + chacha_init_generic(rctx->state, ctx->key, req->iv); + + return chacha_mips_stream_xor(req, ctx->nrounds); } static int xchacha_mips(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = skcipher_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - struct chacha_ctx subctx; - u32 state[16]; + int nrounds = ctx->nrounds; + u32 *state = rctx->state; u8 real_iv[16]; + u32 key[8]; + + if (rctx->init) + goto skip_init; chacha_init_generic(state, ctx->key, req->iv); - hchacha_block(state, subctx.key, ctx->nrounds); - subctx.nrounds = ctx->nrounds; + hchacha_block(state, key, nrounds); memcpy(&real_iv[0], req->iv + 24, 8); memcpy(&real_iv[8], req->iv + 16, 8); - return chacha_mips_stream_xor(req, &subctx, real_iv); + + chacha_init_generic(rctx->state, key, real_iv); + +skip_init: + return chacha_mips_stream_xor(req, nrounds); } static struct skcipher_alg algs[] = { @@ -90,6 +100,7 @@ static struct skcipher_alg algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = CHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha20_setkey, .encrypt = chacha_mips, .decrypt = chacha_mips, @@ -105,6 +116,7 @@ static struct skcipher_alg algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha20_setkey, .encrypt = xchacha_mips, .decrypt = xchacha_mips, @@ -120,6 +132,7 @@ static struct skcipher_alg algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = sizeof(struct chacha_reqctx), .setkey = chacha12_setkey, .encrypt = xchacha_mips, .decrypt = xchacha_mips, 
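Taken together, the generic, arm, arm64 and mips conversions all follow the same shape; the x86 patch that follows adds only an alignment twist on top of it. Purely as a reading aid, the shared pattern condenses to something like the sketch below (SIMD dispatch and the xchacha key derivation elided); it is not a drop-in replacement for any of the hunks.

/* Condensed illustration of the per-request state carry used above. */
static int chacha_do_request(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
	struct skcipher_walk walk;
	int err;

	/* Only the first part of a chain (re)initialises the state. */
	if (!rctx->init)
		chacha_init_generic(rctx->state, ctx->key, req->iv);

	/* Remember whether more parts will follow this one. */
	rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;

	err = skcipher_walk_virt(&walk, req, false);
	while (walk.nbytes > 0) {
		unsigned int nbytes = walk.nbytes;

		if (nbytes < walk.total)
			nbytes = round_down(nbytes, walk.stride);

		/* chacha_crypt_generic advances the counter in rctx->state. */
		chacha_crypt_generic(rctx->state, walk.dst.virt.addr,
				     walk.src.virt.addr, nbytes,
				     ctx->nrounds);
		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
	}

	return err;
}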
From patchwork Tue Jul 28 07:19:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688529 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 16A8F6C1 for ; Tue, 28 Jul 2020 07:19:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0944B207F5 for ; Tue, 28 Jul 2020 07:19:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727935AbgG1HTN (ORCPT ); Tue, 28 Jul 2020 03:19:13 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54828 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTN (ORCPT ); Tue, 28 Jul 2020 03:19:13 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtG-0006Ot-1C; Tue, 28 Jul 2020 17:19:11 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:10 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:10 +1000 Subject: [v3 PATCH 14/31] crypto: x86/chacha - Add support for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands chacha cannot do chaining. That is, it has to handle each request as a whole. This patch adds support for chaining when the CRYPTO_TFM_REQ_MORE flag is set. 
Signed-off-by: Herbert Xu --- arch/x86/crypto/chacha_glue.c | 55 +++++++++++++++++++++++++++++------------- 1 file changed, 39 insertions(+), 16 deletions(-) diff --git a/arch/x86/crypto/chacha_glue.c b/arch/x86/crypto/chacha_glue.c index e67a59130025e..96cbdcbfe4f8f 100644 --- a/arch/x86/crypto/chacha_glue.c +++ b/arch/x86/crypto/chacha_glue.c @@ -6,14 +6,16 @@ * Copyright (C) 2015 Martin Willi */ -#include #include #include -#include #include #include #include +#define CHACHA_STATE_ALIGN 16 +#define CHACHA_REQSIZE sizeof(struct chacha_reqctx) + \ + ((CHACHA_STATE_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1)) + asmlinkage void chacha_block_xor_ssse3(u32 *state, u8 *dst, const u8 *src, unsigned int len, int nrounds); asmlinkage void chacha_4block_xor_ssse3(u32 *state, u8 *dst, const u8 *src, @@ -38,6 +40,12 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_simd); static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_avx2); static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_avx512vl); +static inline struct chacha_reqctx *chacha_request_ctx( + struct skcipher_request *req) +{ + return PTR_ALIGN(skcipher_request_ctx(req), CHACHA_STATE_ALIGN); +} + static unsigned int chacha_advance(unsigned int len, unsigned int maxblocks) { len = min(len, maxblocks * CHACHA_BLOCK_SIZE); @@ -159,16 +167,16 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes, } EXPORT_SYMBOL(chacha_crypt_arch); -static int chacha_simd_stream_xor(struct skcipher_request *req, - const struct chacha_ctx *ctx, const u8 *iv) +static int chacha_simd_stream_xor(struct skcipher_request *req, int nrounds) { - u32 state[CHACHA_STATE_WORDS] __aligned(8); + struct chacha_reqctx *rctx = chacha_request_ctx(req); struct skcipher_walk walk; + u32 *state = rctx->state; int err; - err = skcipher_walk_virt(&walk, req, false); + rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE; - chacha_init_generic(state, ctx->key, iv); + err = skcipher_walk_virt(&walk, req, false); while (walk.nbytes > 0) { unsigned int nbytes = walk.nbytes; @@ -180,12 +188,12 @@ static int chacha_simd_stream_xor(struct skcipher_request *req, !crypto_simd_usable()) { chacha_crypt_generic(state, walk.dst.virt.addr, walk.src.virt.addr, nbytes, - ctx->nrounds); + nrounds); } else { kernel_fpu_begin(); chacha_dosimd(state, walk.dst.virt.addr, walk.src.virt.addr, nbytes, - ctx->nrounds); + nrounds); kernel_fpu_end(); } err = skcipher_walk_done(&walk, walk.nbytes - nbytes); @@ -197,33 +205,45 @@ static int chacha_simd_stream_xor(struct skcipher_request *req, static int chacha_simd(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = chacha_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - return chacha_simd_stream_xor(req, ctx, req->iv); + if (!rctx->init) + chacha_init_generic(rctx->state, ctx->key, req->iv); + + return chacha_simd_stream_xor(req, ctx->nrounds); } static int xchacha_simd(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chacha_reqctx *rctx = chacha_request_ctx(req); struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); - u32 state[CHACHA_STATE_WORDS] __aligned(8); - struct chacha_ctx subctx; + int nrounds = ctx->nrounds; + u32 *state = rctx->state; u8 real_iv[16]; + u32 key[8]; + + if (rctx->init) + goto skip_init; chacha_init_generic(state, ctx->key, req->iv); if (req->cryptlen > CHACHA_BLOCK_SIZE && crypto_simd_usable()) { kernel_fpu_begin(); - hchacha_block_ssse3(state, subctx.key, 
ctx->nrounds); + hchacha_block_ssse3(state, key, nrounds); kernel_fpu_end(); } else { - hchacha_block_generic(state, subctx.key, ctx->nrounds); + hchacha_block_generic(state, key, nrounds); } - subctx.nrounds = ctx->nrounds; memcpy(&real_iv[0], req->iv + 24, 8); memcpy(&real_iv[8], req->iv + 16, 8); - return chacha_simd_stream_xor(req, &subctx, real_iv); + + chacha_init_generic(state, key, real_iv); + +skip_init: + return chacha_simd_stream_xor(req, nrounds); } static struct skcipher_alg algs[] = { @@ -239,6 +259,7 @@ static struct skcipher_alg algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = CHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = CHACHA_REQSIZE, .setkey = chacha20_setkey, .encrypt = chacha_simd, .decrypt = chacha_simd, @@ -254,6 +275,7 @@ static struct skcipher_alg algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = CHACHA_REQSIZE, .setkey = chacha20_setkey, .encrypt = xchacha_simd, .decrypt = xchacha_simd, @@ -269,6 +291,7 @@ static struct skcipher_alg algs[] = { .max_keysize = CHACHA_KEY_SIZE, .ivsize = XCHACHA_IV_SIZE, .chunksize = CHACHA_BLOCK_SIZE, + .reqsize = CHACHA_REQSIZE, .setkey = chacha12_setkey, .encrypt = xchacha_simd, .decrypt = xchacha_simd, From patchwork Tue Jul 28 07:19:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688531 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 52A19138A for ; Tue, 28 Jul 2020 07:19:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4232720829 for ; Tue, 28 Jul 2020 07:19:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727937AbgG1HTP (ORCPT ); Tue, 28 Jul 2020 03:19:15 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54838 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTP (ORCPT ); Tue, 28 Jul 2020 03:19:15 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtI-0006PK-BV; Tue, 28 Jul 2020 17:19:13 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:12 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:12 +1000 Subject: [v3 PATCH 15/31] crypto: inside-secure - Set final_chunksize on chacha References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The chacha implementation in inside-secure does not support partial operation and therefore this patch sets its final_chunksize to -1 to mark this fact. This patch also sets the chunksize to the chacha block size. 
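For consumers such as algif_skcipher that need to decide whether a given tfm may be split and chained at all, the check is presumably along the lines of the sketch below, using the final_chunksize accessor introduced at the start of this series; a negative value, as set for safexcel-chacha20 here, rules chaining out entirely.

/*
 * Sketch only: may requests on this tfm be split and chained?
 * final_chunksize < 0 (e.g. the safexcel chacha20 entry below)
 * means the algorithm must always see the request as a whole.
 */
static bool skcipher_may_chain(struct crypto_skcipher *tfm)
{
	return crypto_skcipher_final_chunksize(tfm) >= 0;
}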
Signed-off-by: Herbert Xu --- drivers/crypto/inside-secure/safexcel_cipher.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c index 1ac3253b7903a..ef04a394ff49d 100644 --- a/drivers/crypto/inside-secure/safexcel_cipher.c +++ b/drivers/crypto/inside-secure/safexcel_cipher.c @@ -2859,6 +2859,8 @@ struct safexcel_alg_template safexcel_alg_chacha20 = { .min_keysize = CHACHA_KEY_SIZE, .max_keysize = CHACHA_KEY_SIZE, .ivsize = CHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .final_chunksize = -1, .base = { .cra_name = "chacha20", .cra_driver_name = "safexcel-chacha20", From patchwork Tue Jul 28 07:19:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688533 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 814EC6C1 for ; Tue, 28 Jul 2020 07:19:18 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 710F4207E8 for ; Tue, 28 Jul 2020 07:19:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727939AbgG1HTS (ORCPT ); Tue, 28 Jul 2020 03:19:18 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54846 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTR (ORCPT ); Tue, 28 Jul 2020 03:19:17 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtK-0006QL-VA; Tue, 28 Jul 2020 17:19:16 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:14 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:14 +1000 Subject: [v3 PATCH 16/31] crypto: caam/qi2 - Set final_chunksize on chacha References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The chacha implementation in caam/qi2 does not support partial operation and therefore this patch sets its final_chunksize to -1 to mark this fact. This patch also sets the chunksize to the chacha block size. 
Signed-off-by: Herbert Xu Reviewed-by: Horia Geantă --- drivers/crypto/caam/caamalg_qi2.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c index 1b0c286759065..6294c104bf7a9 100644 --- a/drivers/crypto/caam/caamalg_qi2.c +++ b/drivers/crypto/caam/caamalg_qi2.c @@ -1689,6 +1689,8 @@ static struct caam_skcipher_alg driver_algs[] = { .min_keysize = CHACHA_KEY_SIZE, .max_keysize = CHACHA_KEY_SIZE, .ivsize = CHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .final_chunksize = -1, }, .caam.class1_alg_type = OP_ALG_ALGSEL_CHACHA20, }, From patchwork Tue Jul 28 07:19:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688535 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CBDD2138A for ; Tue, 28 Jul 2020 07:19:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B210C207E8 for ; Tue, 28 Jul 2020 07:19:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727940AbgG1HTU (ORCPT ); Tue, 28 Jul 2020 03:19:20 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54852 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTU (ORCPT ); Tue, 28 Jul 2020 03:19:20 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtN-0006Qt-8G; Tue, 28 Jul 2020 17:19:18 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:17 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:17 +1000 Subject: [v3 PATCH 17/31] crypto: ctr - Allow rfc3686 to be chained References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands rfc3686 cannot do chaining. That is, it has to handle each request as a whole. This patch adds support for chaining when the CRYPTO_TFM_REQ_MORE flag is set. Signed-off-by: Herbert Xu --- crypto/ctr.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/crypto/ctr.c b/crypto/ctr.c index c39fcffba27f5..eccfab07f2fbb 100644 --- a/crypto/ctr.c +++ b/crypto/ctr.c @@ -5,7 +5,6 @@ * (C) Copyright IBM Corp. 
2007 - Joy Latten */ -#include #include #include #include @@ -21,7 +20,8 @@ struct crypto_rfc3686_ctx { struct crypto_rfc3686_req_ctx { u8 iv[CTR_RFC3686_BLOCK_SIZE]; - struct skcipher_request subreq CRYPTO_MINALIGN_ATTR; + bool init; + struct skcipher_request subreq; }; static void crypto_ctr_crypt_final(struct skcipher_walk *walk, @@ -197,6 +197,9 @@ static int crypto_rfc3686_crypt(struct skcipher_request *req) struct skcipher_request *subreq = &rctx->subreq; u8 *iv = rctx->iv; + if (rctx->init) + goto skip_init; + /* set up counter block */ memcpy(iv, ctx->nonce, CTR_RFC3686_NONCE_SIZE); memcpy(iv + CTR_RFC3686_NONCE_SIZE, req->iv, CTR_RFC3686_IV_SIZE); @@ -205,6 +208,9 @@ static int crypto_rfc3686_crypt(struct skcipher_request *req) *(__be32 *)(iv + CTR_RFC3686_NONCE_SIZE + CTR_RFC3686_IV_SIZE) = cpu_to_be32(1); +skip_init: + rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE; + skcipher_request_set_tfm(subreq, child); skcipher_request_set_callback(subreq, req->base.flags, req->base.complete, req->base.data); From patchwork Tue Jul 28 07:19:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688537 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EB497138A for ; Tue, 28 Jul 2020 07:19:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DD118207F5 for ; Tue, 28 Jul 2020 07:19:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727941AbgG1HTW (ORCPT ); Tue, 28 Jul 2020 03:19:22 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54860 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTW (ORCPT ); Tue, 28 Jul 2020 03:19:22 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtP-0006RQ-GU; Tue, 28 Jul 2020 17:19:20 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:19 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:19 +1000 Subject: [v3 PATCH 18/31] crypto: crypto4xx - Remove rfc3686 implementation References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The rfc3686 implementation in crypto4xx is pretty much the same as the generic rfc3686 wrapper. So it can simply be removed to reduce complexity. 
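The duplication being removed here and in the following driver patches comes down to the RFC 3686 counter-block setup that the generic wrapper (see the crypto_rfc3686_crypt() hunk above) already performs: a 4-byte nonce taken from the key, the 8-byte per-request IV, and a 32-bit big-endian block counter starting at 1. As a reference sketch, using the existing CTR_RFC3686_* constants:

/*
 * Illustrative sketch of the RFC 3686 counter block assembled by the
 * generic rfc3686 template: nonce || IV || be32 counter = 1.
 */
static void rfc3686_build_ctrblk(u8 ctrblk[CTR_RFC3686_BLOCK_SIZE],
				 const u8 *nonce, const u8 *iv)
{
	memcpy(ctrblk, nonce, CTR_RFC3686_NONCE_SIZE);
	memcpy(ctrblk + CTR_RFC3686_NONCE_SIZE, iv, CTR_RFC3686_IV_SIZE);
	*(__be32 *)(ctrblk + CTR_RFC3686_NONCE_SIZE + CTR_RFC3686_IV_SIZE) =
		cpu_to_be32(1);
}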
Signed-off-by: Herbert Xu --- drivers/crypto/amcc/crypto4xx_alg.c | 47 ----------------------------------- drivers/crypto/amcc/crypto4xx_core.c | 20 -------------- drivers/crypto/amcc/crypto4xx_core.h | 4 -- 3 files changed, 71 deletions(-) diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c index f7fc0c4641254..a7c17cdb1deb2 100644 --- a/drivers/crypto/amcc/crypto4xx_alg.c +++ b/drivers/crypto/amcc/crypto4xx_alg.c @@ -202,53 +202,6 @@ int crypto4xx_setkey_aes_ofb(struct crypto_skcipher *cipher, CRYPTO_FEEDBACK_MODE_64BIT_OFB); } -int crypto4xx_setkey_rfc3686(struct crypto_skcipher *cipher, - const u8 *key, unsigned int keylen) -{ - struct crypto4xx_ctx *ctx = crypto_skcipher_ctx(cipher); - int rc; - - rc = crypto4xx_setkey_aes(cipher, key, keylen - CTR_RFC3686_NONCE_SIZE, - CRYPTO_MODE_CTR, CRYPTO_FEEDBACK_MODE_NO_FB); - if (rc) - return rc; - - ctx->iv_nonce = cpu_to_le32p((u32 *)&key[keylen - - CTR_RFC3686_NONCE_SIZE]); - - return 0; -} - -int crypto4xx_rfc3686_encrypt(struct skcipher_request *req) -{ - struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req); - struct crypto4xx_ctx *ctx = crypto_skcipher_ctx(cipher); - __le32 iv[AES_IV_SIZE / 4] = { - ctx->iv_nonce, - cpu_to_le32p((u32 *) req->iv), - cpu_to_le32p((u32 *) (req->iv + 4)), - cpu_to_le32(1) }; - - return crypto4xx_build_pd(&req->base, ctx, req->src, req->dst, - req->cryptlen, iv, AES_IV_SIZE, - ctx->sa_out, ctx->sa_len, 0, NULL); -} - -int crypto4xx_rfc3686_decrypt(struct skcipher_request *req) -{ - struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req); - struct crypto4xx_ctx *ctx = crypto_skcipher_ctx(cipher); - __le32 iv[AES_IV_SIZE / 4] = { - ctx->iv_nonce, - cpu_to_le32p((u32 *) req->iv), - cpu_to_le32p((u32 *) (req->iv + 4)), - cpu_to_le32(1) }; - - return crypto4xx_build_pd(&req->base, ctx, req->src, req->dst, - req->cryptlen, iv, AES_IV_SIZE, - ctx->sa_out, ctx->sa_len, 0, NULL); -} - static int crypto4xx_ctr_crypt(struct skcipher_request *req, bool encrypt) { diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c index 981de43ea5e24..2054e216440b5 100644 --- a/drivers/crypto/amcc/crypto4xx_core.c +++ b/drivers/crypto/amcc/crypto4xx_core.c @@ -1252,26 +1252,6 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = { .init = crypto4xx_sk_init, .exit = crypto4xx_sk_exit, } }, - { .type = CRYPTO_ALG_TYPE_SKCIPHER, .u.cipher = { - .base = { - .cra_name = "rfc3686(ctr(aes))", - .cra_driver_name = "rfc3686-ctr-aes-ppc4xx", - .cra_priority = CRYPTO4XX_CRYPTO_PRIORITY, - .cra_flags = CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct crypto4xx_ctx), - .cra_module = THIS_MODULE, - }, - .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, - .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, - .ivsize = CTR_RFC3686_IV_SIZE, - .setkey = crypto4xx_setkey_rfc3686, - .encrypt = crypto4xx_rfc3686_encrypt, - .decrypt = crypto4xx_rfc3686_decrypt, - .init = crypto4xx_sk_init, - .exit = crypto4xx_sk_exit, - } }, { .type = CRYPTO_ALG_TYPE_SKCIPHER, .u.cipher = { .base = { .cra_name = "ecb(aes)", diff --git a/drivers/crypto/amcc/crypto4xx_core.h b/drivers/crypto/amcc/crypto4xx_core.h index 6b68413591905..97f625fc5e8b1 100644 --- a/drivers/crypto/amcc/crypto4xx_core.h +++ b/drivers/crypto/amcc/crypto4xx_core.h @@ -169,8 +169,6 @@ int crypto4xx_setkey_aes_ecb(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen); int crypto4xx_setkey_aes_ofb(struct crypto_skcipher *cipher, const u8 
*key, unsigned int keylen); -int crypto4xx_setkey_rfc3686(struct crypto_skcipher *cipher, - const u8 *key, unsigned int keylen); int crypto4xx_encrypt_ctr(struct skcipher_request *req); int crypto4xx_decrypt_ctr(struct skcipher_request *req); int crypto4xx_encrypt_iv_stream(struct skcipher_request *req); @@ -179,8 +177,6 @@ int crypto4xx_encrypt_iv_block(struct skcipher_request *req); int crypto4xx_decrypt_iv_block(struct skcipher_request *req); int crypto4xx_encrypt_noiv_block(struct skcipher_request *req); int crypto4xx_decrypt_noiv_block(struct skcipher_request *req); -int crypto4xx_rfc3686_encrypt(struct skcipher_request *req); -int crypto4xx_rfc3686_decrypt(struct skcipher_request *req); int crypto4xx_sha1_alg_init(struct crypto_tfm *tfm); int crypto4xx_hash_digest(struct ahash_request *req); int crypto4xx_hash_final(struct ahash_request *req); From patchwork Tue Jul 28 07:19:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688539 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E4E616C1 for ; Tue, 28 Jul 2020 07:19:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CBFB621744 for ; Tue, 28 Jul 2020 07:19:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727908AbgG1HTZ (ORCPT ); Tue, 28 Jul 2020 03:19:25 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54870 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTZ (ORCPT ); Tue, 28 Jul 2020 03:19:25 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtR-0006Rx-Oq; Tue, 28 Jul 2020 17:19:22 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:21 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:21 +1000 Subject: [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The rfc3686 implementations in caam are pretty much the same as the generic rfc3686 wrapper. So they can simply be removed to reduce complexity. 
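Nothing should change for users of the algorithm name itself: once the driver-specific entries are gone, a request for the template form is satisfied by the generic rfc3686 wrapper instantiated over whichever ctr(aes) implementation has the highest priority, which may still be the caam one. A sketch of the unchanged caller view:

/*
 * Sketch: allocating the rfc3686 name keeps working after the removal;
 * the crypto API builds the generic rfc3686 template on top of the
 * best available "ctr(aes)".
 */
static struct crypto_skcipher *get_rfc3686_aes(void)
{
	return crypto_alloc_skcipher("rfc3686(ctr(aes))", 0, 0);
}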
Signed-off-by: Herbert Xu --- drivers/crypto/caam/caamalg.c | 54 +------------------------------------ drivers/crypto/caam/caamalg_desc.c | 46 +------------------------------ drivers/crypto/caam/caamalg_desc.h | 6 +--- drivers/crypto/caam/caamalg_qi.c | 52 +---------------------------------- drivers/crypto/caam/caamalg_qi2.c | 54 +------------------------------------ 5 files changed, 10 insertions(+), 202 deletions(-) diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index e94e7f27f1d0d..4a787f74ebf9c 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -725,13 +725,9 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen, const u32 ctx1_iv_off) { struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher); - struct caam_skcipher_alg *alg = - container_of(crypto_skcipher_alg(skcipher), typeof(*alg), - skcipher); struct device *jrdev = ctx->jrdev; unsigned int ivsize = crypto_skcipher_ivsize(skcipher); u32 *desc; - const bool is_rfc3686 = alg->caam.rfc3686; print_hex_dump_debug("key in @"__stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, key, keylen, 1); @@ -742,15 +738,13 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, /* skcipher_encrypt shared descriptor */ desc = ctx->sh_desc_enc; - cnstr_shdsc_skcipher_encap(desc, &ctx->cdata, ivsize, is_rfc3686, - ctx1_iv_off); + cnstr_shdsc_skcipher_encap(desc, &ctx->cdata, ivsize, ctx1_iv_off); dma_sync_single_for_device(jrdev, ctx->sh_desc_enc_dma, desc_bytes(desc), ctx->dir); /* skcipher_decrypt shared descriptor */ desc = ctx->sh_desc_dec; - cnstr_shdsc_skcipher_decap(desc, &ctx->cdata, ivsize, is_rfc3686, - ctx1_iv_off); + cnstr_shdsc_skcipher_decap(desc, &ctx->cdata, ivsize, ctx1_iv_off); dma_sync_single_for_device(jrdev, ctx->sh_desc_dec_dma, desc_bytes(desc), ctx->dir); @@ -769,27 +763,6 @@ static int aes_skcipher_setkey(struct crypto_skcipher *skcipher, return skcipher_setkey(skcipher, key, keylen, 0); } -static int rfc3686_skcipher_setkey(struct crypto_skcipher *skcipher, - const u8 *key, unsigned int keylen) -{ - u32 ctx1_iv_off; - int err; - - /* - * RFC3686 specific: - * | CONTEXT1[255:128] = {NONCE, IV, COUNTER} - * | *key = {KEY, NONCE} - */ - ctx1_iv_off = 16 + CTR_RFC3686_NONCE_SIZE; - keylen -= CTR_RFC3686_NONCE_SIZE; - - err = aes_check_keylen(keylen); - if (err) - return err; - - return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off); -} - static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen) { @@ -1877,29 +1850,6 @@ static struct caam_skcipher_alg driver_algs[] = { .caam.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CTR_MOD128, }, - { - .skcipher = { - .base = { - .cra_name = "rfc3686(ctr(aes))", - .cra_driver_name = "rfc3686-ctr-aes-caam", - .cra_blocksize = 1, - }, - .setkey = rfc3686_skcipher_setkey, - .encrypt = skcipher_encrypt, - .decrypt = skcipher_decrypt, - .min_keysize = AES_MIN_KEY_SIZE + - CTR_RFC3686_NONCE_SIZE, - .max_keysize = AES_MAX_KEY_SIZE + - CTR_RFC3686_NONCE_SIZE, - .ivsize = CTR_RFC3686_IV_SIZE, - .chunksize = AES_BLOCK_SIZE, - }, - .caam = { - .class1_alg_type = OP_ALG_ALGSEL_AES | - OP_ALG_AAI_CTR_MOD128, - .rfc3686 = true, - }, - }, { .skcipher = { .base = { diff --git a/drivers/crypto/caam/caamalg_desc.c b/drivers/crypto/caam/caamalg_desc.c index d6c58184bb57c..e9b32f151b0da 100644 --- a/drivers/crypto/caam/caamalg_desc.c +++ b/drivers/crypto/caam/caamalg_desc.c @@ -1371,12 +1371,10 @@ static inline void 
skcipher_append_src_dst(u32 *desc) * with OP_ALG_AAI_CBC or OP_ALG_AAI_CTR_MOD128 * - OP_ALG_ALGSEL_CHACHA20 * @ivsize: initialization vector size - * @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template * @ctx1_iv_off: IV offset in CONTEXT1 register */ void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata, - unsigned int ivsize, const bool is_rfc3686, - const u32 ctx1_iv_off) + unsigned int ivsize, const u32 ctx1_iv_off) { u32 *key_jump_cmd; u32 options = cdata->algtype | OP_ALG_AS_INIT | OP_ALG_ENCRYPT; @@ -1392,18 +1390,6 @@ void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata, append_key_as_imm(desc, cdata->key_virt, cdata->keylen, cdata->keylen, CLASS_1 | KEY_DEST_CLASS_REG); - /* Load nonce into CONTEXT1 reg */ - if (is_rfc3686) { - const u8 *nonce = cdata->key_virt + cdata->keylen; - - append_load_as_imm(desc, nonce, CTR_RFC3686_NONCE_SIZE, - LDST_CLASS_IND_CCB | - LDST_SRCDST_BYTE_OUTFIFO | LDST_IMM); - append_move(desc, MOVE_WAITCOMP | MOVE_SRC_OUTFIFO | - MOVE_DEST_CLASS1CTX | (16 << MOVE_OFFSET_SHIFT) | - (CTR_RFC3686_NONCE_SIZE << MOVE_LEN_SHIFT)); - } - set_jump_tgt_here(desc, key_jump_cmd); /* Load IV, if there is one */ @@ -1412,13 +1398,6 @@ void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata, LDST_CLASS_1_CCB | (ctx1_iv_off << LDST_OFFSET_SHIFT)); - /* Load counter into CONTEXT1 reg */ - if (is_rfc3686) - append_load_imm_be32(desc, 1, LDST_IMM | LDST_CLASS_1_CCB | - LDST_SRCDST_BYTE_CONTEXT | - ((ctx1_iv_off + CTR_RFC3686_IV_SIZE) << - LDST_OFFSET_SHIFT)); - /* Load operation */ if (is_chacha20) options |= OP_ALG_AS_FINALIZE; @@ -1447,12 +1426,10 @@ EXPORT_SYMBOL(cnstr_shdsc_skcipher_encap); * with OP_ALG_AAI_CBC or OP_ALG_AAI_CTR_MOD128 * - OP_ALG_ALGSEL_CHACHA20 * @ivsize: initialization vector size - * @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template * @ctx1_iv_off: IV offset in CONTEXT1 register */ void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata, - unsigned int ivsize, const bool is_rfc3686, - const u32 ctx1_iv_off) + unsigned int ivsize, const u32 ctx1_iv_off) { u32 *key_jump_cmd; bool is_chacha20 = ((cdata->algtype & OP_ALG_ALGSEL_MASK) == @@ -1467,18 +1444,6 @@ void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata, append_key_as_imm(desc, cdata->key_virt, cdata->keylen, cdata->keylen, CLASS_1 | KEY_DEST_CLASS_REG); - /* Load nonce into CONTEXT1 reg */ - if (is_rfc3686) { - const u8 *nonce = cdata->key_virt + cdata->keylen; - - append_load_as_imm(desc, nonce, CTR_RFC3686_NONCE_SIZE, - LDST_CLASS_IND_CCB | - LDST_SRCDST_BYTE_OUTFIFO | LDST_IMM); - append_move(desc, MOVE_WAITCOMP | MOVE_SRC_OUTFIFO | - MOVE_DEST_CLASS1CTX | (16 << MOVE_OFFSET_SHIFT) | - (CTR_RFC3686_NONCE_SIZE << MOVE_LEN_SHIFT)); - } - set_jump_tgt_here(desc, key_jump_cmd); /* Load IV, if there is one */ @@ -1487,13 +1452,6 @@ void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata, LDST_CLASS_1_CCB | (ctx1_iv_off << LDST_OFFSET_SHIFT)); - /* Load counter into CONTEXT1 reg */ - if (is_rfc3686) - append_load_imm_be32(desc, 1, LDST_IMM | LDST_CLASS_1_CCB | - LDST_SRCDST_BYTE_CONTEXT | - ((ctx1_iv_off + CTR_RFC3686_IV_SIZE) << - LDST_OFFSET_SHIFT)); - /* Choose operation */ if (ctx1_iv_off) append_operation(desc, cdata->algtype | OP_ALG_AS_INIT | diff --git a/drivers/crypto/caam/caamalg_desc.h b/drivers/crypto/caam/caamalg_desc.h index f2893393ba5e7..ac3d3ebc544e2 100644 --- a/drivers/crypto/caam/caamalg_desc.h +++ b/drivers/crypto/caam/caamalg_desc.h @@ -102,12 
+102,10 @@ void cnstr_shdsc_chachapoly(u32 * const desc, struct alginfo *cdata, const bool is_qi); void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata, - unsigned int ivsize, const bool is_rfc3686, - const u32 ctx1_iv_off); + unsigned int ivsize, const u32 ctx1_iv_off); void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata, - unsigned int ivsize, const bool is_rfc3686, - const u32 ctx1_iv_off); + unsigned int ivsize, const u32 ctx1_iv_off); void cnstr_shdsc_xts_skcipher_encap(u32 * const desc, struct alginfo *cdata); diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c index efe8f15a4a51a..a140e9090d244 100644 --- a/drivers/crypto/caam/caamalg_qi.c +++ b/drivers/crypto/caam/caamalg_qi.c @@ -610,12 +610,8 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen, const u32 ctx1_iv_off) { struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher); - struct caam_skcipher_alg *alg = - container_of(crypto_skcipher_alg(skcipher), typeof(*alg), - skcipher); struct device *jrdev = ctx->jrdev; unsigned int ivsize = crypto_skcipher_ivsize(skcipher); - const bool is_rfc3686 = alg->caam.rfc3686; int ret = 0; print_hex_dump_debug("key in @" __stringify(__LINE__)": ", @@ -627,9 +623,9 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, /* skcipher encrypt, decrypt shared descriptors */ cnstr_shdsc_skcipher_encap(ctx->sh_desc_enc, &ctx->cdata, ivsize, - is_rfc3686, ctx1_iv_off); + ctx1_iv_off); cnstr_shdsc_skcipher_decap(ctx->sh_desc_dec, &ctx->cdata, ivsize, - is_rfc3686, ctx1_iv_off); + ctx1_iv_off); /* Now update the driver contexts with the new shared descriptor */ if (ctx->drv_ctx[ENCRYPT]) { @@ -665,27 +661,6 @@ static int aes_skcipher_setkey(struct crypto_skcipher *skcipher, return skcipher_setkey(skcipher, key, keylen, 0); } -static int rfc3686_skcipher_setkey(struct crypto_skcipher *skcipher, - const u8 *key, unsigned int keylen) -{ - u32 ctx1_iv_off; - int err; - - /* - * RFC3686 specific: - * | CONTEXT1[255:128] = {NONCE, IV, COUNTER} - * | *key = {KEY, NONCE} - */ - ctx1_iv_off = 16 + CTR_RFC3686_NONCE_SIZE; - keylen -= CTR_RFC3686_NONCE_SIZE; - - err = aes_check_keylen(keylen); - if (err) - return err; - - return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off); -} - static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen) { @@ -1479,29 +1454,6 @@ static struct caam_skcipher_alg driver_algs[] = { .caam.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CTR_MOD128, }, - { - .skcipher = { - .base = { - .cra_name = "rfc3686(ctr(aes))", - .cra_driver_name = "rfc3686-ctr-aes-caam-qi", - .cra_blocksize = 1, - }, - .setkey = rfc3686_skcipher_setkey, - .encrypt = skcipher_encrypt, - .decrypt = skcipher_decrypt, - .min_keysize = AES_MIN_KEY_SIZE + - CTR_RFC3686_NONCE_SIZE, - .max_keysize = AES_MAX_KEY_SIZE + - CTR_RFC3686_NONCE_SIZE, - .ivsize = CTR_RFC3686_IV_SIZE, - .chunksize = AES_BLOCK_SIZE, - }, - .caam = { - .class1_alg_type = OP_ALG_ALGSEL_AES | - OP_ALG_AAI_CTR_MOD128, - .rfc3686 = true, - }, - }, { .skcipher = { .base = { diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c index 6294c104bf7a9..fd0f070fb9971 100644 --- a/drivers/crypto/caam/caamalg_qi2.c +++ b/drivers/crypto/caam/caamalg_qi2.c @@ -934,14 +934,10 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen, const u32 ctx1_iv_off) { struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher); - 
struct caam_skcipher_alg *alg = - container_of(crypto_skcipher_alg(skcipher), - struct caam_skcipher_alg, skcipher); struct device *dev = ctx->dev; struct caam_flc *flc; unsigned int ivsize = crypto_skcipher_ivsize(skcipher); u32 *desc; - const bool is_rfc3686 = alg->caam.rfc3686; print_hex_dump_debug("key in @" __stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, key, keylen, 1); @@ -953,8 +949,7 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, /* skcipher_encrypt shared descriptor */ flc = &ctx->flc[ENCRYPT]; desc = flc->sh_desc; - cnstr_shdsc_skcipher_encap(desc, &ctx->cdata, ivsize, is_rfc3686, - ctx1_iv_off); + cnstr_shdsc_skcipher_encap(desc, &ctx->cdata, ivsize, ctx1_iv_off); flc->flc[1] = cpu_to_caam32(desc_len(desc)); /* SDL */ dma_sync_single_for_device(dev, ctx->flc_dma[ENCRYPT], sizeof(flc->flc) + desc_bytes(desc), @@ -963,8 +958,7 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, /* skcipher_decrypt shared descriptor */ flc = &ctx->flc[DECRYPT]; desc = flc->sh_desc; - cnstr_shdsc_skcipher_decap(desc, &ctx->cdata, ivsize, is_rfc3686, - ctx1_iv_off); + cnstr_shdsc_skcipher_decap(desc, &ctx->cdata, ivsize, ctx1_iv_off); flc->flc[1] = cpu_to_caam32(desc_len(desc)); /* SDL */ dma_sync_single_for_device(dev, ctx->flc_dma[DECRYPT], sizeof(flc->flc) + desc_bytes(desc), @@ -985,27 +979,6 @@ static int aes_skcipher_setkey(struct crypto_skcipher *skcipher, return skcipher_setkey(skcipher, key, keylen, 0); } -static int rfc3686_skcipher_setkey(struct crypto_skcipher *skcipher, - const u8 *key, unsigned int keylen) -{ - u32 ctx1_iv_off; - int err; - - /* - * RFC3686 specific: - * | CONTEXT1[255:128] = {NONCE, IV, COUNTER} - * | *key = {KEY, NONCE} - */ - ctx1_iv_off = 16 + CTR_RFC3686_NONCE_SIZE; - keylen -= CTR_RFC3686_NONCE_SIZE; - - err = aes_check_keylen(keylen); - if (err) - return err; - - return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off); -} - static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen) { @@ -1637,29 +1610,6 @@ static struct caam_skcipher_alg driver_algs[] = { .caam.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CTR_MOD128, }, - { - .skcipher = { - .base = { - .cra_name = "rfc3686(ctr(aes))", - .cra_driver_name = "rfc3686-ctr-aes-caam-qi2", - .cra_blocksize = 1, - }, - .setkey = rfc3686_skcipher_setkey, - .encrypt = skcipher_encrypt, - .decrypt = skcipher_decrypt, - .min_keysize = AES_MIN_KEY_SIZE + - CTR_RFC3686_NONCE_SIZE, - .max_keysize = AES_MAX_KEY_SIZE + - CTR_RFC3686_NONCE_SIZE, - .ivsize = CTR_RFC3686_IV_SIZE, - .chunksize = AES_BLOCK_SIZE, - }, - .caam = { - .class1_alg_type = OP_ALG_ALGSEL_AES | - OP_ALG_AAI_CTR_MOD128, - .rfc3686 = true, - }, - }, { .skcipher = { .base = { From patchwork Tue Jul 28 07:19:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688541 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EC7606C1 for ; Tue, 28 Jul 2020 07:19:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DE6A121744 for ; Tue, 28 Jul 2020 07:19:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727942AbgG1HT1 (ORCPT ); Tue, 28 Jul 2020 03:19:27 -0400 Received: from helcar.hmeau.com 
([216.24.177.18]:54878 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HT1 (ORCPT ); Tue, 28 Jul 2020 03:19:27 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtU-0006SU-25; Tue, 28 Jul 2020 17:19:25 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:24 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:24 +1000 Subject: [v3 PATCH 20/31] crypto: nitrox - Set final_chunksize on rfc3686 References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The rfc3686 implementation in cavium/nitrox does not support partial operation and therefore this patch sets its final_chunksize to -1 to mark this fact. This patch also sets the chunksize to the AES block size. Signed-off-by: Herbert Xu --- drivers/crypto/cavium/nitrox/nitrox_skcipher.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c index 7a159a5da30a0..0b597c6aa68af 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c +++ b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c @@ -573,6 +573,8 @@ static struct skcipher_alg nitrox_skciphers[] = { { .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, .ivsize = CTR_RFC3686_IV_SIZE, + .chunksize = AES_BLOCK_SIZE, + .final_chunksize = -1, .init = nitrox_skcipher_init, .exit = nitrox_skcipher_exit, .setkey = nitrox_aes_ctr_rfc3686_setkey, From patchwork Tue Jul 28 07:19:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688543 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0047F6C1 for ; Tue, 28 Jul 2020 07:19:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E6952207F5 for ; Tue, 28 Jul 2020 07:19:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727945AbgG1HT3 (ORCPT ); Tue, 28 Jul 2020 03:19:29 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54884 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HT3 (ORCPT ); Tue, 28 Jul 2020 03:19:29 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtW-0006St-CN; Tue, 28 Jul 2020 17:19:27 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:26 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:26 +1000 Subject: [v3 PATCH 21/31] crypto: ccp - Remove rfc3686 implementation References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The rfc3686 implementation in ccp is pretty much the same as the generic rfc3686 wrapper. 
So it can simply be removed to reduce complexity. Signed-off-by: Herbert Xu Acked-by: John Allen --- drivers/crypto/ccp/ccp-crypto-aes.c | 99 ------------------------------------ drivers/crypto/ccp/ccp-crypto.h | 6 -- 2 files changed, 105 deletions(-) diff --git a/drivers/crypto/ccp/ccp-crypto-aes.c b/drivers/crypto/ccp/ccp-crypto-aes.c index e6dcd8cedd53e..a45e5c994e381 100644 --- a/drivers/crypto/ccp/ccp-crypto-aes.c +++ b/drivers/crypto/ccp/ccp-crypto-aes.c @@ -131,78 +131,6 @@ static int ccp_aes_init_tfm(struct crypto_skcipher *tfm) return 0; } -static int ccp_aes_rfc3686_complete(struct crypto_async_request *async_req, - int ret) -{ - struct skcipher_request *req = skcipher_request_cast(async_req); - struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req); - - /* Restore the original pointer */ - req->iv = rctx->rfc3686_info; - - return ccp_aes_complete(async_req, ret); -} - -static int ccp_aes_rfc3686_setkey(struct crypto_skcipher *tfm, const u8 *key, - unsigned int key_len) -{ - struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm); - - if (key_len < CTR_RFC3686_NONCE_SIZE) - return -EINVAL; - - key_len -= CTR_RFC3686_NONCE_SIZE; - memcpy(ctx->u.aes.nonce, key + key_len, CTR_RFC3686_NONCE_SIZE); - - return ccp_aes_setkey(tfm, key, key_len); -} - -static int ccp_aes_rfc3686_crypt(struct skcipher_request *req, bool encrypt) -{ - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm); - struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req); - u8 *iv; - - /* Initialize the CTR block */ - iv = rctx->rfc3686_iv; - memcpy(iv, ctx->u.aes.nonce, CTR_RFC3686_NONCE_SIZE); - - iv += CTR_RFC3686_NONCE_SIZE; - memcpy(iv, req->iv, CTR_RFC3686_IV_SIZE); - - iv += CTR_RFC3686_IV_SIZE; - *(__be32 *)iv = cpu_to_be32(1); - - /* Point to the new IV */ - rctx->rfc3686_info = req->iv; - req->iv = rctx->rfc3686_iv; - - return ccp_aes_crypt(req, encrypt); -} - -static int ccp_aes_rfc3686_encrypt(struct skcipher_request *req) -{ - return ccp_aes_rfc3686_crypt(req, true); -} - -static int ccp_aes_rfc3686_decrypt(struct skcipher_request *req) -{ - return ccp_aes_rfc3686_crypt(req, false); -} - -static int ccp_aes_rfc3686_init_tfm(struct crypto_skcipher *tfm) -{ - struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm); - - ctx->complete = ccp_aes_rfc3686_complete; - ctx->u.aes.key_len = 0; - - crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx)); - - return 0; -} - static const struct skcipher_alg ccp_aes_defaults = { .setkey = ccp_aes_setkey, .encrypt = ccp_aes_encrypt, @@ -221,24 +149,6 @@ static const struct skcipher_alg ccp_aes_defaults = { .base.cra_module = THIS_MODULE, }; -static const struct skcipher_alg ccp_aes_rfc3686_defaults = { - .setkey = ccp_aes_rfc3686_setkey, - .encrypt = ccp_aes_rfc3686_encrypt, - .decrypt = ccp_aes_rfc3686_decrypt, - .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, - .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, - .init = ccp_aes_rfc3686_init_tfm, - - .base.cra_flags = CRYPTO_ALG_ASYNC | - CRYPTO_ALG_ALLOCATES_MEMORY | - CRYPTO_ALG_KERN_DRIVER_ONLY | - CRYPTO_ALG_NEED_FALLBACK, - .base.cra_blocksize = CTR_RFC3686_BLOCK_SIZE, - .base.cra_ctxsize = sizeof(struct ccp_ctx), - .base.cra_priority = CCP_CRA_PRIORITY, - .base.cra_module = THIS_MODULE, -}; - struct ccp_aes_def { enum ccp_aes_mode mode; unsigned int version; @@ -295,15 +205,6 @@ static struct ccp_aes_def aes_algs[] = { .ivsize = AES_BLOCK_SIZE, .alg_defaults = &ccp_aes_defaults, }, - { - .mode = CCP_AES_MODE_CTR, - .version = CCP_VERSION(3, 0), 
- .name = "rfc3686(ctr(aes))", - .driver_name = "rfc3686-ctr-aes-ccp", - .blocksize = 1, - .ivsize = CTR_RFC3686_IV_SIZE, - .alg_defaults = &ccp_aes_rfc3686_defaults, - }, }; static int ccp_register_aes_alg(struct list_head *head, diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h index aed3d2192d013..a837b2a994d9f 100644 --- a/drivers/crypto/ccp/ccp-crypto.h +++ b/drivers/crypto/ccp/ccp-crypto.h @@ -99,8 +99,6 @@ struct ccp_aes_ctx { unsigned int key_len; u8 key[AES_MAX_KEY_SIZE * 2]; - u8 nonce[CTR_RFC3686_NONCE_SIZE]; - /* CMAC key structures */ struct scatterlist k1_sg; struct scatterlist k2_sg; @@ -116,10 +114,6 @@ struct ccp_aes_req_ctx { struct scatterlist tag_sg; u8 tag[AES_BLOCK_SIZE]; - /* Fields used for RFC3686 requests */ - u8 *rfc3686_info; - u8 rfc3686_iv[AES_BLOCK_SIZE]; - struct ccp_cmd cmd; struct skcipher_request fallback_req; // keep at the end From patchwork Tue Jul 28 07:19:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688545 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B657A138A for ; Tue, 28 Jul 2020 07:19:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A7C0B21744 for ; Tue, 28 Jul 2020 07:19:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727948AbgG1HTc (ORCPT ); Tue, 28 Jul 2020 03:19:32 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54894 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTc (ORCPT ); Tue, 28 Jul 2020 03:19:32 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0JtY-0006TY-Ki; Tue, 28 Jul 2020 17:19:29 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:28 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:28 +1000 Subject: [v3 PATCH 22/31] crypto: chelsio - Remove rfc3686 implementation References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The rfc3686 implementation in chelsio is pretty much the same as the generic rfc3686 wrapper. So it can simply be removed to reduce complexity. 
Signed-off-by: Herbert Xu --- drivers/crypto/chelsio/chcr_algo.c | 109 +------------------------------------ 1 file changed, 4 insertions(+), 105 deletions(-) diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c index 13b908ea48738..8374be72454db 100644 --- a/drivers/crypto/chelsio/chcr_algo.c +++ b/drivers/crypto/chelsio/chcr_algo.c @@ -856,9 +856,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam) chcr_req->key_ctx.ctx_hdr = ablkctx->key_ctx_hdr; if ((reqctx->op == CHCR_DECRYPT_OP) && (!(get_cryptoalg_subtype(tfm) == - CRYPTO_ALG_SUB_TYPE_CTR)) && - (!(get_cryptoalg_subtype(tfm) == - CRYPTO_ALG_SUB_TYPE_CTR_RFC3686))) { + CRYPTO_ALG_SUB_TYPE_CTR))) { generate_copy_rrkey(ablkctx, &chcr_req->key_ctx); } else { if ((ablkctx->ciph_mode == CHCR_SCMD_CIPHER_MODE_AES_CBC) || @@ -988,42 +986,6 @@ static int chcr_aes_ctr_setkey(struct crypto_skcipher *cipher, return err; } -static int chcr_aes_rfc3686_setkey(struct crypto_skcipher *cipher, - const u8 *key, - unsigned int keylen) -{ - struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(cipher)); - unsigned int ck_size, context_size; - u16 alignment = 0; - int err; - - if (keylen < CTR_RFC3686_NONCE_SIZE) - return -EINVAL; - memcpy(ablkctx->nonce, key + (keylen - CTR_RFC3686_NONCE_SIZE), - CTR_RFC3686_NONCE_SIZE); - - keylen -= CTR_RFC3686_NONCE_SIZE; - err = chcr_cipher_fallback_setkey(cipher, key, keylen); - if (err) - goto badkey_err; - - ck_size = chcr_keyctx_ck_size(keylen); - alignment = (ck_size == CHCR_KEYCTX_CIPHER_KEY_SIZE_192) ? 8 : 0; - memcpy(ablkctx->key, key, keylen); - ablkctx->enckey_len = keylen; - context_size = (KEY_CONTEXT_HDR_SALT_AND_PAD + - keylen + alignment) >> 4; - - ablkctx->key_ctx_hdr = FILL_KEY_CTX_HDR(ck_size, CHCR_KEYCTX_NO_KEY, - 0, 0, context_size); - ablkctx->ciph_mode = CHCR_SCMD_CIPHER_MODE_AES_CTR; - - return 0; -badkey_err: - ablkctx->enckey_len = 0; - - return err; -} static void ctr_add_iv(u8 *dstiv, u8 *srciv, u32 add) { unsigned int size = AES_BLOCK_SIZE; @@ -1107,10 +1069,6 @@ static int chcr_update_cipher_iv(struct skcipher_request *req, if (subtype == CRYPTO_ALG_SUB_TYPE_CTR) ctr_add_iv(iv, req->iv, (reqctx->processed / AES_BLOCK_SIZE)); - else if (subtype == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686) - *(__be32 *)(reqctx->iv + CTR_RFC3686_NONCE_SIZE + - CTR_RFC3686_IV_SIZE) = cpu_to_be32((reqctx->processed / - AES_BLOCK_SIZE) + 1); else if (subtype == CRYPTO_ALG_SUB_TYPE_XTS) ret = chcr_update_tweak(req, iv, 0); else if (subtype == CRYPTO_ALG_SUB_TYPE_CBC) { @@ -1125,11 +1083,6 @@ static int chcr_update_cipher_iv(struct skcipher_request *req, } -/* We need separate function for final iv because in rfc3686 Initial counter - * starts from 1 and buffer size of iv is 8 byte only which remains constant - * for subsequent update requests - */ - static int chcr_final_cipher_iv(struct skcipher_request *req, struct cpl_fw6_pld *fw6_pld, u8 *iv) { @@ -1313,30 +1266,16 @@ static int process_cipher(struct skcipher_request *req, if (subtype == CRYPTO_ALG_SUB_TYPE_CTR) { bytes = adjust_ctr_overflow(req->iv, bytes); } - if (subtype == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686) { - memcpy(reqctx->iv, ablkctx->nonce, CTR_RFC3686_NONCE_SIZE); - memcpy(reqctx->iv + CTR_RFC3686_NONCE_SIZE, req->iv, - CTR_RFC3686_IV_SIZE); - - /* initialize counter portion of counter block */ - *(__be32 *)(reqctx->iv + CTR_RFC3686_NONCE_SIZE + - CTR_RFC3686_IV_SIZE) = cpu_to_be32(1); - memcpy(reqctx->init_iv, reqctx->iv, IV); - } else { + memcpy(reqctx->iv, req->iv, IV); + memcpy(reqctx->init_iv, req->iv, IV); 
- memcpy(reqctx->iv, req->iv, IV); - memcpy(reqctx->init_iv, req->iv, IV); - } if (unlikely(bytes == 0)) { chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, req); fallback: atomic_inc(&adap->chcr_stats.fallback); err = chcr_cipher_fallback(ablkctx->sw_cipher, req, - subtype == - CRYPTO_ALG_SUB_TYPE_CTR_RFC3686 ? - reqctx->iv : req->iv, - op_type); + req->iv, op_type); goto error; } reqctx->op = op_type; @@ -1486,27 +1425,6 @@ static int chcr_init_tfm(struct crypto_skcipher *tfm) return chcr_device_init(ctx); } -static int chcr_rfc3686_init(struct crypto_skcipher *tfm) -{ - struct skcipher_alg *alg = crypto_skcipher_alg(tfm); - struct chcr_context *ctx = crypto_skcipher_ctx(tfm); - struct ablk_ctx *ablkctx = ABLK_CTX(ctx); - - /*RFC3686 initialises IV counter value to 1, rfc3686(ctr(aes)) - * cannot be used as fallback in chcr_handle_cipher_response - */ - ablkctx->sw_cipher = crypto_alloc_skcipher("ctr(aes)", 0, - CRYPTO_ALG_NEED_FALLBACK); - if (IS_ERR(ablkctx->sw_cipher)) { - pr_err("failed to allocate fallback for %s\n", alg->base.cra_name); - return PTR_ERR(ablkctx->sw_cipher); - } - crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx) + - crypto_skcipher_reqsize(ablkctx->sw_cipher)); - return chcr_device_init(ctx); -} - - static void chcr_exit_tfm(struct crypto_skcipher *tfm) { struct chcr_context *ctx = crypto_skcipher_ctx(tfm); @@ -3894,25 +3812,6 @@ static struct chcr_alg_template driver_algs[] = { .decrypt = chcr_aes_decrypt, } }, - { - .type = CRYPTO_ALG_TYPE_SKCIPHER | - CRYPTO_ALG_SUB_TYPE_CTR_RFC3686, - .is_registered = 0, - .alg.skcipher = { - .base.cra_name = "rfc3686(ctr(aes))", - .base.cra_driver_name = "rfc3686-ctr-aes-chcr", - .base.cra_blocksize = 1, - - .init = chcr_rfc3686_init, - .exit = chcr_exit_tfm, - .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, - .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, - .ivsize = CTR_RFC3686_IV_SIZE, - .setkey = chcr_aes_rfc3686_setkey, - .encrypt = chcr_aes_encrypt, - .decrypt = chcr_aes_decrypt, - } - }, /* SHA */ { .type = CRYPTO_ALG_TYPE_AHASH, From patchwork Tue Jul 28 07:19:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688547 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B66CF138A for ; Tue, 28 Jul 2020 07:19:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A65E520792 for ; Tue, 28 Jul 2020 07:19:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727949AbgG1HTe (ORCPT ); Tue, 28 Jul 2020 03:19:34 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54900 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTe (ORCPT ); Tue, 28 Jul 2020 03:19:34 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jtb-0006U6-5n; Tue, 28 Jul 2020 17:19:32 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:31 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:31 +1000 Subject: [v3 PATCH 23/31] crypto: inside-secure - Set final_chunksize on rfc3686 References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , 
Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The rfc3686 implementation in inside-secure does not support partial operation and therefore this patch sets its final_chunksize to -1 to mark this fact. This patch also sets the chunksize to the underlying block size. Signed-off-by: Herbert Xu --- drivers/crypto/inside-secure/safexcel_cipher.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c index ef04a394ff49d..4e269e92c25dc 100644 --- a/drivers/crypto/inside-secure/safexcel_cipher.c +++ b/drivers/crypto/inside-secure/safexcel_cipher.c @@ -1484,6 +1484,8 @@ struct safexcel_alg_template safexcel_alg_ctr_aes = { .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, .ivsize = CTR_RFC3686_IV_SIZE, + .chunksize = AES_BLOCK_SIZE, + .final_chunksize = -1, .base = { .cra_name = "rfc3686(ctr(aes))", .cra_driver_name = "safexcel-ctr-aes", @@ -3309,6 +3311,8 @@ struct safexcel_alg_template safexcel_alg_ctr_sm4 = { .min_keysize = SM4_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, .max_keysize = SM4_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, .ivsize = CTR_RFC3686_IV_SIZE, + .chunksize = SM4_BLOCK_SIZE, + .final_chunksize = -1, .base = { .cra_name = "rfc3686(ctr(sm4))", .cra_driver_name = "safexcel-ctr-sm4", From patchwork Tue Jul 28 07:19:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688549 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4B3D06C1 for ; Tue, 28 Jul 2020 07:19:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3BB4520792 for ; Tue, 28 Jul 2020 07:19:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727855AbgG1HTg (ORCPT ); Tue, 28 Jul 2020 03:19:36 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54910 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTg (ORCPT ); Tue, 28 Jul 2020 03:19:36 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jtd-0006Ue-HU; Tue, 28 Jul 2020 17:19:34 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:33 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:33 +1000 Subject: [v3 PATCH 24/31] crypto: ixp4xx - Remove rfc3686 implementation References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The rfc3686 implementation in ixp4xx is pretty much the same as the generic rfc3686 wrapper. So it can simply be removed to reduce complexity. 
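To make the replacement concrete, here is a hedged sketch (the function names are made up, not taken from ixp4xx or the generic code) of how the same transform is obtained through the generic template once the driver entry is gone. The template keys itself with the AES key followed by the 4-byte RFC 3686 nonce, which is exactly the key layout the removed setkey expected:

#include <linux/types.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <crypto/skcipher.h>
#include <crypto/ctr.h>
#include <crypto/aes.h>

/* Allocate rfc3686(ctr(aes)); the template can wrap any ctr(aes) provider */
static struct crypto_skcipher *rfc3686_alloc_example(void)
{
	return crypto_alloc_skcipher("rfc3686(ctr(aes))", 0, 0);
}

/* RFC 3686 convention: the nonce is carried in the last 4 bytes of the key */
static int rfc3686_setkey_example(struct crypto_skcipher *tfm,
				  const u8 *aes_key, unsigned int aes_keylen,
				  const u8 nonce[CTR_RFC3686_NONCE_SIZE])
{
	u8 key[AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE];

	if (aes_keylen > AES_MAX_KEY_SIZE)
		return -EINVAL;

	memcpy(key, aes_key, aes_keylen);
	memcpy(key + aes_keylen, nonce, CTR_RFC3686_NONCE_SIZE);
	return crypto_skcipher_setkey(tfm, key,
				      aes_keylen + CTR_RFC3686_NONCE_SIZE);
}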
Signed-off-by: Herbert Xu --- drivers/crypto/ixp4xx_crypto.c | 53 ----------------------------------------- 1 file changed, 53 deletions(-) diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c index f478bb0a566af..c93f5db8d0503 100644 --- a/drivers/crypto/ixp4xx_crypto.c +++ b/drivers/crypto/ixp4xx_crypto.c @@ -180,7 +180,6 @@ struct ixp_ctx { int enckey_len; u8 enckey[MAX_KEYLEN]; u8 salt[MAX_IVLEN]; - u8 nonce[CTR_RFC3686_NONCE_SIZE]; unsigned salted; atomic_t configuring; struct completion completion; @@ -848,22 +847,6 @@ static int ablk_des3_setkey(struct crypto_skcipher *tfm, const u8 *key, ablk_setkey(tfm, key, key_len); } -static int ablk_rfc3686_setkey(struct crypto_skcipher *tfm, const u8 *key, - unsigned int key_len) -{ - struct ixp_ctx *ctx = crypto_skcipher_ctx(tfm); - - /* the nonce is stored in bytes at end of key */ - if (key_len < CTR_RFC3686_NONCE_SIZE) - return -EINVAL; - - memcpy(ctx->nonce, key + (key_len - CTR_RFC3686_NONCE_SIZE), - CTR_RFC3686_NONCE_SIZE); - - key_len -= CTR_RFC3686_NONCE_SIZE; - return ablk_setkey(tfm, key, key_len); -} - static int ablk_perform(struct skcipher_request *req, int encrypt) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); @@ -947,28 +930,6 @@ static int ablk_decrypt(struct skcipher_request *req) return ablk_perform(req, 0); } -static int ablk_rfc3686_crypt(struct skcipher_request *req) -{ - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct ixp_ctx *ctx = crypto_skcipher_ctx(tfm); - u8 iv[CTR_RFC3686_BLOCK_SIZE]; - u8 *info = req->iv; - int ret; - - /* set up counter block */ - memcpy(iv, ctx->nonce, CTR_RFC3686_NONCE_SIZE); - memcpy(iv + CTR_RFC3686_NONCE_SIZE, info, CTR_RFC3686_IV_SIZE); - - /* initialize counter portion of counter block */ - *(__be32 *)(iv + CTR_RFC3686_NONCE_SIZE + CTR_RFC3686_IV_SIZE) = - cpu_to_be32(1); - - req->iv = iv; - ret = ablk_perform(req, 1); - req->iv = info; - return ret; -} - static int aead_perform(struct aead_request *req, int encrypt, int cryptoffset, int eff_cryptlen, u8 *iv) { @@ -1269,20 +1230,6 @@ static struct ixp_alg ixp4xx_algos[] = { }, .cfg_enc = CIPH_ENCR | MOD_AES | MOD_CTR, .cfg_dec = CIPH_ENCR | MOD_AES | MOD_CTR, -}, { - .crypto = { - .base.cra_name = "rfc3686(ctr(aes))", - .base.cra_blocksize = 1, - - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = ablk_rfc3686_setkey, - .encrypt = ablk_rfc3686_crypt, - .decrypt = ablk_rfc3686_crypt, - }, - .cfg_enc = CIPH_ENCR | MOD_AES | MOD_CTR, - .cfg_dec = CIPH_ENCR | MOD_AES | MOD_CTR, } }; static struct ixp_aead_alg ixp4xx_aeads[] = { From patchwork Tue Jul 28 07:19:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688551 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9FDDA6C1 for ; Tue, 28 Jul 2020 07:19:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 912DA207F5 for ; Tue, 28 Jul 2020 07:19:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727950AbgG1HTj (ORCPT ); Tue, 28 Jul 2020 03:19:39 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54918 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTi (ORCPT 
); Tue, 28 Jul 2020 03:19:38 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jtf-0006VB-Vo; Tue, 28 Jul 2020 17:19:37 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:35 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:35 +1000 Subject: [v3 PATCH 25/31] crypto: nx - Set final_chunksize on rfc3686 References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The rfc3686 implementation in nx does not support partial operation and therefore this patch sets its final_chunksize to -1 to mark this fact. Signed-off-by: Herbert Xu --- drivers/crypto/nx/nx-aes-ctr.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/crypto/nx/nx-aes-ctr.c b/drivers/crypto/nx/nx-aes-ctr.c index 6d5ce1a66f1ee..0e95e975cf5ba 100644 --- a/drivers/crypto/nx/nx-aes-ctr.c +++ b/drivers/crypto/nx/nx-aes-ctr.c @@ -142,4 +142,5 @@ struct skcipher_alg nx_ctr3686_aes_alg = { .encrypt = ctr3686_aes_nx_crypt, .decrypt = ctr3686_aes_nx_crypt, .chunksize = AES_BLOCK_SIZE, + .final_chunksize = -1, }; From patchwork Tue Jul 28 07:19:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688553 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0C116138A for ; Tue, 28 Jul 2020 07:19:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F070020792 for ; Tue, 28 Jul 2020 07:19:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727951AbgG1HTl (ORCPT ); Tue, 28 Jul 2020 03:19:41 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54924 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727852AbgG1HTl (ORCPT ); Tue, 28 Jul 2020 03:19:41 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jti-0006Vi-Dx; Tue, 28 Jul 2020 17:19:39 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:38 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:38 +1000 Subject: [v3 PATCH 26/31] crypto: essiv - Set final_chunksize References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The essiv template does not support partial operation and therefore this patch sets its final_chunksize to -1 to mark this fact. 
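A caller that wants to chain requests can detect this with the helper introduced earlier in the series. A minimal caller-side sketch (the wrapper name here is illustrative):

#include <linux/types.h>
#include <crypto/skcipher.h>

/* -1 means the algorithm must see the whole message in a single request */
static bool can_chain_example(struct crypto_skcipher *tfm)
{
	return crypto_skcipher_final_chunksize(tfm) >= 0;
}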
Signed-off-by: Herbert Xu --- crypto/essiv.c | 1 + 1 file changed, 1 insertion(+) diff --git a/crypto/essiv.c b/crypto/essiv.c index d012be23d496d..dd19cfefe559c 100644 --- a/crypto/essiv.c +++ b/crypto/essiv.c @@ -580,6 +580,7 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb) skcipher_inst->alg.ivsize = ivsize; skcipher_inst->alg.chunksize = crypto_skcipher_alg_chunksize(skcipher_alg); skcipher_inst->alg.walksize = crypto_skcipher_alg_walksize(skcipher_alg); + skcipher_inst->alg.final_chunksize = -1; skcipher_inst->free = essiv_skcipher_free_instance; From patchwork Tue Jul 28 07:19:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688555 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6B62E138A for ; Tue, 28 Jul 2020 07:19:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5BE0B207E8 for ; Tue, 28 Jul 2020 07:19:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727852AbgG1HTn (ORCPT ); Tue, 28 Jul 2020 03:19:43 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54934 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727118AbgG1HTn (ORCPT ); Tue, 28 Jul 2020 03:19:43 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jtk-0006WG-Oz; Tue, 28 Jul 2020 17:19:41 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:40 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:40 +1000 Subject: [v3 PATCH 27/31] crypto: simd - Add support for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch sets the simd final chunk size from its child skcipher. 
Signed-off-by: Herbert Xu --- crypto/simd.c | 1 + 1 file changed, 1 insertion(+) diff --git a/crypto/simd.c b/crypto/simd.c index edaa479a1ec5e..260c26ad92fdf 100644 --- a/crypto/simd.c +++ b/crypto/simd.c @@ -181,6 +181,7 @@ struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname, alg->ivsize = ialg->ivsize; alg->chunksize = ialg->chunksize; + alg->final_chunksize = ialg->final_chunksize; alg->min_keysize = ialg->min_keysize; alg->max_keysize = ialg->max_keysize; From patchwork Tue Jul 28 07:19:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688557 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 08AD26C1 for ; Tue, 28 Jul 2020 07:19:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EDE89207F5 for ; Tue, 28 Jul 2020 07:19:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727777AbgG1HTq (ORCPT ); Tue, 28 Jul 2020 03:19:46 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54942 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727118AbgG1HTq (ORCPT ); Tue, 28 Jul 2020 03:19:46 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jtn-0006Wn-1L; Tue, 28 Jul 2020 17:19:44 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:43 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:43 +1000 Subject: [v3 PATCH 28/31] crypto: arm64/essiv - Set final_chunksize References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The arm64 essiv implementation does not support partial operation and therefore this patch sets its final_chunksize to -1 to mark this fact. 
Signed-off-by: Herbert Xu --- arch/arm64/crypto/aes-glue.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index f63feb00e354d..a0ac7bf070d53 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -769,6 +769,7 @@ static struct skcipher_alg aes_algs[] = { { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, + .final_chunksize = -1, .setkey = essiv_cbc_set_key, .encrypt = essiv_cbc_encrypt, .decrypt = essiv_cbc_decrypt, From patchwork Tue Jul 28 07:19:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688559 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F1FCB138A for ; Tue, 28 Jul 2020 07:19:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E299620829 for ; Tue, 28 Jul 2020 07:19:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727808AbgG1HTs (ORCPT ); Tue, 28 Jul 2020 03:19:48 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54950 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727118AbgG1HTs (ORCPT ); Tue, 28 Jul 2020 03:19:48 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jtp-0006XK-DW; Tue, 28 Jul 2020 17:19:46 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:45 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:45 +1000 Subject: [v3 PATCH 29/31] crypto: ccree - Set final_chunksize on essiv References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The ccree essiv implementation does not support partial operation and therefore this patch sets its final_chunksize to -1 to mark this fact. 
Signed-off-by: Herbert Xu --- drivers/crypto/ccree/cc_cipher.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c index 83567b60d6908..a380391b2186a 100644 --- a/drivers/crypto/ccree/cc_cipher.c +++ b/drivers/crypto/ccree/cc_cipher.c @@ -1067,6 +1067,7 @@ static const struct cc_alg_template skcipher_algs[] = { .min_keysize = CC_HW_KEY_SIZE, .max_keysize = CC_HW_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, + .final_chunksize = -1, }, .cipher_mode = DRV_CIPHER_ESSIV, .flow_mode = S_DIN_to_AES, @@ -1198,6 +1199,7 @@ static const struct cc_alg_template skcipher_algs[] = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, + .final_chunksize = -1, }, .cipher_mode = DRV_CIPHER_ESSIV, .flow_mode = S_DIN_to_AES, From patchwork Tue Jul 28 07:19:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688561 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 56968138A for ; Tue, 28 Jul 2020 07:19:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4794E207F5 for ; Tue, 28 Jul 2020 07:19:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727856AbgG1HTu (ORCPT ); Tue, 28 Jul 2020 03:19:50 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54956 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727118AbgG1HTu (ORCPT ); Tue, 28 Jul 2020 03:19:50 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jtr-0006Xr-RA; Tue, 28 Jul 2020 17:19:48 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:47 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:47 +1000 Subject: [v3 PATCH 30/31] crypto: kw - Set final_chunksize References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The kw algorithm does not support partial operation and therefore this patch sets its final_chunksize to -1 to mark this fact. 
Signed-off-by: Herbert Xu --- crypto/keywrap.c | 1 + 1 file changed, 1 insertion(+) diff --git a/crypto/keywrap.c b/crypto/keywrap.c index 0355cce21b1e2..b99568c6d032c 100644 --- a/crypto/keywrap.c +++ b/crypto/keywrap.c @@ -280,6 +280,7 @@ static int crypto_kw_create(struct crypto_template *tmpl, struct rtattr **tb) inst->alg.base.cra_blocksize = SEMIBSIZE; inst->alg.base.cra_alignmask = 0; inst->alg.ivsize = SEMIBSIZE; + inst->alg.final_chunksize = -1; inst->alg.encrypt = crypto_kw_encrypt; inst->alg.decrypt = crypto_kw_decrypt; From patchwork Tue Jul 28 07:19:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 11688563 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 825C36C1 for ; Tue, 28 Jul 2020 07:19:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6A603207F5 for ; Tue, 28 Jul 2020 07:19:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727955AbgG1HTx (ORCPT ); Tue, 28 Jul 2020 03:19:53 -0400 Received: from helcar.hmeau.com ([216.24.177.18]:54964 "EHLO fornost.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727118AbgG1HTx (ORCPT ); Tue, 28 Jul 2020 03:19:53 -0400 Received: from gwarestrin.arnor.me.apana.org.au ([192.168.0.7]) by fornost.hmeau.com with smtp (Exim 4.92 #5 (Debian)) id 1k0Jtu-0006YO-7O; Tue, 28 Jul 2020 17:19:51 +1000 Received: by gwarestrin.arnor.me.apana.org.au (sSMTP sendmail emulation); Tue, 28 Jul 2020 17:19:50 +1000 From: "Herbert Xu" Date: Tue, 28 Jul 2020 17:19:50 +1000 Subject: [v3 PATCH 31/31] crypto: salsa20-generic - Add support for chaining References: <20200728071746.GA22352@gondor.apana.org.au> To: Ard Biesheuvel , Stephan Mueller , Linux Crypto Mailing List , Eric Biggers Message-Id: Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org As it stands salsa20 cannot do chaining. That is, it has to handle each request as a whole. This patch adds support for chaining when the CRYPTO_TFM_REQ_MORE flag is set.
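As a usage illustration only (a hedged sketch, not part of the patch: it assumes a synchronous tfm and uses made-up function names), a caller can now split a long message into chained requests by reusing one request object, keeping every part except the last a multiple of the chunksize, and setting CRYPTO_TFM_REQ_MORE on all but the final part:

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <crypto/skcipher.h>

static int chained_encrypt_example(struct crypto_skcipher *tfm,
				   struct scatterlist *part1, unsigned int len1,
				   struct scatterlist *part2, unsigned int len2,
				   u8 *iv)
{
	struct skcipher_request *req;
	int err;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	/* Non-final part: len1 must be a multiple of the chunksize */
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MORE, NULL, NULL);
	skcipher_request_set_crypt(req, part1, part1, len1, iv);
	err = crypto_skcipher_encrypt(req);
	if (err)
		goto out;

	/*
	 * Final part: CRYPTO_TFM_REQ_MORE is cleared.  The same req is
	 * reused so the keystream state kept in the request context
	 * carries over from the first call.
	 */
	skcipher_request_set_callback(req, 0, NULL, NULL);
	skcipher_request_set_crypt(req, part2, part2, len2, iv);
	err = crypto_skcipher_encrypt(req);
out:
	skcipher_request_free(req);
	return err;
}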
Signed-off-by: Herbert Xu --- crypto/salsa20_generic.c | 20 ++++++++++++++++---- 1 file changed, 16 insertions(+), 4 deletions(-) diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c index 3418869dabefd..dd4b4cc8e76b9 100644 --- a/crypto/salsa20_generic.c +++ b/crypto/salsa20_generic.c @@ -21,7 +21,10 @@ #include #include +#include +#include #include +#include #define SALSA20_IV_SIZE 8 #define SALSA20_MIN_KEY_SIZE 16 @@ -32,6 +35,11 @@ struct salsa20_ctx { u32 initial_state[16]; }; +struct salsa20_reqctx { + u32 state[16]; + bool init; +}; + static void salsa20_block(u32 *state, __le32 *stream) { u32 x[16]; @@ -154,13 +162,16 @@ static int salsa20_crypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); const struct salsa20_ctx *ctx = crypto_skcipher_ctx(tfm); + struct salsa20_reqctx *rctx = skcipher_request_ctx(req); struct skcipher_walk walk; - u32 state[16]; int err; err = skcipher_walk_virt(&walk, req, false); - salsa20_init(state, ctx, req->iv); + if (!rctx->init) + salsa20_init(rctx->state, ctx, req->iv); + + rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE; while (walk.nbytes > 0) { unsigned int nbytes = walk.nbytes; @@ -168,8 +179,8 @@ static int salsa20_crypt(struct skcipher_request *req) if (nbytes < walk.total) nbytes = round_down(nbytes, walk.stride); - salsa20_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr, - nbytes); + salsa20_docrypt(rctx->state, walk.dst.virt.addr, + walk.src.virt.addr, nbytes); err = skcipher_walk_done(&walk, walk.nbytes - nbytes); } @@ -188,6 +199,7 @@ static struct skcipher_alg alg = { .max_keysize = SALSA20_MAX_KEY_SIZE, .ivsize = SALSA20_IV_SIZE, .chunksize = SALSA20_BLOCK_SIZE, + .reqsize = sizeof(struct salsa20_reqctx), .setkey = salsa20_setkey, .encrypt = salsa20_crypt, .decrypt = salsa20_crypt,