From patchwork Tue Jan 31 08:01:45 2023
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13122352
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:01:45 +0800
Subject: [PATCH 1/32] crypto: api - Add scaffolding to change completion function signature
To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy
X-Mailing-List: linux-crypto@vger.kernel.org

The crypto completion function currently takes a pointer to a struct
crypto_async_request object. However, in reality the API does not allow
the use of any part of the object apart from the data field. For example,
ahash/shash will create a fake object on the stack to pass along a
different data field. This leads to potential bugs where the user may try
to dereference or otherwise use the crypto_async_request object.

This patch adds some temporary scaffolding so that the completion function
can take a void * instead. Once affected users have been converted this
can be removed.

The helper crypto_request_complete will remain even after the conversion
is complete. It should be used instead of calling the completion function
directly.
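To make the intended usage concrete, here is a minimal sketch (not part of the patch; my_op_ctx, my_op_done and my_finish_request are hypothetical names) of the two sides of the transition: a completion callback that only reaches its context through crypto_get_completion_data(), and a caller that completes requests through crypto_request_complete().

#include <crypto/algapi.h>
#include <linux/crypto.h>

struct my_op_ctx {
        int status;
};

/*
 * Completion callback written against the transitional helpers: it only
 * touches the completion data (whatever was passed as the callback data
 * when the request was set up), never the async request object itself.
 */
static void my_op_done(crypto_completion_data_t *data, int err)
{
        struct my_op_ctx *ctx = crypto_get_completion_data(data);

        ctx->status = err;
}

/* Core/driver side: complete a request through the new helper. */
static void my_finish_request(struct crypto_async_request *req, int err)
{
        crypto_request_complete(req, err);      /* not req->complete(req, err) */
}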
Signed-off-by: Herbert Xu Reviewed-by: Giovanni Cabiddu --- include/crypto/algapi.h | 7 +++++++ include/linux/crypto.h | 6 ++++++ 2 files changed, 13 insertions(+) diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h index 61b327206b55..1fd81e74a174 100644 --- a/include/crypto/algapi.h +++ b/include/crypto/algapi.h @@ -302,4 +302,11 @@ enum { CRYPTO_MSG_ALG_LOADED, }; +static inline void crypto_request_complete(struct crypto_async_request *req, + int err) +{ + crypto_completion_t complete = req->complete; + complete(req, err); +} + #endif /* _CRYPTO_ALGAPI_H */ diff --git a/include/linux/crypto.h b/include/linux/crypto.h index 5d1e961f810e..b18f6e669fb1 100644 --- a/include/linux/crypto.h +++ b/include/linux/crypto.h @@ -176,6 +176,7 @@ struct crypto_async_request; struct crypto_tfm; struct crypto_type; +typedef struct crypto_async_request crypto_completion_data_t; typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err); /** @@ -595,6 +596,11 @@ struct crypto_wait { /* * Async ops completion helper functioons */ +static inline void *crypto_get_completion_data(crypto_completion_data_t *req) +{ + return req->data; +} + void crypto_req_done(struct crypto_async_request *req, int err); static inline int crypto_wait_req(int err, struct crypto_wait *wait) From patchwork Tue Jan 31 08:01:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122353 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80739C636CC for ; Tue, 31 Jan 2023 08:01:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229706AbjAaIBw (ORCPT ); Tue, 31 Jan 2023 03:01:52 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58848 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229651AbjAaIBv (ORCPT ); Tue, 31 Jan 2023 03:01:51 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4A4EE40BF5 for ; Tue, 31 Jan 2023 00:01:50 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlaN-005ve7-GT; Tue, 31 Jan 2023 16:01:48 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:01:47 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:01:47 +0800 Subject: [PATCH 2/32] crypto: cryptd - Use subreq for AEAD References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org AEAD reuses the existing request object for its child. This is error-prone and unnecessary. This patch adds a subrequest object just like we do for skcipher and hash. This patch also restores the original completion function as we do for skcipher/hash. 
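The sub-request pattern adopted here can be sketched as follows for a hypothetical wrapping template (the wrap_* names are illustrative; the cryptd specifics are in the diff that follows): the child aead_request is embedded at the end of the wrapper's request context, the reqsize is extended to cover the child's own context, and the wrapper populates the sub-request instead of re-targeting the caller's request.

#include <crypto/aead.h>
#include <crypto/internal/aead.h>

struct wrap_aead_request_ctx {
        struct aead_request subreq;     /* child request; its ctx follows in memory */
};

static void wrap_aead_set_reqsize(struct crypto_aead *tfm,
                                  struct crypto_aead *child)
{
        /* Room for our own context plus whatever the child request needs. */
        crypto_aead_set_reqsize(tfm, sizeof(struct wrap_aead_request_ctx) +
                                     crypto_aead_reqsize(child));
}

static int wrap_aead_encrypt(struct aead_request *req, struct crypto_aead *child)
{
        struct wrap_aead_request_ctx *rctx = aead_request_ctx(req);
        struct aead_request *subreq = &rctx->subreq;

        aead_request_set_tfm(subreq, child);
        /* cryptd invokes the child synchronously from its worker, hence no callback. */
        aead_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
        aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
                               req->iv);
        aead_request_set_ad(subreq, req->assoclen);

        return crypto_aead_encrypt(subreq);
}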
Signed-off-by: Herbert Xu --- crypto/cryptd.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) diff --git a/crypto/cryptd.c b/crypto/cryptd.c index 1ff58a021d57..c0c416eda8e8 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c @@ -93,6 +93,7 @@ struct cryptd_aead_ctx { struct cryptd_aead_request_ctx { crypto_completion_t complete; + struct aead_request req; }; static void cryptd_queue_worker(struct work_struct *work); @@ -715,6 +716,7 @@ static void cryptd_aead_crypt(struct aead_request *req, int (*crypt)(struct aead_request *req)) { struct cryptd_aead_request_ctx *rctx; + struct aead_request *subreq; struct cryptd_aead_ctx *ctx; crypto_completion_t compl; struct crypto_aead *tfm; @@ -722,14 +724,24 @@ static void cryptd_aead_crypt(struct aead_request *req, rctx = aead_request_ctx(req); compl = rctx->complete; + subreq = &rctx->req; tfm = crypto_aead_reqtfm(req); if (unlikely(err == -EINPROGRESS)) goto out; - aead_request_set_tfm(req, child); + + aead_request_set_tfm(subreq, child); + aead_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, + NULL, NULL); + aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, + req->iv); + aead_request_set_ad(subreq, req->assoclen); + err = crypt( req ); + req->base.complete = compl; + out: ctx = crypto_aead_ctx(tfm); refcnt = refcount_read(&ctx->refcnt); @@ -798,8 +810,8 @@ static int cryptd_aead_init_tfm(struct crypto_aead *tfm) ctx->child = cipher; crypto_aead_set_reqsize( - tfm, max((unsigned)sizeof(struct cryptd_aead_request_ctx), - crypto_aead_reqsize(cipher))); + tfm, sizeof(struct cryptd_aead_request_ctx) + + crypto_aead_reqsize(cipher)); return 0; } From patchwork Tue Jan 31 08:01:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122354 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A1C67C636CC for ; Tue, 31 Jan 2023 08:01:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230239AbjAaIB5 (ORCPT ); Tue, 31 Jan 2023 03:01:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58876 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229651AbjAaIBx (ORCPT ); Tue, 31 Jan 2023 03:01:53 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6F47542BD3 for ; Tue, 31 Jan 2023 00:01:52 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlaP-005veJ-L6; Tue, 31 Jan 2023 16:01:50 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:01:49 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:01:49 +0800 Subject: [PATCH 3/32] crypto: acompress - Use crypto_request_complete References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: 
X-Mailing-List: linux-crypto@vger.kernel.org Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu Reviewed-by: Giovanni Cabiddu --- include/crypto/internal/acompress.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h index 49339003bd2c..978b57a3f4f0 100644 --- a/include/crypto/internal/acompress.h +++ b/include/crypto/internal/acompress.h @@ -28,7 +28,7 @@ static inline void *acomp_tfm_ctx(struct crypto_acomp *tfm) static inline void acomp_request_complete(struct acomp_req *req, int err) { - req->base.complete(&req->base, err); + crypto_request_complete(&req->base, err); } static inline const char *acomp_alg_name(struct crypto_acomp *tfm) From patchwork Tue Jan 31 08:01:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122355 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51F9AC38142 for ; Tue, 31 Jan 2023 08:01:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229826AbjAaIB5 (ORCPT ); Tue, 31 Jan 2023 03:01:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58904 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230001AbjAaIBz (ORCPT ); Tue, 31 Jan 2023 03:01:55 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5EF8140BF5 for ; Tue, 31 Jan 2023 00:01:54 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlaR-005veW-Oc; Tue, 31 Jan 2023 16:01:52 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:01:51 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:01:51 +0800 Subject: [PATCH 4/32] crypto: aead - Use crypto_request_complete References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the crypto_request_complete helper instead of calling the completion function directly. 
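This and the sibling conversions in patches 3-8 make the type-specific helpers the single funnel into crypto_request_complete(). On the caller side the pattern is simply the following sketch (my_acomp_done is a hypothetical driver function):

#include <crypto/internal/acompress.h>

/* Hypothetical completion path in a driver's interrupt/bottom-half handler. */
static void my_acomp_done(struct acomp_req *req, int err)
{
        /* Resolves to crypto_request_complete(&req->base, err). */
        acomp_request_complete(req, err);
}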
Signed-off-by: Herbert Xu --- include/crypto/internal/aead.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/crypto/internal/aead.h b/include/crypto/internal/aead.h index cd8cb1e921b7..28a95eb3182d 100644 --- a/include/crypto/internal/aead.h +++ b/include/crypto/internal/aead.h @@ -82,7 +82,7 @@ static inline void *aead_request_ctx_dma(struct aead_request *req) static inline void aead_request_complete(struct aead_request *req, int err) { - req->base.complete(&req->base, err); + crypto_request_complete(&req->base, err); } static inline u32 aead_request_flags(struct aead_request *req) From patchwork Tue Jan 31 08:01:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122356 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 83AB6C38142 for ; Tue, 31 Jan 2023 08:02:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230255AbjAaICE (ORCPT ); Tue, 31 Jan 2023 03:02:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229651AbjAaIB5 (ORCPT ); Tue, 31 Jan 2023 03:01:57 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9F68A41B6D for ; Tue, 31 Jan 2023 00:01:56 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlaT-005vfG-RY; Tue, 31 Jan 2023 16:01:54 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:01:53 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:01:53 +0800 Subject: [PATCH 5/32] crypto: akcipher - Use crypto_request_complete References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the crypto_request_complete helper instead of calling the completion function directly. 
Signed-off-by: Herbert Xu --- include/crypto/internal/akcipher.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/crypto/internal/akcipher.h b/include/crypto/internal/akcipher.h index aaf1092b93b8..a0fba4b2eccf 100644 --- a/include/crypto/internal/akcipher.h +++ b/include/crypto/internal/akcipher.h @@ -69,7 +69,7 @@ static inline void *akcipher_tfm_ctx_dma(struct crypto_akcipher *tfm) static inline void akcipher_request_complete(struct akcipher_request *req, int err) { - req->base.complete(&req->base, err); + crypto_request_complete(&req->base, err); } static inline const char *akcipher_alg_name(struct crypto_akcipher *tfm) From patchwork Tue Jan 31 08:01:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122357 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A51DEC636CD for ; Tue, 31 Jan 2023 08:02:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229651AbjAaICF (ORCPT ); Tue, 31 Jan 2023 03:02:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58958 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229991AbjAaICA (ORCPT ); Tue, 31 Jan 2023 03:02:00 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BF8CD42BE2 for ; Tue, 31 Jan 2023 00:01:58 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlaV-005vfk-Vj; Tue, 31 Jan 2023 16:01:57 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:01:55 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:01:55 +0800 Subject: [PATCH 6/32] crypto: hash - Use crypto_request_complete References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the crypto_request_complete helper instead of calling the completion function directly. This patch also removes the voodoo programming previously used for unaligned ahash operations and replaces it with a sub-request. 
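The replacement sub-request is a single allocation holding the request itself, the transform's request context, alignment slack and the aligned result buffer. A condensed sketch of the arithmetic, mirroring ahash_save_req() in the diff that follows (alloc_aligned_subreq is an illustrative name and the GFP flags are simplified to GFP_KERNEL):

#include <crypto/algapi.h>
#include <crypto/hash.h>
#include <linux/kernel.h>
#include <linux/slab.h>

static struct ahash_request *alloc_aligned_subreq(struct ahash_request *req)
{
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
        unsigned long alignmask = crypto_ahash_alignmask(tfm);
        unsigned int ds = crypto_ahash_digestsize(tfm);
        unsigned int reqsize = ALIGN(crypto_ahash_reqsize(tfm),
                                     crypto_tfm_ctx_alignment());
        struct ahash_request *subreq;
        u8 *result;

        /* One allocation: request + child ctx + alignment slack + digest. */
        subreq = kmalloc(sizeof(*subreq) + reqsize + ds +
                         (alignmask & ~(crypto_tfm_ctx_alignment() - 1)),
                         GFP_KERNEL);
        if (!subreq)
                return NULL;

        /* The result buffer sits after the context, aligned for the tfm. */
        result = PTR_ALIGN((u8 *)(subreq + 1) + reqsize, alignmask + 1);

        ahash_request_set_tfm(subreq, tfm);
        ahash_request_set_crypt(subreq, req->src, result, req->nbytes);

        return subreq;
}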
Signed-off-by: Herbert Xu --- crypto/ahash.c | 137 +++++++++++++---------------------------- include/crypto/internal/hash.h | 2 2 files changed, 45 insertions(+), 94 deletions(-) diff --git a/crypto/ahash.c b/crypto/ahash.c index 4b089f1b770f..369447e483cd 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -190,121 +190,69 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, } EXPORT_SYMBOL_GPL(crypto_ahash_setkey); -static inline unsigned int ahash_align_buffer_size(unsigned len, - unsigned long mask) -{ - return len + (mask & ~(crypto_tfm_ctx_alignment() - 1)); -} - static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); unsigned long alignmask = crypto_ahash_alignmask(tfm); unsigned int ds = crypto_ahash_digestsize(tfm); - struct ahash_request_priv *priv; + struct ahash_request *subreq; + unsigned int subreq_size; + unsigned int reqsize; + u8 *result; + u32 flags; - priv = kmalloc(sizeof(*priv) + ahash_align_buffer_size(ds, alignmask), - (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC); - if (!priv) + subreq_size = sizeof(*subreq); + reqsize = crypto_ahash_reqsize(tfm); + reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment()); + subreq_size += reqsize; + subreq_size += ds; + subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1); + + flags = ahash_request_flags(req); + subreq = kmalloc(subreq_size, (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? + GFP_KERNEL : GFP_ATOMIC); + if (!subreq) return -ENOMEM; - /* - * WARNING: Voodoo programming below! - * - * The code below is obscure and hard to understand, thus explanation - * is necessary. See include/crypto/hash.h and include/linux/crypto.h - * to understand the layout of structures used here! - * - * The code here will replace portions of the ORIGINAL request with - * pointers to new code and buffers so the hashing operation can store - * the result in aligned buffer. We will call the modified request - * an ADJUSTED request. - * - * The newly mangled request will look as such: - * - * req { - * .result = ADJUSTED[new aligned buffer] - * .base.complete = ADJUSTED[pointer to completion function] - * .base.data = ADJUSTED[*req (pointer to self)] - * .priv = ADJUSTED[new priv] { - * .result = ORIGINAL(result) - * .complete = ORIGINAL(base.complete) - * .data = ORIGINAL(base.data) - * } - */ - - priv->result = req->result; - priv->complete = req->base.complete; - priv->data = req->base.data; - priv->flags = req->base.flags; - - /* - * WARNING: We do not backup req->priv here! The req->priv - * is for internal use of the Crypto API and the - * user must _NOT_ _EVER_ depend on it's content! - */ - - req->result = PTR_ALIGN((u8 *)priv->ubuf, alignmask + 1); - req->base.complete = cplt; - req->base.data = req; - req->priv = priv; + ahash_request_set_tfm(subreq, tfm); + ahash_request_set_callback(subreq, flags, cplt, req); + + result = (u8 *)(subreq + 1) + reqsize; + result = PTR_ALIGN(result, alignmask + 1); + + ahash_request_set_crypt(subreq, req->src, result, req->nbytes); + + req->priv = subreq; return 0; } static void ahash_restore_req(struct ahash_request *req, int err) { - struct ahash_request_priv *priv = req->priv; + struct ahash_request *subreq = req->priv; if (!err) - memcpy(priv->result, req->result, + memcpy(req->result, subreq->result, crypto_ahash_digestsize(crypto_ahash_reqtfm(req))); - /* Restore the original crypto request. 
*/ - req->result = priv->result; - - ahash_request_set_callback(req, priv->flags, - priv->complete, priv->data); req->priv = NULL; - /* Free the req->priv.priv from the ADJUSTED request. */ - kfree_sensitive(priv); -} - -static void ahash_notify_einprogress(struct ahash_request *req) -{ - struct ahash_request_priv *priv = req->priv; - struct crypto_async_request oreq; - - oreq.data = priv->data; - - priv->complete(&oreq, -EINPROGRESS); + kfree_sensitive(subreq); } static void ahash_op_unaligned_done(struct crypto_async_request *req, int err) { struct ahash_request *areq = req->data; - if (err == -EINPROGRESS) { - ahash_notify_einprogress(areq); - return; - } - - /* - * Restore the original request, see ahash_op_unaligned() for what - * goes where. - * - * The "struct ahash_request *req" here is in fact the "req.base" - * from the ADJUSTED request from ahash_op_unaligned(), thus as it - * is a pointer to self, it is also the ADJUSTED "req" . - */ + if (err == -EINPROGRESS) + goto out; /* First copy req->result into req->priv.result */ ahash_restore_req(areq, err); +out: /* Complete the ORIGINAL request. */ - areq->base.complete(&areq->base, err); + ahash_request_complete(areq, err); } static int ahash_op_unaligned(struct ahash_request *req, @@ -391,15 +339,17 @@ static void ahash_def_finup_done2(struct crypto_async_request *req, int err) ahash_restore_req(areq, err); - areq->base.complete(&areq->base, err); + ahash_request_complete(areq, err); } static int ahash_def_finup_finish1(struct ahash_request *req, int err) { + struct ahash_request *subreq = req->priv; + if (err) goto out; - req->base.complete = ahash_def_finup_done2; + subreq->base.complete = ahash_def_finup_done2; err = crypto_ahash_reqtfm(req)->final(req); if (err == -EINPROGRESS || err == -EBUSY) @@ -413,19 +363,20 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err) static void ahash_def_finup_done1(struct crypto_async_request *req, int err) { struct ahash_request *areq = req->data; + struct ahash_request *subreq; - if (err == -EINPROGRESS) { - ahash_notify_einprogress(areq); - return; - } + if (err == -EINPROGRESS) + goto out; - areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + subreq = areq->priv; + subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG; err = ahash_def_finup_finish1(areq, err); - if (areq->priv) + if (err == -EINPROGRESS || err == -EBUSY) return; - areq->base.complete(&areq->base, err); +out: + ahash_request_complete(areq, err); } static int ahash_def_finup(struct ahash_request *req) diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h index 1a2a41b79253..0b259dbb97af 100644 --- a/include/crypto/internal/hash.h +++ b/include/crypto/internal/hash.h @@ -199,7 +199,7 @@ static inline void *ahash_request_ctx_dma(struct ahash_request *req) static inline void ahash_request_complete(struct ahash_request *req, int err) { - req->base.complete(&req->base, err); + crypto_request_complete(&req->base, err); } static inline u32 ahash_request_flags(struct ahash_request *req) From patchwork Tue Jan 31 08:01:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122358 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05DFAC636CC for ; Tue, 31 Jan 2023 08:02:13 +0000 
(UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229880AbjAaICL (ORCPT ); Tue, 31 Jan 2023 03:02:11 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58970 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230011AbjAaICB (ORCPT ); Tue, 31 Jan 2023 03:02:01 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C260D41B40 for ; Tue, 31 Jan 2023 00:02:00 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlaY-005vgC-2q; Tue, 31 Jan 2023 16:01:59 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:01:58 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:01:58 +0800 Subject: [PATCH 7/32] crypto: kpp - Use crypto_request_complete References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu --- include/crypto/internal/kpp.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/crypto/internal/kpp.h b/include/crypto/internal/kpp.h index 3c9726e89f53..0a6db8c4a9a0 100644 --- a/include/crypto/internal/kpp.h +++ b/include/crypto/internal/kpp.h @@ -85,7 +85,7 @@ static inline void *kpp_tfm_ctx_dma(struct crypto_kpp *tfm) static inline void kpp_request_complete(struct kpp_request *req, int err) { - req->base.complete(&req->base, err); + crypto_request_complete(&req->base, err); } static inline const char *kpp_alg_name(struct crypto_kpp *tfm) From patchwork Tue Jan 31 08:02:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122359 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0BB95C38142 for ; Tue, 31 Jan 2023 08:02:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230008AbjAaICM (ORCPT ); Tue, 31 Jan 2023 03:02:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58976 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230154AbjAaICD (ORCPT ); Tue, 31 Jan 2023 03:02:03 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E3FA641B66 for ; Tue, 31 Jan 2023 00:02:02 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlaa-005vgt-5u; Tue, 31 Jan 2023 16:02:01 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:00 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:00 +0800 Subject: [PATCH 8/32] crypto: skcipher - Use crypto_request_complete 
References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the crypto_request_complete helper instead of calling the completion function directly. Signed-off-by: Herbert Xu --- include/crypto/internal/skcipher.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h index 06d0a5491cf3..fb3d9e899f52 100644 --- a/include/crypto/internal/skcipher.h +++ b/include/crypto/internal/skcipher.h @@ -94,7 +94,7 @@ static inline void *skcipher_instance_ctx(struct skcipher_instance *inst) static inline void skcipher_request_complete(struct skcipher_request *req, int err) { - req->base.complete(&req->base, err); + crypto_request_complete(&req->base, err); } int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn, From patchwork Tue Jan 31 08:02:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122360 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9E5BFC636D6 for ; Tue, 31 Jan 2023 08:02:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229764AbjAaICN (ORCPT ); Tue, 31 Jan 2023 03:02:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229692AbjAaICF (ORCPT ); Tue, 31 Jan 2023 03:02:05 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0AC8940BF5 for ; Tue, 31 Jan 2023 00:02:05 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlac-005vhS-9R; Tue, 31 Jan 2023 16:02:03 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:02 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:02 +0800 Subject: [PATCH 9/32] crypto: engine - Use crypto_request_complete References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the crypto_request_complete helper instead of calling the completion function directly. 
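The engine's dispatch follows the usual queue-pump pattern, which with the new helper reads roughly as in the sketch below (my_pump_one and my_run are hypothetical; the real code also handles locking, engine state and retries):

#include <crypto/algapi.h>
#include <linux/errno.h>

static void my_pump_one(struct crypto_queue *queue,
                        int (*my_run)(struct crypto_async_request *req))
{
        struct crypto_async_request *req, *backlog;
        int err;

        backlog = crypto_get_backlog(queue);
        req = crypto_dequeue_request(queue);
        if (!req)
                return;

        /* A backlogged request is first told that it is now in flight. */
        if (backlog)
                crypto_request_complete(backlog, -EINPROGRESS);

        err = my_run(req);
        crypto_request_complete(req, err);
}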
Signed-off-by: Herbert Xu --- crypto/crypto_engine.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c index 64dc9aa3ca24..21f791615114 100644 --- a/crypto/crypto_engine.c +++ b/crypto/crypto_engine.c @@ -54,7 +54,7 @@ static void crypto_finalize_request(struct crypto_engine *engine, } } lockdep_assert_in_softirq(); - req->complete(req, err); + crypto_request_complete(req, err); kthread_queue_work(engine->kworker, &engine->pump_requests); } @@ -130,7 +130,7 @@ static void crypto_pump_requests(struct crypto_engine *engine, engine->cur_req = async_req; if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); if (engine->busy) was_busy = true; @@ -214,7 +214,7 @@ static void crypto_pump_requests(struct crypto_engine *engine, } req_err_2: - async_req->complete(async_req, ret); + crypto_request_complete(async_req, ret); retry: /* If retry mechanism is supported, send new requests to engine */ From patchwork Tue Jan 31 08:02:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122361 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E6A1BC38142 for ; Tue, 31 Jan 2023 08:02:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230480AbjAaICS (ORCPT ); Tue, 31 Jan 2023 03:02:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230001AbjAaICH (ORCPT ); Tue, 31 Jan 2023 03:02:07 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1AADF410BD for ; Tue, 31 Jan 2023 00:02:07 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlae-005vhm-Cz; Tue, 31 Jan 2023 16:02:05 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:04 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:04 +0800 Subject: [PATCH 10/32] crypto: rsa-pkcs1pad - Use akcipher_request_complete References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the akcipher_request_complete helper instead of calling the completion function directly. In fact the previous code was buggy in that EINPROGRESS was never passed back to the original caller. 
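Condensed, the corrected callback shape is the following sketch (my_postprocess merely stands in for the pkcs1pad completion helpers and is not a real function): the -EINPROGRESS notification is now forwarded to the original caller rather than being dropped.

#include <crypto/internal/akcipher.h>
#include <linux/errno.h>

/* Stand-in for the pkcs1pad_*_complete() post-processing. */
static int my_postprocess(struct akcipher_request *req, int err)
{
        return err;
}

static void wrapped_child_done(struct crypto_async_request *child_async_req,
                               int err)
{
        struct akcipher_request *req = child_async_req->data;

        /* Forward -EINPROGRESS to the original caller instead of returning. */
        if (err != -EINPROGRESS)
                err = my_postprocess(req, err);

        akcipher_request_complete(req, err);
}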
Fixes: 3d5b1ecdea6f ("crypto: rsa - RSA padding algorithm") Signed-off-by: Herbert Xu --- crypto/rsa-pkcs1pad.c | 34 +++++++++++++++------------------- 1 file changed, 15 insertions(+), 19 deletions(-) diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c index 6ee5b8a060c0..4e9d2244ee31 100644 --- a/crypto/rsa-pkcs1pad.c +++ b/crypto/rsa-pkcs1pad.c @@ -214,16 +214,14 @@ static void pkcs1pad_encrypt_sign_complete_cb( struct crypto_async_request *child_async_req, int err) { struct akcipher_request *req = child_async_req->data; - struct crypto_async_request async_req; if (err == -EINPROGRESS) - return; + goto out; + + err = pkcs1pad_encrypt_sign_complete(req, err); - async_req.data = req->base.data; - async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req)); - async_req.flags = child_async_req->flags; - req->base.complete(&async_req, - pkcs1pad_encrypt_sign_complete(req, err)); +out: + akcipher_request_complete(req, err); } static int pkcs1pad_encrypt(struct akcipher_request *req) @@ -332,15 +330,14 @@ static void pkcs1pad_decrypt_complete_cb( struct crypto_async_request *child_async_req, int err) { struct akcipher_request *req = child_async_req->data; - struct crypto_async_request async_req; if (err == -EINPROGRESS) - return; + goto out; + + err = pkcs1pad_decrypt_complete(req, err); - async_req.data = req->base.data; - async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req)); - async_req.flags = child_async_req->flags; - req->base.complete(&async_req, pkcs1pad_decrypt_complete(req, err)); +out: + akcipher_request_complete(req, err); } static int pkcs1pad_decrypt(struct akcipher_request *req) @@ -513,15 +510,14 @@ static void pkcs1pad_verify_complete_cb( struct crypto_async_request *child_async_req, int err) { struct akcipher_request *req = child_async_req->data; - struct crypto_async_request async_req; if (err == -EINPROGRESS) - return; + goto out; - async_req.data = req->base.data; - async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req)); - async_req.flags = child_async_req->flags; - req->base.complete(&async_req, pkcs1pad_verify_complete(req, err)); + err = pkcs1pad_verify_complete(req, err); + +out: + akcipher_request_complete(req, err); } /* From patchwork Tue Jan 31 08:02:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122362 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1702CC636CC for ; Tue, 31 Jan 2023 08:02:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231296AbjAaIC0 (ORCPT ); Tue, 31 Jan 2023 03:02:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59054 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229851AbjAaICL (ORCPT ); Tue, 31 Jan 2023 03:02:11 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 223E842BF0 for ; Tue, 31 Jan 2023 00:02:09 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlag-005viF-F5; Tue, 31 Jan 2023 16:02:07 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:06 +0800 From: "Herbert Xu" 
Date: Tue, 31 Jan 2023 16:02:06 +0800 Subject: [PATCH 11/32] crypto: cryptd - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu --- crypto/cryptd.c | 228 +++++++++++++++++++++++++++++--------------------------- 1 file changed, 120 insertions(+), 108 deletions(-) diff --git a/crypto/cryptd.c b/crypto/cryptd.c index c0c416eda8e8..06ef3fcbe4ae 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c @@ -72,7 +72,6 @@ struct cryptd_skcipher_ctx { }; struct cryptd_skcipher_request_ctx { - crypto_completion_t complete; struct skcipher_request req; }; @@ -83,6 +82,7 @@ struct cryptd_hash_ctx { struct cryptd_hash_request_ctx { crypto_completion_t complete; + void *data; struct shash_desc desc; }; @@ -92,7 +92,6 @@ struct cryptd_aead_ctx { }; struct cryptd_aead_request_ctx { - crypto_completion_t complete; struct aead_request req; }; @@ -178,8 +177,8 @@ static void cryptd_queue_worker(struct work_struct *work) return; if (backlog) - backlog->complete(backlog, -EINPROGRESS); - req->complete(req, 0); + crypto_request_complete(backlog, -EINPROGRESS); + crypto_request_complete(req, 0); if (cpu_queue->queue.qlen) queue_work(cryptd_wq, &cpu_queue->work); @@ -238,18 +237,47 @@ static int cryptd_skcipher_setkey(struct crypto_skcipher *parent, return crypto_skcipher_setkey(child, key, keylen); } -static void cryptd_skcipher_complete(struct skcipher_request *req, int err) +static struct skcipher_request *cryptd_skcipher_prepare( + struct skcipher_request *req, int err) +{ + struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req); + struct skcipher_request *subreq = &rctx->req; + struct cryptd_skcipher_ctx *ctx; + struct crypto_skcipher *child; + + req->base.complete = subreq->base.complete; + req->base.data = subreq->base.data; + + if (unlikely(err == -EINPROGRESS)) + return NULL; + + ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); + child = ctx->child; + + skcipher_request_set_tfm(subreq, child); + skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, + NULL, NULL); + skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, + req->iv); + + return subreq; +} + +static void cryptd_skcipher_complete(struct skcipher_request *req, int err, + crypto_completion_t complete) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); - struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req); int refcnt = refcount_read(&ctx->refcnt); local_bh_disable(); - rctx->complete(&req->base, err); + skcipher_request_complete(req, err); local_bh_enable(); - if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt)) + if (unlikely(err == -EINPROGRESS)) { + req->base.complete = complete; + req->base.data = req; + } else if (refcnt && refcount_dec_and_test(&ctx->refcnt)) crypto_free_skcipher(tfm); } @@ -257,54 +285,26 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base, int err) { struct skcipher_request *req = 
skcipher_request_cast(base); - struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req); - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); - struct skcipher_request *subreq = &rctx->req; - struct crypto_skcipher *child = ctx->child; - - if (unlikely(err == -EINPROGRESS)) - goto out; - - skcipher_request_set_tfm(subreq, child); - skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, - req->iv); - - err = crypto_skcipher_encrypt(subreq); + struct skcipher_request *subreq; - req->base.complete = rctx->complete; + subreq = cryptd_skcipher_prepare(req, err); + if (likely(subreq)) + err = crypto_skcipher_encrypt(subreq); -out: - cryptd_skcipher_complete(req, err); + cryptd_skcipher_complete(req, err, cryptd_skcipher_encrypt); } static void cryptd_skcipher_decrypt(struct crypto_async_request *base, int err) { struct skcipher_request *req = skcipher_request_cast(base); - struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req); - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); - struct skcipher_request *subreq = &rctx->req; - struct crypto_skcipher *child = ctx->child; + struct skcipher_request *subreq; - if (unlikely(err == -EINPROGRESS)) - goto out; - - skcipher_request_set_tfm(subreq, child); - skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, - NULL, NULL); - skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, - req->iv); - - err = crypto_skcipher_decrypt(subreq); - - req->base.complete = rctx->complete; + subreq = cryptd_skcipher_prepare(req, err); + if (likely(subreq)) + err = crypto_skcipher_decrypt(subreq); -out: - cryptd_skcipher_complete(req, err); + cryptd_skcipher_complete(req, err, cryptd_skcipher_decrypt); } static int cryptd_skcipher_enqueue(struct skcipher_request *req, @@ -312,11 +312,14 @@ static int cryptd_skcipher_enqueue(struct skcipher_request *req, { struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req); struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct skcipher_request *subreq = &rctx->req; struct cryptd_queue *queue; queue = cryptd_get_queue(crypto_skcipher_tfm(tfm)); - rctx->complete = req->base.complete; + subreq->base.complete = req->base.complete; + subreq->base.data = req->base.data; req->base.complete = compl; + req->base.data = req; return cryptd_enqueue_request(queue, &req->base); } @@ -469,45 +472,63 @@ static int cryptd_hash_enqueue(struct ahash_request *req, cryptd_get_queue(crypto_ahash_tfm(tfm)); rctx->complete = req->base.complete; + rctx->data = req->base.data; req->base.complete = compl; + req->base.data = req; return cryptd_enqueue_request(queue, &req->base); } -static void cryptd_hash_complete(struct ahash_request *req, int err) +static struct shash_desc *cryptd_hash_prepare(struct ahash_request *req, + int err) +{ + struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req); + + req->base.complete = rctx->complete; + req->base.data = rctx->data; + + if (unlikely(err == -EINPROGRESS)) + return NULL; + + return &rctx->desc; +} + +static void cryptd_hash_complete(struct ahash_request *req, int err, + crypto_completion_t complete) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm); - struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req); int refcnt = 
refcount_read(&ctx->refcnt); local_bh_disable(); - rctx->complete(&req->base, err); + ahash_request_complete(req, err); local_bh_enable(); - if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt)) + if (err == -EINPROGRESS) { + req->base.complete = complete; + req->base.data = req; + } else if (refcnt && refcount_dec_and_test(&ctx->refcnt)) crypto_free_ahash(tfm); } static void cryptd_hash_init(struct crypto_async_request *req_async, int err) { - struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm); - struct crypto_shash *child = ctx->child; struct ahash_request *req = ahash_request_cast(req_async); - struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req); - struct shash_desc *desc = &rctx->desc; + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm); + struct crypto_shash *child = ctx->child; + struct shash_desc *desc; - if (unlikely(err == -EINPROGRESS)) + desc = cryptd_hash_prepare(req, err); + if (unlikely(!desc)) goto out; desc->tfm = child; err = crypto_shash_init(desc); - req->base.complete = rctx->complete; - out: - cryptd_hash_complete(req, err); + cryptd_hash_complete(req, err, cryptd_hash_init); } static int cryptd_hash_init_enqueue(struct ahash_request *req) @@ -518,19 +539,13 @@ static int cryptd_hash_init_enqueue(struct ahash_request *req) static void cryptd_hash_update(struct crypto_async_request *req_async, int err) { struct ahash_request *req = ahash_request_cast(req_async); - struct cryptd_hash_request_ctx *rctx; - - rctx = ahash_request_ctx(req); - - if (unlikely(err == -EINPROGRESS)) - goto out; - - err = shash_ahash_update(req, &rctx->desc); + struct shash_desc *desc; - req->base.complete = rctx->complete; + desc = cryptd_hash_prepare(req, err); + if (likely(desc)) + err = shash_ahash_update(req, desc); -out: - cryptd_hash_complete(req, err); + cryptd_hash_complete(req, err, cryptd_hash_update); } static int cryptd_hash_update_enqueue(struct ahash_request *req) @@ -541,17 +556,13 @@ static int cryptd_hash_update_enqueue(struct ahash_request *req) static void cryptd_hash_final(struct crypto_async_request *req_async, int err) { struct ahash_request *req = ahash_request_cast(req_async); - struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req); - - if (unlikely(err == -EINPROGRESS)) - goto out; - - err = crypto_shash_final(&rctx->desc, req->result); + struct shash_desc *desc; - req->base.complete = rctx->complete; + desc = cryptd_hash_prepare(req, err); + if (likely(desc)) + err = crypto_shash_final(desc, req->result); -out: - cryptd_hash_complete(req, err); + cryptd_hash_complete(req, err, cryptd_hash_final); } static int cryptd_hash_final_enqueue(struct ahash_request *req) @@ -562,17 +573,13 @@ static int cryptd_hash_final_enqueue(struct ahash_request *req) static void cryptd_hash_finup(struct crypto_async_request *req_async, int err) { struct ahash_request *req = ahash_request_cast(req_async); - struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req); + struct shash_desc *desc; - if (unlikely(err == -EINPROGRESS)) - goto out; - - err = shash_ahash_finup(req, &rctx->desc); + desc = cryptd_hash_prepare(req, err); + if (likely(desc)) + err = shash_ahash_finup(req, desc); - req->base.complete = rctx->complete; - -out: - cryptd_hash_complete(req, err); + cryptd_hash_complete(req, err, cryptd_hash_finup); } static int cryptd_hash_finup_enqueue(struct ahash_request *req) @@ -582,23 +589,22 @@ static int cryptd_hash_finup_enqueue(struct ahash_request *req) static void 
cryptd_hash_digest(struct crypto_async_request *req_async, int err) { - struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm); - struct crypto_shash *child = ctx->child; struct ahash_request *req = ahash_request_cast(req_async); - struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req); - struct shash_desc *desc = &rctx->desc; + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm); + struct crypto_shash *child = ctx->child; + struct shash_desc *desc; - if (unlikely(err == -EINPROGRESS)) + desc = cryptd_hash_prepare(req, err); + if (unlikely(!desc)) goto out; desc->tfm = child; err = shash_ahash_digest(req, desc); - req->base.complete = rctx->complete; - out: - cryptd_hash_complete(req, err); + cryptd_hash_complete(req, err, cryptd_hash_digest); } static int cryptd_hash_digest_enqueue(struct ahash_request *req) @@ -711,20 +717,20 @@ static int cryptd_aead_setauthsize(struct crypto_aead *parent, } static void cryptd_aead_crypt(struct aead_request *req, - struct crypto_aead *child, - int err, - int (*crypt)(struct aead_request *req)) + struct crypto_aead *child, int err, + int (*crypt)(struct aead_request *req), + crypto_completion_t compl) { struct cryptd_aead_request_ctx *rctx; struct aead_request *subreq; struct cryptd_aead_ctx *ctx; - crypto_completion_t compl; struct crypto_aead *tfm; int refcnt; rctx = aead_request_ctx(req); - compl = rctx->complete; subreq = &rctx->req; + req->base.complete = subreq->base.complete; + req->base.data = subreq->base.data; tfm = crypto_aead_reqtfm(req); @@ -740,17 +746,18 @@ static void cryptd_aead_crypt(struct aead_request *req, err = crypt( req ); - req->base.complete = compl; - out: ctx = crypto_aead_ctx(tfm); refcnt = refcount_read(&ctx->refcnt); local_bh_disable(); - compl(&req->base, err); + aead_request_complete(req, err); local_bh_enable(); - if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt)) + if (err == -EINPROGRESS) { + req->base.complete = compl; + req->base.data = req; + } else if (refcnt && refcount_dec_and_test(&ctx->refcnt)) crypto_free_aead(tfm); } @@ -761,7 +768,8 @@ static void cryptd_aead_encrypt(struct crypto_async_request *areq, int err) struct aead_request *req; req = container_of(areq, struct aead_request, base); - cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt); + cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt, + cryptd_aead_encrypt); } static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err) @@ -771,7 +779,8 @@ static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err) struct aead_request *req; req = container_of(areq, struct aead_request, base); - cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt); + cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt, + cryptd_aead_decrypt); } static int cryptd_aead_enqueue(struct aead_request *req, @@ -780,9 +789,12 @@ static int cryptd_aead_enqueue(struct aead_request *req, struct cryptd_aead_request_ctx *rctx = aead_request_ctx(req); struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct cryptd_queue *queue = cryptd_get_queue(crypto_aead_tfm(tfm)); + struct aead_request *subreq = &rctx->req; - rctx->complete = req->base.complete; + subreq->base.complete = req->base.complete; + subreq->base.data = req->base.data; req->base.complete = compl; + req->base.data = req; return cryptd_enqueue_request(queue, &req->base); } From patchwork Tue Jan 31 08:02:08 2023 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122363 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6831DC38142 for ; Tue, 31 Jan 2023 08:02:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229851AbjAaIC1 (ORCPT ); Tue, 31 Jan 2023 03:02:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58976 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229613AbjAaICM (ORCPT ); Tue, 31 Jan 2023 03:02:12 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5EA6841B40 for ; Tue, 31 Jan 2023 00:02:11 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlai-005vix-I8; Tue, 31 Jan 2023 16:02:09 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:08 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:08 +0800 Subject: [PATCH 12/32] crypto: atmel - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
Signed-off-by: Herbert Xu --- drivers/crypto/atmel-aes.c | 4 ++-- drivers/crypto/atmel-sha.c | 4 ++-- drivers/crypto/atmel-tdes.c | 4 ++-- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c index e90e4a6cc37a..ed10f2ae4523 100644 --- a/drivers/crypto/atmel-aes.c +++ b/drivers/crypto/atmel-aes.c @@ -554,7 +554,7 @@ static inline int atmel_aes_complete(struct atmel_aes_dev *dd, int err) } if (dd->is_async) - dd->areq->complete(dd->areq, err); + crypto_request_complete(dd->areq, err); tasklet_schedule(&dd->queue_task); @@ -955,7 +955,7 @@ static int atmel_aes_handle_queue(struct atmel_aes_dev *dd, return ret; if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); ctx = crypto_tfm_ctx(areq->tfm); diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c index 00be792e605c..a77cf0da0816 100644 --- a/drivers/crypto/atmel-sha.c +++ b/drivers/crypto/atmel-sha.c @@ -292,7 +292,7 @@ static inline int atmel_sha_complete(struct atmel_sha_dev *dd, int err) clk_disable(dd->iclk); if ((dd->is_async || dd->force_complete) && req->base.complete) - req->base.complete(&req->base, err); + ahash_request_complete(req, err); /* handle new request */ tasklet_schedule(&dd->queue_task); @@ -1080,7 +1080,7 @@ static int atmel_sha_handle_queue(struct atmel_sha_dev *dd, return ret; if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); ctx = crypto_tfm_ctx(async_req->tfm); diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c index 8b7bc1076e0d..b2d48c1649b9 100644 --- a/drivers/crypto/atmel-tdes.c +++ b/drivers/crypto/atmel-tdes.c @@ -590,7 +590,7 @@ static void atmel_tdes_finish_req(struct atmel_tdes_dev *dd, int err) if (!err && (rctx->mode & TDES_FLAGS_OPMODE_MASK) != TDES_FLAGS_ECB) atmel_tdes_set_iv_as_last_ciphertext_block(dd); - req->base.complete(&req->base, err); + skcipher_request_complete(req, err); } static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd, @@ -619,7 +619,7 @@ static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd, return ret; if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); req = skcipher_request_cast(async_req); From patchwork Tue Jan 31 08:02:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122365 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21F92C636CD for ; Tue, 31 Jan 2023 08:02:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229613AbjAaIC2 (ORCPT ); Tue, 31 Jan 2023 03:02:28 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58904 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229634AbjAaICO (ORCPT ); Tue, 31 Jan 2023 03:02:14 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8535241B6D for ; Tue, 31 Jan 2023 00:02:13 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlak-005vjY-N1; Tue, 31 Jan 2023 16:02:11 +0800 Received: by 
loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:10 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:10 +0800 Subject: [PATCH 13/32] crypto: artpec6 - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu Acked-by: Jesper Nilsson --- drivers/crypto/axis/artpec6_crypto.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/crypto/axis/artpec6_crypto.c b/drivers/crypto/axis/artpec6_crypto.c index f6f41e316dfe..8493a45e1bd4 100644 --- a/drivers/crypto/axis/artpec6_crypto.c +++ b/drivers/crypto/axis/artpec6_crypto.c @@ -2143,13 +2143,13 @@ static void artpec6_crypto_task(unsigned long data) list_for_each_entry_safe(req, n, &complete_in_progress, complete_in_progress) { - req->req->complete(req->req, -EINPROGRESS); + crypto_request_complete(req->req, -EINPROGRESS); } } static void artpec6_crypto_complete_crypto(struct crypto_async_request *req) { - req->complete(req, 0); + crypto_request_complete(req, 0); } static void @@ -2161,7 +2161,7 @@ artpec6_crypto_complete_cbc_decrypt(struct crypto_async_request *req) scatterwalk_map_and_copy(cipher_req->iv, cipher_req->src, cipher_req->cryptlen - AES_BLOCK_SIZE, AES_BLOCK_SIZE, 0); - req->complete(req, 0); + skcipher_request_complete(cipher_req, 0); } static void @@ -2173,7 +2173,7 @@ artpec6_crypto_complete_cbc_encrypt(struct crypto_async_request *req) scatterwalk_map_and_copy(cipher_req->iv, cipher_req->dst, cipher_req->cryptlen - AES_BLOCK_SIZE, AES_BLOCK_SIZE, 0); - req->complete(req, 0); + skcipher_request_complete(cipher_req, 0); } static void artpec6_crypto_complete_aead(struct crypto_async_request *req) @@ -2211,12 +2211,12 @@ static void artpec6_crypto_complete_aead(struct crypto_async_request *req) } } - req->complete(req, result); + aead_request_complete(areq, result); } static void artpec6_crypto_complete_hash(struct crypto_async_request *req) { - req->complete(req, 0); + crypto_request_complete(req, 0); } From patchwork Tue Jan 31 08:02:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122364 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB44EC636CC for ; Tue, 31 Jan 2023 08:02:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229634AbjAaICa (ORCPT ); Tue, 31 Jan 2023 03:02:30 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230474AbjAaICR (ORCPT ); Tue, 31 Jan 2023 03:02:17 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E6E0545BEA for ; Tue, 31 
Jan 2023 00:02:15 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlam-005vk0-PF; Tue, 31 Jan 2023 16:02:13 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:12 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:12 +0800 Subject: [PATCH 14/32] crypto: bcm - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu --- drivers/crypto/bcm/cipher.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c index f8e035039aeb..70b911baab26 100644 --- a/drivers/crypto/bcm/cipher.c +++ b/drivers/crypto/bcm/cipher.c @@ -1614,7 +1614,7 @@ static void finish_req(struct iproc_reqctx_s *rctx, int err) spu_chunk_cleanup(rctx); if (areq) - areq->complete(areq, err); + crypto_request_complete(areq, err); } /** From patchwork Tue Jan 31 08:02:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122366 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B0D37C636D6 for ; Tue, 31 Jan 2023 08:02:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230007AbjAaICc (ORCPT ); Tue, 31 Jan 2023 03:02:32 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59040 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230306AbjAaICU (ORCPT ); Tue, 31 Jan 2023 03:02:20 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D2FBE42BE8 for ; Tue, 31 Jan 2023 00:02:17 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlap-005vl8-50; Tue, 31 Jan 2023 16:02:16 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:15 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:15 +0800 Subject: [PATCH 15/32] crypto: cpt - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
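[Editor's note] The cpt callback only ever sees the request as an opaque pointer handed back by the hardware layer, so the generic helper is the natural fit. A sketch of that pattern with hypothetical names:

#include <crypto/algapi.h>
#include <linux/crypto.h>

/* Hypothetical hardware-done callback: the cookie stored at submit time is
 * the base crypto_async_request, and nothing beyond completion is done with it. */
static void sketch_hw_done(void *cookie, int err)
{
        struct crypto_async_request *req = cookie;

        crypto_request_complete(req, err);
}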
Signed-off-by: Herbert Xu --- drivers/crypto/cavium/cpt/cptvf_algs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/crypto/cavium/cpt/cptvf_algs.c b/drivers/crypto/cavium/cpt/cptvf_algs.c index 0b38c2600b86..ee476c6c7f82 100644 --- a/drivers/crypto/cavium/cpt/cptvf_algs.c +++ b/drivers/crypto/cavium/cpt/cptvf_algs.c @@ -28,7 +28,7 @@ static void cvm_callback(u32 status, void *arg) { struct crypto_async_request *req = (struct crypto_async_request *)arg; - req->complete(req, !status); + crypto_request_complete(req, !status); } static inline void update_input_iv(struct cpt_request_info *req_info, From patchwork Tue Jan 31 08:02:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122367 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BEF3AC636CC for ; Tue, 31 Jan 2023 08:02:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231311AbjAaICt (ORCPT ); Tue, 31 Jan 2023 03:02:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59392 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230324AbjAaICY (ORCPT ); Tue, 31 Jan 2023 03:02:24 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DB61B45F67 for ; Tue, 31 Jan 2023 00:02:19 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlar-005vm9-7P; Tue, 31 Jan 2023 16:02:18 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:17 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:17 +0800 Subject: [PATCH 16/32] crypto: nitrox - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
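[Editor's note] Here the driver has the typed AEAD request in hand and translates the hardware status into an errno before completing. A sketch of that flow, assuming aead_request_complete() from the internal AEAD header behaves like the other typed helpers; names are hypothetical:

#include <crypto/internal/aead.h>
#include <linux/errno.h>

/* Hypothetical AEAD done path: map a hardware status word to an errno,
 * then complete the typed request. */
static void sketch_aead_done(void *arg, int hw_status)
{
        struct aead_request *areq = arg;
        int err = hw_status ? -EINVAL : 0;

        aead_request_complete(areq, err);
}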
Signed-off-by: Herbert Xu --- drivers/crypto/cavium/nitrox/nitrox_aead.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/cavium/nitrox/nitrox_aead.c b/drivers/crypto/cavium/nitrox/nitrox_aead.c index 0653484df23f..b0e53034164a 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_aead.c +++ b/drivers/crypto/cavium/nitrox/nitrox_aead.c @@ -199,7 +199,7 @@ static void nitrox_aead_callback(void *arg, int err) err = -EINVAL; } - areq->base.complete(&areq->base, err); + aead_request_complete(areq, err); } static inline bool nitrox_aes_gcm_assoclen_supported(unsigned int assoclen) @@ -434,7 +434,7 @@ static void nitrox_rfc4106_callback(void *arg, int err) err = -EINVAL; } - areq->base.complete(&areq->base, err); + aead_request_complete(areq, err); } static int nitrox_rfc4106_enc(struct aead_request *areq) From patchwork Tue Jan 31 08:02:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122368 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7D02AC636CC for ; Tue, 31 Jan 2023 08:02:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231292AbjAaIC5 (ORCPT ); Tue, 31 Jan 2023 03:02:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59182 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231290AbjAaIC0 (ORCPT ); Tue, 31 Jan 2023 03:02:26 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0990E42BEE for ; Tue, 31 Jan 2023 00:02:22 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlat-005vmv-9T; Tue, 31 Jan 2023 16:02:20 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:19 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:19 +0800 Subject: [PATCH 17/32] crypto: ccp - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
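[Editor's note] The -EBUSY to -EINPROGRESS transition that ccp preserves here is what a waiting submitter keys off. A caller-side sketch of those semantics, assuming the usual crypto_wait_req() pattern; the wrapper name is hypothetical:

#include <crypto/skcipher.h>
#include <linux/crypto.h>

/* Hypothetical synchronous wrapper around an async skcipher request. */
static int sketch_encrypt_sync(struct skcipher_request *req)
{
        DECLARE_CRYPTO_WAIT(wait);

        /* With MAY_BACKLOG a full queue returns -EBUSY and parks the request;
         * the driver later signals -EINPROGRESS (ignored by crypto_req_done)
         * and finally the real completion status. */
        skcipher_request_set_callback(req,
                                      CRYPTO_TFM_REQ_MAY_BACKLOG |
                                      CRYPTO_TFM_REQ_MAY_SLEEP,
                                      crypto_req_done, &wait);

        return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
}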
Signed-off-by: Herbert Xu Acked-by: Tom Lendacky --- drivers/crypto/ccp/ccp-crypto-main.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c index 73442a382f68..ecd58b38c46e 100644 --- a/drivers/crypto/ccp/ccp-crypto-main.c +++ b/drivers/crypto/ccp/ccp-crypto-main.c @@ -146,7 +146,7 @@ static void ccp_crypto_complete(void *data, int err) /* Only propagate the -EINPROGRESS if necessary */ if (crypto_cmd->ret == -EBUSY) { crypto_cmd->ret = -EINPROGRESS; - req->complete(req, -EINPROGRESS); + crypto_request_complete(req, -EINPROGRESS); } return; @@ -159,18 +159,18 @@ static void ccp_crypto_complete(void *data, int err) held = ccp_crypto_cmd_complete(crypto_cmd, &backlog); if (backlog) { backlog->ret = -EINPROGRESS; - backlog->req->complete(backlog->req, -EINPROGRESS); + crypto_request_complete(backlog->req, -EINPROGRESS); } /* Transition the state from -EBUSY to -EINPROGRESS first */ if (crypto_cmd->ret == -EBUSY) - req->complete(req, -EINPROGRESS); + crypto_request_complete(req, -EINPROGRESS); /* Completion callbacks */ ret = err; if (ctx->complete) ret = ctx->complete(req, ret); - req->complete(req, ret); + crypto_request_complete(req, ret); /* Submit the next cmd */ while (held) { @@ -186,12 +186,12 @@ static void ccp_crypto_complete(void *data, int err) ctx = crypto_tfm_ctx_dma(held->req->tfm); if (ctx->complete) ret = ctx->complete(held->req, ret); - held->req->complete(held->req, ret); + crypto_request_complete(held->req, ret); next = ccp_crypto_cmd_complete(held, &backlog); if (backlog) { backlog->ret = -EINPROGRESS; - backlog->req->complete(backlog->req, -EINPROGRESS); + crypto_request_complete(backlog->req, -EINPROGRESS); } kfree(held); From patchwork Tue Jan 31 08:02:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122369 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4462CC38142 for ; Tue, 31 Jan 2023 08:02:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231344AbjAaIC6 (ORCPT ); Tue, 31 Jan 2023 03:02:58 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59416 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231295AbjAaIC0 (ORCPT ); Tue, 31 Jan 2023 03:02:26 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 16A9345F55 for ; Tue, 31 Jan 2023 00:02:24 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlav-005vnS-Cz; Tue, 31 Jan 2023 16:02:22 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:21 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:21 +0800 Subject: [PATCH 18/32] crypto: chelsio - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , 
qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu --- drivers/crypto/chelsio/chcr_algo.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c index 68d65773ef2b..0eade4fa6695 100644 --- a/drivers/crypto/chelsio/chcr_algo.c +++ b/drivers/crypto/chelsio/chcr_algo.c @@ -220,7 +220,7 @@ static inline int chcr_handle_aead_resp(struct aead_request *req, reqctx->verify = VERIFY_HW; } chcr_dec_wrcount(dev); - req->base.complete(&req->base, err); + aead_request_complete(req, err); return err; } @@ -1235,7 +1235,7 @@ static int chcr_handle_cipher_resp(struct skcipher_request *req, complete(&ctx->cbc_aes_aio_done); } chcr_dec_wrcount(dev); - req->base.complete(&req->base, err); + skcipher_request_complete(req, err); return err; } @@ -2132,7 +2132,7 @@ static inline void chcr_handle_ahash_resp(struct ahash_request *req, out: chcr_dec_wrcount(dev); - req->base.complete(&req->base, err); + ahash_request_complete(req, err); } /* From patchwork Tue Jan 31 08:02:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122370 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88CB2C636CC for ; Tue, 31 Jan 2023 08:03:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231360AbjAaIDE (ORCPT ); Tue, 31 Jan 2023 03:03:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59670 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230150AbjAaICl (ORCPT ); Tue, 31 Jan 2023 03:02:41 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2280745F73 for ; Tue, 31 Jan 2023 00:02:26 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlax-005vo9-FA; Tue, 31 Jan 2023 16:02:24 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:23 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:23 +0800 Subject: [PATCH 19/32] crypto: hifn_795x - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
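[Editor's note] hifn, like several drivers later in the series (img-hash, s5p-sss, sahara), converts the same queue-processing shape, where the parked backlog entry is notified before the next request is started. A generic sketch of that shape with hypothetical names:

#include <crypto/algapi.h>
#include <linux/spinlock.h>

/* Hypothetical queue handler: the shape these drivers share. */
static void sketch_handle_queue(struct crypto_queue *queue, spinlock_t *lock)
{
        struct crypto_async_request *async_req, *backlog;
        unsigned long flags;

        spin_lock_irqsave(lock, flags);
        backlog = crypto_get_backlog(queue);
        async_req = crypto_dequeue_request(queue);
        spin_unlock_irqrestore(lock, flags);

        if (!async_req)
                return;

        if (backlog)
                /* tell the parked submitter its request is now in flight */
                crypto_request_complete(backlog, -EINPROGRESS);

        /* ... start the hardware on async_req; the done path later calls
         * crypto_request_complete(async_req, err) with the real status ... */
}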
Signed-off-by: Herbert Xu --- drivers/crypto/hifn_795x.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/hifn_795x.c b/drivers/crypto/hifn_795x.c index 7e7a8f01ea6b..5a7f6611803c 100644 --- a/drivers/crypto/hifn_795x.c +++ b/drivers/crypto/hifn_795x.c @@ -1705,7 +1705,7 @@ static void hifn_process_ready(struct skcipher_request *req, int error) hifn_cipher_walk_exit(&rctx->walk); } - req->base.complete(&req->base, error); + skcipher_request_complete(req, error); } static void hifn_clear_rings(struct hifn_device *dev, int error) @@ -2054,7 +2054,7 @@ static int hifn_process_queue(struct hifn_device *dev) break; if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); req = skcipher_request_cast(async_req); From patchwork Tue Jan 31 08:02:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122371 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E29AC38142 for ; Tue, 31 Jan 2023 08:03:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231364AbjAaIDE (ORCPT ); Tue, 31 Jan 2023 03:03:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59318 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230306AbjAaICq (ORCPT ); Tue, 31 Jan 2023 03:02:46 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2EFB84109B for ; Tue, 31 Jan 2023 00:02:28 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlaz-005voU-HY; Tue, 31 Jan 2023 16:02:26 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:25 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:25 +0800 Subject: [PATCH 20/32] crypto: hisilicon - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
Signed-off-by: Herbert Xu --- drivers/crypto/hisilicon/sec/sec_algs.c | 6 +++--- drivers/crypto/hisilicon/sec2/sec_crypto.c | 10 ++++------ 2 files changed, 7 insertions(+), 9 deletions(-) diff --git a/drivers/crypto/hisilicon/sec/sec_algs.c b/drivers/crypto/hisilicon/sec/sec_algs.c index 490e1542305e..1189effcdad0 100644 --- a/drivers/crypto/hisilicon/sec/sec_algs.c +++ b/drivers/crypto/hisilicon/sec/sec_algs.c @@ -504,8 +504,8 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp, kfifo_avail(&ctx->queue->softqueue) > backlog_req->num_elements)) { sec_send_request(backlog_req, ctx->queue); - backlog_req->req_base->complete(backlog_req->req_base, - -EINPROGRESS); + crypto_request_complete(backlog_req->req_base, + -EINPROGRESS); list_del(&backlog_req->backlog_head); } } @@ -534,7 +534,7 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp, if (skreq->src != skreq->dst) dma_unmap_sg(dev, skreq->dst, sec_req->len_out, DMA_BIDIRECTIONAL); - skreq->base.complete(&skreq->base, sec_req->err); + skcipher_request_complete(skreq, sec_req->err); } } diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c index f5bfc9755a4a..074e50ef512c 100644 --- a/drivers/crypto/hisilicon/sec2/sec_crypto.c +++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c @@ -1459,12 +1459,11 @@ static void sec_skcipher_callback(struct sec_ctx *ctx, struct sec_req *req, break; backlog_sk_req = backlog_req->c_req.sk_req; - backlog_sk_req->base.complete(&backlog_sk_req->base, - -EINPROGRESS); + skcipher_request_complete(backlog_sk_req, -EINPROGRESS); atomic64_inc(&ctx->sec->debug.dfx.recv_busy_cnt); } - sk_req->base.complete(&sk_req->base, err); + skcipher_request_complete(sk_req, err); } static void set_aead_auth_iv(struct sec_ctx *ctx, struct sec_req *req) @@ -1736,12 +1735,11 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err) break; backlog_aead_req = backlog_req->aead_req.aead_req; - backlog_aead_req->base.complete(&backlog_aead_req->base, - -EINPROGRESS); + aead_request_complete(backlog_aead_req, -EINPROGRESS); atomic64_inc(&c->sec->debug.dfx.recv_busy_cnt); } - a_req->base.complete(&a_req->base, err); + aead_request_complete(a_req, err); } static void sec_request_uninit(struct sec_ctx *ctx, struct sec_req *req) From patchwork Tue Jan 31 08:02:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122372 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AF47DC636CD for ; Tue, 31 Jan 2023 08:03:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230305AbjAaIDG (ORCPT ); Tue, 31 Jan 2023 03:03:06 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59730 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230496AbjAaICr (ORCPT ); Tue, 31 Jan 2023 03:02:47 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 69A9745BF8 for ; Tue, 31 Jan 2023 00:02:30 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlb1-005voq-JW; Tue, 31 Jan 2023 16:02:28 +0800 Received: by 
loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:27 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:27 +0800 Subject: [PATCH 21/32] crypto: img-hash - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu --- drivers/crypto/img-hash.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c index 9629e98bd68b..b4155993a7fc 100644 --- a/drivers/crypto/img-hash.c +++ b/drivers/crypto/img-hash.c @@ -308,7 +308,7 @@ static void img_hash_finish_req(struct ahash_request *req, int err) DRIVER_FLAGS_CPU | DRIVER_FLAGS_BUSY | DRIVER_FLAGS_FINAL); if (req->base.complete) - req->base.complete(&req->base, err); + ahash_request_complete(req, err); } static int img_hash_write_via_dma(struct img_hash_dev *hdev) @@ -526,7 +526,7 @@ static int img_hash_handle_queue(struct img_hash_dev *hdev, return res; if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); req = ahash_request_cast(async_req); hdev->req = req; From patchwork Tue Jan 31 08:02:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122373 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BC53C38142 for ; Tue, 31 Jan 2023 08:03:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231416AbjAaIDQ (ORCPT ); Tue, 31 Jan 2023 03:03:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59736 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231303AbjAaICt (ORCPT ); Tue, 31 Jan 2023 03:02:49 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6DF584608A for ; Tue, 31 Jan 2023 00:02:32 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlb3-005vpH-N2; Tue, 31 Jan 2023 16:02:30 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:29 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:29 +0800 Subject: [PATCH 22/32] crypto: safexcel - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: 
linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu --- drivers/crypto/inside-secure/safexcel.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c index ae6110376e21..5fac36d98070 100644 --- a/drivers/crypto/inside-secure/safexcel.c +++ b/drivers/crypto/inside-secure/safexcel.c @@ -850,7 +850,7 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring) goto request_failed; if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); /* In case the send() helper did not issue any command to push * to the engine because the input data was cached, continue to @@ -1050,7 +1050,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv if (should_complete) { local_bh_disable(); - req->complete(req, ret); + crypto_request_complete(req, ret); local_bh_enable(); } From patchwork Tue Jan 31 08:02:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122374 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 97CA1C38142 for ; Tue, 31 Jan 2023 08:03:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229868AbjAaIDS (ORCPT ); Tue, 31 Jan 2023 03:03:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58976 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231131AbjAaICu (ORCPT ); Tue, 31 Jan 2023 03:02:50 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8AEF645BFD for ; Tue, 31 Jan 2023 00:02:34 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlb5-005vqD-PL; Tue, 31 Jan 2023 16:02:32 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:31 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:31 +0800 Subject: [PATCH 23/32] crypto: ixp4xx - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
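[Editor's note] ixp4xx completes either an AEAD or an skcipher request depending on the control flag of the finished descriptor, and sahara later in the series dispatches on the algorithm type instead. A sketch of such a dispatch over the base request using the cast helpers; the function name is hypothetical:

#include <crypto/algapi.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/skcipher.h>
#include <linux/crypto.h>

/* Hypothetical: pick the matching typed helper for a dequeued base request. */
static void sketch_complete_typed(struct crypto_async_request *async_req, int err)
{
        switch (crypto_tfm_alg_type(async_req->tfm)) {
        case CRYPTO_ALG_TYPE_SKCIPHER:
                skcipher_request_complete(skcipher_request_cast(async_req), err);
                break;
        case CRYPTO_ALG_TYPE_AHASH:
                ahash_request_complete(ahash_request_cast(async_req), err);
                break;
        default:
                crypto_request_complete(async_req, err);
                break;
        }
}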
Signed-off-by: Herbert Xu --- drivers/crypto/ixp4xx_crypto.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c index 984b3cc0237c..b63e2359a133 100644 --- a/drivers/crypto/ixp4xx_crypto.c +++ b/drivers/crypto/ixp4xx_crypto.c @@ -382,7 +382,7 @@ static void one_packet(dma_addr_t phys) if (req_ctx->hmac_virt) finish_scattered_hmac(crypt); - req->base.complete(&req->base, failed); + aead_request_complete(req, failed); break; } case CTL_FLAG_PERFORM_ABLK: { @@ -407,7 +407,7 @@ static void one_packet(dma_addr_t phys) free_buf_chain(dev, req_ctx->dst, crypt->dst_buf); free_buf_chain(dev, req_ctx->src, crypt->src_buf); - req->base.complete(&req->base, failed); + skcipher_request_complete(req, failed); break; } case CTL_FLAG_GEN_ICV: From patchwork Tue Jan 31 08:02:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122375 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6DD2FC636CC for ; Tue, 31 Jan 2023 08:03:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229608AbjAaIDk (ORCPT ); Tue, 31 Jan 2023 03:03:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59942 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230484AbjAaIC4 (ORCPT ); Tue, 31 Jan 2023 03:02:56 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 953494608F for ; Tue, 31 Jan 2023 00:02:36 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlb7-005vrK-Tf; Tue, 31 Jan 2023 16:02:34 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:33 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:33 +0800 Subject: [PATCH 24/32] crypto: marvell/cesa - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
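[Editor's note] Both cesa and safexcel above issue completions from non-softirq context with bottom halves disabled. The sketch below shows just that wrapper, on the assumption that completion callbacks expect the BH-off context they would normally get from the driver's interrupt tasklet; the name is hypothetical:

#include <crypto/algapi.h>
#include <linux/bottom_half.h>

/* Hypothetical: completion issued from process or thread context. */
static void sketch_complete_bh_off(struct crypto_async_request *req, int res)
{
        local_bh_disable();
        crypto_request_complete(req, res);
        local_bh_enable();
}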
Signed-off-by: Herbert Xu --- drivers/crypto/marvell/cesa/cesa.c | 4 ++-- drivers/crypto/marvell/cesa/tdma.c | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/crypto/marvell/cesa/cesa.c b/drivers/crypto/marvell/cesa/cesa.c index 5cd332880653..b61e35b932e5 100644 --- a/drivers/crypto/marvell/cesa/cesa.c +++ b/drivers/crypto/marvell/cesa/cesa.c @@ -66,7 +66,7 @@ static void mv_cesa_rearm_engine(struct mv_cesa_engine *engine) return; if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); ctx = crypto_tfm_ctx(req->tfm); ctx->ops->step(req); @@ -106,7 +106,7 @@ mv_cesa_complete_req(struct mv_cesa_ctx *ctx, struct crypto_async_request *req, { ctx->ops->cleanup(req); local_bh_disable(); - req->complete(req, res); + crypto_request_complete(req, res); local_bh_enable(); } diff --git a/drivers/crypto/marvell/cesa/tdma.c b/drivers/crypto/marvell/cesa/tdma.c index f0b5537038c2..388a06e180d6 100644 --- a/drivers/crypto/marvell/cesa/tdma.c +++ b/drivers/crypto/marvell/cesa/tdma.c @@ -168,7 +168,7 @@ int mv_cesa_tdma_process(struct mv_cesa_engine *engine, u32 status) req); if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); } if (res || tdma->cur_dma == tdma_cur) From patchwork Tue Jan 31 08:02:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122376 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A6F2C38142 for ; Tue, 31 Jan 2023 08:03:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229765AbjAaIDk (ORCPT ); Tue, 31 Jan 2023 03:03:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59954 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231215AbjAaIC4 (ORCPT ); Tue, 31 Jan 2023 03:02:56 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DF8AD46097 for ; Tue, 31 Jan 2023 00:02:38 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlb9-005vsN-Vt; Tue, 31 Jan 2023 16:02:37 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:35 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:35 +0800 Subject: [PATCH 25/32] crypto: octeontx - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
Signed-off-by: Herbert Xu --- drivers/crypto/marvell/octeontx/otx_cptvf_algs.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c index 83493dd0416f..1c2c870e887a 100644 --- a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c +++ b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c @@ -138,7 +138,7 @@ static void otx_cpt_aead_callback(int status, void *arg1, void *arg2) complete: if (areq) - areq->complete(areq, status); + crypto_request_complete(areq, status); } static void output_iv_copyback(struct crypto_async_request *areq) @@ -188,7 +188,7 @@ static void otx_cpt_skcipher_callback(int status, void *arg1, void *arg2) pdev = cpt_info->pdev; do_request_cleanup(pdev, cpt_info); } - areq->complete(areq, status); + crypto_request_complete(areq, status); } } From patchwork Tue Jan 31 08:02:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122378 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 32E6DC636CD for ; Tue, 31 Jan 2023 08:03:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229972AbjAaIDl (ORCPT ); Tue, 31 Jan 2023 03:03:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59216 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231290AbjAaIC5 (ORCPT ); Tue, 31 Jan 2023 03:02:57 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 11402460B2 for ; Tue, 31 Jan 2023 00:02:41 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlbC-005vsq-3j; Tue, 31 Jan 2023 16:02:39 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:38 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:38 +0800 Subject: [PATCH 26/32] crypto: octeontx2 - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
Signed-off-by: Herbert Xu --- drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c index 443202caa140..e27ddd3c4e55 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c @@ -120,7 +120,7 @@ static void otx2_cpt_aead_callback(int status, void *arg1, void *arg2) otx2_cpt_info_destroy(pdev, inst_info); } if (areq) - areq->complete(areq, status); + crypto_request_complete(areq, status); } static void output_iv_copyback(struct crypto_async_request *areq) @@ -170,7 +170,7 @@ static void otx2_cpt_skcipher_callback(int status, void *arg1, void *arg2) pdev = inst_info->pdev; otx2_cpt_info_destroy(pdev, inst_info); } - areq->complete(areq, status); + crypto_request_complete(areq, status); } } From patchwork Tue Jan 31 08:02:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122377 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C86B4C636D6 for ; Tue, 31 Jan 2023 08:03:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229973AbjAaIDm (ORCPT ); Tue, 31 Jan 2023 03:03:42 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59966 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229828AbjAaIC6 (ORCPT ); Tue, 31 Jan 2023 03:02:58 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 45D1046084 for ; Tue, 31 Jan 2023 00:02:43 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlbE-005vtF-9Z; Tue, 31 Jan 2023 16:02:41 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:40 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:40 +0800 Subject: [PATCH 27/32] crypto: mxs-dcp - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
Signed-off-by: Herbert Xu --- drivers/crypto/mxs-dcp.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c index d6f9e2fe863d..1c11946a4f0b 100644 --- a/drivers/crypto/mxs-dcp.c +++ b/drivers/crypto/mxs-dcp.c @@ -413,11 +413,11 @@ static int dcp_chan_thread_aes(void *data) set_current_state(TASK_RUNNING); if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); if (arq) { ret = mxs_dcp_aes_block_crypt(arq); - arq->complete(arq, ret); + crypto_request_complete(arq, ret); } } @@ -709,11 +709,11 @@ static int dcp_chan_thread_sha(void *data) set_current_state(TASK_RUNNING); if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); if (arq) { ret = dcp_sha_req_to_buf(arq); - arq->complete(arq, ret); + crypto_request_complete(arq, ret); } } From patchwork Tue Jan 31 08:02:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122379 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 75EEEC636CC for ; Tue, 31 Jan 2023 08:03:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229828AbjAaIDn (ORCPT ); Tue, 31 Jan 2023 03:03:43 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59670 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231295AbjAaIDD (ORCPT ); Tue, 31 Jan 2023 03:03:03 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7FD314614E for ; Tue, 31 Jan 2023 00:02:45 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlbG-005vte-GJ; Tue, 31 Jan 2023 16:02:43 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:42 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:42 +0800 Subject: [PATCH 28/32] crypto: qat - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. 
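[Editor's note] qat also converts its compression (acomp) paths and adds an include for the header that carries the generic helper. A sketch of the typed acomp completion, assuming acomp_request_complete() mirrors the other typed helpers; the callback name is hypothetical:

#include <crypto/algapi.h>              /* crypto_request_complete() */
#include <crypto/internal/acompress.h>  /* acomp_request_complete() */

/* Hypothetical acomp driver done path. */
static void sketch_comp_done(struct acomp_req *areq, int err)
{
        acomp_request_complete(areq, err);
}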
Signed-off-by: Herbert Xu Reviewed-by: Giovanni Cabiddu --- drivers/crypto/qat/qat_common/qat_algs.c | 4 ++-- drivers/crypto/qat/qat_common/qat_algs_send.c | 3 ++- drivers/crypto/qat/qat_common/qat_comp_algs.c | 4 ++-- 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c index b4b9f0aa59b9..bcd239b11ec0 100644 --- a/drivers/crypto/qat/qat_common/qat_algs.c +++ b/drivers/crypto/qat/qat_common/qat_algs.c @@ -676,7 +676,7 @@ static void qat_aead_alg_callback(struct icp_qat_fw_la_resp *qat_resp, qat_bl_free_bufl(inst->accel_dev, &qat_req->buf); if (unlikely(qat_res != ICP_QAT_FW_COMN_STATUS_FLAG_OK)) res = -EBADMSG; - areq->base.complete(&areq->base, res); + aead_request_complete(areq, res); } static void qat_alg_update_iv_ctr_mode(struct qat_crypto_request *qat_req) @@ -752,7 +752,7 @@ static void qat_skcipher_alg_callback(struct icp_qat_fw_la_resp *qat_resp, memcpy(sreq->iv, qat_req->iv, AES_BLOCK_SIZE); - sreq->base.complete(&sreq->base, res); + skcipher_request_complete(sreq, res); } void qat_alg_callback(void *resp) diff --git a/drivers/crypto/qat/qat_common/qat_algs_send.c b/drivers/crypto/qat/qat_common/qat_algs_send.c index ff5b4347f783..bb80455b3e81 100644 --- a/drivers/crypto/qat/qat_common/qat_algs_send.c +++ b/drivers/crypto/qat/qat_common/qat_algs_send.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) /* Copyright(c) 2022 Intel Corporation */ +#include #include "adf_transport.h" #include "qat_algs_send.h" #include "qat_crypto.h" @@ -34,7 +35,7 @@ void qat_alg_send_backlog(struct qat_instance_backlog *backlog) break; } list_del(&req->list); - req->base->complete(req->base, -EINPROGRESS); + crypto_request_complete(req->base, -EINPROGRESS); } spin_unlock_bh(&backlog->lock); } diff --git a/drivers/crypto/qat/qat_common/qat_comp_algs.c b/drivers/crypto/qat/qat_common/qat_comp_algs.c index 1480d36a8d2b..cf0ac9f8ea83 100644 --- a/drivers/crypto/qat/qat_common/qat_comp_algs.c +++ b/drivers/crypto/qat/qat_common/qat_comp_algs.c @@ -94,7 +94,7 @@ static void qat_comp_resubmit(struct work_struct *work) err: qat_bl_free_bufl(accel_dev, qat_bufs); - areq->base.complete(&areq->base, ret); + acomp_request_complete(areq, ret); } static void qat_comp_generic_callback(struct qat_compression_req *qat_req, @@ -169,7 +169,7 @@ static void qat_comp_generic_callback(struct qat_compression_req *qat_req, end: qat_bl_free_bufl(accel_dev, &qat_req->buf); - areq->base.complete(&areq->base, res); + acomp_request_complete(areq, res); } void qat_comp_alg_callback(void *resp) From patchwork Tue Jan 31 08:02:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122380 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2AE3C63797 for ; Tue, 31 Jan 2023 08:03:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229725AbjAaIDn (ORCPT ); Tue, 31 Jan 2023 03:03:43 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60030 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231359AbjAaIDD (ORCPT ); Tue, 31 Jan 2023 03:03:03 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com 
[216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4D1AD43444 for ; Tue, 31 Jan 2023 00:02:47 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlbI-005vu4-K7; Tue, 31 Jan 2023 16:02:45 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:44 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:44 +0800 Subject: [PATCH 29/32] crypto: qce - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu --- drivers/crypto/qce/core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c index d3780be44a76..74deca4f96e0 100644 --- a/drivers/crypto/qce/core.c +++ b/drivers/crypto/qce/core.c @@ -107,7 +107,7 @@ static int qce_handle_queue(struct qce_device *qce, if (backlog) { spin_lock_bh(&qce->lock); - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); spin_unlock_bh(&qce->lock); } @@ -132,7 +132,7 @@ static void qce_tasklet_req_done(unsigned long data) spin_unlock_irqrestore(&qce->lock, flags); if (req) - req->complete(req, qce->result); + crypto_request_complete(req, qce->result); qce_handle_queue(qce, NULL); } From patchwork Tue Jan 31 08:02:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122381 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7B389C38142 for ; Tue, 31 Jan 2023 08:03:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229991AbjAaIDo (ORCPT ); Tue, 31 Jan 2023 03:03:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60032 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230112AbjAaIDE (ORCPT ); Tue, 31 Jan 2023 03:03:04 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 74A9D42BEB for ; Tue, 31 Jan 2023 00:02:49 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlbK-005vuQ-Mb; Tue, 31 Jan 2023 16:02:47 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:46 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:46 +0800 Subject: [PATCH 30/32] crypto: s5p-sss - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin 
Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu --- drivers/crypto/s5p-sss.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c index b79e49aa724f..1c4d5fb05d69 100644 --- a/drivers/crypto/s5p-sss.c +++ b/drivers/crypto/s5p-sss.c @@ -499,7 +499,7 @@ static void s5p_sg_done(struct s5p_aes_dev *dev) /* Calls the completion. Cannot be called with dev->lock hold. */ static void s5p_aes_complete(struct skcipher_request *req, int err) { - req->base.complete(&req->base, err); + skcipher_request_complete(req, err); } static void s5p_unset_outdata(struct s5p_aes_dev *dev) @@ -1355,7 +1355,7 @@ static void s5p_hash_finish_req(struct ahash_request *req, int err) spin_unlock_irqrestore(&dd->hash_lock, flags); if (req->base.complete) - req->base.complete(&req->base, err); + ahash_request_complete(req, err); } /** @@ -1397,7 +1397,7 @@ static int s5p_hash_handle_queue(struct s5p_aes_dev *dd, return ret; if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); req = ahash_request_cast(async_req); dd->hash_req = req; @@ -1991,7 +1991,7 @@ static void s5p_tasklet_cb(unsigned long data) spin_unlock_irqrestore(&dev->lock, flags); if (backlog) - backlog->complete(backlog, -EINPROGRESS); + crypto_request_complete(backlog, -EINPROGRESS); dev->req = skcipher_request_cast(async_req); dev->ctx = crypto_tfm_ctx(dev->req->base.tfm); From patchwork Tue Jan 31 08:02:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13122382 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5F532C636D6 for ; Tue, 31 Jan 2023 08:03:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230078AbjAaIDo (ORCPT ); Tue, 31 Jan 2023 03:03:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60036 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230399AbjAaIDE (ORCPT ); Tue, 31 Jan 2023 03:03:04 -0500 Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D2D3CC9 for ; Tue, 31 Jan 2023 00:02:51 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlbM-005vur-QD; Tue, 31 Jan 2023 16:02:49 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:48 +0800 From: "Herbert Xu" Date: Tue, 31 Jan 2023 16:02:48 +0800 Subject: [PATCH 31/32] crypto: sahara - Use request_complete helpers References: To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, 

From patchwork Tue Jan 31 08:02:48 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13122382
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5F532C636D6 for ; Tue, 31 Jan 2023 08:03:46 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230078AbjAaIDo (ORCPT ); Tue, 31 Jan 2023 03:03:44 -0500
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60036 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230399AbjAaIDE (ORCPT ); Tue, 31 Jan 2023 03:03:04 -0500
Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D2D3CC9 for ; Tue, 31 Jan 2023 00:02:51 -0800 (PST)
Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlbM-005vur-QD; Tue, 31 Jan 2023 16:02:49 +0800
Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:48 +0800
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:48 +0800
Subject: [PATCH 31/32] crypto: sahara - Use request_complete helpers
References:
To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy
Message-Id:
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/sahara.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
index 7ab20fb95166..dd4c703cd855 100644
--- a/drivers/crypto/sahara.c
+++ b/drivers/crypto/sahara.c
@@ -1049,7 +1049,7 @@ static int sahara_queue_manage(void *data)
 		spin_unlock_bh(&dev->queue_spinlock);
 
 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);
 
 		if (async_req) {
 			if (crypto_tfm_alg_type(async_req->tfm) ==
@@ -1065,7 +1065,7 @@ static int sahara_queue_manage(void *data)
 				ret = sahara_aes_process(req);
 			}
 
-			async_req->complete(async_req, ret);
+			crypto_request_complete(async_req, ret);
 
 			continue;
 		}
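
The -EINPROGRESS completion seen in the backlog paths above is a notification, not a final result: it tells the submitter that a request previously parked on the backlog (the enqueue returned -EBUSY) has now started processing; the real result is reported by a second completion later. For reference, this is the caller-side pattern that relies on that behaviour, sketched with placeholder names (sketch_encrypt_one; the tfm, scatterlists and IV are supplied by the caller):

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

/*
 * Submit one encryption and wait for it. CRYPTO_TFM_REQ_MAY_BACKLOG allows
 * the driver to queue the request even when its hardware queue is full;
 * crypto_req_done()/crypto_wait_req() handle the intermediate -EINPROGRESS
 * backlog notification and only return once the final result is in.
 */
static int sketch_encrypt_one(struct crypto_skcipher *tfm,
			      struct scatterlist *src,
			      struct scatterlist *dst,
			      unsigned int len, u8 *iv)
{
	DECLARE_CRYPTO_WAIT(wait);
	struct skcipher_request *req;
	int err;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, src, dst, len, iv);

	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
	return err;
}
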

From patchwork Tue Jan 31 08:02:50 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13122818
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59E2EC636CD for ; Tue, 31 Jan 2023 10:22:23 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230271AbjAaKWW (ORCPT ); Tue, 31 Jan 2023 05:22:22 -0500
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48168 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231295AbjAaKWU (ORCPT ); Tue, 31 Jan 2023 05:22:20 -0500
Received: from formenos.hmeau.com (helcar.hmeau.com [216.24.177.18]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BFA6118AB4 for ; Tue, 31 Jan 2023 02:22:17 -0800 (PST)
Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pMlbO-005vvP-V3; Tue, 31 Jan 2023 16:02:52 +0800
Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Tue, 31 Jan 2023 16:02:50 +0800
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:50 +0800
Subject: [PATCH 32/32] crypto: talitos - Use request_complete helpers
References:
To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy
Message-Id:
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/talitos.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index d62ec68e3183..bb27f011cf31 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -1560,7 +1560,7 @@ static void skcipher_done(struct device *dev,
 
 	kfree(edesc);
 
-	areq->base.complete(&areq->base, err);
+	skcipher_request_complete(areq, err);
 }
 
 static int common_nonsnoop(struct talitos_edesc *edesc,
@@ -1759,7 +1759,7 @@ static void ahash_done(struct device *dev,
 
 	kfree(edesc);
 
-	areq->base.complete(&areq->base, err);
+	ahash_request_complete(areq, err);
 }
 
 /*
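
A recurring detail in these done paths is ordering: the driver frees its own descriptor first and completes the request last, because the completion callback may immediately reuse or free the request. A minimal sketch of that shape follows; struct sketch_edesc and the sketch_ function names are placeholders rather than talitos code, and the typed helpers are assumed to come from the crypto internal headers.

#include <crypto/internal/hash.h>
#include <crypto/internal/skcipher.h>
#include <linux/slab.h>

struct sketch_edesc {
	void *hw_desc;	/* placeholder for driver-private descriptor state */
};

/*
 * Hardware completion for a cipher request: release driver state, then
 * hand the request back through the typed helper. Nothing may touch
 * areq after skcipher_request_complete() returns.
 */
static void sketch_skcipher_done(struct sketch_edesc *edesc,
				 struct skcipher_request *areq, int err)
{
	kfree(edesc);

	skcipher_request_complete(areq, err);
}

/* Same ordering for a hash request. */
static void sketch_ahash_done(struct sketch_edesc *edesc,
			      struct ahash_request *areq, int err)
{
	kfree(edesc);

	ahash_request_complete(areq, err);
}
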