From patchwork Mon Feb 6 10:22:12 2023
From: "Herbert Xu"
Date: Mon, 06 Feb 2023 18:22:12 +0800
Subject: [PATCH 1/17] dm: Add scaffolding to change completion function signature

This patch adds temporary scaffolding so that the Crypto API completion
function can take a void * instead of crypto_async_request.  Once affected
users have been converted this can be removed.

Signed-off-by: Herbert Xu
Acked-by: Mike Snitzer
---
 drivers/md/dm-crypt.c     | 8 +++-----
 drivers/md/dm-integrity.c | 4 ++--
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 2653516bcdef..7609fe39ab8c 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1458,8 +1458,7 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc,
 	return r;
 }
 
-static void kcryptd_async_done(struct crypto_async_request *async_req,
-			       int error);
+static void kcryptd_async_done(crypto_completion_data_t *async_req, int error);
 
 static int crypt_alloc_req_skcipher(struct crypt_config *cc,
 				    struct convert_context *ctx)
@@ -2147,10 +2146,9 @@ static void kcryptd_crypt_read_convert(struct dm_crypt_io *io)
 	crypt_dec_pending(io);
 }
 
-static void kcryptd_async_done(struct crypto_async_request *async_req,
-			       int error)
+static void kcryptd_async_done(crypto_completion_data_t *data, int error)
 {
-	struct dm_crypt_request *dmreq = async_req->data;
+	struct dm_crypt_request *dmreq = crypto_get_completion_data(data);
 	struct convert_context *ctx = dmreq->ctx;
 	struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
 	struct crypt_config *cc = io->cc;
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 1388ee35571e..eefe25ed841e 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -955,9 +955,9 @@ static void xor_journal(struct dm_integrity_c *ic, bool encrypt, unsigned sectio
 	async_tx_issue_pending_all();
 }
 
-static void complete_journal_encrypt(struct crypto_async_request *req, int err)
+static void complete_journal_encrypt(crypto_completion_data_t *data, int err)
 {
-	struct journal_completion *comp = req->data;
+	struct journal_completion *comp = crypto_get_completion_data(data);
 	if (unlikely(err)) {
 		if (likely(err == -EINPROGRESS)) {
 			complete(&comp->ic->crypto_backoff);
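A note on the scaffolding itself: crypto_completion_data_t and
crypto_get_completion_data() are added to include/linux/crypto.h elsewhere
in the series and their definitions are not shown in this excerpt.  Judging
only by how the converted callbacks above use them, the transitional helpers
are expected to look roughly like the sketch below; treat the exact
definitions as an assumption, not a quote from the series.

/* Sketch only: the typedef still names the old request type ... */
typedef struct crypto_async_request crypto_completion_data_t;

static inline void *crypto_get_completion_data(crypto_completion_data_t *req)
{
	/* ... and hands back the pointer the caller registered via *_set_callback() */
	return req->data;
}

Because the typedef still resolves to struct crypto_async_request, the
callback registration sites keep compiling unchanged while each completion
function is converted one by one; once every user takes the data pointer
directly, the scaffolding can be removed, as the commit message says.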
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch adds temporary scaffolding so that the Crypto API completion function can take a void * instead of crypto_async_request. Once affected users have been converted this can be removed. Signed-off-by: Herbert Xu Acked-by: Jarkko Sakkinen --- drivers/net/macsec.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index bf8ac7a3ded7..b7d9d487ccd2 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -528,9 +528,9 @@ static void count_tx(struct net_device *dev, int ret, int len) } } -static void macsec_encrypt_done(struct crypto_async_request *base, int err) +static void macsec_encrypt_done(crypto_completion_data_t *data, int err) { - struct sk_buff *skb = base->data; + struct sk_buff *skb = crypto_get_completion_data(data); struct net_device *dev = skb->dev; struct macsec_dev *macsec = macsec_priv(dev); struct macsec_tx_sa *sa = macsec_skb_cb(skb)->tx_sa; @@ -835,9 +835,9 @@ static void count_rx(struct net_device *dev, int len) u64_stats_update_end(&stats->syncp); } -static void macsec_decrypt_done(struct crypto_async_request *base, int err) +static void macsec_decrypt_done(crypto_completion_data_t *data, int err) { - struct sk_buff *skb = base->data; + struct sk_buff *skb = crypto_get_completion_data(data); struct net_device *dev = skb->dev; struct macsec_dev *macsec = macsec_priv(dev); struct macsec_rx_sa *rx_sa = macsec_skb_cb(skb)->rx_sa; From patchwork Mon Feb 6 10:22:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13129705 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D1A8BC61DA4 for ; Mon, 6 Feb 2023 11:34:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230106AbjBFLeS (ORCPT ); Mon, 6 Feb 2023 06:34:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46926 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230072AbjBFLdo (ORCPT ); Mon, 6 Feb 2023 06:33:44 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1BA121E1FE; Mon, 6 Feb 2023 03:33:41 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pOydd-007zgo-4c; Mon, 06 Feb 2023 18:22:18 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Mon, 06 Feb 2023 18:22:17 +0800 From: "Herbert Xu" Date: Mon, 06 Feb 2023 18:22:17 +0800 Subject: [PATCH 3/17] fs: ecryptfs: Use crypto_wait_req References: To: Linux Crypto Mailing List , Alasdair Kergon , Mike Snitzer , dm-devel@redhat.com, "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch replaces the custom crypto completion function with crypto_req_done. Signed-off-by: Herbert Xu Acked-by: Jarkko Sakkinen --- fs/ecryptfs/crypto.c | 30 +++--------------------------- 1 file changed, 3 insertions(+), 27 deletions(-) diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c index e3f5d7f3c8a0..c3057539f088 100644 --- a/fs/ecryptfs/crypto.c +++ b/fs/ecryptfs/crypto.c @@ -260,22 +260,6 @@ int virt_to_scatterlist(const void *addr, int size, struct scatterlist *sg, return i; } -struct extent_crypt_result { - struct completion completion; - int rc; -}; - -static void extent_crypt_complete(struct crypto_async_request *req, int rc) -{ - struct extent_crypt_result *ecr = req->data; - - if (rc == -EINPROGRESS) - return; - - ecr->rc = rc; - complete(&ecr->completion); -} - /** * crypt_scatterlist * @crypt_stat: Pointer to the crypt_stat struct to initialize. @@ -293,7 +277,7 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat *crypt_stat, unsigned char *iv, int op) { struct skcipher_request *req = NULL; - struct extent_crypt_result ecr; + DECLARE_CRYPTO_WAIT(ecr); int rc = 0; if (unlikely(ecryptfs_verbosity > 0)) { @@ -303,8 +287,6 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat *crypt_stat, crypt_stat->key_size); } - init_completion(&ecr.completion); - mutex_lock(&crypt_stat->cs_tfm_mutex); req = skcipher_request_alloc(crypt_stat->tfm, GFP_NOFS); if (!req) { @@ -315,7 +297,7 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat *crypt_stat, skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP, - extent_crypt_complete, &ecr); + crypto_req_done, &ecr); /* Consider doing this once, when the file is opened */ if (!(crypt_stat->flags & ECRYPTFS_KEY_SET)) { rc = crypto_skcipher_setkey(crypt_stat->tfm, crypt_stat->key, @@ -334,13 +316,7 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat *crypt_stat, skcipher_request_set_crypt(req, src_sg, dst_sg, size, iv); rc = op == ENCRYPT ? 
From patchwork Mon Feb 6 10:22:19 2023
From: "Herbert Xu"
Date: Mon, 06 Feb 2023 18:22:19 +0800
Subject: [PATCH 4/17] Bluetooth: Use crypto_wait_req

This patch replaces the custom crypto completion function with
crypto_req_done.

Signed-off-by: Herbert Xu
---
 net/bluetooth/ecdh_helper.c | 37 ++++++-------------------------------
 1 file changed, 6 insertions(+), 31 deletions(-)

diff --git a/net/bluetooth/ecdh_helper.c b/net/bluetooth/ecdh_helper.c
index 989401f116e9..0efc93fdae8a 100644
--- a/net/bluetooth/ecdh_helper.c
+++ b/net/bluetooth/ecdh_helper.c
@@ -25,22 +25,6 @@
 #include
 #include
 
-struct ecdh_completion {
-	struct completion completion;
-	int err;
-};
-
-static void ecdh_complete(struct crypto_async_request *req, int err)
-{
-	struct ecdh_completion *res = req->data;
-
-	if (err == -EINPROGRESS)
-		return;
-
-	res->err = err;
-	complete(&res->completion);
-}
-
 static inline void swap_digits(u64 *in, u64 *out, unsigned int ndigits)
 {
 	int i;
@@ -60,9 +44,9 @@ static inline void swap_digits(u64 *in, u64 *out, unsigned int ndigits)
 int compute_ecdh_secret(struct crypto_kpp *tfm, const u8 public_key[64],
 			u8 secret[32])
 {
+	DECLARE_CRYPTO_WAIT(result);
 	struct kpp_request *req;
 	u8 *tmp;
-	struct ecdh_completion result;
 	struct scatterlist src, dst;
 	int err;
 
@@ -76,8 +60,6 @@ int compute_ecdh_secret(struct crypto_kpp *tfm, const u8 public_key[64],
 		goto free_tmp;
 	}
 
-	init_completion(&result.completion);
-
 	swap_digits((u64 *)public_key, (u64 *)tmp, 4); /* x */
 	swap_digits((u64 *)&public_key[32], (u64 *)&tmp[32], 4); /* y */
 
@@ -86,12 +68,9 @@ int compute_ecdh_secret(struct crypto_kpp *tfm, const u8 public_key[64],
 	kpp_request_set_input(req, &src, 64);
 	kpp_request_set_output(req, &dst, 32);
 	kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-				 ecdh_complete, &result);
+				 crypto_req_done, &result);
 	err = crypto_kpp_compute_shared_secret(req);
-	if (err == -EINPROGRESS) {
-		wait_for_completion(&result.completion);
-		err = result.err;
-	}
+	err = crypto_wait_req(err, &result);
 	if (err < 0) {
 		pr_err("alg: ecdh: compute shared secret failed. err %d\n",
 		       err);
@@ -165,9 +144,9 @@ int set_ecdh_privkey(struct crypto_kpp *tfm, const u8 private_key[32])
  */
 int generate_ecdh_public_key(struct crypto_kpp *tfm, u8 public_key[64])
 {
+	DECLARE_CRYPTO_WAIT(result);
 	struct kpp_request *req;
 	u8 *tmp;
-	struct ecdh_completion result;
 	struct scatterlist dst;
 	int err;
 
@@ -181,18 +160,14 @@ int generate_ecdh_public_key(struct crypto_kpp *tfm, u8 public_key[64])
 		goto free_tmp;
 	}
 
-	init_completion(&result.completion);
 
 	sg_init_one(&dst, tmp, 64);
 	kpp_request_set_input(req, NULL, 0);
 	kpp_request_set_output(req, &dst, 64);
 	kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-				 ecdh_complete, &result);
+				 crypto_req_done, &result);
 	err = crypto_kpp_generate_public_key(req);
-	if (err == -EINPROGRESS) {
-		wait_for_completion(&result.completion);
-		err = result.err;
-	}
+	err = crypto_wait_req(err, &result);
 	if (err < 0)
 		goto free_all;
 
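Note that the deleted ecdh_complete()/wait pair only waited when the
request returned -EINPROGRESS; a request that lands on the backlog returns
-EBUSY and would previously have been treated as a failure even though the
callback was registered with CRYPTO_TFM_REQ_MAY_BACKLOG.  crypto_wait_req()
waits in both cases.  Its helper in include/linux/crypto.h is roughly the
following (paraphrased for context, not part of this patch):

static inline int crypto_wait_req(int err, struct crypto_wait *wait)
{
	switch (err) {
	case -EINPROGRESS:
	case -EBUSY:
		/* The registered crypto_req_done() will fill in wait->err. */
		wait_for_completion(&wait->completion);
		reinit_completion(&wait->completion);
		err = wait->err;
		break;
	}

	return err;
}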
err %d\n", err); @@ -165,9 +144,9 @@ int set_ecdh_privkey(struct crypto_kpp *tfm, const u8 private_key[32]) */ int generate_ecdh_public_key(struct crypto_kpp *tfm, u8 public_key[64]) { + DECLARE_CRYPTO_WAIT(result); struct kpp_request *req; u8 *tmp; - struct ecdh_completion result; struct scatterlist dst; int err; @@ -181,18 +160,14 @@ int generate_ecdh_public_key(struct crypto_kpp *tfm, u8 public_key[64]) goto free_tmp; } - init_completion(&result.completion); sg_init_one(&dst, tmp, 64); kpp_request_set_input(req, NULL, 0); kpp_request_set_output(req, &dst, 64); kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, - ecdh_complete, &result); + crypto_req_done, &result); err = crypto_kpp_generate_public_key(req); - if (err == -EINPROGRESS) { - wait_for_completion(&result.completion); - err = result.err; - } + err = crypto_wait_req(err, &result); if (err < 0) goto free_all; From patchwork Mon Feb 6 10:22:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13129703 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 69262C63797 for ; Mon, 6 Feb 2023 11:34:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230172AbjBFLeS (ORCPT ); Mon, 6 Feb 2023 06:34:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46660 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230064AbjBFLdo (ORCPT ); Mon, 6 Feb 2023 06:33:44 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B3EED13536; Mon, 6 Feb 2023 03:33:41 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pOydh-007zhJ-B3; Mon, 06 Feb 2023 18:22:22 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Mon, 06 Feb 2023 18:22:21 +0800 From: "Herbert Xu" Date: Mon, 06 Feb 2023 18:22:21 +0800 Subject: [PATCH 5/17] net: ipv4: Add scaffolding to change completion function signature References: To: Linux Crypto Mailing List , Alasdair Kergon , Mike Snitzer , dm-devel@redhat.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch adds temporary scaffolding so that the Crypto API completion function can take a void * instead of crypto_async_request. Once affected users have been converted this can be removed. 
Signed-off-by: Herbert Xu --- net/ipv4/ah4.c | 8 ++++---- net/ipv4/esp4.c | 20 ++++++++++---------- 2 files changed, 14 insertions(+), 14 deletions(-) diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c index ee4e578c7f20..1fc0231eb1ee 100644 --- a/net/ipv4/ah4.c +++ b/net/ipv4/ah4.c @@ -117,11 +117,11 @@ static int ip_clear_mutable_options(const struct iphdr *iph, __be32 *daddr) return 0; } -static void ah_output_done(struct crypto_async_request *base, int err) +static void ah_output_done(crypto_completion_data_t *data, int err) { u8 *icv; struct iphdr *iph; - struct sk_buff *skb = base->data; + struct sk_buff *skb = crypto_get_completion_data(data); struct xfrm_state *x = skb_dst(skb)->xfrm; struct ah_data *ahp = x->data; struct iphdr *top_iph = ip_hdr(skb); @@ -262,12 +262,12 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb) return err; } -static void ah_input_done(struct crypto_async_request *base, int err) +static void ah_input_done(crypto_completion_data_t *data, int err) { u8 *auth_data; u8 *icv; struct iphdr *work_iph; - struct sk_buff *skb = base->data; + struct sk_buff *skb = crypto_get_completion_data(data); struct xfrm_state *x = xfrm_input_state(skb); struct ah_data *ahp = x->data; struct ip_auth_hdr *ah = ip_auth_hdr(skb); diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c index 52c8047efedb..8abe07c1ff28 100644 --- a/net/ipv4/esp4.c +++ b/net/ipv4/esp4.c @@ -244,9 +244,9 @@ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb) } #endif -static void esp_output_done(struct crypto_async_request *base, int err) +static void esp_output_done(crypto_completion_data_t *data, int err) { - struct sk_buff *skb = base->data; + struct sk_buff *skb = crypto_get_completion_data(data); struct xfrm_offload *xo = xfrm_offload(skb); void *tmp; struct xfrm_state *x; @@ -332,12 +332,12 @@ static struct ip_esp_hdr *esp_output_set_extra(struct sk_buff *skb, return esph; } -static void esp_output_done_esn(struct crypto_async_request *base, int err) +static void esp_output_done_esn(crypto_completion_data_t *data, int err) { - struct sk_buff *skb = base->data; + struct sk_buff *skb = crypto_get_completion_data(data); esp_output_restore_header(skb); - esp_output_done(base, err); + esp_output_done(data, err); } static struct ip_esp_hdr *esp_output_udp_encap(struct sk_buff *skb, @@ -830,9 +830,9 @@ int esp_input_done2(struct sk_buff *skb, int err) } EXPORT_SYMBOL_GPL(esp_input_done2); -static void esp_input_done(struct crypto_async_request *base, int err) +static void esp_input_done(crypto_completion_data_t *data, int err) { - struct sk_buff *skb = base->data; + struct sk_buff *skb = crypto_get_completion_data(data); xfrm_input_resume(skb, esp_input_done2(skb, err)); } @@ -860,12 +860,12 @@ static void esp_input_set_header(struct sk_buff *skb, __be32 *seqhi) } } -static void esp_input_done_esn(struct crypto_async_request *base, int err) +static void esp_input_done_esn(crypto_completion_data_t *data, int err) { - struct sk_buff *skb = base->data; + struct sk_buff *skb = crypto_get_completion_data(data); esp_input_restore_header(skb); - esp_input_done(base, err); + esp_input_done(data, err); } /* From patchwork Mon Feb 6 10:22:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13129707 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org 
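None of the call sites that register these callbacks need to change in the
scaffolding patches: the completion data handed to the request is still the
skb, exactly as before, and only the callback prototype differs.  For
illustration only (the registration code is unchanged and therefore does
not appear in the diff; the names below follow esp4.c but are illustrative):

static int esp_output_crypt_sketch(struct aead_request *req, struct sk_buff *skb)
{
	/* The skb is registered as completion data, which the callbacks above expect. */
	aead_request_set_callback(req, 0, esp_output_done, skb);

	return crypto_aead_encrypt(req);
}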
From patchwork Mon Feb 6 10:22:23 2023
From: "Herbert Xu"
Date: Mon, 06 Feb 2023 18:22:23 +0800
Subject: [PATCH 6/17] net: ipv6: Add scaffolding to change completion function signature

This patch adds temporary scaffolding so that the Crypto API completion
function can take a void * instead of crypto_async_request.  Once affected
users have been converted this can be removed.

Signed-off-by: Herbert Xu
---
 net/ipv6/ah6.c  |  8 ++++----
 net/ipv6/esp6.c | 20 ++++++++++----------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c
index 5228d2716289..e43735578a76 100644
--- a/net/ipv6/ah6.c
+++ b/net/ipv6/ah6.c
@@ -281,12 +281,12 @@ static int ipv6_clear_mutable_options(struct ipv6hdr *iph, int len, int dir)
 	return 0;
 }
 
-static void ah6_output_done(struct crypto_async_request *base, int err)
+static void ah6_output_done(crypto_completion_data_t *data, int err)
 {
 	int extlen;
 	u8 *iph_base;
 	u8 *icv;
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 	struct xfrm_state *x = skb_dst(skb)->xfrm;
 	struct ah_data *ahp = x->data;
 	struct ipv6hdr *top_iph = ipv6_hdr(skb);
@@ -451,12 +451,12 @@ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
 	return err;
 }
 
-static void ah6_input_done(struct crypto_async_request *base, int err)
+static void ah6_input_done(crypto_completion_data_t *data, int err)
 {
 	u8 *auth_data;
 	u8 *icv;
 	u8 *work_iph;
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 	struct xfrm_state *x = xfrm_input_state(skb);
 	struct ah_data *ahp = x->data;
 	struct ip_auth_hdr *ah = ip_auth_hdr(skb);
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 14ed868680c6..b9ee81c7dfcf 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -278,9 +278,9 @@ static void esp_output_encap_csum(struct sk_buff *skb)
 	}
 }
 
-static void esp_output_done(struct crypto_async_request *base, int err)
+static void esp_output_done(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 	struct xfrm_offload *xo = xfrm_offload(skb);
 	void *tmp;
 	struct xfrm_state *x;
@@ -368,12 +368,12 @@ static struct ip_esp_hdr *esp_output_set_esn(struct sk_buff *skb,
 	return esph;
 }
 
-static void esp_output_done_esn(struct crypto_async_request *base, int err)
+static void esp_output_done_esn(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 
 	esp_output_restore_header(skb);
-	esp_output_done(base, err);
+	esp_output_done(data, err);
 }
 
 static struct ip_esp_hdr *esp6_output_udp_encap(struct sk_buff *skb,
@@ -879,9 +879,9 @@ int esp6_input_done2(struct sk_buff *skb, int err)
 }
 EXPORT_SYMBOL_GPL(esp6_input_done2);
 
-static void esp_input_done(struct crypto_async_request *base, int err)
+static void esp_input_done(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 
 	xfrm_input_resume(skb, esp6_input_done2(skb, err));
 }
@@ -909,12 +909,12 @@ static void esp_input_set_header(struct sk_buff *skb, __be32 *seqhi)
 	}
 }
 
-static void esp_input_done_esn(struct crypto_async_request *base, int err)
+static void esp_input_done_esn(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 
 	esp_input_restore_header(skb);
-	esp_input_done(base, err);
+	esp_input_done(data, err);
 }
 
 static int esp6_input(struct xfrm_state *x, struct sk_buff *skb)

From patchwork Mon Feb 6 10:22:25 2023
From: "Herbert Xu"
Date: Mon, 06 Feb 2023 18:22:25 +0800
Subject: [PATCH 7/17] tipc: Add scaffolding to change completion function signature

This patch adds temporary scaffolding so that the Crypto API completion
function can take a void * instead of crypto_async_request.  Once affected
users have been converted this can be removed.

Signed-off-by: Herbert Xu
---
 net/tipc/crypto.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
index d67440de011e..ab356e7a3870 100644
--- a/net/tipc/crypto.c
+++ b/net/tipc/crypto.c
@@ -267,10 +267,10 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct sk_buff *skb,
 			     struct tipc_bearer *b,
 			     struct tipc_media_addr *dst,
 			     struct tipc_node *__dnode);
-static void tipc_aead_encrypt_done(struct crypto_async_request *base, int err);
+static void tipc_aead_encrypt_done(crypto_completion_data_t *data, int err);
 static int tipc_aead_decrypt(struct net *net, struct tipc_aead *aead,
 			     struct sk_buff *skb, struct tipc_bearer *b);
-static void tipc_aead_decrypt_done(struct crypto_async_request *base, int err);
+static void tipc_aead_decrypt_done(crypto_completion_data_t *data, int err);
 static inline int tipc_ehdr_size(struct tipc_ehdr *ehdr);
 static int tipc_ehdr_build(struct net *net, struct tipc_aead *aead,
 			   u8 tx_key, struct sk_buff *skb,
@@ -830,9 +830,9 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct sk_buff *skb,
 	return rc;
 }
 
-static void tipc_aead_encrypt_done(struct crypto_async_request *base, int err)
+static void tipc_aead_encrypt_done(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 	struct tipc_crypto_tx_ctx *tx_ctx = TIPC_SKB_CB(skb)->crypto_ctx;
 	struct tipc_bearer *b = tx_ctx->bearer;
 	struct tipc_aead *aead = tx_ctx->aead;
@@ -954,9 +954,9 @@ static int tipc_aead_decrypt(struct net *net, struct tipc_aead *aead,
 	return rc;
 }
 
-static void tipc_aead_decrypt_done(struct crypto_async_request *base, int err)
+static void tipc_aead_decrypt_done(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 	struct tipc_crypto_rx_ctx *rx_ctx = TIPC_SKB_CB(skb)->crypto_ctx;
 	struct tipc_bearer *b = rx_ctx->bearer;
 	struct tipc_aead *aead = rx_ctx->aead;
From patchwork Mon Feb 6 10:22:27 2023
From: "Herbert Xu"
Date: Mon, 06 Feb 2023 18:22:27 +0800
Subject: [PATCH 8/17] tls: Only use data field in crypto completion function

The crypto_async_request passed to the completion is not guaranteed to
be the original request object.  Only the data field can be relied upon.
Fix this by storing the socket pointer with the AEAD request.

Signed-off-by: Herbert Xu
---
 net/tls/tls.h    |  2 ++
 net/tls/tls_sw.c | 40 +++++++++++++++++++++++++++++-----------
 2 files changed, 31 insertions(+), 11 deletions(-)

diff --git a/net/tls/tls.h b/net/tls/tls.h
index 0e840a0c3437..804c3880d028 100644
--- a/net/tls/tls.h
+++ b/net/tls/tls.h
@@ -70,6 +70,8 @@ struct tls_rec {
 	char content_type;
 	struct scatterlist sg_content_type;
 
+	struct sock *sk;
+
 	char aad_space[TLS_AAD_SPACE_SIZE];
 	u8 iv_data[MAX_IV_SIZE];
 	struct aead_request aead_req;
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 9ed978634125..5b7f67a7d394 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -57,6 +58,7 @@ struct tls_decrypt_arg {
 };
 
 struct tls_decrypt_ctx {
+	struct sock *sk;
 	u8 iv[MAX_IV_SIZE];
 	u8 aad[TLS_MAX_AAD_SIZE];
 	u8 tail;
@@ -177,18 +179,25 @@ static int tls_padding_length(struct tls_prot_info *prot, struct sk_buff *skb,
 	return sub;
 }
 
-static void tls_decrypt_done(struct crypto_async_request *req, int err)
+static void tls_decrypt_done(crypto_completion_data_t *data, int err)
 {
-	struct aead_request *aead_req = (struct aead_request *)req;
+	struct aead_request *aead_req = crypto_get_completion_data(data);
+	struct crypto_aead *aead = crypto_aead_reqtfm(aead_req);
 	struct scatterlist *sgout = aead_req->dst;
 	struct scatterlist *sgin = aead_req->src;
 	struct tls_sw_context_rx *ctx;
+	struct tls_decrypt_ctx *dctx;
 	struct tls_context *tls_ctx;
 	struct scatterlist *sg;
 	unsigned int pages;
 	struct sock *sk;
+	int aead_size;
 
-	sk = (struct sock *)req->data;
+	aead_size = sizeof(*aead_req) + crypto_aead_reqsize(aead);
+	aead_size = ALIGN(aead_size, __alignof__(*dctx));
+	dctx = (void *)((u8 *)aead_req + aead_size);
+
+	sk = dctx->sk;
 	tls_ctx = tls_get_ctx(sk);
 	ctx = tls_sw_ctx_rx(tls_ctx);
 
@@ -240,7 +249,7 @@ static int tls_do_decryption(struct sock *sk,
 	if (darg->async) {
 		aead_request_set_callback(aead_req,
 					  CRYPTO_TFM_REQ_MAY_BACKLOG,
-					  tls_decrypt_done, sk);
+					  tls_decrypt_done, aead_req);
 		atomic_inc(&ctx->decrypt_pending);
 	} else {
 		aead_request_set_callback(aead_req,
@@ -336,6 +345,8 @@ static struct tls_rec *tls_get_rec(struct sock *sk)
 	sg_set_buf(&rec->sg_aead_out[0], rec->aad_space, prot->aad_size);
 	sg_unmark_end(&rec->sg_aead_out[1]);
 
+	rec->sk = sk;
+
 	return rec;
 }
 
@@ -417,22 +428,27 @@ int tls_tx_records(struct sock *sk, int flags)
 	return rc;
 }
 
-static void tls_encrypt_done(struct crypto_async_request *req, int err)
+static void tls_encrypt_done(crypto_completion_data_t *data, int err)
 {
-	struct aead_request *aead_req = (struct aead_request *)req;
-	struct sock *sk = req->data;
-	struct tls_context *tls_ctx = tls_get_ctx(sk);
-	struct tls_prot_info *prot = &tls_ctx->prot_info;
-	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
+	struct aead_request *aead_req = crypto_get_completion_data(data);
+	struct tls_sw_context_tx *ctx;
+	struct tls_context *tls_ctx;
+	struct tls_prot_info *prot;
 	struct scatterlist *sge;
 	struct sk_msg *msg_en;
 	struct tls_rec *rec;
 	bool ready = false;
+	struct sock *sk;
 	int pending;
 
 	rec = container_of(aead_req, struct tls_rec, aead_req);
 	msg_en = &rec->msg_encrypted;
 
+	sk = rec->sk;
+	tls_ctx = tls_get_ctx(sk);
+	prot = &tls_ctx->prot_info;
+	ctx = tls_sw_ctx_tx(tls_ctx);
+
 	sge = sk_msg_elem(msg_en, msg_en->sg.curr);
 	sge->offset -= prot->prepend_size;
 	sge->length += prot->prepend_size;
@@ -520,7 +536,7 @@ static int tls_do_encryption(struct sock *sk,
 			       data_len, rec->iv_data);
 
 	aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-				  tls_encrypt_done, sk);
+				  tls_encrypt_done, aead_req);
 
 	/* Add the record in tx_list */
 	list_add_tail((struct list_head *)&rec->list, &ctx->tx_list);
@@ -1485,6 +1501,7 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
 	 * Both structs are variable length.
 	 */
 	aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
+	aead_size = ALIGN(aead_size, __alignof__(*dctx));
 	mem = kmalloc(aead_size + struct_size(dctx, sg, n_sgin + n_sgout),
 		      sk->sk_allocation);
 	if (!mem) {
@@ -1495,6 +1512,7 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
 	/* Segment the allocated memory */
 	aead_req = (struct aead_request *)mem;
 	dctx = (struct tls_decrypt_ctx *)(mem + aead_size);
+	dctx->sk = sk;
 	sgin = &dctx->sg[0];
 	sgout = &dctx->sg[n_sgin];
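The layout trick in the tls_sw.c hunks is worth spelling out: the socket can
no longer ride in req->base.data once only the data field is trusted, so it
is stored in the tls_decrypt_ctx that lives in the same allocation as the
aead_request, and the completion recomputes its offset from the request
pointer alone.  A sketch of that lookup, mirroring the code above:

/*
 * One allocation holds the aead_request (plus its tfm-specific context)
 * followed by the tls_decrypt_ctx, aligned for the latter.  The callback
 * only receives the request pointer and recomputes the offset.
 */
static struct tls_decrypt_ctx *tls_dctx_of_sketch(struct aead_request *aead_req)
{
	struct crypto_aead *aead = crypto_aead_reqtfm(aead_req);
	int aead_size = sizeof(*aead_req) + crypto_aead_reqsize(aead);

	/* Must match the ALIGN() used when the block was allocated. */
	aead_size = ALIGN(aead_size, __alignof__(struct tls_decrypt_ctx));

	return (struct tls_decrypt_ctx *)((u8 *)aead_req + aead_size);
}

This is why the allocation in tls_decrypt_sg() gains the matching ALIGN()
line: both sides have to agree on where the context starts.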
From patchwork Mon Feb 6 10:22:29 2023
From: "Herbert Xu"
Date: Mon, 06 Feb 2023 18:22:29 +0800
Subject: [PATCH 9/17] KEYS: DH: Use crypto_wait_req

This patch replaces the custom crypto completion function with
crypto_req_done.

Signed-off-by: Herbert Xu
---
 security/keys/dh.c | 30 +++++-------------------------
 1 file changed, 5 insertions(+), 25 deletions(-)

diff --git a/security/keys/dh.c b/security/keys/dh.c
index b339760a31dd..da64c358474b 100644
--- a/security/keys/dh.c
+++ b/security/keys/dh.c
@@ -64,22 +64,6 @@ static void dh_free_data(struct dh *dh)
 	kfree_sensitive(dh->g);
 }
 
-struct dh_completion {
-	struct completion completion;
-	int err;
-};
-
-static void dh_crypto_done(struct crypto_async_request *req, int err)
-{
-	struct dh_completion *compl = req->data;
-
-	if (err == -EINPROGRESS)
-		return;
-
-	compl->err = err;
-	complete(&compl->completion);
-}
-
 static int kdf_alloc(struct crypto_shash **hash, char *hashname)
 {
 	struct crypto_shash *tfm;
@@ -146,7 +130,7 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user *params,
 	struct keyctl_dh_params pcopy;
 	struct dh dh_inputs;
 	struct scatterlist outsg;
-	struct dh_completion compl;
+	DECLARE_CRYPTO_WAIT(compl);
 	struct crypto_kpp *tfm;
 	struct kpp_request *req;
 	uint8_t *secret;
@@ -266,22 +250,18 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user *params,
 	kpp_request_set_input(req, NULL, 0);
 	kpp_request_set_output(req, &outsg, outlen);
 
-	init_completion(&compl.completion);
 	kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
 				 CRYPTO_TFM_REQ_MAY_SLEEP,
-				 dh_crypto_done, &compl);
+				 crypto_req_done, &compl);
 
 	/*
 	 * For DH, generate_public_key and generate_shared_secret are
 	 * the same calculation
 	 */
 	ret = crypto_kpp_generate_public_key(req);
-	if (ret == -EINPROGRESS) {
-		wait_for_completion(&compl.completion);
-		ret = compl.err;
-		if (ret)
-			goto out6;
-	}
+	ret = crypto_wait_req(ret, &compl);
+	if (ret)
+		goto out6;
 
 	if (kdfcopy) {
 		/*
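Patches 3, 4 and 9 all delete a locally defined completion struct in favour
of the generic one; crypto_req_done() is essentially the helper each of
those call sites had been re-implementing.  After the flag-day change in the
next patch its body is roughly the following (the signature and the first
lines appear in the crypto/api.c hunk below; the final two statements are
filled in here as an assumption):

void crypto_req_done(void *data, int err)
{
	struct crypto_wait *wait = data;

	/* Backlog notification: the real completion will follow later. */
	if (err == -EINPROGRESS)
		return;

	wait->err = err;
	complete(&wait->completion);
}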
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch does the final flag day conversion of all completion functions which are now all contained in the Crypto API. Signed-off-by: Herbert Xu --- crypto/adiantum.c | 5 +--- crypto/af_alg.c | 6 ++--- crypto/ahash.c | 12 +++++----- crypto/api.c | 4 +-- crypto/authenc.c | 14 +++++------- crypto/authencesn.c | 15 +++++------- crypto/ccm.c | 9 +++---- crypto/chacha20poly1305.c | 40 +++++++++++++++++----------------- crypto/cryptd.c | 52 ++++++++++++++++++++++----------------------- crypto/cts.c | 12 +++++----- crypto/dh.c | 5 +--- crypto/essiv.c | 8 +++--- crypto/gcm.c | 36 ++++++++++++++----------------- crypto/hctr2.c | 5 +--- crypto/lrw.c | 4 +-- crypto/pcrypt.c | 4 +-- crypto/rsa-pkcs1pad.c | 15 +++++------- crypto/seqiv.c | 5 +--- crypto/xts.c | 12 +++++----- drivers/crypto/atmel-sha.c | 5 +--- include/crypto/if_alg.h | 4 --- include/linux/crypto.h | 10 ++++---- 22 files changed, 132 insertions(+), 150 deletions(-) diff --git a/crypto/adiantum.c b/crypto/adiantum.c index 84450130cb6b..c33ba22a6638 100644 --- a/crypto/adiantum.c +++ b/crypto/adiantum.c @@ -308,10 +308,9 @@ static int adiantum_finish(struct skcipher_request *req) return 0; } -static void adiantum_streamcipher_done(struct crypto_async_request *areq, - int err) +static void adiantum_streamcipher_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; if (!err) err = adiantum_finish(req); diff --git a/crypto/af_alg.c b/crypto/af_alg.c index 0a4fa2a429e2..5f7252a5b7b4 100644 --- a/crypto/af_alg.c +++ b/crypto/af_alg.c @@ -1186,7 +1186,7 @@ EXPORT_SYMBOL_GPL(af_alg_free_resources); /** * af_alg_async_cb - AIO callback handler - * @_req: async request info + * @data: async request completion data * @err: if non-zero, error result to be returned via ki_complete(); * otherwise return the AIO output length via ki_complete(). * @@ -1196,9 +1196,9 @@ EXPORT_SYMBOL_GPL(af_alg_free_resources); * The number of bytes to be generated with the AIO operation must be set * in areq->outlen before the AIO callback handler is invoked. 
*/ -void af_alg_async_cb(struct crypto_async_request *_req, int err) +void af_alg_async_cb(void *data, int err) { - struct af_alg_async_req *areq = _req->data; + struct af_alg_async_req *areq = data; struct sock *sk = areq->sk; struct kiocb *iocb = areq->iocb; unsigned int resultlen; diff --git a/crypto/ahash.c b/crypto/ahash.c index 369447e483cd..5a0f21cb2059 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -240,9 +240,9 @@ static void ahash_restore_req(struct ahash_request *req, int err) kfree_sensitive(subreq); } -static void ahash_op_unaligned_done(struct crypto_async_request *req, int err) +static void ahash_op_unaligned_done(void *data, int err) { - struct ahash_request *areq = req->data; + struct ahash_request *areq = data; if (err == -EINPROGRESS) goto out; @@ -330,9 +330,9 @@ int crypto_ahash_digest(struct ahash_request *req) } EXPORT_SYMBOL_GPL(crypto_ahash_digest); -static void ahash_def_finup_done2(struct crypto_async_request *req, int err) +static void ahash_def_finup_done2(void *data, int err) { - struct ahash_request *areq = req->data; + struct ahash_request *areq = data; if (err == -EINPROGRESS) return; @@ -360,9 +360,9 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err) return err; } -static void ahash_def_finup_done1(struct crypto_async_request *req, int err) +static void ahash_def_finup_done1(void *data, int err) { - struct ahash_request *areq = req->data; + struct ahash_request *areq = data; struct ahash_request *subreq; if (err == -EINPROGRESS) diff --git a/crypto/api.c b/crypto/api.c index b022702f6436..e67cc63368ed 100644 --- a/crypto/api.c +++ b/crypto/api.c @@ -643,9 +643,9 @@ int crypto_has_alg(const char *name, u32 type, u32 mask) } EXPORT_SYMBOL_GPL(crypto_has_alg); -void crypto_req_done(struct crypto_async_request *req, int err) +void crypto_req_done(void *data, int err) { - struct crypto_wait *wait = req->data; + struct crypto_wait *wait = data; if (err == -EINPROGRESS) return; diff --git a/crypto/authenc.c b/crypto/authenc.c index 17f674a7cdff..3326c7343e86 100644 --- a/crypto/authenc.c +++ b/crypto/authenc.c @@ -109,9 +109,9 @@ static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key, return err; } -static void authenc_geniv_ahash_done(struct crypto_async_request *areq, int err) +static void authenc_geniv_ahash_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; struct crypto_aead *authenc = crypto_aead_reqtfm(req); struct aead_instance *inst = aead_alg_instance(authenc); struct authenc_instance_ctx *ictx = aead_instance_ctx(inst); @@ -160,10 +160,9 @@ static int crypto_authenc_genicv(struct aead_request *req, unsigned int flags) return 0; } -static void crypto_authenc_encrypt_done(struct crypto_async_request *req, - int err) +static void crypto_authenc_encrypt_done(void *data, int err) { - struct aead_request *areq = req->data; + struct aead_request *areq = data; if (err) goto out; @@ -261,10 +260,9 @@ static int crypto_authenc_decrypt_tail(struct aead_request *req, return crypto_skcipher_decrypt(skreq); } -static void authenc_verify_ahash_done(struct crypto_async_request *areq, - int err) +static void authenc_verify_ahash_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; if (err) goto out; diff --git a/crypto/authencesn.c b/crypto/authencesn.c index b60e61b1904c..91424e791d5c 100644 --- a/crypto/authencesn.c +++ b/crypto/authencesn.c @@ -107,10 +107,9 @@ static int crypto_authenc_esn_genicv_tail(struct 
aead_request *req, return 0; } -static void authenc_esn_geniv_ahash_done(struct crypto_async_request *areq, - int err) +static void authenc_esn_geniv_ahash_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; err = err ?: crypto_authenc_esn_genicv_tail(req, 0); aead_request_complete(req, err); @@ -153,10 +152,9 @@ static int crypto_authenc_esn_genicv(struct aead_request *req, } -static void crypto_authenc_esn_encrypt_done(struct crypto_async_request *req, - int err) +static void crypto_authenc_esn_encrypt_done(void *data, int err) { - struct aead_request *areq = req->data; + struct aead_request *areq = data; if (!err) err = crypto_authenc_esn_genicv(areq, 0); @@ -258,10 +256,9 @@ static int crypto_authenc_esn_decrypt_tail(struct aead_request *req, return crypto_skcipher_decrypt(skreq); } -static void authenc_esn_verify_ahash_done(struct crypto_async_request *areq, - int err) +static void authenc_esn_verify_ahash_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; err = err ?: crypto_authenc_esn_decrypt_tail(req, 0); authenc_esn_request_complete(req, err); diff --git a/crypto/ccm.c b/crypto/ccm.c index 30dbae72728f..a9453129c51c 100644 --- a/crypto/ccm.c +++ b/crypto/ccm.c @@ -224,9 +224,9 @@ static int crypto_ccm_auth(struct aead_request *req, struct scatterlist *plain, return err; } -static void crypto_ccm_encrypt_done(struct crypto_async_request *areq, int err) +static void crypto_ccm_encrypt_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; struct crypto_aead *aead = crypto_aead_reqtfm(req); struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req); u8 *odata = pctx->odata; @@ -320,10 +320,9 @@ static int crypto_ccm_encrypt(struct aead_request *req) return err; } -static void crypto_ccm_decrypt_done(struct crypto_async_request *areq, - int err) +static void crypto_ccm_decrypt_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req); struct crypto_aead *aead = crypto_aead_reqtfm(req); unsigned int authsize = crypto_aead_authsize(aead); diff --git a/crypto/chacha20poly1305.c b/crypto/chacha20poly1305.c index 97bbb135e9a6..3a905c5d8f53 100644 --- a/crypto/chacha20poly1305.c +++ b/crypto/chacha20poly1305.c @@ -115,9 +115,9 @@ static int poly_copy_tag(struct aead_request *req) return 0; } -static void chacha_decrypt_done(struct crypto_async_request *areq, int err) +static void chacha_decrypt_done(void *data, int err) { - async_done_continue(areq->data, err, poly_verify_tag); + async_done_continue(data, err, poly_verify_tag); } static int chacha_decrypt(struct aead_request *req) @@ -161,9 +161,9 @@ static int poly_tail_continue(struct aead_request *req) return chacha_decrypt(req); } -static void poly_tail_done(struct crypto_async_request *areq, int err) +static void poly_tail_done(void *data, int err) { - async_done_continue(areq->data, err, poly_tail_continue); + async_done_continue(data, err, poly_tail_continue); } static int poly_tail(struct aead_request *req) @@ -191,9 +191,9 @@ static int poly_tail(struct aead_request *req) return poly_tail_continue(req); } -static void poly_cipherpad_done(struct crypto_async_request *areq, int err) +static void poly_cipherpad_done(void *data, int err) { - async_done_continue(areq->data, err, poly_tail); + async_done_continue(data, err, poly_tail); } static int poly_cipherpad(struct 
aead_request *req) @@ -220,9 +220,9 @@ static int poly_cipherpad(struct aead_request *req) return poly_tail(req); } -static void poly_cipher_done(struct crypto_async_request *areq, int err) +static void poly_cipher_done(void *data, int err) { - async_done_continue(areq->data, err, poly_cipherpad); + async_done_continue(data, err, poly_cipherpad); } static int poly_cipher(struct aead_request *req) @@ -250,9 +250,9 @@ static int poly_cipher(struct aead_request *req) return poly_cipherpad(req); } -static void poly_adpad_done(struct crypto_async_request *areq, int err) +static void poly_adpad_done(void *data, int err) { - async_done_continue(areq->data, err, poly_cipher); + async_done_continue(data, err, poly_cipher); } static int poly_adpad(struct aead_request *req) @@ -279,9 +279,9 @@ static int poly_adpad(struct aead_request *req) return poly_cipher(req); } -static void poly_ad_done(struct crypto_async_request *areq, int err) +static void poly_ad_done(void *data, int err) { - async_done_continue(areq->data, err, poly_adpad); + async_done_continue(data, err, poly_adpad); } static int poly_ad(struct aead_request *req) @@ -303,9 +303,9 @@ static int poly_ad(struct aead_request *req) return poly_adpad(req); } -static void poly_setkey_done(struct crypto_async_request *areq, int err) +static void poly_setkey_done(void *data, int err) { - async_done_continue(areq->data, err, poly_ad); + async_done_continue(data, err, poly_ad); } static int poly_setkey(struct aead_request *req) @@ -329,9 +329,9 @@ static int poly_setkey(struct aead_request *req) return poly_ad(req); } -static void poly_init_done(struct crypto_async_request *areq, int err) +static void poly_init_done(void *data, int err) { - async_done_continue(areq->data, err, poly_setkey); + async_done_continue(data, err, poly_setkey); } static int poly_init(struct aead_request *req) @@ -352,9 +352,9 @@ static int poly_init(struct aead_request *req) return poly_setkey(req); } -static void poly_genkey_done(struct crypto_async_request *areq, int err) +static void poly_genkey_done(void *data, int err) { - async_done_continue(areq->data, err, poly_init); + async_done_continue(data, err, poly_init); } static int poly_genkey(struct aead_request *req) @@ -391,9 +391,9 @@ static int poly_genkey(struct aead_request *req) return poly_init(req); } -static void chacha_encrypt_done(struct crypto_async_request *areq, int err) +static void chacha_encrypt_done(void *data, int err) { - async_done_continue(areq->data, err, poly_genkey); + async_done_continue(data, err, poly_genkey); } static int chacha_encrypt(struct aead_request *req) diff --git a/crypto/cryptd.c b/crypto/cryptd.c index 06ef3fcbe4ae..1de54eea514d 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c @@ -281,10 +281,9 @@ static void cryptd_skcipher_complete(struct skcipher_request *req, int err, crypto_free_skcipher(tfm); } -static void cryptd_skcipher_encrypt(struct crypto_async_request *base, - int err) +static void cryptd_skcipher_encrypt(void *data, int err) { - struct skcipher_request *req = skcipher_request_cast(base); + struct skcipher_request *req = data; struct skcipher_request *subreq; subreq = cryptd_skcipher_prepare(req, err); @@ -294,10 +293,9 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base, cryptd_skcipher_complete(req, err, cryptd_skcipher_encrypt); } -static void cryptd_skcipher_decrypt(struct crypto_async_request *base, - int err) +static void cryptd_skcipher_decrypt(void *data, int err) { - struct skcipher_request *req = skcipher_request_cast(base); + struct 
skcipher_request *req = data; struct skcipher_request *subreq; subreq = cryptd_skcipher_prepare(req, err); @@ -511,9 +509,9 @@ static void cryptd_hash_complete(struct ahash_request *req, int err, crypto_free_ahash(tfm); } -static void cryptd_hash_init(struct crypto_async_request *req_async, int err) +static void cryptd_hash_init(void *data, int err) { - struct ahash_request *req = ahash_request_cast(req_async); + struct ahash_request *req = data; struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm); struct crypto_shash *child = ctx->child; @@ -536,9 +534,9 @@ static int cryptd_hash_init_enqueue(struct ahash_request *req) return cryptd_hash_enqueue(req, cryptd_hash_init); } -static void cryptd_hash_update(struct crypto_async_request *req_async, int err) +static void cryptd_hash_update(void *data, int err) { - struct ahash_request *req = ahash_request_cast(req_async); + struct ahash_request *req = data; struct shash_desc *desc; desc = cryptd_hash_prepare(req, err); @@ -553,9 +551,9 @@ static int cryptd_hash_update_enqueue(struct ahash_request *req) return cryptd_hash_enqueue(req, cryptd_hash_update); } -static void cryptd_hash_final(struct crypto_async_request *req_async, int err) +static void cryptd_hash_final(void *data, int err) { - struct ahash_request *req = ahash_request_cast(req_async); + struct ahash_request *req = data; struct shash_desc *desc; desc = cryptd_hash_prepare(req, err); @@ -570,9 +568,9 @@ static int cryptd_hash_final_enqueue(struct ahash_request *req) return cryptd_hash_enqueue(req, cryptd_hash_final); } -static void cryptd_hash_finup(struct crypto_async_request *req_async, int err) +static void cryptd_hash_finup(void *data, int err) { - struct ahash_request *req = ahash_request_cast(req_async); + struct ahash_request *req = data; struct shash_desc *desc; desc = cryptd_hash_prepare(req, err); @@ -587,9 +585,9 @@ static int cryptd_hash_finup_enqueue(struct ahash_request *req) return cryptd_hash_enqueue(req, cryptd_hash_finup); } -static void cryptd_hash_digest(struct crypto_async_request *req_async, int err) +static void cryptd_hash_digest(void *data, int err) { - struct ahash_request *req = ahash_request_cast(req_async); + struct ahash_request *req = data; struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm); struct crypto_shash *child = ctx->child; @@ -761,24 +759,26 @@ static void cryptd_aead_crypt(struct aead_request *req, crypto_free_aead(tfm); } -static void cryptd_aead_encrypt(struct crypto_async_request *areq, int err) +static void cryptd_aead_encrypt(void *data, int err) { - struct cryptd_aead_ctx *ctx = crypto_tfm_ctx(areq->tfm); - struct crypto_aead *child = ctx->child; - struct aead_request *req; + struct aead_request *req = data; + struct cryptd_aead_ctx *ctx; + struct crypto_aead *child; - req = container_of(areq, struct aead_request, base); + ctx = crypto_aead_ctx(crypto_aead_reqtfm(req)); + child = ctx->child; cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt, cryptd_aead_encrypt); } -static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err) +static void cryptd_aead_decrypt(void *data, int err) { - struct cryptd_aead_ctx *ctx = crypto_tfm_ctx(areq->tfm); - struct crypto_aead *child = ctx->child; - struct aead_request *req; + struct aead_request *req = data; + struct cryptd_aead_ctx *ctx; + struct crypto_aead *child; - req = container_of(areq, struct aead_request, base); + ctx = crypto_aead_ctx(crypto_aead_reqtfm(req)); 
+ child = ctx->child; cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt, cryptd_aead_decrypt); } diff --git a/crypto/cts.c b/crypto/cts.c index 3766d47ebcc0..8f604f6554b1 100644 --- a/crypto/cts.c +++ b/crypto/cts.c @@ -85,9 +85,9 @@ static int crypto_cts_setkey(struct crypto_skcipher *parent, const u8 *key, return crypto_skcipher_setkey(child, key, keylen); } -static void cts_cbc_crypt_done(struct crypto_async_request *areq, int err) +static void cts_cbc_crypt_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; if (err == -EINPROGRESS) return; @@ -125,9 +125,9 @@ static int cts_cbc_encrypt(struct skcipher_request *req) return crypto_skcipher_encrypt(subreq); } -static void crypto_cts_encrypt_done(struct crypto_async_request *areq, int err) +static void crypto_cts_encrypt_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; if (err) goto out; @@ -219,9 +219,9 @@ static int cts_cbc_decrypt(struct skcipher_request *req) return crypto_skcipher_decrypt(subreq); } -static void crypto_cts_decrypt_done(struct crypto_async_request *areq, int err) +static void crypto_cts_decrypt_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; if (err) goto out; diff --git a/crypto/dh.c b/crypto/dh.c index e39c1bde1ac0..0fcad279e6fe 100644 --- a/crypto/dh.c +++ b/crypto/dh.c @@ -503,10 +503,9 @@ static int dh_safe_prime_set_secret(struct crypto_kpp *tfm, const void *buffer, return err; } -static void dh_safe_prime_complete_req(struct crypto_async_request *dh_req, - int err) +static void dh_safe_prime_complete_req(void *data, int err) { - struct kpp_request *req = dh_req->data; + struct kpp_request *req = data; kpp_request_complete(req, err); } diff --git a/crypto/essiv.c b/crypto/essiv.c index 307eba74b901..f7d4ef4837e5 100644 --- a/crypto/essiv.c +++ b/crypto/essiv.c @@ -131,9 +131,9 @@ static int essiv_aead_setauthsize(struct crypto_aead *tfm, return crypto_aead_setauthsize(tctx->u.aead, authsize); } -static void essiv_skcipher_done(struct crypto_async_request *areq, int err) +static void essiv_skcipher_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; skcipher_request_complete(req, err); } @@ -166,9 +166,9 @@ static int essiv_skcipher_decrypt(struct skcipher_request *req) return essiv_skcipher_crypt(req, false); } -static void essiv_aead_done(struct crypto_async_request *areq, int err) +static void essiv_aead_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; struct essiv_aead_request_ctx *rctx = aead_request_ctx(req); if (err == -EINPROGRESS) diff --git a/crypto/gcm.c b/crypto/gcm.c index 338ee0769747..4ba624450c3f 100644 --- a/crypto/gcm.c +++ b/crypto/gcm.c @@ -197,7 +197,7 @@ static inline unsigned int gcm_remain(unsigned int len) return len ? 
16 - len : 0; } -static void gcm_hash_len_done(struct crypto_async_request *areq, int err); +static void gcm_hash_len_done(void *data, int err); static int gcm_hash_update(struct aead_request *req, crypto_completion_t compl, @@ -246,9 +246,9 @@ static int gcm_hash_len_continue(struct aead_request *req, u32 flags) return gctx->complete(req, flags); } -static void gcm_hash_len_done(struct crypto_async_request *areq, int err) +static void gcm_hash_len_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; if (err) goto out; @@ -267,10 +267,9 @@ static int gcm_hash_crypt_remain_continue(struct aead_request *req, u32 flags) gcm_hash_len_continue(req, flags); } -static void gcm_hash_crypt_remain_done(struct crypto_async_request *areq, - int err) +static void gcm_hash_crypt_remain_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; if (err) goto out; @@ -298,9 +297,9 @@ static int gcm_hash_crypt_continue(struct aead_request *req, u32 flags) return gcm_hash_crypt_remain_continue(req, flags); } -static void gcm_hash_crypt_done(struct crypto_async_request *areq, int err) +static void gcm_hash_crypt_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; if (err) goto out; @@ -326,10 +325,9 @@ static int gcm_hash_assoc_remain_continue(struct aead_request *req, u32 flags) return gcm_hash_crypt_remain_continue(req, flags); } -static void gcm_hash_assoc_remain_done(struct crypto_async_request *areq, - int err) +static void gcm_hash_assoc_remain_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; if (err) goto out; @@ -355,9 +353,9 @@ static int gcm_hash_assoc_continue(struct aead_request *req, u32 flags) return gcm_hash_assoc_remain_continue(req, flags); } -static void gcm_hash_assoc_done(struct crypto_async_request *areq, int err) +static void gcm_hash_assoc_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; if (err) goto out; @@ -380,9 +378,9 @@ static int gcm_hash_init_continue(struct aead_request *req, u32 flags) return gcm_hash_assoc_remain_continue(req, flags); } -static void gcm_hash_init_done(struct crypto_async_request *areq, int err) +static void gcm_hash_init_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; if (err) goto out; @@ -433,9 +431,9 @@ static int gcm_encrypt_continue(struct aead_request *req, u32 flags) return gcm_hash(req, flags); } -static void gcm_encrypt_done(struct crypto_async_request *areq, int err) +static void gcm_encrypt_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; if (err) goto out; @@ -477,9 +475,9 @@ static int crypto_gcm_verify(struct aead_request *req) return crypto_memneq(iauth_tag, auth_tag, authsize) ? 
-EBADMSG : 0; } -static void gcm_decrypt_done(struct crypto_async_request *areq, int err) +static void gcm_decrypt_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; if (!err) err = crypto_gcm_verify(req); diff --git a/crypto/hctr2.c b/crypto/hctr2.c index 7d00a3bcb667..6f4c1884d0e9 100644 --- a/crypto/hctr2.c +++ b/crypto/hctr2.c @@ -252,10 +252,9 @@ static int hctr2_finish(struct skcipher_request *req) return 0; } -static void hctr2_xctr_done(struct crypto_async_request *areq, - int err) +static void hctr2_xctr_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; if (!err) err = hctr2_finish(req); diff --git a/crypto/lrw.c b/crypto/lrw.c index 8d59a66b6525..1b0f76ba3eb5 100644 --- a/crypto/lrw.c +++ b/crypto/lrw.c @@ -205,9 +205,9 @@ static int lrw_xor_tweak_post(struct skcipher_request *req) return lrw_xor_tweak(req, true); } -static void lrw_crypt_done(struct crypto_async_request *areq, int err) +static void lrw_crypt_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; if (!err) { struct lrw_request_ctx *rctx = skcipher_request_ctx(req); diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c index 9d10b846ccf7..8c1d0ca41213 100644 --- a/crypto/pcrypt.c +++ b/crypto/pcrypt.c @@ -63,9 +63,9 @@ static void pcrypt_aead_serial(struct padata_priv *padata) aead_request_complete(req->base.data, padata->info); } -static void pcrypt_aead_done(struct crypto_async_request *areq, int err) +static void pcrypt_aead_done(void *data, int err) { - struct aead_request *req = areq->data; + struct aead_request *req = data; struct pcrypt_request *preq = aead_request_ctx(req); struct padata_priv *padata = pcrypt_request_padata(preq); diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c index 02028670331d..d2e5e104f8cf 100644 --- a/crypto/rsa-pkcs1pad.c +++ b/crypto/rsa-pkcs1pad.c @@ -210,10 +210,9 @@ static int pkcs1pad_encrypt_sign_complete(struct akcipher_request *req, int err) return err; } -static void pkcs1pad_encrypt_sign_complete_cb( - struct crypto_async_request *child_async_req, int err) +static void pkcs1pad_encrypt_sign_complete_cb(void *data, int err) { - struct akcipher_request *req = child_async_req->data; + struct akcipher_request *req = data; if (err == -EINPROGRESS) goto out; @@ -326,10 +325,9 @@ static int pkcs1pad_decrypt_complete(struct akcipher_request *req, int err) return err; } -static void pkcs1pad_decrypt_complete_cb( - struct crypto_async_request *child_async_req, int err) +static void pkcs1pad_decrypt_complete_cb(void *data, int err) { - struct akcipher_request *req = child_async_req->data; + struct akcipher_request *req = data; if (err == -EINPROGRESS) goto out; @@ -506,10 +504,9 @@ static int pkcs1pad_verify_complete(struct akcipher_request *req, int err) return err; } -static void pkcs1pad_verify_complete_cb( - struct crypto_async_request *child_async_req, int err) +static void pkcs1pad_verify_complete_cb(void *data, int err) { - struct akcipher_request *req = child_async_req->data; + struct akcipher_request *req = data; if (err == -EINPROGRESS) goto out; diff --git a/crypto/seqiv.c b/crypto/seqiv.c index b1bcfe537daf..17e11d51ddc3 100644 --- a/crypto/seqiv.c +++ b/crypto/seqiv.c @@ -36,10 +36,9 @@ static void seqiv_aead_encrypt_complete2(struct aead_request *req, int err) kfree_sensitive(subreq->iv); } -static void seqiv_aead_encrypt_complete(struct crypto_async_request *base, - int err) +static void 
seqiv_aead_encrypt_complete(void *data, int err) { - struct aead_request *req = base->data; + struct aead_request *req = data; seqiv_aead_encrypt_complete2(req, err); aead_request_complete(req, err); diff --git a/crypto/xts.c b/crypto/xts.c index de6cbcf69bbd..09be909a6a1a 100644 --- a/crypto/xts.c +++ b/crypto/xts.c @@ -140,9 +140,9 @@ static int xts_xor_tweak_post(struct skcipher_request *req, bool enc) return xts_xor_tweak(req, true, enc); } -static void xts_cts_done(struct crypto_async_request *areq, int err) +static void xts_cts_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; le128 b; if (!err) { @@ -196,9 +196,9 @@ static int xts_cts_final(struct skcipher_request *req, return 0; } -static void xts_encrypt_done(struct crypto_async_request *areq, int err) +static void xts_encrypt_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; if (!err) { struct xts_request_ctx *rctx = skcipher_request_ctx(req); @@ -216,9 +216,9 @@ static void xts_encrypt_done(struct crypto_async_request *areq, int err) skcipher_request_complete(req, err); } -static void xts_decrypt_done(struct crypto_async_request *areq, int err) +static void xts_decrypt_done(void *data, int err) { - struct skcipher_request *req = areq->data; + struct skcipher_request *req = data; if (!err) { struct xts_request_ctx *rctx = skcipher_request_ctx(req); diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c index a77cf0da0816..e7c1db2739ec 100644 --- a/drivers/crypto/atmel-sha.c +++ b/drivers/crypto/atmel-sha.c @@ -2099,10 +2099,9 @@ struct atmel_sha_authenc_reqctx { unsigned int digestlen; }; -static void atmel_sha_authenc_complete(struct crypto_async_request *areq, - int err) +static void atmel_sha_authenc_complete(void *data, int err) { - struct ahash_request *req = areq->data; + struct ahash_request *req = data; struct atmel_sha_authenc_reqctx *authctx = ahash_request_ctx(req); authctx->cb(authctx->aes_dev, err, authctx->base.dd->is_async); diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h index a5db86670bdf..7e76623f9ec3 100644 --- a/include/crypto/if_alg.h +++ b/include/crypto/if_alg.h @@ -21,8 +21,6 @@ #define ALG_MAX_PAGES 16 -struct crypto_async_request; - struct alg_sock { /* struct sock must be the first member of struct alg_sock */ struct sock sk; @@ -235,7 +233,7 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size, ssize_t af_alg_sendpage(struct socket *sock, struct page *page, int offset, size_t size, int flags); void af_alg_free_resources(struct af_alg_async_req *areq); -void af_alg_async_cb(struct crypto_async_request *_req, int err); +void af_alg_async_cb(void *data, int err); __poll_t af_alg_poll(struct file *file, struct socket *sock, poll_table *wait); struct af_alg_async_req *af_alg_alloc_areq(struct sock *sk, diff --git a/include/linux/crypto.h b/include/linux/crypto.h index b18f6e669fb1..80f6350fb588 100644 --- a/include/linux/crypto.h +++ b/include/linux/crypto.h @@ -176,8 +176,8 @@ struct crypto_async_request; struct crypto_tfm; struct crypto_type; -typedef struct crypto_async_request crypto_completion_data_t; -typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err); +typedef void crypto_completion_data_t; +typedef void (*crypto_completion_t)(void *req, int err); /** * DOC: Block Cipher Context Data Structures @@ -596,12 +596,12 @@ struct crypto_wait { /* * Async ops completion helper functioons */ -static inline 
void *crypto_get_completion_data(crypto_completion_data_t *req) +static inline void *crypto_get_completion_data(void *data) { - return req->data; + return data; } -void crypto_req_done(struct crypto_async_request *req, int err); +void crypto_req_done(void *req, int err); static inline int crypto_wait_req(int err, struct crypto_wait *wait) { From patchwork Mon Feb 6 10:22:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13129689 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0AB6EC61DA4 for ; Mon, 6 Feb 2023 11:33:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230121AbjBFLdw (ORCPT ); Mon, 6 Feb 2023 06:33:52 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46816 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230093AbjBFLde (ORCPT ); Mon, 6 Feb 2023 06:33:34 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 409C41E1D8; Mon, 6 Feb 2023 03:33:24 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pOydu-007zio-0W; Mon, 06 Feb 2023 18:22:35 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Mon, 06 Feb 2023 18:22:34 +0800 From: "Herbert Xu" Date: Mon, 06 Feb 2023 18:22:34 +0800 Subject: [PATCH 11/17] dm: Remove completion function scaffolding References: To: Linux Crypto Mailing List , Alasdair Kergon , Mike Snitzer , dm-devel@redhat.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch removes the temporary scaffolding now that the completion function signature has been converted.
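For readers skimming the series, the net effect on a completion callback is easiest to see side by side. The sketch below is illustrative only (the function and request names are hypothetical, not taken from any of these patches); it condenses the pattern the dm, macsec and xfrm conversions all follow, namely that the callback now receives the submitter's data pointer directly instead of fetching it from struct crypto_async_request.

#include <crypto/internal/skcipher.h>	/* skcipher_request_complete() */

/* Old signature: the wrapper request is passed in and the payload sits in ->data. */
static void example_done_old(struct crypto_async_request *areq, int err)
{
	struct skcipher_request *req = areq->data;

	skcipher_request_complete(req, err);
}

/* New signature: the pointer registered at submit time is passed directly. */
static void example_done_new(void *data, int err)
{
	struct skcipher_request *req = data;

	skcipher_request_complete(req, err);
}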
Signed-off-by: Herbert Xu Acked-by: Mike Snitzer --- drivers/md/dm-crypt.c | 6 +++--- drivers/md/dm-integrity.c | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 7609fe39ab8c..3aeeb8f2802f 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -1458,7 +1458,7 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc, return r; } -static void kcryptd_async_done(crypto_completion_data_t *async_req, int error); +static void kcryptd_async_done(void *async_req, int error); static int crypt_alloc_req_skcipher(struct crypt_config *cc, struct convert_context *ctx) @@ -2146,9 +2146,9 @@ static void kcryptd_crypt_read_convert(struct dm_crypt_io *io) crypt_dec_pending(io); } -static void kcryptd_async_done(crypto_completion_data_t *data, int error) +static void kcryptd_async_done(void *data, int error) { - struct dm_crypt_request *dmreq = crypto_get_completion_data(data); + struct dm_crypt_request *dmreq = data; struct convert_context *ctx = dmreq->ctx; struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx); struct crypt_config *cc = io->cc; diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c index eefe25ed841e..c58156deb2b1 100644 --- a/drivers/md/dm-integrity.c +++ b/drivers/md/dm-integrity.c @@ -955,9 +955,9 @@ static void xor_journal(struct dm_integrity_c *ic, bool encrypt, unsigned sectio async_tx_issue_pending_all(); } -static void complete_journal_encrypt(crypto_completion_data_t *data, int err) +static void complete_journal_encrypt(void *data, int err) { - struct journal_completion *comp = crypto_get_completion_data(data); + struct journal_completion *comp = data; if (unlikely(err)) { if (likely(err == -EINPROGRESS)) { complete(&comp->ic->crypto_backoff); From patchwork Mon Feb 6 10:22:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13129681 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C831BC05027 for ; Mon, 6 Feb 2023 11:33:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229733AbjBFLdM (ORCPT ); Mon, 6 Feb 2023 06:33:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46468 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229522AbjBFLdL (ORCPT ); Mon, 6 Feb 2023 06:33:11 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 241E710271; Mon, 6 Feb 2023 03:33:08 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pOydw-007zjD-3q; Mon, 06 Feb 2023 18:22:37 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Mon, 06 Feb 2023 18:22:36 +0800 From: "Herbert Xu" Date: Mon, 06 Feb 2023 18:22:36 +0800 Subject: [PATCH 12/17] net: macsec: Remove completion function scaffolding References: To: Linux Crypto Mailing List , Alasdair Kergon , Mike Snitzer , dm-devel@redhat.com, "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch removes the temporary scaffolding now that the completion function signature has been converted. Signed-off-by: Herbert Xu --- drivers/net/macsec.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index b7d9d487ccd2..becb04123d3e 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -528,9 +528,9 @@ static void count_tx(struct net_device *dev, int ret, int len) } } -static void macsec_encrypt_done(crypto_completion_data_t *data, int err) +static void macsec_encrypt_done(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct net_device *dev = skb->dev; struct macsec_dev *macsec = macsec_priv(dev); struct macsec_tx_sa *sa = macsec_skb_cb(skb)->tx_sa; @@ -835,9 +835,9 @@ static void count_rx(struct net_device *dev, int len) u64_stats_update_end(&stats->syncp); } -static void macsec_decrypt_done(crypto_completion_data_t *data, int err) +static void macsec_decrypt_done(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct net_device *dev = skb->dev; struct macsec_dev *macsec = macsec_priv(dev); struct macsec_rx_sa *rx_sa = macsec_skb_cb(skb)->rx_sa; From patchwork Mon Feb 6 10:22:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13129687 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E064EC61DA4 for ; Mon, 6 Feb 2023 11:33:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229548AbjBFLdh (ORCPT ); Mon, 6 Feb 2023 06:33:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46784 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229987AbjBFLdd (ORCPT ); Mon, 6 Feb 2023 06:33:33 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2562814211; Mon, 6 Feb 2023 03:33:20 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pOydy-007zjk-6G; Mon, 06 Feb 2023 18:22:39 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Mon, 06 Feb 2023 18:22:38 +0800 From: "Herbert Xu" Date: Mon, 06 Feb 2023 18:22:38 +0800 Subject: [PATCH 13/17] net: ipv4: Remove completion function scaffolding References: To: Linux Crypto Mailing List , Alasdair Kergon , Mike Snitzer , dm-devel@redhat.com, "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch removes the temporary scaffolding now that the completion function signature has been converted. Signed-off-by: Herbert Xu --- net/ipv4/ah4.c | 8 ++++---- net/ipv4/esp4.c | 16 ++++++++-------- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c index 1fc0231eb1ee..015c0f4ec5ba 100644 --- a/net/ipv4/ah4.c +++ b/net/ipv4/ah4.c @@ -117,11 +117,11 @@ static int ip_clear_mutable_options(const struct iphdr *iph, __be32 *daddr) return 0; } -static void ah_output_done(crypto_completion_data_t *data, int err) +static void ah_output_done(void *data, int err) { u8 *icv; struct iphdr *iph; - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct xfrm_state *x = skb_dst(skb)->xfrm; struct ah_data *ahp = x->data; struct iphdr *top_iph = ip_hdr(skb); @@ -262,12 +262,12 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb) return err; } -static void ah_input_done(crypto_completion_data_t *data, int err) +static void ah_input_done(void *data, int err) { u8 *auth_data; u8 *icv; struct iphdr *work_iph; - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct xfrm_state *x = xfrm_input_state(skb); struct ah_data *ahp = x->data; struct ip_auth_hdr *ah = ip_auth_hdr(skb); diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c index 8abe07c1ff28..ba06ed42e428 100644 --- a/net/ipv4/esp4.c +++ b/net/ipv4/esp4.c @@ -244,9 +244,9 @@ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb) } #endif -static void esp_output_done(crypto_completion_data_t *data, int err) +static void esp_output_done(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct xfrm_offload *xo = xfrm_offload(skb); void *tmp; struct xfrm_state *x; @@ -332,9 +332,9 @@ static struct ip_esp_hdr *esp_output_set_extra(struct sk_buff *skb, return esph; } -static void esp_output_done_esn(crypto_completion_data_t *data, int err) +static void esp_output_done_esn(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; esp_output_restore_header(skb); esp_output_done(data, err); @@ -830,9 +830,9 @@ int esp_input_done2(struct sk_buff *skb, int err) } EXPORT_SYMBOL_GPL(esp_input_done2); -static void esp_input_done(crypto_completion_data_t *data, int err) +static void esp_input_done(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; xfrm_input_resume(skb, esp_input_done2(skb, err)); } @@ -860,9 +860,9 @@ static void esp_input_set_header(struct sk_buff *skb, __be32 *seqhi) } } -static void esp_input_done_esn(crypto_completion_data_t *data, int err) +static void esp_input_done_esn(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; esp_input_restore_header(skb); esp_input_done(data, err); From patchwork Mon Feb 6 10:22:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13129688 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61820C05027 for ; Mon, 6 Feb 2023 11:33:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229708AbjBFLdj (ORCPT ); Mon, 6 Feb 2023 06:33:39 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46794 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230024AbjBFLdd (ORCPT ); Mon, 6 Feb 2023 06:33:33 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2574F1449B; Mon, 6 Feb 2023 03:33:20 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pOye0-007zk1-9e; Mon, 06 Feb 2023 18:22:41 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Mon, 06 Feb 2023 18:22:40 +0800 From: "Herbert Xu" Date: Mon, 06 Feb 2023 18:22:40 +0800 Subject: [PATCH 14/17] net: ipv6: Remove completion function scaffolding References: To: Linux Crypto Mailing List , Alasdair Kergon , Mike Snitzer , dm-devel@redhat.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch removes the temporary scaffolding now that the completion function signature has been converted.
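The pointer handed to these callbacks is exactly what the submitter registered as callback data, which is why the xfrm handlers below can assign it straight to an skb. A minimal sketch of that submit/complete pairing under the new signature, using hypothetical names and omitting the real xfrm resume path:

#include <crypto/aead.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Completion side: 'data' is the skb registered at submit time below. */
static void example_input_done(void *data, int err)
{
	struct sk_buff *skb = data;

	if (err)
		kfree_skb(skb);		/* drop on failure; real code resumes xfrm input here */
	else
		netif_rx(skb);		/* hand the decrypted packet back to the stack */
}

/* Submit side: the skb becomes the data pointer seen by the callback. */
static int example_submit(struct aead_request *req, struct sk_buff *skb)
{
	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				  example_input_done, skb);
	return crypto_aead_decrypt(req);
}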
Signed-off-by: Herbert Xu --- net/ipv6/ah6.c | 8 ++++---- net/ipv6/esp6.c | 16 ++++++++-------- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c index e43735578a76..01005035ad10 100644 --- a/net/ipv6/ah6.c +++ b/net/ipv6/ah6.c @@ -281,12 +281,12 @@ static int ipv6_clear_mutable_options(struct ipv6hdr *iph, int len, int dir) return 0; } -static void ah6_output_done(crypto_completion_data_t *data, int err) +static void ah6_output_done(void *data, int err) { int extlen; u8 *iph_base; u8 *icv; - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct xfrm_state *x = skb_dst(skb)->xfrm; struct ah_data *ahp = x->data; struct ipv6hdr *top_iph = ipv6_hdr(skb); @@ -451,12 +451,12 @@ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb) return err; } -static void ah6_input_done(crypto_completion_data_t *data, int err) +static void ah6_input_done(void *data, int err) { u8 *auth_data; u8 *icv; u8 *work_iph; - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct xfrm_state *x = xfrm_input_state(skb); struct ah_data *ahp = x->data; struct ip_auth_hdr *ah = ip_auth_hdr(skb); diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c index b9ee81c7dfcf..fddd0cbdede1 100644 --- a/net/ipv6/esp6.c +++ b/net/ipv6/esp6.c @@ -278,9 +278,9 @@ static void esp_output_encap_csum(struct sk_buff *skb) } } -static void esp_output_done(crypto_completion_data_t *data, int err) +static void esp_output_done(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct xfrm_offload *xo = xfrm_offload(skb); void *tmp; struct xfrm_state *x; @@ -368,9 +368,9 @@ static struct ip_esp_hdr *esp_output_set_esn(struct sk_buff *skb, return esph; } -static void esp_output_done_esn(crypto_completion_data_t *data, int err) +static void esp_output_done_esn(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; esp_output_restore_header(skb); esp_output_done(data, err); @@ -879,9 +879,9 @@ int esp6_input_done2(struct sk_buff *skb, int err) } EXPORT_SYMBOL_GPL(esp6_input_done2); -static void esp_input_done(crypto_completion_data_t *data, int err) +static void esp_input_done(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; xfrm_input_resume(skb, esp6_input_done2(skb, err)); } @@ -909,9 +909,9 @@ static void esp_input_set_header(struct sk_buff *skb, __be32 *seqhi) } } -static void esp_input_done_esn(crypto_completion_data_t *data, int err) +static void esp_input_done_esn(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; esp_input_restore_header(skb); esp_input_done(data, err); From patchwork Mon Feb 6 10:22:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13129685 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A9B8C61DA4 for ; Mon, 6 Feb 2023 11:33:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230094AbjBFLde (ORCPT ); Mon, 6 Feb 2023 06:33:34 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46606 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229841AbjBFLdT (ORCPT ); Mon, 6 Feb 2023 06:33:19 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5C2813DF7; Mon, 6 Feb 2023 03:33:16 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pOye2-007zkJ-CU; Mon, 06 Feb 2023 18:22:43 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Mon, 06 Feb 2023 18:22:42 +0800 From: "Herbert Xu" Date: Mon, 06 Feb 2023 18:22:42 +0800 Subject: [PATCH 15/17] tipc: Remove completion function scaffolding References: To: Linux Crypto Mailing List , Alasdair Kergon , Mike Snitzer , dm-devel@redhat.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch removes the temporary scaffolding now that the completion function signature has been converted. Signed-off-by: Herbert Xu --- net/tipc/crypto.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c index ab356e7a3870..577fa5af33ec 100644 --- a/net/tipc/crypto.c +++ b/net/tipc/crypto.c @@ -267,10 +267,10 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct sk_buff *skb, struct tipc_bearer *b, struct tipc_media_addr *dst, struct tipc_node *__dnode); -static void tipc_aead_encrypt_done(crypto_completion_data_t *data, int err); +static void tipc_aead_encrypt_done(void *data, int err); static int tipc_aead_decrypt(struct net *net, struct tipc_aead *aead, struct sk_buff *skb, struct tipc_bearer *b); -static void tipc_aead_decrypt_done(crypto_completion_data_t *data, int err); +static void tipc_aead_decrypt_done(void *data, int err); static inline int tipc_ehdr_size(struct tipc_ehdr *ehdr); static int tipc_ehdr_build(struct net *net, struct tipc_aead *aead, u8 tx_key, struct sk_buff *skb, @@ -830,9 +830,9 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct sk_buff *skb, return rc; } -static void tipc_aead_encrypt_done(crypto_completion_data_t *data, int err) +static void tipc_aead_encrypt_done(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct tipc_crypto_tx_ctx *tx_ctx = TIPC_SKB_CB(skb)->crypto_ctx; struct tipc_bearer *b = tx_ctx->bearer; struct tipc_aead *aead = tx_ctx->aead; @@ -954,9 +954,9 @@ static int tipc_aead_decrypt(struct net *net, struct tipc_aead *aead, return rc; } -static void tipc_aead_decrypt_done(crypto_completion_data_t *data, int err) +static void tipc_aead_decrypt_done(void *data, int err) { - struct sk_buff *skb = crypto_get_completion_data(data); + struct sk_buff *skb = data; struct tipc_crypto_rx_ctx *rx_ctx = TIPC_SKB_CB(skb)->crypto_ctx; struct tipc_bearer *b = rx_ctx->bearer; struct tipc_aead *aead = rx_ctx->aead; From patchwork Mon Feb 6 10:22:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13129686 X-Patchwork-Delegate: herbert@gondor.apana.org.au
Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 750D4C636D6 for ; Mon, 6 Feb 2023 11:33:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229556AbjBFLdg (ORCPT ); Mon, 6 Feb 2023 06:33:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46604 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230008AbjBFLdT (ORCPT ); Mon, 6 Feb 2023 06:33:19 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EDA201E1EB; Mon, 6 Feb 2023 03:33:14 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pOye4-007zka-Fr; Mon, 06 Feb 2023 18:22:45 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Mon, 06 Feb 2023 18:22:44 +0800 From: "Herbert Xu" Date: Mon, 06 Feb 2023 18:22:44 +0800 Subject: [PATCH 16/17] tls: Remove completion function scaffolding References: To: Linux Crypto Mailing List , Alasdair Kergon , Mike Snitzer , dm-devel@redhat.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch removes the temporary scaffolding now that the completion function signature has been converted.
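Losing struct crypto_async_request also means a callback can no longer peek at fields such as areq->tfm; whatever it needs has to be derived from the request it is handed, which is what tls_decrypt_done() does below via crypto_aead_reqtfm(). A hypothetical callback using the same idiom to reach per-tfm state (the context layout and the completion member are invented for this sketch):

#include <crypto/aead.h>
#include <crypto/internal/aead.h>	/* crypto_aead_ctx() */
#include <linux/completion.h>

struct example_tfm_ctx {
	struct completion op_done;	/* hypothetical per-tfm state */
};

static void example_aead_done(void *data, int err)
{
	struct aead_request *req = data;
	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
	struct example_tfm_ctx *ctx = crypto_aead_ctx(tfm);

	if (err == -EINPROGRESS)
		return;			/* backlogged request; the final completion follows */

	complete(&ctx->op_done);
}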
Signed-off-by: Herbert Xu --- net/tls/tls_sw.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c index 5b7f67a7d394..0515cda32fe2 100644 --- a/net/tls/tls_sw.c +++ b/net/tls/tls_sw.c @@ -179,9 +179,9 @@ static int tls_padding_length(struct tls_prot_info *prot, struct sk_buff *skb, return sub; } -static void tls_decrypt_done(crypto_completion_data_t *data, int err) +static void tls_decrypt_done(void *data, int err) { - struct aead_request *aead_req = crypto_get_completion_data(data); + struct aead_request *aead_req = data; struct crypto_aead *aead = crypto_aead_reqtfm(aead_req); struct scatterlist *sgout = aead_req->dst; struct scatterlist *sgin = aead_req->src; @@ -428,9 +428,9 @@ int tls_tx_records(struct sock *sk, int flags) return rc; } -static void tls_encrypt_done(crypto_completion_data_t *data, int err) +static void tls_encrypt_done(void *data, int err) { - struct aead_request *aead_req = crypto_get_completion_data(data); + struct aead_request *aead_req = data; struct tls_sw_context_tx *ctx; struct tls_context *tls_ctx; struct tls_prot_info *prot; From patchwork Mon Feb 6 10:22:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 13129684 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3421CC05027 for ; Mon, 6 Feb 2023 11:33:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230042AbjBFLdV (ORCPT ); Mon, 6 Feb 2023 06:33:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46566 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230024AbjBFLdQ (ORCPT ); Mon, 6 Feb 2023 06:33:16 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7997413D52; Mon, 6 Feb 2023 03:33:14 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pOye6-007zks-J4; Mon, 06 Feb 2023 18:22:47 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Mon, 06 Feb 2023 18:22:46 +0800 From: "Herbert Xu" Date: Mon, 06 Feb 2023 18:22:46 +0800 Subject: [PATCH 17/17] crypto: api - Remove completion function scaffolding References: To: Linux Crypto Mailing List , Alasdair Kergon , Mike Snitzer , dm-devel@redhat.com, "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev@vger.kernel.org, Tyler Hicks , ecryptfs@vger.kernel.org, Marcel Holtmann , Johan Hedberg , Luiz Augusto von Dentz , linux-bluetooth@vger.kernel.org, Steffen Klassert , Jon Maloy , Ying Xue , Boris Pismenny , John Fastabend , David Howells , Jarkko Sakkinen , keyrings@vger.kernel.org Message-Id: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org This patch removes the temporary scaffolding now that the completion function signature has been converted.
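Synchronous callers are unaffected by the whole series: they keep passing crypto_req_done(), whose prototype was already switched to void * earlier in the crypto.h patch, together with a crypto_wait, and crypto_wait_req() hides the callback signature entirely. A brief sketch of that unchanged usage (the request is assumed to have its tfm, buffers and IV set up elsewhere):

#include <linux/crypto.h>
#include <crypto/skcipher.h>

static int example_encrypt_sync(struct skcipher_request *req)
{
	DECLARE_CRYPTO_WAIT(wait);

	skcipher_request_set_callback(req,
				      CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);

	/* Turns -EINPROGRESS/-EBUSY into the operation's final status. */
	return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
}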
Signed-off-by: Herbert Xu Acked-by: Jarkko Sakkinen --- include/linux/crypto.h | 6 ------ 1 file changed, 6 deletions(-) diff --git a/include/linux/crypto.h b/include/linux/crypto.h index 80f6350fb588..bb1d9b0e1647 100644 --- a/include/linux/crypto.h +++ b/include/linux/crypto.h @@ -176,7 +176,6 @@ struct crypto_async_request; struct crypto_tfm; struct crypto_type; -typedef void crypto_completion_data_t; typedef void (*crypto_completion_t)(void *req, int err); /** @@ -596,11 +595,6 @@ struct crypto_wait { /* * Async ops completion helper functioons */ -static inline void *crypto_get_completion_data(void *data) -{ - return data; -} - void crypto_req_done(void *req, int err); static inline int crypto_wait_req(int err, struct crypto_wait *wait)