From patchwork Sun Nov 17 22:30:34 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 11248709
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Return-Path:
Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123])
	by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D613614E5
	for ; Sun, 17 Nov 2019 22:31:05 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by mail.kernel.org (Postfix) with ESMTP id BA56620727
	for ; Sun, 17 Nov 2019 22:31:05 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1726157AbfKQWbE (ORCPT ); Sun, 17 Nov 2019 17:31:04 -0500
Received: from inva020.nxp.com ([92.121.34.13]:56860 "EHLO inva020.nxp.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1726171AbfKQWbE (ORCPT ); Sun, 17 Nov 2019 17:31:04 -0500
Received: from inva020.nxp.com (localhost [127.0.0.1])
	by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id CE2771A0D0D;
	Sun, 17 Nov 2019 23:31:02 +0100 (CET)
Received: from inva024.eu-rdc02.nxp.com (inva024.eu-rdc02.nxp.com [134.27.226.22])
	by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id C1D601A0D0A;
	Sun, 17 Nov 2019 23:31:02 +0100 (CET)
Received: from lorenz.ea.freescale.net (lorenz.ea.freescale.net [10.171.71.5])
	by inva024.eu-rdc02.nxp.com (Postfix) with ESMTP id 5C67B202AF;
	Sun, 17 Nov 2019 23:31:02 +0100 (CET)
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S.
Miller", Tom Lendacky, Gary Hook, linux-crypto@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 01/12] crypto: add helper function for akcipher_request
Date: Mon, 18 Nov 2019 00:30:34 +0200
Message-Id: <1574029845-22796-2-git-send-email-iuliana.prodan@nxp.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
References: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
X-Virus-Scanned: ClamAV using ClamSMTP
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Add the akcipher_request_cast function to get an akcipher_request struct
from a crypto_async_request struct, and remove the driver-local copy from
the ccp driver.

Signed-off-by: Iuliana Prodan
Reviewed-by: Corentin Labbe
Reviewed-by: Horia Geantă
Acked-by: Gary R Hook
---
 drivers/crypto/ccp/ccp-crypto-rsa.c | 6 ------
 include/crypto/akcipher.h           | 6 ++++++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
index 649c91d..3ab659d 100644
--- a/drivers/crypto/ccp/ccp-crypto-rsa.c
+++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
@@ -19,12 +19,6 @@
 #include "ccp-crypto.h"
 
-static inline struct akcipher_request *akcipher_request_cast(
-	struct crypto_async_request *req)
-{
-	return container_of(req, struct akcipher_request, base);
-}
-
 static inline int ccp_copy_and_save_keypart(u8 **kpbuf, unsigned int *kplen,
 					    const u8 *buf, size_t sz)
 {
diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h
index 6924b09..4365edd 100644
--- a/include/crypto/akcipher.h
+++ b/include/crypto/akcipher.h
@@ -170,6 +170,12 @@ static inline struct crypto_akcipher *crypto_akcipher_reqtfm(
 	return __crypto_akcipher_tfm(req->base.tfm);
 }
 
+static inline struct akcipher_request *akcipher_request_cast(
+	struct crypto_async_request *req)
+{
+	return container_of(req, struct akcipher_request, base);
+}
+
 /**
  *
crypto_free_akcipher() - free AKCIPHER tfm handle
 *

From patchwork Sun Nov 17 22:30:35 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 11248729
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S.
Miller", Tom Lendacky, Gary Hook, linux-crypto@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 02/12] crypto: caam - refactor skcipher/aead/gcm/chachapoly
 {en,de}crypt functions
Date: Mon, 18 Nov 2019 00:30:35 +0200
Message-Id: <1574029845-22796-3-git-send-email-iuliana.prodan@nxp.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
References: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
X-Virus-Scanned: ClamAV using ClamSMTP
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Create a common crypt function for each of the skcipher/aead/gcm/chachapoly
algorithms and call it from encrypt/decrypt with the appropriate boolean:
true for encrypt, false for decrypt.

Signed-off-by: Iuliana Prodan
Reviewed-by: Horia Geantă
---
 drivers/crypto/caam/caamalg.c | 268 +++++++++---------------------------------
 1 file changed, 53 insertions(+), 215 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 2912006..6e021692 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -960,8 +960,8 @@ static void skcipher_unmap(struct device *dev, struct skcipher_edesc *edesc,
 		   edesc->sec4_sg_dma, edesc->sec4_sg_bytes);
 }
 
-static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
-			      void *context)
+static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
+			    void *context)
 {
 	struct aead_request *req = context;
 	struct aead_edesc *edesc;
@@ -981,69 +981,8 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 	aead_request_complete(req, ecode);
 }
 
-static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
-			      void *context)
-{
-	struct aead_request *req = context;
-	struct aead_edesc *edesc;
-	int ecode = 0;
-
-	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
-
-	edesc = container_of(desc,
struct aead_edesc, hw_desc[0]); - - if (err) - ecode = caam_jr_strstatus(jrdev, err); - - aead_unmap(jrdev, edesc, req); - - kfree(edesc); - - aead_request_complete(req, ecode); -} - -static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err, - void *context) -{ - struct skcipher_request *req = context; - struct skcipher_edesc *edesc; - struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); - int ivsize = crypto_skcipher_ivsize(skcipher); - int ecode = 0; - - dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - - edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]); - - if (err) - ecode = caam_jr_strstatus(jrdev, err); - - skcipher_unmap(jrdev, edesc, req); - - /* - * The crypto API expects us to set the IV (req->iv) to the last - * ciphertext block (CBC mode) or last counter (CTR mode). - * This is used e.g. by the CTS mode. - */ - if (ivsize && !ecode) { - memcpy(req->iv, (u8 *)edesc->sec4_sg + edesc->sec4_sg_bytes, - ivsize); - print_hex_dump_debug("dstiv @"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, req->iv, - edesc->src_nents > 1 ? 100 : ivsize, 1); - } - - caam_dump_sg("dst @" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, req->dst, - edesc->dst_nents > 1 ? 
100 : req->cryptlen, 1); - - kfree(edesc); - - skcipher_request_complete(req, ecode); -} - -static void skcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err, - void *context) +static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err, + void *context) { struct skcipher_request *req = context; struct skcipher_edesc *edesc; @@ -1455,41 +1394,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, return edesc; } -static int gcm_encrypt(struct aead_request *req) -{ - struct aead_edesc *edesc; - struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct caam_ctx *ctx = crypto_aead_ctx(aead); - struct device *jrdev = ctx->jrdev; - bool all_contig; - u32 *desc; - int ret = 0; - - /* allocate extended descriptor */ - edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, true); - if (IS_ERR(edesc)) - return PTR_ERR(edesc); - - /* Create and submit job descriptor */ - init_gcm_job(req, edesc, all_contig, true); - - print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); - - desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { - aead_unmap(jrdev, edesc, req); - kfree(edesc); - } - - return ret; -} - -static int chachapoly_encrypt(struct aead_request *req) +static inline int chachapoly_crypt(struct aead_request *req, bool encrypt) { struct aead_edesc *edesc; struct crypto_aead *aead = crypto_aead_reqtfm(req); @@ -1500,18 +1405,18 @@ static int chachapoly_encrypt(struct aead_request *req) int ret; edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig, - true); + encrypt); if (IS_ERR(edesc)) return PTR_ERR(edesc); desc = edesc->hw_desc; - init_chachapoly_job(req, edesc, all_contig, true); + init_chachapoly_job(req, edesc, all_contig, encrypt); print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, desc, 
desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req); + ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req); if (!ret) { ret = -EINPROGRESS; } else { @@ -1522,45 +1427,17 @@ static int chachapoly_encrypt(struct aead_request *req) return ret; } -static int chachapoly_decrypt(struct aead_request *req) +static int chachapoly_encrypt(struct aead_request *req) { - struct aead_edesc *edesc; - struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct caam_ctx *ctx = crypto_aead_ctx(aead); - struct device *jrdev = ctx->jrdev; - bool all_contig; - u32 *desc; - int ret; - - edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig, - false); - if (IS_ERR(edesc)) - return PTR_ERR(edesc); - - desc = edesc->hw_desc; - - init_chachapoly_job(req, edesc, all_contig, false); - print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), - 1); - - ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { - aead_unmap(jrdev, edesc, req); - kfree(edesc); - } - - return ret; + return chachapoly_crypt(req, true); } -static int ipsec_gcm_encrypt(struct aead_request *req) +static int chachapoly_decrypt(struct aead_request *req) { - return crypto_ipsec_check_assoclen(req->assoclen) ? 
: gcm_encrypt(req); + return chachapoly_crypt(req, false); } -static int aead_encrypt(struct aead_request *req) +static inline int aead_crypt(struct aead_request *req, bool encrypt) { struct aead_edesc *edesc; struct crypto_aead *aead = crypto_aead_reqtfm(req); @@ -1572,19 +1449,19 @@ static int aead_encrypt(struct aead_request *req) /* allocate extended descriptor */ edesc = aead_edesc_alloc(req, AUTHENC_DESC_JOB_IO_LEN, - &all_contig, true); + &all_contig, encrypt); if (IS_ERR(edesc)) return PTR_ERR(edesc); /* Create and submit job descriptor */ - init_authenc_job(req, edesc, all_contig, true); + init_authenc_job(req, edesc, all_contig, encrypt); print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, desc_bytes(edesc->hw_desc), 1); desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req); + ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req); if (!ret) { ret = -EINPROGRESS; } else { @@ -1595,7 +1472,17 @@ static int aead_encrypt(struct aead_request *req) return ret; } -static int gcm_decrypt(struct aead_request *req) +static int aead_encrypt(struct aead_request *req) +{ + return aead_crypt(req, true); +} + +static int aead_decrypt(struct aead_request *req) +{ + return aead_crypt(req, false); +} + +static inline int gcm_crypt(struct aead_request *req, bool encrypt) { struct aead_edesc *edesc; struct crypto_aead *aead = crypto_aead_reqtfm(req); @@ -1606,19 +1493,20 @@ static int gcm_decrypt(struct aead_request *req) int ret = 0; /* allocate extended descriptor */ - edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, false); + edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, + encrypt); if (IS_ERR(edesc)) return PTR_ERR(edesc); - /* Create and submit job descriptor*/ - init_gcm_job(req, edesc, all_contig, false); + /* Create and submit job descriptor */ + init_gcm_job(req, edesc, all_contig, encrypt); print_hex_dump_debug("aead 
jobdesc@"__stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, desc_bytes(edesc->hw_desc), 1); desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req); + ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req); if (!ret) { ret = -EINPROGRESS; } else { @@ -1629,48 +1517,24 @@ static int gcm_decrypt(struct aead_request *req) return ret; } -static int ipsec_gcm_decrypt(struct aead_request *req) +static int gcm_encrypt(struct aead_request *req) { - return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_decrypt(req); + return gcm_crypt(req, true); } -static int aead_decrypt(struct aead_request *req) +static int gcm_decrypt(struct aead_request *req) { - struct aead_edesc *edesc; - struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct caam_ctx *ctx = crypto_aead_ctx(aead); - struct device *jrdev = ctx->jrdev; - bool all_contig; - u32 *desc; - int ret = 0; - - caam_dump_sg("dec src@" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, req->src, - req->assoclen + req->cryptlen, 1); - - /* allocate extended descriptor */ - edesc = aead_edesc_alloc(req, AUTHENC_DESC_JOB_IO_LEN, - &all_contig, false); - if (IS_ERR(edesc)) - return PTR_ERR(edesc); - - /* Create and submit job descriptor*/ - init_authenc_job(req, edesc, all_contig, false); - - print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); + return gcm_crypt(req, false); +} - desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { - aead_unmap(jrdev, edesc, req); - kfree(edesc); - } +static int ipsec_gcm_encrypt(struct aead_request *req) +{ + return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_encrypt(req); +} - return ret; +static int ipsec_gcm_decrypt(struct aead_request *req) +{ + return crypto_ipsec_check_assoclen(req->assoclen) ? 
: gcm_decrypt(req); } /* @@ -1834,7 +1698,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, return edesc; } -static int skcipher_encrypt(struct skcipher_request *req) +static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt) { struct skcipher_edesc *edesc; struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); @@ -1852,14 +1716,14 @@ static int skcipher_encrypt(struct skcipher_request *req) return PTR_ERR(edesc); /* Create and submit job descriptor*/ - init_skcipher_job(req, edesc, true); + init_skcipher_job(req, edesc, encrypt); print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, desc_bytes(edesc->hw_desc), 1); desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, skcipher_encrypt_done, req); + ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req); if (!ret) { ret = -EINPROGRESS; @@ -1871,40 +1735,14 @@ static int skcipher_encrypt(struct skcipher_request *req) return ret; } -static int skcipher_decrypt(struct skcipher_request *req) +static int skcipher_encrypt(struct skcipher_request *req) { - struct skcipher_edesc *edesc; - struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); - struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher); - struct device *jrdev = ctx->jrdev; - u32 *desc; - int ret = 0; - - if (!req->cryptlen) - return 0; - - /* allocate extended descriptor */ - edesc = skcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ); - if (IS_ERR(edesc)) - return PTR_ERR(edesc); - - /* Create and submit job descriptor*/ - init_skcipher_job(req, edesc, false); - desc = edesc->hw_desc; - - print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); - - ret = caam_jr_enqueue(jrdev, desc, skcipher_decrypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { - skcipher_unmap(jrdev, edesc, req); - kfree(edesc); - } + return 
skcipher_crypt(req, true);
+}
 
-	return ret;
+static int skcipher_decrypt(struct skcipher_request *req)
+{
+	return skcipher_crypt(req, false);
 }
 
 static struct caam_skcipher_alg driver_algs[] = {

From patchwork Sun Nov 17 22:30:36 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 11248711
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S.
Miller", Tom Lendacky, Gary Hook, linux-crypto@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 03/12] crypto: caam - refactor ahash_done callbacks
Date: Mon, 18 Nov 2019 00:30:36 +0200
Message-Id: <1574029845-22796-4-git-send-email-iuliana.prodan@nxp.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
References: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
X-Virus-Scanned: ClamAV using ClamSMTP
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Create two common ahash_done_* functions that take the DMA direction as a
parameter; they are then called with the proper direction for the unmap.

Signed-off-by: Iuliana Prodan
Reviewed-by: Horia Geantă
---
 drivers/crypto/caam/caamhash.c | 80 ++++++++++++------------------------------
 1 file changed, 22 insertions(+), 58 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 65399cb..3d6e978 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -597,8 +597,8 @@ static inline void ahash_unmap_ctx(struct device *dev,
 	ahash_unmap(dev, edesc, req, dst_len);
 }
 
-static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
-		       void *context)
+static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
+				  void *context, enum dma_data_direction dir)
 {
 	struct ahash_request *req = context;
 	struct ahash_edesc *edesc;
@@ -614,7 +614,7 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
 	if (err)
 		ecode = caam_jr_strstatus(jrdev, err);
 
-	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+	ahash_unmap_ctx(jrdev, edesc, req, digestsize, dir);
 	memcpy(req->result, state->caam_ctx, digestsize);
 	kfree(edesc);
@@ -625,68 +625,20 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
 	req->base.complete(&req->base, ecode);
 }
 
-static void
ahash_done_bi(struct device *jrdev, u32 *desc, u32 err, - void *context) +static void ahash_done(struct device *jrdev, u32 *desc, u32 err, + void *context) { - struct ahash_request *req = context; - struct ahash_edesc *edesc; - struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); - struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); - struct caam_hash_state *state = ahash_request_ctx(req); - int digestsize = crypto_ahash_digestsize(ahash); - int ecode = 0; - - dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - - edesc = container_of(desc, struct ahash_edesc, hw_desc[0]); - if (err) - ecode = caam_jr_strstatus(jrdev, err); - - ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL); - switch_buf(state); - kfree(edesc); - - print_hex_dump_debug("ctx@"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx, - ctx->ctx_len, 1); - if (req->result) - print_hex_dump_debug("result@"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, req->result, - digestsize, 1); - - req->base.complete(&req->base, ecode); + ahash_done_cpy(jrdev, desc, err, context, DMA_FROM_DEVICE); } static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err, void *context) { - struct ahash_request *req = context; - struct ahash_edesc *edesc; - struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); - int digestsize = crypto_ahash_digestsize(ahash); - struct caam_hash_state *state = ahash_request_ctx(req); - struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); - int ecode = 0; - - dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - - edesc = container_of(desc, struct ahash_edesc, hw_desc[0]); - if (err) - ecode = caam_jr_strstatus(jrdev, err); - - ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); - memcpy(req->result, state->caam_ctx, digestsize); - kfree(edesc); - - print_hex_dump_debug("ctx@"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx, - ctx->ctx_len, 1); - - 
req->base.complete(&req->base, ecode); + ahash_done_cpy(jrdev, desc, err, context, DMA_BIDIRECTIONAL); } -static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err, - void *context) +static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err, + void *context, enum dma_data_direction dir) { struct ahash_request *req = context; struct ahash_edesc *edesc; @@ -702,7 +654,7 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err, if (err) ecode = caam_jr_strstatus(jrdev, err); - ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_FROM_DEVICE); + ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, dir); switch_buf(state); kfree(edesc); @@ -717,6 +669,18 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err, req->base.complete(&req->base, ecode); } +static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err, + void *context) +{ + ahash_done_switch(jrdev, desc, err, context, DMA_BIDIRECTIONAL); +} + +static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err, + void *context) +{ + ahash_done_switch(jrdev, desc, err, context, DMA_FROM_DEVICE); +} + /* * Allocate an enhanced descriptor, which contains the hardware descriptor * and space for hardware scatter table containing sg_num entries. 
From patchwork Sun Nov 17 22:30:37 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 11248727
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S.
Miller", Tom Lendacky, Gary Hook, linux-crypto@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 04/12] crypto: caam - refactor ahash_edesc_alloc
Date: Mon, 18 Nov 2019 00:30:37 +0200
Message-Id: <1574029845-22796-5-git-send-email-iuliana.prodan@nxp.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
References: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
X-Virus-Scanned: ClamAV using ClamSMTP
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Change the parameters of the ahash_edesc_alloc function:
- remove flags, since they can be computed in ahash_edesc_alloc, the only
  place they are needed;
- pass ahash_request instead of caam_hash_ctx, which can be obtained from
  the request.

Signed-off-by: Iuliana Prodan
Reviewed-by: Horia Geantă
---
 drivers/crypto/caam/caamhash.c | 62 +++++++++++++++---------------------------
 1 file changed, 22 insertions(+), 40 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 3d6e978..5f9f16c 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -685,11 +685,14 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
  * Allocate an enhanced descriptor, which contains the hardware descriptor
  * and space for hardware scatter table containing sg_num entries.
  */
-static struct ahash_edesc *ahash_edesc_alloc(struct caam_hash_ctx *ctx,
+static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
 					     int sg_num, u32 *sh_desc,
-					     dma_addr_t sh_desc_dma,
-					     gfp_t flags)
+					     dma_addr_t sh_desc_dma)
 {
+	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+ GFP_KERNEL : GFP_ATOMIC; struct ahash_edesc *edesc; unsigned int sg_size = sg_num * sizeof(struct sec4_sg_entry); @@ -748,8 +751,6 @@ static int ahash_update_ctx(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC; u8 *buf = current_buf(state); int *buflen = current_buflen(state); u8 *next_buf = alt_buf(state); @@ -805,8 +806,8 @@ static int ahash_update_ctx(struct ahash_request *req) * allocate space for base edesc and hw desc commands, * link tables */ - edesc = ahash_edesc_alloc(ctx, pad_nents, ctx->sh_desc_update, - ctx->sh_desc_update_dma, flags); + edesc = ahash_edesc_alloc(req, pad_nents, ctx->sh_desc_update, + ctx->sh_desc_update_dma); if (!edesc) { dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE); return -ENOMEM; @@ -887,8 +888,6 @@ static int ahash_final_ctx(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC; int buflen = *current_buflen(state); u32 *desc; int sec4_sg_bytes; @@ -900,8 +899,8 @@ static int ahash_final_ctx(struct ahash_request *req) sizeof(struct sec4_sg_entry); /* allocate space for base edesc and hw desc commands, link tables */ - edesc = ahash_edesc_alloc(ctx, 4, ctx->sh_desc_fin, - ctx->sh_desc_fin_dma, flags); + edesc = ahash_edesc_alloc(req, 4, ctx->sh_desc_fin, + ctx->sh_desc_fin_dma); if (!edesc) return -ENOMEM; @@ -953,8 +952,6 @@ static int ahash_finup_ctx(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? 
-				       GFP_KERNEL : GFP_ATOMIC;
 	int buflen = *current_buflen(state);
 	u32 *desc;
 	int sec4_sg_src_index;
@@ -983,9 +980,8 @@ static int ahash_finup_ctx(struct ahash_request *req)
 	sec4_sg_src_index = 1 + (buflen ? 1 : 0);

 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents,
-				  ctx->sh_desc_fin, ctx->sh_desc_fin_dma,
-				  flags);
+	edesc = ahash_edesc_alloc(req, sec4_sg_src_index + mapped_nents,
+				  ctx->sh_desc_fin, ctx->sh_desc_fin_dma);
 	if (!edesc) {
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
@@ -1033,8 +1029,6 @@ static int ahash_digest(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		      GFP_KERNEL : GFP_ATOMIC;
 	u32 *desc;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	int src_nents, mapped_nents;
@@ -1061,9 +1055,8 @@ static int ahash_digest(struct ahash_request *req)
 	}

 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ? mapped_nents : 0,
-				  ctx->sh_desc_digest, ctx->sh_desc_digest_dma,
-				  flags);
+	edesc = ahash_edesc_alloc(req, mapped_nents > 1 ? mapped_nents : 0,
+				  ctx->sh_desc_digest, ctx->sh_desc_digest_dma);
 	if (!edesc) {
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
@@ -1110,8 +1103,6 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		      GFP_KERNEL : GFP_ATOMIC;
 	u8 *buf = current_buf(state);
 	int buflen = *current_buflen(state);
 	u32 *desc;
@@ -1120,8 +1111,8 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	int ret;

 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = ahash_edesc_alloc(ctx, 0, ctx->sh_desc_digest,
-				  ctx->sh_desc_digest_dma, flags);
+	edesc = ahash_edesc_alloc(req, 0, ctx->sh_desc_digest,
+				  ctx->sh_desc_digest_dma);
 	if (!edesc)
 		return -ENOMEM;
@@ -1169,8 +1160,6 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		      GFP_KERNEL : GFP_ATOMIC;
 	u8 *buf = current_buf(state);
 	int *buflen = current_buflen(state);
 	int blocksize = crypto_ahash_blocksize(ahash);
@@ -1224,10 +1213,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 		 * allocate space for base edesc and hw desc commands,
 		 * link tables
 		 */
-		edesc = ahash_edesc_alloc(ctx, pad_nents,
+		edesc = ahash_edesc_alloc(req, pad_nents,
 					  ctx->sh_desc_update_first,
-					  ctx->sh_desc_update_first_dma,
-					  flags);
+					  ctx->sh_desc_update_first_dma);
 		if (!edesc) {
 			dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 			return -ENOMEM;
@@ -1304,8 +1292,6 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		      GFP_KERNEL : GFP_ATOMIC;
 	int buflen = *current_buflen(state);
 	u32 *desc;
 	int sec4_sg_bytes, sec4_sg_src_index, src_nents, mapped_nents;
@@ -1335,9 +1321,8 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			sizeof(struct sec4_sg_entry);

 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents,
-				  ctx->sh_desc_digest, ctx->sh_desc_digest_dma,
-				  flags);
+	edesc = ahash_edesc_alloc(req, sec4_sg_src_index + mapped_nents,
+				  ctx->sh_desc_digest, ctx->sh_desc_digest_dma);
 	if (!edesc) {
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
@@ -1390,8 +1375,6 @@ static int ahash_update_first(struct ahash_request *req)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct device *jrdev = ctx->jrdev;
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		      GFP_KERNEL : GFP_ATOMIC;
 	u8 *next_buf = alt_buf(state);
 	int *next_buflen = alt_buflen(state);
 	int to_hash;
@@ -1438,11 +1421,10 @@ static int ahash_update_first(struct ahash_request *req)
 		 * allocate space for base edesc and hw desc commands,
 		 * link tables
 		 */
-		edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ?
+		edesc = ahash_edesc_alloc(req, mapped_nents > 1 ?
					  mapped_nents : 0,
 					  ctx->sh_desc_update_first,
-					  ctx->sh_desc_update_first_dma,
-					  flags);
+					  ctx->sh_desc_update_first_dma);
 		if (!edesc) {
 			dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 			return -ENOMEM;

From patchwork Sun Nov 17 22:30:38 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 11248733
X-Patchwork-Delegate: herbert@gondor.apana.org.au
X-Mailing-List: linux-crypto@vger.kernel.org
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S.
Miller", Tom Lendacky, Gary Hook, linux-crypto@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 05/12] crypto: caam - refactor RSA private key _done callbacks
Date: Mon, 18 Nov 2019 00:30:38 +0200
Message-Id: <1574029845-22796-6-git-send-email-iuliana.prodan@nxp.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
References: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>

Create a common rsa_priv_f_done function which, based on the private key
form, calls the form-specific unmap function.

Signed-off-by: Iuliana Prodan
Reviewed-by: Horia Geantă
---
 drivers/crypto/caam/caampkc.c | 61 +++++++++++++------------------------------
 1 file changed, 18 insertions(+), 43 deletions(-)

diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 6619c51..ebf1677 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -132,29 +132,13 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
 	akcipher_request_complete(req, ecode);
 }

-static void rsa_priv_f1_done(struct device *dev, u32 *desc, u32 err,
-			     void *context)
-{
-	struct akcipher_request *req = context;
-	struct rsa_edesc *edesc;
-	int ecode = 0;
-
-	if (err)
-		ecode = caam_jr_strstatus(dev, err);
-
-	edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
-
-	rsa_priv_f1_unmap(dev, edesc, req);
-	rsa_io_unmap(dev, edesc, req);
-	kfree(edesc);
-
-	akcipher_request_complete(req, ecode);
-}
-
-static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
-			     void *context)
+static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
+			    void *context)
 {
 	struct akcipher_request *req = context;
+	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct caam_rsa_key *key = &ctx->key;
 	struct rsa_edesc *edesc;
 	int ecode = 0;

@@ -163,26 +147,17 @@ static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
 	edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);

-	rsa_priv_f2_unmap(dev, edesc, req);
-	rsa_io_unmap(dev, edesc, req);
-	kfree(edesc);
-
-	akcipher_request_complete(req, ecode);
-}
-
-static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
-			     void *context)
-{
-	struct akcipher_request *req = context;
-	struct rsa_edesc *edesc;
-	int ecode = 0;
-
-	if (err)
-		ecode = caam_jr_strstatus(dev, err);
-
-	edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
+	switch (key->priv_form) {
+	case FORM1:
+		rsa_priv_f1_unmap(dev, edesc, req);
+		break;
+	case FORM2:
+		rsa_priv_f2_unmap(dev, edesc, req);
+		break;
+	case FORM3:
+		rsa_priv_f3_unmap(dev, edesc, req);
+	}

-	rsa_priv_f3_unmap(dev, edesc, req);
 	rsa_io_unmap(dev, edesc, req);
 	kfree(edesc);

@@ -691,7 +666,7 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);

-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f1_done, req);
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
 	if (!ret)
 		return -EINPROGRESS;

@@ -724,7 +699,7 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);

-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f2_done, req);
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
 	if (!ret)
 		return -EINPROGRESS;

@@ -757,7 +732,7 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);

-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f3_done, req);
+	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
 	if (!ret)
 		return -EINPROGRESS;

From patchwork Sun Nov 17 22:30:39 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 11248725
X-Patchwork-Delegate: herbert@gondor.apana.org.au
X-Mailing-List: linux-crypto@vger.kernel.org
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S.
Miller", Tom Lendacky, Gary Hook, linux-crypto@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 06/12] crypto: caam - change return code in caam_jr_enqueue
 function
Date: Mon, 18 Nov 2019 00:30:39 +0200
Message-Id: <1574029845-22796-7-git-send-email-iuliana.prodan@nxp.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
References: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>

Change the return code of the caam_jr_enqueue function to -EINPROGRESS in
case of success, -ENOSPC in case the CAAM is busy (has no space left in the
job ring queue), and -EIO if it cannot map the caller's descriptor. Also
update the resource-freeing paths of each algorithm type accordingly.

Signed-off-by: Iuliana Prodan
Reviewed-by: Horia Geantă
---
 drivers/crypto/caam/caamalg.c  | 16 ++++------------
 drivers/crypto/caam/caamhash.c | 34 +++++++++++-----------------------
 drivers/crypto/caam/caampkc.c  | 16 ++++++++--------
 drivers/crypto/caam/caamrng.c  |  4 ++--
 drivers/crypto/caam/jr.c       |  8 ++++----
 drivers/crypto/caam/key_gen.c  |  2 +-
 6 files changed, 30 insertions(+), 50 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 6e021692..21b6172 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -1417,9 +1417,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
 			     1);

 	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -1462,9 +1460,7 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
 	desc = edesc->hw_desc;
 	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -1507,9 +1503,7 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
 	desc = edesc->hw_desc;
 	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
@@ -1725,9 +1719,7 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 	desc = edesc->hw_desc;
 	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		skcipher_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 5f9f16c..baf4ab1 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -422,7 +422,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
 	init_completion(&result.completion);

 	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
-	if (!ret) {
+	if (ret == -EINPROGRESS) {
 		/* in progress */
 		wait_for_completion(&result.completion);
 		ret = result.err;
@@ -858,10 +858,8 @@ static int ahash_update_ctx(struct ahash_request *req)
 				     desc_bytes(desc), 1);

 		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
-		if (ret)
+		if (ret != -EINPROGRESS)
 			goto unmap_ctx;
-
-		ret = -EINPROGRESS;
 	} else if (*next_buflen) {
 		scatterwalk_map_and_copy(buf + *buflen, req->src, 0,
 					 req->nbytes, 0);
@@ -936,10 +934,9 @@ static int ahash_final_ctx(struct ahash_request *req)
 			     1);

 	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
-	if (ret)
-		goto unmap_ctx;
+	if (ret == -EINPROGRESS)
+		return ret;

-	return -EINPROGRESS;
 unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
 	kfree(edesc);
@@ -1013,10 +1010,9 @@ static int ahash_finup_ctx(struct ahash_request *req)
 			     1);

 	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
-	if (ret)
-		goto unmap_ctx;
+	if (ret == -EINPROGRESS)
+		return ret;

-	return -EINPROGRESS;
 unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
 	kfree(edesc);
@@ -1086,9 +1082,7 @@ static int ahash_digest(struct ahash_request *req)
 			     1);

 	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1138,9 +1132,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 			     1);

 	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1258,10 +1250,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 				     desc_bytes(desc), 1);

 		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
-		if (ret)
+		if (ret != -EINPROGRESS)
 			goto unmap_ctx;

-		ret = -EINPROGRESS;
 		state->update = ahash_update_ctx;
 		state->finup = ahash_finup_ctx;
 		state->final = ahash_final_ctx;
@@ -1353,9 +1344,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			     1);

 	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
-	if (!ret) {
-		ret = -EINPROGRESS;
-	} else {
+	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1452,10 +1441,9 @@ static int ahash_update_first(struct ahash_request *req)
 				     desc_bytes(desc), 1);

 		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
-		if (ret)
+		if (ret != -EINPROGRESS)
 			goto unmap_ctx;

-		ret = -EINPROGRESS;
 		state->update = ahash_update_ctx;
 		state->finup = ahash_finup_ctx;
 		state->final = ahash_final_ctx;
diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index ebf1677..7f7ea32 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -634,8 +634,8 @@ static int caam_rsa_enc(struct akcipher_request *req)
 	init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);

 	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req);
-	if (!ret)
-		return -EINPROGRESS;
+	if (ret == -EINPROGRESS)
+		return ret;

 	rsa_pub_unmap(jrdev, edesc, req);
@@ -667,8 +667,8 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);

 	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
-	if (!ret)
-		return -EINPROGRESS;
+	if (ret == -EINPROGRESS)
+		return ret;

 	rsa_priv_f1_unmap(jrdev, edesc, req);
@@ -700,8 +700,8 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);

 	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
-	if (!ret)
-		return -EINPROGRESS;
+	if (ret == -EINPROGRESS)
+		return ret;

 	rsa_priv_f2_unmap(jrdev, edesc, req);
@@ -733,8 +733,8 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);

 	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
-	if (!ret)
-		return -EINPROGRESS;
+	if (ret == -EINPROGRESS)
+		return ret;

 	rsa_priv_f3_unmap(jrdev, edesc, req);
diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index e8baaca..e3e4bf2 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -133,7 +133,7 @@ static inline int submit_job(struct caam_rng_ctx *ctx, int to_current)
 	dev_dbg(jrdev, "submitting job %d\n", !(to_current ^ ctx->current_buf));
 	init_completion(&bd->filled);
 	err = caam_jr_enqueue(jrdev, desc, rng_done, ctx);
-	if (err)
+	if (err != -EINPROGRESS)
 		complete(&bd->filled); /* don't wait on failed job*/
 	else
 		atomic_inc(&bd->empty); /* note if pending */
@@ -153,7 +153,7 @@ static int caam_read(struct hwrng *rng, void *data, size_t max, bool wait)
 		if (atomic_read(&bd->empty) == BUF_EMPTY) {
 			err = submit_job(ctx, 1);
 			/* if can't submit job, can't even wait */
-			if (err)
+			if (err != -EINPROGRESS)
 				return 0;
 		}
 		/* no immediate data, so exit if not waiting */
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index fc97cde..df2a050 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -324,8 +324,8 @@ void caam_jr_free(struct device *rdev)
 EXPORT_SYMBOL(caam_jr_free);

 /**
- * caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK,
- * -EBUSY if the queue is full, -EIO if it cannot map the caller's
+ * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS
+ * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's
  * descriptor.
  * @dev:  device of the job ring to be used. This device should have
  *        been assigned prior by caam_jr_register().
@@ -377,7 +377,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 	    CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) {
 		spin_unlock_bh(&jrp->inplock);
 		dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE);
-		return -EBUSY;
+		return -ENOSPC;
 	}

 	head_entry = &jrp->entinfo[head];
@@ -414,7 +414,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 	spin_unlock_bh(&jrp->inplock);

-	return 0;
+	return -EINPROGRESS;
 }
 EXPORT_SYMBOL(caam_jr_enqueue);
diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
index 5a851dd..b0e8a49 100644
--- a/drivers/crypto/caam/key_gen.c
+++ b/drivers/crypto/caam/key_gen.c
@@ -108,7 +108,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out,
 	init_completion(&result.completion);

 	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
-	if (!ret) {
+	if (ret == -EINPROGRESS) {
 		/* in progress */
 		wait_for_completion(&result.completion);
 		ret = result.err;

From patchwork Sun Nov 17 22:30:40 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 11248713
X-Patchwork-Delegate: herbert@gondor.apana.org.au
X-Mailing-List: linux-crypto@vger.kernel.org
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S. Miller", Tom Lendacky, Gary Hook, linux-crypto@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 07/12] crypto: caam - refactor caam_jr_enqueue
Date: Mon, 18 Nov 2019 00:30:40 +0200
Message-Id: <1574029845-22796-8-git-send-email-iuliana.prodan@nxp.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
References: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>

Add a new struct, caam_jr_request_entry, to keep each request's information.
This holds a crypto_async_request, used to determine the request type, and a
bool indicating whether the request has the backlog flag set. This struct is
passed to CAAM via the enqueue function, caam_jr_enqueue. The newly added
caam_jr_enqueue_no_bklog function is used to enqueue a job descriptor head
for cases like caamrng, key_gen and digest_key, where there are no backlogged
requests. This prepares for the upcoming backlogging support in CAAM.

Signed-off-by: Iuliana Prodan
---
 drivers/crypto/caam/caamalg.c  | 29 +++++++++++++++++-----
 drivers/crypto/caam/caamhash.c | 56 ++++++++++++++++++++++++++++++++----------
 drivers/crypto/caam/caampkc.c  | 32 +++++++++++++++++++-----
 drivers/crypto/caam/caampkc.h  |  3 +++
 drivers/crypto/caam/caamrng.c  |  3 ++-
 drivers/crypto/caam/intern.h   | 10 ++++++++
 drivers/crypto/caam/jr.c       | 53 +++++++++++++++++++++++++++++++++------
 drivers/crypto/caam/jr.h       |  4 +++
 drivers/crypto/caam/key_gen.c  |  2 +-
 9 files changed, 158 insertions(+), 34 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 21b6172..abebcfc 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -878,6 +878,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
  * @mapped_dst_nents: number of segments in output h/w link table
  * @sec4_sg_bytes: length of dma mapped sec4_sg space
  * @sec4_sg_dma: bus physical mapped address of h/w link table
+ * @jrentry: information about the current request that is processed by a ring
  * @sec4_sg: pointer to h/w link table
  * @hw_desc: the h/w job descriptor followed by any referenced link tables
  */
@@ -888,6 +889,7 @@ struct aead_edesc {
 	int mapped_dst_nents;
 	int sec4_sg_bytes;
 	dma_addr_t sec4_sg_dma;
+	struct caam_jr_request_entry jrentry;
 	struct sec4_sg_entry *sec4_sg;
 	u32 hw_desc[];
 };
@@ -901,6 +903,7 @@ struct aead_edesc {
  * @iv_dma: dma address of iv for checking continuity and link table
  * @sec4_sg_bytes: length of dma mapped sec4_sg space
  * @sec4_sg_dma: bus physical mapped address of h/w link table
+ * @jrentry: information about the current request that is processed by a ring
  * @sec4_sg: pointer to h/w link table
  * @hw_desc: the h/w job descriptor followed by any referenced link tables
  *	     and IV
@@ -913,6 +916,7 @@ struct skcipher_edesc {
 	dma_addr_t iv_dma;
 	int sec4_sg_bytes;
 	dma_addr_t sec4_sg_dma;
+	struct caam_jr_request_entry jrentry;
 	struct sec4_sg_entry *sec4_sg;
 	u32 hw_desc[0];
 };
@@ -963,7 +967,8 @@ static void skcipher_unmap(struct device *dev, struct skcipher_edesc *edesc,
 static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 			    void *context)
 {
-	struct aead_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct aead_request *req = aead_request_cast(jrentry->base);
 	struct aead_edesc *edesc;
 	int ecode = 0;
@@ -984,7 +989,8 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 				void *context)
 {
-	struct skcipher_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct skcipher_request *req = skcipher_request_cast(jrentry->base);
 	struct skcipher_edesc *edesc;
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	int ivsize = crypto_skcipher_ivsize(skcipher);
@@ -1364,6 +1370,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 	edesc->mapped_dst_nents = mapped_dst_nents;
 	edesc->sec4_sg = (void *)edesc + sizeof(struct aead_edesc) +
 			 desc_bytes;
+	edesc->jrentry.base = &req->base;
+
 	*all_contig_ptr = !(mapped_src_nents > 1);

 	sec4_sg_index = 0;
@@ -1416,7 +1424,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
 	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
@@ -1440,6 +1448,7 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
+	struct caam_jr_request_entry *jrentry;
 	struct device *jrdev = ctx->jrdev;
 	bool all_contig;
 	u32 *desc;
@@ -1459,7 +1468,9 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
 			     desc_bytes(edesc->hw_desc), 1);

 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
@@ -1484,6 +1495,7 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jrdev;
+	struct caam_jr_request_entry *jrentry;
 	bool all_contig;
 	u32 *desc;
 	int ret = 0;
@@ -1502,7 +1514,9 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
 			     desc_bytes(edesc->hw_desc), 1);

 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		aead_unmap(jrdev, edesc, req);
 		kfree(edesc);
@@ -1637,6 +1651,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 	edesc->sec4_sg_bytes = sec4_sg_bytes;
 	edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc +
 						  desc_bytes);
+	edesc->jrentry.base = &req->base;

 	/* Make sure IV is located in a DMAable area */
 	if (ivsize) {
@@ -1698,6 +1713,7 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
 	struct device *jrdev = ctx->jrdev;
+	struct caam_jr_request_entry *jrentry;
 	u32 *desc;
 	int ret = 0;
@@ -1717,8 +1733,9 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 			     desc_bytes(edesc->hw_desc), 1);

 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req);
+	jrentry = &edesc->jrentry;

+	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		skcipher_unmap(jrdev, edesc, req);
 		kfree(edesc);
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index baf4ab1..d9de3dc 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -421,7 +421,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
 	result.err = 0;
 	init_completion(&result.completion);

-	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
+	ret = caam_jr_enqueue_no_bklog(jrdev, desc, split_key_done, &result);
 	if (ret == -EINPROGRESS) {
 		/* in progress */
 		wait_for_completion(&result.completion);
@@ -553,6 +553,7 @@ static int acmac_setkey(struct crypto_ahash *ahash, const u8 *key,
  * @sec4_sg_dma: physical mapped address of h/w link table
  * @src_nents: number of segments in input scatterlist
  * @sec4_sg_bytes: length of dma mapped sec4_sg space
+ * @jrentry: information about the current request that is processed by a ring
  * @hw_desc: the h/w job descriptor followed by any referenced link tables
  * @sec4_sg: h/w link table
  */
@@ -560,6 +561,7 @@ struct ahash_edesc {
 	dma_addr_t sec4_sg_dma;
 	int src_nents;
 	int sec4_sg_bytes;
+	struct caam_jr_request_entry jrentry;
 	u32 hw_desc[DESC_JOB_IO_LEN_MAX / sizeof(u32)] ____cacheline_aligned;
 	struct sec4_sg_entry sec4_sg[0];
 };
@@ -600,7 +602,8 @@ static inline void ahash_unmap_ctx(struct device *dev,
 static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
 				  void *context, enum dma_data_direction dir)
 {
-	struct ahash_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct ahash_request *req = ahash_request_cast(jrentry->base);
 	struct ahash_edesc *edesc;
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	int digestsize = crypto_ahash_digestsize(ahash);
@@ -640,7 +643,8 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
 static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
 				     void *context, enum dma_data_direction dir)
 {
-	struct ahash_request *req = context;
+	struct caam_jr_request_entry *jrentry = context;
+	struct ahash_request *req = ahash_request_cast(jrentry->base);
 	struct ahash_edesc *edesc;
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
@@ -702,6 +706,8 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
 		return NULL;
 	}

+	edesc->jrentry.base = &req->base;
+
 	init_job_desc_shared(edesc->hw_desc, sh_desc_dma, desc_len(sh_desc),
 			     HDR_SHARE_DEFER | HDR_REVERSE);
@@ -760,6 +766,7 @@ static int ahash_update_ctx(struct ahash_request *req)
 	u32 *desc;
 	int src_nents, mapped_nents, sec4_sg_bytes, sec4_sg_src_index;
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret = 0;

 	last_buflen = *next_buflen;
@@ -857,7 +864,9 @@ static int ahash_update_ctx(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);

-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
+		jrentry = &edesc->jrentry;
+
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, jrentry);
 		if (ret != -EINPROGRESS)
 			goto unmap_ctx;
 	} else if (*next_buflen) {
@@ -891,6 +900,7 @@ static int ahash_final_ctx(struct ahash_request *req)
 	int sec4_sg_bytes;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;

 	sec4_sg_bytes = pad_sg_nents(1 + (buflen ? 1 : 0)) *
@@ -933,11 +943,13 @@ static int ahash_final_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
 	if (ret == -EINPROGRESS)
 		return ret;

- unmap_ctx:
+unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
 	kfree(edesc);
 	return ret;
@@ -955,6 +967,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
 	int src_nents, mapped_nents;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;

 	src_nents = sg_nents_for_len(req->src, req->nbytes);
@@ -1009,11 +1022,13 @@ static int ahash_finup_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
 	if (ret == -EINPROGRESS)
 		return ret;

- unmap_ctx:
+unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
 	kfree(edesc);
 	return ret;
@@ -1029,6 +1044,7 @@ static int ahash_digest(struct ahash_request *req)
 	int digestsize = crypto_ahash_digestsize(ahash);
 	int src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;

 	state->buf_dma = 0;
@@ -1081,7 +1097,9 @@ static int ahash_digest(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
@@ -1102,6 +1120,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	u32 *desc;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;

 	/* allocate space for base edesc and hw desc commands, link tables */
@@ -1131,7 +1150,9 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
@@ -1160,6 +1181,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	int in_len = *buflen + req->nbytes, to_hash;
 	int sec4_sg_bytes, src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	u32 *desc;
 	int ret = 0;
@@ -1249,7 +1271,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);

-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
+		jrentry = &edesc->jrentry;
+
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, jrentry);
 		if (ret != -EINPROGRESS)
 			goto unmap_ctx;
@@ -1288,6 +1312,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 	int sec4_sg_bytes, sec4_sg_src_index, src_nents, mapped_nents;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret;

 	src_nents = sg_nents_for_len(req->src, req->nbytes);
@@ -1343,7 +1368,9 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	jrentry = &edesc->jrentry;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
 	if (ret != -EINPROGRESS) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
@@ -1371,6 +1398,7 @@ static int ahash_update_first(struct ahash_request *req)
 	u32 *desc;
 	int src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
+	struct caam_jr_request_entry *jrentry;
 	int ret =
0; *next_buflen = req->nbytes & (blocksize - 1); @@ -1440,7 +1468,9 @@ static int ahash_update_first(struct ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req); + jrentry = &edesc->jrentry; + + ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, jrentry); if (ret != -EINPROGRESS) goto unmap_ctx; diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c index 7f7ea32..bb0e4b9 100644 --- a/drivers/crypto/caam/caampkc.c +++ b/drivers/crypto/caam/caampkc.c @@ -116,7 +116,8 @@ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc, /* RSA Job Completion handler */ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context) { - struct akcipher_request *req = context; + struct caam_jr_request_entry *jrentry = context; + struct akcipher_request *req = akcipher_request_cast(jrentry->base); struct rsa_edesc *edesc; int ecode = 0; @@ -135,7 +136,8 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context) static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err, void *context) { - struct akcipher_request *req = context; + struct caam_jr_request_entry *jrentry = context; + struct akcipher_request *req = akcipher_request_cast(jrentry->base); struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); struct caam_rsa_key *key = &ctx->key; @@ -315,6 +317,8 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req, edesc->mapped_src_nents = mapped_src_nents; edesc->mapped_dst_nents = mapped_dst_nents; + edesc->jrentry.base = &req->base; + edesc->sec4_sg_dma = dma_map_single(dev, edesc->sec4_sg, sec4_sg_bytes, DMA_TO_DEVICE); if (dma_mapping_error(dev, edesc->sec4_sg_dma)) { @@ -609,6 +613,7 @@ static int caam_rsa_enc(struct akcipher_request *req) struct caam_rsa_key *key = &ctx->key; struct device *jrdev = ctx->dev; struct rsa_edesc *edesc; + 
struct caam_jr_request_entry *jrentry; int ret; if (unlikely(!key->n || !key->e)) @@ -633,7 +638,10 @@ static int caam_rsa_enc(struct akcipher_request *req) /* Initialize Job Descriptor */ init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub); - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req); + jrentry = &edesc->jrentry; + jrentry->base = &req->base; + + ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, jrentry); if (ret == -EINPROGRESS) return ret; @@ -651,6 +659,7 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req) struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); struct device *jrdev = ctx->dev; struct rsa_edesc *edesc; + struct caam_jr_request_entry *jrentry; int ret; /* Allocate extended descriptor */ @@ -666,7 +675,10 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req) /* Initialize Job Descriptor */ init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1); - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req); + jrentry = &edesc->jrentry; + jrentry->base = &req->base; + + ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry); if (ret == -EINPROGRESS) return ret; @@ -684,6 +696,7 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req) struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); struct device *jrdev = ctx->dev; struct rsa_edesc *edesc; + struct caam_jr_request_entry *jrentry; int ret; /* Allocate extended descriptor */ @@ -699,7 +712,10 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req) /* Initialize Job Descriptor */ init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2); - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req); + jrentry = &edesc->jrentry; + jrentry->base = &req->base; + + ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry); if (ret == -EINPROGRESS) return ret; @@ -717,6 +733,7 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req) struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); struct 
device *jrdev = ctx->dev; struct rsa_edesc *edesc; + struct caam_jr_request_entry *jrentry; int ret; /* Allocate extended descriptor */ @@ -732,7 +749,10 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req) /* Initialize Job Descriptor */ init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3); - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req); + jrentry = &edesc->jrentry; + jrentry->base = &req->base; + + ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry); if (ret == -EINPROGRESS) return ret; diff --git a/drivers/crypto/caam/caampkc.h b/drivers/crypto/caam/caampkc.h index c68fb4c..fe46d73 100644 --- a/drivers/crypto/caam/caampkc.h +++ b/drivers/crypto/caam/caampkc.h @@ -11,6 +11,7 @@ #ifndef _PKC_DESC_H_ #define _PKC_DESC_H_ #include "compat.h" +#include "intern.h" #include "pdb.h" /** @@ -118,6 +119,7 @@ struct caam_rsa_req_ctx { * @mapped_dst_nents: number of segments in output h/w link table * @sec4_sg_bytes : length of h/w link table * @sec4_sg_dma : dma address of h/w link table + * @jrentry : info about the current request that is processed by a ring * @sec4_sg : pointer to h/w link table * @pdb : specific RSA Protocol Data Block (PDB) * @hw_desc : descriptor followed by link tables if any @@ -129,6 +131,7 @@ struct rsa_edesc { int mapped_dst_nents; int sec4_sg_bytes; dma_addr_t sec4_sg_dma; + struct caam_jr_request_entry jrentry; struct sec4_sg_entry *sec4_sg; union { struct rsa_pub_pdb pub; diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c index e3e4bf2..96891b6 100644 --- a/drivers/crypto/caam/caamrng.c +++ b/drivers/crypto/caam/caamrng.c @@ -132,7 +132,8 @@ static inline int submit_job(struct caam_rng_ctx *ctx, int to_current) dev_dbg(jrdev, "submitting job %d\n", !(to_current ^ ctx->current_buf)); init_completion(&bd->filled); - err = caam_jr_enqueue(jrdev, desc, rng_done, ctx); + + err = caam_jr_enqueue_no_bklog(jrdev, desc, rng_done, ctx); if (err != EINPROGRESS) 
complete(&bd->filled); /* don't wait on failed job*/ else diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h index c7c10c9..58be66c 100644 --- a/drivers/crypto/caam/intern.h +++ b/drivers/crypto/caam/intern.h @@ -11,6 +11,7 @@ #define INTERN_H #include "ctrl.h" +#include "regs.h" /* Currently comes from Kconfig param as a ^2 (driver-required) */ #define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE) @@ -104,6 +105,15 @@ struct caam_drv_private { #endif }; +/* + * Storage for tracking each request that is processed by a ring + */ +struct caam_jr_request_entry { + /* Common attributes for async crypto requests */ + struct crypto_async_request *base; + bool bklog; /* Stored to determine if the request needs backlog */ +}; + #ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API int caam_algapi_init(struct device *dev); diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c index df2a050..544cafa 100644 --- a/drivers/crypto/caam/jr.c +++ b/drivers/crypto/caam/jr.c @@ -324,9 +324,9 @@ void caam_jr_free(struct device *rdev) EXPORT_SYMBOL(caam_jr_free); /** - * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS - * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's - * descriptor. + * caam_jr_enqueue_no_bklog() - Enqueue a job descriptor head for no + * backlogging requests. Returns -EINPROGRESS if OK, -ENOSPC if the queue + * is full, -EIO if it cannot map the caller's descriptor. * @dev: device of the job ring to be used. This device should have * been assigned prior by caam_jr_register(). * @desc: points to a job descriptor that execute our request. All @@ -351,10 +351,10 @@ EXPORT_SYMBOL(caam_jr_free); * @areq: optional pointer to a user argument for use at callback * time. 
**/ -int caam_jr_enqueue(struct device *dev, u32 *desc, - void (*cbk)(struct device *dev, u32 *desc, - u32 status, void *areq), - void *areq) +int caam_jr_enqueue_no_bklog(struct device *dev, u32 *desc, + void (*cbk)(struct device *dev, u32 *desc, + u32 status, void *areq), + void *areq) { struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); struct caam_jrentry_info *head_entry; @@ -416,6 +416,45 @@ int caam_jr_enqueue(struct device *dev, u32 *desc, return -EINPROGRESS; } +EXPORT_SYMBOL(caam_jr_enqueue_no_bklog); + +/** + * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS + * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's + * descriptor. + * @dev: device of the job ring to be used. This device should have + * been assigned prior by caam_jr_register(). + * @desc: points to a job descriptor that execute our request. All + * descriptors (and all referenced data) must be in a DMAable + * region, and all data references must be physical addresses + * accessible to CAAM (i.e. within a PAMU window granted + * to it). + * @cbk: pointer to a callback function to be invoked upon completion + * of this request. This has the form: + * callback(struct device *dev, u32 *desc, u32 stat, void *arg) + * where: + * @dev: contains the job ring device that processed this + * response. + * @desc: descriptor that initiated the request, same as + * "desc" being argued to caam_jr_enqueue(). + * @status: untranslated status received from CAAM. See the + * reference manual for a detailed description of + * error meaning, or see the JRSTA definitions in the + * register header file + * @areq: optional pointer to an argument passed with the + * original request + * @areq: optional pointer to a user argument for use at callback + * time. 
+ **/ +int caam_jr_enqueue(struct device *dev, u32 *desc, + void (*cbk)(struct device *dev, u32 *desc, + u32 status, void *areq), + void *areq) +{ + struct caam_jr_request_entry *jrentry = areq; + + return caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry); +} EXPORT_SYMBOL(caam_jr_enqueue); /* diff --git a/drivers/crypto/caam/jr.h b/drivers/crypto/caam/jr.h index eab6115..c47a0cd 100644 --- a/drivers/crypto/caam/jr.h +++ b/drivers/crypto/caam/jr.h @@ -11,6 +11,10 @@ /* Prototypes for backend-level services exposed to APIs */ struct device *caam_jr_alloc(void); void caam_jr_free(struct device *rdev); +int caam_jr_enqueue_no_bklog(struct device *dev, u32 *desc, + void (*cbk)(struct device *dev, u32 *desc, + u32 status, void *areq), + void *areq); int caam_jr_enqueue(struct device *dev, u32 *desc, void (*cbk)(struct device *dev, u32 *desc, u32 status, void *areq), diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c index b0e8a49..854e718 100644 --- a/drivers/crypto/caam/key_gen.c +++ b/drivers/crypto/caam/key_gen.c @@ -107,7 +107,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out, result.err = 0; init_completion(&result.completion); - ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result); + ret = caam_jr_enqueue_no_bklog(jrdev, desc, split_key_done, &result); if (ret == -EINPROGRESS) { /* in progress */ wait_for_completion(&result.completion); From patchwork Sun Nov 17 22:30:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Iuliana Prodan X-Patchwork-Id: 11248721 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4C25F1599 for ; Sun, 17 Nov 2019 22:31:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 37DF820740 for ; Sun, 17 Nov 
From: Iuliana Prodan
Subject: [PATCH 08/12] crypto: caam - support crypto_engine framework for SKCIPHER algorithms
Date: Mon, 18 Nov 2019 00:30:41 +0200

Integrate crypto_engine into CAAM, to make use of the engine queue. Add support for SKCIPHER algorithms. This is intended to be used for CAAM backlogging support. The requests, with backlog flag (e.g. from dm-crypt), will be listed into the crypto-engine queue and processed by CAAM when free.
This changes the return codes for caam_jr_enqueue: -EINPROGRESS if OK, -EBUSY if request is backlogged, -ENOSPC if the queue is full, -EIO if it cannot map the caller's descriptor, -EINVAL if crypto_tfm not supported by crypto_engine. Signed-off-by: Iuliana Prodan Signed-off-by: Franck LENORMAND Reviewed-by: Horia Geantă --- drivers/crypto/caam/Kconfig | 1 + drivers/crypto/caam/caamalg.c | 84 +++++++++++++++++++++++++++++++++++-------- drivers/crypto/caam/intern.h | 2 ++ drivers/crypto/caam/jr.c | 51 ++++++++++++++++++++++++-- 4 files changed, 122 insertions(+), 16 deletions(-) diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig index 87053e4..1930e19 100644 --- a/drivers/crypto/caam/Kconfig +++ b/drivers/crypto/caam/Kconfig @@ -33,6 +33,7 @@ config CRYPTO_DEV_FSL_CAAM_DEBUG menuconfig CRYPTO_DEV_FSL_CAAM_JR tristate "Freescale CAAM Job Ring driver backend" + select CRYPTO_ENGINE default y help Enables the driver module for Job Rings which are part of diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index abebcfc..23de94d 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -56,6 +56,7 @@ #include "sg_sw_sec4.h" #include "key_gen.h" #include "caamalg_desc.h" +#include /* * crypto alg @@ -101,6 +102,7 @@ struct caam_skcipher_alg { * per-session context */ struct caam_ctx { + struct crypto_engine_ctx enginectx; u32 sh_desc_enc[DESC_MAX_USED_LEN]; u32 sh_desc_dec[DESC_MAX_USED_LEN]; u8 key[CAAM_MAX_KEY_SIZE]; @@ -114,6 +116,12 @@ struct caam_ctx { unsigned int authsize; }; +struct caam_skcipher_req_ctx { + struct skcipher_edesc *edesc; + void (*skcipher_op_done)(struct device *jrdev, u32 *desc, u32 err, + void *context); +}; + static int aead_null_set_sh_desc(struct crypto_aead *aead) { struct caam_ctx *ctx = crypto_aead_ctx(aead); @@ -992,13 +1000,15 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err, struct caam_jr_request_entry *jrentry = context; struct 
skcipher_request *req = skcipher_request_cast(jrentry->base); struct skcipher_edesc *edesc; + struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req); struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); + struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev); int ivsize = crypto_skcipher_ivsize(skcipher); int ecode = 0; dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]); + edesc = rctx->edesc; if (err) ecode = caam_jr_strstatus(jrdev, err); @@ -1024,7 +1034,14 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err, kfree(edesc); - skcipher_request_complete(req, ecode); + /* + * If no backlog flag, the completion of the request is done + * by CAAM, not crypto engine. + */ + if (!jrentry->bklog) + skcipher_request_complete(req, ecode); + else + crypto_finalize_skcipher_request(jrp->engine, req, ecode); } /* @@ -1553,6 +1570,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, { struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher); + struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req); struct device *jrdev = ctx->jrdev; gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? 
GFP_KERNEL : GFP_ATOMIC; @@ -1653,6 +1671,9 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, desc_bytes); edesc->jrentry.base = &req->base; + rctx->edesc = edesc; + rctx->skcipher_op_done = skcipher_crypt_done; + /* Make sure IV is located in a DMAable area */ if (ivsize) { iv = (u8 *)edesc->sec4_sg + sec4_sg_bytes; @@ -1707,13 +1728,37 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, return edesc; } +static int skcipher_do_one_req(struct crypto_engine *engine, void *areq) +{ + struct skcipher_request *req = skcipher_request_cast(areq); + struct caam_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); + struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req); + struct caam_jr_request_entry *jrentry; + u32 *desc = rctx->edesc->hw_desc; + int ret; + + jrentry = &rctx->edesc->jrentry; + jrentry->bklog = true; + + ret = caam_jr_enqueue_no_bklog(ctx->jrdev, desc, + rctx->skcipher_op_done, jrentry); + + if (ret != -EINPROGRESS) { + skcipher_unmap(ctx->jrdev, rctx->edesc, req); + kfree(rctx->edesc); + } else { + ret = 0; + } + + return ret; +} + static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt) { struct skcipher_edesc *edesc; struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher); struct device *jrdev = ctx->jrdev; - struct caam_jr_request_entry *jrentry; u32 *desc; int ret = 0; @@ -1727,16 +1772,15 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt) /* Create and submit job descriptor*/ init_skcipher_job(req, edesc, encrypt); + desc = edesc->hw_desc; print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); - - desc = edesc->hw_desc; - jrentry = &edesc->jrentry; + DUMP_PREFIX_ADDRESS, 16, 4, desc, + desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, 
jrentry); - if (ret != -EINPROGRESS) { + ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, + &edesc->jrentry); + if ((ret != -EINPROGRESS) && (ret != -EBUSY)) { skcipher_unmap(jrdev, edesc, req); kfree(edesc); } @@ -3272,7 +3316,9 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam, dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_enc, offsetof(struct caam_ctx, - sh_desc_enc_dma), + sh_desc_enc_dma) - + offsetof(struct caam_ctx, + sh_desc_enc), ctx->dir, DMA_ATTR_SKIP_CPU_SYNC); if (dma_mapping_error(ctx->jrdev, dma_addr)) { dev_err(ctx->jrdev, "unable to map key, shared descriptors\n"); @@ -3282,8 +3328,12 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam, ctx->sh_desc_enc_dma = dma_addr; ctx->sh_desc_dec_dma = dma_addr + offsetof(struct caam_ctx, - sh_desc_dec); - ctx->key_dma = dma_addr + offsetof(struct caam_ctx, key); + sh_desc_dec) - + offsetof(struct caam_ctx, + sh_desc_enc); + ctx->key_dma = dma_addr + offsetof(struct caam_ctx, key) - + offsetof(struct caam_ctx, + sh_desc_enc); /* copy descriptor header template value */ ctx->cdata.algtype = OP_TYPE_CLASS1_ALG | caam->class1_alg_type; @@ -3297,6 +3347,11 @@ static int caam_cra_init(struct crypto_skcipher *tfm) struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct caam_skcipher_alg *caam_alg = container_of(alg, typeof(*caam_alg), skcipher); + struct caam_ctx *ctx = crypto_skcipher_ctx(tfm); + + crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx)); + + ctx->enginectx.op.do_one_request = skcipher_do_one_req; return caam_init_common(crypto_skcipher_ctx(tfm), &caam_alg->caam, false); @@ -3315,7 +3370,8 @@ static int caam_aead_init(struct crypto_aead *tfm) static void caam_exit_common(struct caam_ctx *ctx) { dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_enc_dma, - offsetof(struct caam_ctx, sh_desc_enc_dma), + offsetof(struct caam_ctx, sh_desc_enc_dma) - + offsetof(struct caam_ctx, sh_desc_enc), ctx->dir, 
DMA_ATTR_SKIP_CPU_SYNC); caam_jr_free(ctx->jrdev); } diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h index 58be66c..31abb94 100644 --- a/drivers/crypto/caam/intern.h +++ b/drivers/crypto/caam/intern.h @@ -12,6 +12,7 @@ #include "ctrl.h" #include "regs.h" +#include /* Currently comes from Kconfig param as a ^2 (driver-required) */ #define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE) @@ -61,6 +62,7 @@ struct caam_drv_private_jr { int out_ring_read_index; /* Output index "tail" */ int tail; /* entinfo (s/w ring) tail index */ void *outring; /* Base of output ring, DMA-safe */ + struct crypto_engine *engine; }; /* diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c index 544cafa..5c55d3d 100644 --- a/drivers/crypto/caam/jr.c +++ b/drivers/crypto/caam/jr.c @@ -62,6 +62,15 @@ static void unregister_algs(void) mutex_unlock(&algs_lock); } +static void caam_jr_crypto_engine_exit(void *data) +{ + struct device *jrdev = data; + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev); + + /* Free the resources of crypto-engine */ + crypto_engine_exit(jrpriv->engine); +} + static int caam_reset_hw_jr(struct device *dev) { struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); @@ -418,10 +427,23 @@ int caam_jr_enqueue_no_bklog(struct device *dev, u32 *desc, } EXPORT_SYMBOL(caam_jr_enqueue_no_bklog); +static int transfer_request_to_engine(struct crypto_engine *engine, + struct crypto_async_request *req) +{ + switch (crypto_tfm_alg_type(req->tfm)) { + case CRYPTO_ALG_TYPE_SKCIPHER: + return crypto_transfer_skcipher_request_to_engine(engine, + skcipher_request_cast(req)); + default: + return -EINVAL; + } +} + /** * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS - * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's - * descriptor. 
+ * if OK, -EBUSY if request is backlogged, -ENOSPC if the queue is full, + * -EIO if it cannot map the caller's descriptor, -EINVAL if crypto_tfm + * not supported by crypto_engine. * @dev: device of the job ring to be used. This device should have * been assigned prior by caam_jr_register(). * @desc: points to a job descriptor that execute our request. All @@ -451,7 +473,12 @@ int caam_jr_enqueue(struct device *dev, u32 *desc, u32 status, void *areq), void *areq) { + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(dev); struct caam_jr_request_entry *jrentry = areq; + struct crypto_async_request *req = jrentry->base; + + if (req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG) + return transfer_request_to_engine(jrpriv->engine, req); return caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry); } @@ -577,6 +604,26 @@ static int caam_jr_probe(struct platform_device *pdev) return error; } + /* Initialize crypto engine */ + jrpriv->engine = crypto_engine_alloc_init(jrdev, false); + if (!jrpriv->engine) { + dev_err(jrdev, "Could not init crypto-engine\n"); + return -ENOMEM; + } + + /* Start crypto engine */ + error = crypto_engine_start(jrpriv->engine); + if (error) { + dev_err(jrdev, "Could not start crypto-engine\n"); + crypto_engine_exit(jrpriv->engine); + return error; + } + + error = devm_add_action_or_reset(jrdev, caam_jr_crypto_engine_exit, + jrdev); + if (error) + return error; + /* Identify the interrupt */ jrpriv->irq = irq_of_parse_and_map(nprop, 0); if (!jrpriv->irq) { From patchwork Sun Nov 17 22:30:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Iuliana Prodan X-Patchwork-Id: 11248723 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EB29B14DB for ; Sun, 17 Nov 2019 22:31:38 +0000 (UTC) Received: from vger.kernel.org 
From: Iuliana Prodan
Subject: [PATCH 09/12] crypto: caam - bypass crypto-engine sw queue, if empty
Date: Mon, 18 Nov 2019 00:30:42 +0200

Bypass the crypto-engine software queue, if empty, and send the request directly to hardware. If this returns -ENOSPC, transfer the request to crypto-engine and let it handle it.
Signed-off-by: Iuliana Prodan --- drivers/crypto/caam/jr.c | 29 ++++++++++++++++++++++++++--- 1 file changed, 26 insertions(+), 3 deletions(-) diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c index 5c55d3d..ddf3d39 100644 --- a/drivers/crypto/caam/jr.c +++ b/drivers/crypto/caam/jr.c @@ -476,10 +476,33 @@ int caam_jr_enqueue(struct device *dev, u32 *desc, struct caam_drv_private_jr *jrpriv = dev_get_drvdata(dev); struct caam_jr_request_entry *jrentry = areq; struct crypto_async_request *req = jrentry->base; + int ret; - if (req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG) - return transfer_request_to_engine(jrpriv->engine, req); - + if (req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG) { + if (crypto_queue_len(&jrpriv->engine->queue) == 0) { + /* + * send the request to CAAM, if crypto-engine queue + * is empty + */ + ret = caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry); + if (ret == -ENOSPC) + /* + * CAAM has no space, so transfer the request + * to crypto-engine + */ + return transfer_request_to_engine(jrpriv->engine, + req); + else + return ret; + } else { + /* + * crypto-engine queue is not empty, so transfer the + * request to crypto-engine, to keep the order + * of requests + */ + return transfer_request_to_engine(jrpriv->engine, req); + } + } return caam_jr_enqueue_no_bklog(dev, desc, cbk, jrentry); } EXPORT_SYMBOL(caam_jr_enqueue); From patchwork Sun Nov 17 22:30:43 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Iuliana Prodan X-Patchwork-Id: 11248719 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3011914DB for ; Sun, 17 Nov 2019 22:31:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 123AF20740 for ; Sun, 17 Nov 2019 22:31:29 +0000 (UTC) 
From: Iuliana Prodan
Subject: [PATCH 10/12] crypto: caam - add crypto_engine support for AEAD algorithms
Date: Mon, 18 Nov 2019 00:30:43 +0200

Add crypto_engine support for AEAD algorithms, to make use of the engine queue. The requests, with backlog flag, will be listed into the crypto-engine queue and processed by CAAM when free. In case the queue is empty, the request is directly sent to CAAM.
Signed-off-by: Iuliana Prodan --- drivers/crypto/caam/caamalg.c | 80 +++++++++++++++++++++++++++++++++---------- drivers/crypto/caam/jr.c | 3 ++ 2 files changed, 64 insertions(+), 19 deletions(-) diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index 23de94d..786713a 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -122,6 +122,12 @@ struct caam_skcipher_req_ctx { void *context); }; +struct caam_aead_req_ctx { + struct aead_edesc *edesc; + void (*aead_op_done)(struct device *jrdev, u32 *desc, u32 err, + void *context); +}; + static int aead_null_set_sh_desc(struct crypto_aead *aead) { struct caam_ctx *ctx = crypto_aead_ctx(aead); @@ -977,12 +983,14 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err, { struct caam_jr_request_entry *jrentry = context; struct aead_request *req = aead_request_cast(jrentry->base); + struct caam_aead_req_ctx *rctx = aead_request_ctx(req); + struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev); struct aead_edesc *edesc; int ecode = 0; dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - edesc = container_of(desc, struct aead_edesc, hw_desc[0]); + edesc = rctx->edesc; if (err) ecode = caam_jr_strstatus(jrdev, err); @@ -991,7 +999,14 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err, kfree(edesc); - aead_request_complete(req, ecode); + /* + * If no backlog flag, the completion of the request is done + * by CAAM, not crypto engine. 
+ */ + if (!jrentry->bklog) + aead_request_complete(req, ecode); + else + crypto_finalize_aead_request(jrp->engine, req, ecode); } static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err, @@ -1287,6 +1302,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, struct crypto_aead *aead = crypto_aead_reqtfm(req); struct caam_ctx *ctx = crypto_aead_ctx(aead); struct device *jrdev = ctx->jrdev; + struct caam_aead_req_ctx *rctx = aead_request_ctx(req); gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC; int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0; @@ -1389,6 +1405,9 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, desc_bytes; edesc->jrentry.base = &req->base; + rctx->edesc = edesc; + rctx->aead_op_done = aead_crypt_done; + *all_contig_ptr = !(mapped_src_nents > 1); sec4_sg_index = 0; @@ -1442,7 +1461,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt) 1); ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry); - if (ret != -EINPROGRESS) { + if ((ret != -EINPROGRESS) && (ret != -EBUSY)) { aead_unmap(jrdev, edesc, req); kfree(edesc); } @@ -1465,7 +1484,6 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt) struct aead_edesc *edesc; struct crypto_aead *aead = crypto_aead_reqtfm(req); struct caam_ctx *ctx = crypto_aead_ctx(aead); - struct caam_jr_request_entry *jrentry; struct device *jrdev = ctx->jrdev; bool all_contig; u32 *desc; @@ -1479,16 +1497,14 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt) /* Create and submit job descriptor */ init_authenc_job(req, edesc, all_contig, encrypt); + desc = edesc->hw_desc; print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); - - desc = edesc->hw_desc; - jrentry = &edesc->jrentry; + DUMP_PREFIX_ADDRESS, 16, 4, desc, + desc_bytes(desc), 1); - ret = 
caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry); - if (ret != -EINPROGRESS) { + ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry); + if ((ret != -EINPROGRESS) && (ret != -EBUSY)) { aead_unmap(jrdev, edesc, req); kfree(edesc); } @@ -1506,13 +1522,37 @@ static int aead_decrypt(struct aead_request *req) return aead_crypt(req, false); } +static int aead_do_one_req(struct crypto_engine *engine, void *areq) +{ + struct aead_request *req = aead_request_cast(areq); + struct caam_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req)); + struct caam_aead_req_ctx *rctx = aead_request_ctx(req); + struct caam_jr_request_entry *jrentry; + u32 *desc = rctx->edesc->hw_desc; + int ret; + + jrentry = &rctx->edesc->jrentry; + jrentry->bklog = true; + + ret = caam_jr_enqueue_no_bklog(ctx->jrdev, desc, rctx->aead_op_done, + jrentry); + + if (ret != -EINPROGRESS) { + aead_unmap(ctx->jrdev, rctx->edesc, req); + kfree(rctx->edesc); + } else { + ret = 0; + } + + return ret; +} + static inline int gcm_crypt(struct aead_request *req, bool encrypt) { struct aead_edesc *edesc; struct crypto_aead *aead = crypto_aead_reqtfm(req); struct caam_ctx *ctx = crypto_aead_ctx(aead); struct device *jrdev = ctx->jrdev; - struct caam_jr_request_entry *jrentry; bool all_contig; u32 *desc; int ret = 0; @@ -1525,16 +1565,14 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt) /* Create and submit job descriptor */ init_gcm_job(req, edesc, all_contig, encrypt); + desc = edesc->hw_desc; print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); - - desc = edesc->hw_desc; - jrentry = &edesc->jrentry; + DUMP_PREFIX_ADDRESS, 16, 4, desc, + desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, jrentry); - if (ret != -EINPROGRESS) { + ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry); + if ((ret != -EINPROGRESS) && (ret != -EBUSY)) { aead_unmap(jrdev, 
edesc, req); kfree(edesc); } @@ -3364,6 +3402,10 @@ static int caam_aead_init(struct crypto_aead *tfm) container_of(alg, struct caam_aead_alg, aead); struct caam_ctx *ctx = crypto_aead_ctx(tfm); + crypto_aead_set_reqsize(tfm, sizeof(struct caam_aead_req_ctx)); + + ctx->enginectx.op.do_one_request = aead_do_one_req; + return caam_init_common(ctx, &caam_alg->caam, !caam_alg->caam.nodkp); } diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c index ddf3d39..7e6632d 100644 --- a/drivers/crypto/caam/jr.c +++ b/drivers/crypto/caam/jr.c @@ -434,6 +434,9 @@ static int transfer_request_to_engine(struct crypto_engine *engine, case CRYPTO_ALG_TYPE_SKCIPHER: return crypto_transfer_skcipher_request_to_engine(engine, skcipher_request_cast(req)); + case CRYPTO_ALG_TYPE_AEAD: + return crypto_transfer_aead_request_to_engine(engine, + aead_request_cast(req)); default: return -EINVAL; }

From patchwork Sun Nov 17 22:30:44 2019
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 11248715
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S. Miller", Tom Lendacky, Gary Hook, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 11/12] crypto: caam - add crypto_engine support for RSA algorithms
Date: Mon, 18 Nov 2019 00:30:44 +0200
Message-Id: <1574029845-22796-12-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
References: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>

Add crypto_engine support for RSA algorithms, to make use of the engine
queue. Requests with the backlog flag are placed on the crypto-engine
queue and processed by CAAM when it is free. If the queue is empty, the
request is sent directly to CAAM.
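The completion split these patches introduce (backlogged requests must be finalized through crypto-engine so it can dequeue the next request, while directly-submitted requests complete via the normal callback) can be sketched in isolation. struct completion_log and complete_request are hypothetical stand-ins for the driver's akcipher_request_complete()/crypto_finalize_akcipher_request() paths:

```c
#include <stdbool.h>

/* Hypothetical model of the done-callback branch added by the patch. */
struct completion_log {
	int direct_done;	/* completions via *_request_complete()      */
	int engine_done;	/* completions via crypto_finalize_*_request */
};

/*
 * The done callback checks the per-request bklog flag set by the
 * crypto-engine do_one_request path: if the request never went through
 * the engine, complete it directly; otherwise hand it back to the
 * engine so it can advance its queue.
 */
void complete_request(struct completion_log *log, bool bklog)
{
	if (!bklog)
		log->direct_done++;	/* e.g. akcipher_request_complete() */
	else
		log->engine_done++;	/* e.g. crypto_finalize_akcipher_request() */
}
```

In the driver, bklog is set to true only in the do_one_request handlers (akcipher_do_one_req and friends), i.e. exactly when the request was routed through crypto-engine.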
Signed-off-by: Iuliana Prodan --- drivers/crypto/caam/caampkc.c | 124 ++++++++++++++++++++++++++++++------------ drivers/crypto/caam/caampkc.h | 8 +++ drivers/crypto/caam/jr.c | 3 + 3 files changed, 101 insertions(+), 34 deletions(-) diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c index bb0e4b9..8ffce06 100644 --- a/drivers/crypto/caam/caampkc.c +++ b/drivers/crypto/caam/caampkc.c @@ -118,19 +118,28 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context) { struct caam_jr_request_entry *jrentry = context; struct akcipher_request *req = akcipher_request_cast(jrentry->base); + struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req); + struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); struct rsa_edesc *edesc; int ecode = 0; if (err) ecode = caam_jr_strstatus(dev, err); - edesc = container_of(desc, struct rsa_edesc, hw_desc[0]); + edesc = req_ctx->edesc; rsa_pub_unmap(dev, edesc, req); rsa_io_unmap(dev, edesc, req); kfree(edesc); - akcipher_request_complete(req, ecode); + /* + * If no backlog flag, the completion of the request is done + * by CAAM, not crypto engine. 
+ */ + if (!jrentry->bklog) + akcipher_request_complete(req, ecode); + else + crypto_finalize_akcipher_request(jrp->engine, req, ecode); } static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err, @@ -139,15 +148,17 @@ static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err, struct caam_jr_request_entry *jrentry = context; struct akcipher_request *req = akcipher_request_cast(jrentry->base); struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); + struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); struct caam_rsa_key *key = &ctx->key; + struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req); struct rsa_edesc *edesc; int ecode = 0; if (err) ecode = caam_jr_strstatus(dev, err); - edesc = container_of(desc, struct rsa_edesc, hw_desc[0]); + edesc = req_ctx->edesc; switch (key->priv_form) { case FORM1: @@ -163,7 +174,14 @@ static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err, rsa_io_unmap(dev, edesc, req); kfree(edesc); - akcipher_request_complete(req, ecode); + /* + * If no backlog flag, the completion of the request is done + * by CAAM, not crypto engine. 
+ */ + if (!jrentry->bklog) + akcipher_request_complete(req, ecode); + else + crypto_finalize_akcipher_request(jrp->engine, req, ecode); } /** @@ -311,14 +329,16 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req, edesc->src_nents = src_nents; edesc->dst_nents = dst_nents; + edesc->jrentry.base = &req->base; + + req_ctx->edesc = edesc; + if (!sec4_sg_bytes) return edesc; edesc->mapped_src_nents = mapped_src_nents; edesc->mapped_dst_nents = mapped_dst_nents; - edesc->jrentry.base = &req->base; - edesc->sec4_sg_dma = dma_map_single(dev, edesc->sec4_sg, sec4_sg_bytes, DMA_TO_DEVICE); if (dma_mapping_error(dev, edesc->sec4_sg_dma)) { @@ -343,6 +363,34 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req, return ERR_PTR(-ENOMEM); } +static int akcipher_do_one_req(struct crypto_engine *engine, void *areq) +{ + int ret; + struct akcipher_request *req = akcipher_request_cast(areq); + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); + struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req); + struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); + struct caam_jr_request_entry *jrentry; + struct device *jrdev = ctx->dev; + u32 *desc = req_ctx->edesc->hw_desc; + + jrentry = &req_ctx->edesc->jrentry; + jrentry->bklog = true; + + ret = caam_jr_enqueue_no_bklog(jrdev, desc, req_ctx->akcipher_op_done, + jrentry); + + if (ret != -EINPROGRESS) { + rsa_pub_unmap(jrdev, req_ctx->edesc, req); + rsa_io_unmap(jrdev, req_ctx->edesc, req); + kfree(req_ctx->edesc); + } else { + ret = 0; + } + + return ret; +} + static int set_rsa_pub_pdb(struct akcipher_request *req, struct rsa_edesc *edesc) { @@ -610,10 +658,11 @@ static int caam_rsa_enc(struct akcipher_request *req) { struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); + struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req); struct caam_rsa_key *key = &ctx->key; struct device *jrdev = ctx->dev; struct rsa_edesc *edesc; - 
struct caam_jr_request_entry *jrentry; + u32 *desc; int ret; if (unlikely(!key->n || !key->e)) @@ -635,14 +684,14 @@ static int caam_rsa_enc(struct akcipher_request *req) if (ret) goto init_fail; - /* Initialize Job Descriptor */ - init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub); + desc = edesc->hw_desc; - jrentry = &edesc->jrentry; - jrentry->base = &req->base; + /* Initialize Job Descriptor */ + init_rsa_pub_desc(desc, &edesc->pdb.pub); - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, jrentry); - if (ret == -EINPROGRESS) + req_ctx->akcipher_op_done = rsa_pub_done; + ret = caam_jr_enqueue(jrdev, desc, rsa_pub_done, &edesc->jrentry); + if (ret == -EINPROGRESS || ret == -EBUSY) return ret; rsa_pub_unmap(jrdev, edesc, req); @@ -657,9 +706,10 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req) { struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); + struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req); struct device *jrdev = ctx->dev; struct rsa_edesc *edesc; - struct caam_jr_request_entry *jrentry; + u32 *desc; int ret; /* Allocate extended descriptor */ @@ -672,14 +722,14 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req) if (ret) goto init_fail; - /* Initialize Job Descriptor */ - init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1); + desc = edesc->hw_desc; - jrentry = &edesc->jrentry; - jrentry->base = &req->base; + /* Initialize Job Descriptor */ + init_rsa_priv_f1_desc(desc, &edesc->pdb.priv_f1); - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry); - if (ret == -EINPROGRESS) + req_ctx->akcipher_op_done = rsa_priv_f_done; + ret = caam_jr_enqueue(jrdev, desc, rsa_priv_f_done, &edesc->jrentry); + if (ret == -EINPROGRESS || ret == -EBUSY) return ret; rsa_priv_f1_unmap(jrdev, edesc, req); @@ -694,9 +744,10 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req) { struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); 
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); + struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req); struct device *jrdev = ctx->dev; struct rsa_edesc *edesc; - struct caam_jr_request_entry *jrentry; + u32 *desc; int ret; /* Allocate extended descriptor */ @@ -709,14 +760,14 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req) if (ret) goto init_fail; - /* Initialize Job Descriptor */ - init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2); + desc = edesc->hw_desc; - jrentry = &edesc->jrentry; - jrentry->base = &req->base; + /* Initialize Job Descriptor */ + init_rsa_priv_f2_desc(desc, &edesc->pdb.priv_f2); - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry); - if (ret == -EINPROGRESS) + req_ctx->akcipher_op_done = rsa_priv_f_done; + ret = caam_jr_enqueue(jrdev, desc, rsa_priv_f_done, &edesc->jrentry); + if (ret == -EINPROGRESS || ret == -EBUSY) return ret; rsa_priv_f2_unmap(jrdev, edesc, req); @@ -731,9 +782,10 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req) { struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm); + struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req); struct device *jrdev = ctx->dev; struct rsa_edesc *edesc; - struct caam_jr_request_entry *jrentry; + u32 *desc; int ret; /* Allocate extended descriptor */ @@ -746,14 +798,14 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req) if (ret) goto init_fail; - /* Initialize Job Descriptor */ - init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3); + desc = edesc->hw_desc; - jrentry = &edesc->jrentry; - jrentry->base = &req->base; + /* Initialize Job Descriptor */ + init_rsa_priv_f3_desc(desc, &edesc->pdb.priv_f3); - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, jrentry); - if (ret == -EINPROGRESS) + req_ctx->akcipher_op_done = rsa_priv_f_done; + ret = caam_jr_enqueue(jrdev, desc, rsa_priv_f_done, &edesc->jrentry); + if (ret == -EINPROGRESS || 
ret == -EBUSY) return ret; rsa_priv_f3_unmap(jrdev, edesc, req); @@ -1049,6 +1101,10 @@ static int caam_rsa_init_tfm(struct crypto_akcipher *tfm) return -ENOMEM; } + ctx->enginectx.op.do_one_request = akcipher_do_one_req; + + akcipher_set_reqsize(tfm, sizeof(struct caam_rsa_req_ctx)); + return 0; } diff --git a/drivers/crypto/caam/caampkc.h b/drivers/crypto/caam/caampkc.h index fe46d73..d31b040 100644 --- a/drivers/crypto/caam/caampkc.h +++ b/drivers/crypto/caam/caampkc.h @@ -13,6 +13,7 @@ #include "compat.h" #include "intern.h" #include "pdb.h" +#include /** * caam_priv_key_form - CAAM RSA private key representation @@ -88,11 +89,13 @@ struct caam_rsa_key { /** * caam_rsa_ctx - per session context. + * @enginectx : crypto engine context * @key : RSA key in DMA zone * @dev : device structure * @padding_dma : dma address of padding, for adding it to the input */ struct caam_rsa_ctx { + struct crypto_engine_ctx enginectx; struct caam_rsa_key key; struct device *dev; dma_addr_t padding_dma; @@ -104,11 +107,16 @@ struct caam_rsa_ctx { * @src : input scatterlist (stripped of leading zeros) * @fixup_src : input scatterlist (that might be stripped of leading zeros) * @fixup_src_len : length of the fixup_src input scatterlist + * @edesc : s/w-extended rsa descriptor + * @akcipher_op_done : callback used when operation is done */ struct caam_rsa_req_ctx { struct scatterlist src[2]; struct scatterlist *fixup_src; unsigned int fixup_src_len; + struct rsa_edesc *edesc; + void (*akcipher_op_done)(struct device *jrdev, u32 *desc, u32 err, + void *context); }; /** diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c index 7e6632d..579b1ba 100644 --- a/drivers/crypto/caam/jr.c +++ b/drivers/crypto/caam/jr.c @@ -437,6 +437,9 @@ static int transfer_request_to_engine(struct crypto_engine *engine, case CRYPTO_ALG_TYPE_AEAD: return crypto_transfer_aead_request_to_engine(engine, aead_request_cast(req)); + case CRYPTO_ALG_TYPE_AKCIPHER: + return 
crypto_transfer_akcipher_request_to_engine(engine, + akcipher_request_cast(req)); default: return -EINVAL; }

From patchwork Sun Nov 17 22:30:45 2019
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 11248717
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S. Miller", Tom Lendacky, Gary Hook, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 12/12] crypto: caam - add crypto_engine support for HASH algorithms
Date: Mon, 18 Nov 2019 00:30:45 +0200
Message-Id: <1574029845-22796-13-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>
References: <1574029845-22796-1-git-send-email-iuliana.prodan@nxp.com>

Add crypto_engine support for HASH algorithms, to make use of the engine
queue. Requests with the backlog flag are placed on the crypto-engine
queue and processed by CAAM when it is free. If the queue is empty, the
request is sent directly to CAAM.

Signed-off-by: Iuliana Prodan --- drivers/crypto/caam/caamhash.c | 155 +++++++++++++++++++++++++++++------------ drivers/crypto/caam/jr.c | 3 + 2 files changed, 113 insertions(+), 45 deletions(-) diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c index d9de3dc..7f9ffde 100644 --- a/drivers/crypto/caam/caamhash.c +++ b/drivers/crypto/caam/caamhash.c @@ -65,6 +65,7 @@ #include "sg_sw_sec4.h" #include "key_gen.h" #include "caamhash_desc.h" +#include <crypto/engine.h> #define CAAM_CRA_PRIORITY 3000 @@ -86,6 +87,7 @@ static struct list_head hash_list; /* ahash per-session context */ struct caam_hash_ctx { + struct crypto_engine_ctx enginectx; u32 sh_desc_update[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned; u32 sh_desc_update_first[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned; u32 sh_desc_fin[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned; @@ -112,10 +114,13 @@ struct caam_hash_state { u8 buf_1[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned; int buflen_1; u8 caam_ctx[MAX_CTX_LEN] ____cacheline_aligned; - int (*update)(struct ahash_request *req); + int (*update)(struct ahash_request
*req) ____cacheline_aligned; int (*final)(struct ahash_request *req); int (*finup)(struct ahash_request *req); int current_buf; + struct ahash_edesc *edesc; + void (*ahash_op_done)(struct device *jrdev, u32 *desc, u32 err, + void *context); }; struct caam_export_state { @@ -125,6 +130,9 @@ struct caam_export_state { int (*update)(struct ahash_request *req); int (*final)(struct ahash_request *req); int (*finup)(struct ahash_request *req); + struct ahash_edesc *edesc; + void (*ahash_op_done)(struct device *jrdev, u32 *desc, u32 err, + void *context); }; static inline void switch_buf(struct caam_hash_state *state) @@ -604,6 +612,7 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err, { struct caam_jr_request_entry *jrentry = context; struct ahash_request *req = ahash_request_cast(jrentry->base); + struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev); struct ahash_edesc *edesc; struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); int digestsize = crypto_ahash_digestsize(ahash); @@ -613,7 +622,8 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err, dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - edesc = container_of(desc, struct ahash_edesc, hw_desc[0]); + edesc = state->edesc; + if (err) ecode = caam_jr_strstatus(jrdev, err); @@ -625,7 +635,14 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err, DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx, ctx->ctx_len, 1); - req->base.complete(&req->base, ecode); + /* + * If no backlog flag, the completion of the request is done + * by CAAM, not crypto engine. 
+ */ + if (!jrentry->bklog) + req->base.complete(&req->base, ecode); + else + crypto_finalize_hash_request(jrp->engine, req, ecode); } static void ahash_done(struct device *jrdev, u32 *desc, u32 err, @@ -645,6 +662,7 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err, { struct caam_jr_request_entry *jrentry = context; struct ahash_request *req = ahash_request_cast(jrentry->base); + struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev); struct ahash_edesc *edesc; struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); @@ -654,7 +672,7 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err, dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - edesc = container_of(desc, struct ahash_edesc, hw_desc[0]); + edesc = state->edesc; if (err) ecode = caam_jr_strstatus(jrdev, err); @@ -670,7 +688,15 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err, DUMP_PREFIX_ADDRESS, 16, 4, req->result, digestsize, 1); - req->base.complete(&req->base, ecode); + /* + * If no backlog flag, the completion of the request is done + * by CAAM, not crypto engine. + */ + if (!jrentry->bklog) + req->base.complete(&req->base, ecode); + else + crypto_finalize_hash_request(jrp->engine, req, ecode); + } static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err, @@ -695,6 +721,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req, { struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); + struct caam_hash_state *state = ahash_request_ctx(req); gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? 
GFP_KERNEL : GFP_ATOMIC; struct ahash_edesc *edesc; @@ -707,6 +734,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req, } edesc->jrentry.base = &req->base; + state->edesc = edesc; init_job_desc_shared(edesc->hw_desc, sh_desc_dma, desc_len(sh_desc), HDR_SHARE_DEFER | HDR_REVERSE); @@ -750,6 +778,32 @@ static int ahash_edesc_add_src(struct caam_hash_ctx *ctx, return 0; } +static int ahash_do_one_req(struct crypto_engine *engine, void *areq) +{ + struct ahash_request *req = ahash_request_cast(areq); + struct caam_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req)); + struct caam_hash_state *state = ahash_request_ctx(req); + struct caam_jr_request_entry *jrentry; + struct device *jrdev = ctx->jrdev; + u32 *desc = state->edesc->hw_desc; + int ret; + + jrentry = &state->edesc->jrentry; + jrentry->bklog = true; + + ret = caam_jr_enqueue_no_bklog(jrdev, desc, state->ahash_op_done, + jrentry); + + if (ret != -EINPROGRESS) { + ahash_unmap(jrdev, state->edesc, req, 0); + kfree(state->edesc); + } else { + ret = 0; + } + + return ret; +} + /* submit update job descriptor */ static int ahash_update_ctx(struct ahash_request *req) { @@ -766,7 +820,6 @@ static int ahash_update_ctx(struct ahash_request *req) u32 *desc; int src_nents, mapped_nents, sec4_sg_bytes, sec4_sg_src_index; struct ahash_edesc *edesc; - struct caam_jr_request_entry *jrentry; int ret = 0; last_buflen = *next_buflen; @@ -864,10 +917,11 @@ static int ahash_update_ctx(struct ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - jrentry = &edesc->jrentry; + state->ahash_op_done = ahash_done_bi; - ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, jrentry); - if (ret != -EINPROGRESS) + ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, + &edesc->jrentry); + if ((ret != -EINPROGRESS) && (ret != -EBUSY)) goto unmap_ctx; } else if (*next_buflen) { scatterwalk_map_and_copy(buf + *buflen, req->src, 0, @@ -900,7 +954,6 @@ static int ahash_final_ctx(struct ahash_request 
*req)
 	int sec4_sg_bytes;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;

 	sec4_sg_bytes = pad_sg_nents(1 + (buflen ? 1 : 0)) *
@@ -943,10 +996,11 @@ static int ahash_final_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done_ctx_src;
+
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry);

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
-	if (ret == -EINPROGRESS)
+	if ((ret == -EINPROGRESS) || (ret == -EBUSY))
 		return ret;

 unmap_ctx:
@@ -967,7 +1021,6 @@ static int ahash_finup_ctx(struct ahash_request *req)
 	int src_nents, mapped_nents;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;

 	src_nents = sg_nents_for_len(req->src, req->nbytes);
@@ -1022,10 +1075,10 @@ static int ahash_finup_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done_ctx_src;

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, jrentry);
-	if (ret == -EINPROGRESS)
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry);
+	if ((ret == -EINPROGRESS) || (ret == -EBUSY))
 		return ret;

 unmap_ctx:
@@ -1044,7 +1097,6 @@ static int ahash_digest(struct ahash_request *req)
 	int digestsize = crypto_ahash_digestsize(ahash);
 	int src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;

 	state->buf_dma = 0;
@@ -1097,10 +1149,10 @@ static int ahash_digest(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done;

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
-	if (ret != -EINPROGRESS) {
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1120,7 +1172,6 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	u32 *desc;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;

 	/* allocate space for base edesc and hw desc commands, link tables */
@@ -1150,20 +1201,19 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done;

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
-	if (ret != -EINPROGRESS) {
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}

 	return ret;
- unmap:
+unmap:
 	ahash_unmap(jrdev, edesc, req, digestsize);
 	kfree(edesc);
 	return -ENOMEM;
-
 }

 /* submit ahash update if it the first job descriptor after update */
@@ -1181,7 +1231,6 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	int in_len = *buflen + req->nbytes, to_hash;
 	int sec4_sg_bytes, src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	u32 *desc;
 	int ret = 0;

@@ -1271,10 +1320,11 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);

-		jrentry = &edesc->jrentry;
+		state->ahash_op_done = ahash_done_ctx_dst;

-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, jrentry);
-		if (ret != -EINPROGRESS)
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
+				      &edesc->jrentry);
+		if ((ret != -EINPROGRESS) && (ret != -EBUSY))
 			goto unmap_ctx;

 		state->update = ahash_update_ctx;
@@ -1294,7 +1344,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 			 1);

 	return ret;
- unmap_ctx:
+unmap_ctx:
 	ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_TO_DEVICE);
 	kfree(edesc);
 	return ret;
@@ -1312,7 +1362,6 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 	int sec4_sg_bytes, sec4_sg_src_index, src_nents, mapped_nents;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret;

 	src_nents = sg_nents_for_len(req->src, req->nbytes);
@@ -1368,10 +1417,10 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);

-	jrentry = &edesc->jrentry;
+	state->ahash_op_done = ahash_done;

-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, jrentry);
-	if (ret != -EINPROGRESS) {
+	ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
+	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
 		kfree(edesc);
 	}
@@ -1398,7 +1447,6 @@ static int ahash_update_first(struct ahash_request *req)
 	u32 *desc;
 	int src_nents, mapped_nents;
 	struct ahash_edesc *edesc;
-	struct caam_jr_request_entry *jrentry;
 	int ret = 0;

 	*next_buflen = req->nbytes & (blocksize - 1);
@@ -1468,10 +1516,11 @@ static int ahash_update_first(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);

-		jrentry = &edesc->jrentry;
+		state->ahash_op_done = ahash_done_ctx_dst;

-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, jrentry);
-		if (ret != -EINPROGRESS)
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
+				      &edesc->jrentry);
+		if ((ret != -EINPROGRESS) && (ret != -EBUSY))
 			goto unmap_ctx;

 		state->update = ahash_update_ctx;
@@ -1509,6 +1558,7 @@ static int ahash_init(struct ahash_request *req)
 	state->update = ahash_update_first;
 	state->finup = ahash_finup_first;
 	state->final = ahash_final_no_ctx;
+	state->ahash_op_done = ahash_done;

 	state->ctx_dma = 0;
 	state->ctx_dma_len = 0;
@@ -1562,6 +1612,8 @@ static int ahash_export(struct ahash_request *req, void *out)
 	export->update = state->update;
 	export->final = state->final;
 	export->finup = state->finup;
+	export->edesc = state->edesc;
+	export->ahash_op_done = state->ahash_op_done;

 	return 0;
 }
@@ -1578,6 +1630,8 @@ static int ahash_import(struct ahash_request *req, const void *in)
 	state->update = export->update;
 	state->final = export->final;
 	state->finup = export->finup;
+	state->edesc = export->edesc;
+	state->ahash_op_done = export->ahash_op_done;

 	return 0;
 }
@@ -1837,7 +1891,9 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 	}

 	dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_update,
-					offsetof(struct caam_hash_ctx, key),
+					offsetof(struct caam_hash_ctx, key) -
+					offsetof(struct caam_hash_ctx,
+						 sh_desc_update),
 					ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
 	if (dma_mapping_error(ctx->jrdev, dma_addr)) {
 		dev_err(ctx->jrdev, "unable to map shared descriptors\n");
@@ -1855,11 +1911,19 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 	ctx->sh_desc_update_dma = dma_addr;
 	ctx->sh_desc_update_first_dma = dma_addr +
 					offsetof(struct caam_hash_ctx,
-						 sh_desc_update_first);
+						 sh_desc_update_first) -
+					offsetof(struct caam_hash_ctx,
+						 sh_desc_update);
 	ctx->sh_desc_fin_dma = dma_addr + offsetof(struct caam_hash_ctx,
-						   sh_desc_fin);
+						   sh_desc_fin) -
+				offsetof(struct caam_hash_ctx,
+					 sh_desc_update);
 	ctx->sh_desc_digest_dma = dma_addr + offsetof(struct caam_hash_ctx,
-						      sh_desc_digest);
+						      sh_desc_digest) -
+				  offsetof(struct caam_hash_ctx,
+					   sh_desc_update);
+
+	ctx->enginectx.op.do_one_request = ahash_do_one_req;

 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
 				 sizeof(struct caam_hash_state));
@@ -1876,7 +1940,8 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
 	struct caam_hash_ctx *ctx = crypto_tfm_ctx(tfm);

 	dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_update_dma,
-			       offsetof(struct caam_hash_ctx, key),
+			       offsetof(struct caam_hash_ctx, key) -
+			       offsetof(struct caam_hash_ctx, sh_desc_update),
 			       ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
 	if (ctx->key_dir != DMA_NONE)
 		dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 579b1ba..5f7b797 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -440,6 +440,9 @@ static int transfer_request_to_engine(struct crypto_engine *engine,
 	case CRYPTO_ALG_TYPE_AKCIPHER:
 		return crypto_transfer_akcipher_request_to_engine(engine,
								  akcipher_request_cast(req));
+	case CRYPTO_ALG_TYPE_AHASH:
+		return crypto_transfer_hash_request_to_engine(engine,
+							      ahash_request_cast(req));
 	default:
 		return -EINVAL;
 	}