From patchwork Wed Apr 15 20:26:13 2020
From: Iuliana Prodan
To: Herbert Xu, Baolin Wang, Ard Biesheuvel, Corentin Labbe, Horia Geanta,
    Maxime Coquelin, Alexandre Torgue, Maxime Ripard
Cc: Aymen Sghaier, "David S. Miller", Silvano Di Ninno, Franck Lenormand,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx,
    Iuliana Prodan
Subject: [PATCH v5 1/3] crypto: algapi - create function to add request in front of queue
Date: Wed, 15 Apr 2020 23:26:13 +0300
Message-Id: <1586982375-18710-2-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1586982375-18710-1-git-send-email-iuliana.prodan@nxp.com>
References: <1586982375-18710-1-git-send-email-iuliana.prodan@nxp.com>

Add a crypto_enqueue_request_head() function that enqueues a MAY_BACKLOG
request at the front of the queue. This will be used by crypto-engine on its
error path: if a request was not executed by the hardware, it is enqueued
back at the front of the queue to keep the order of requests.
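For illustration only (not part of this patch), a caller-side sketch of how
the new helper could be used on an error path; the function name and the
caller-owned lock below are hypothetical:

#include <crypto/algapi.h>
#include <linux/spinlock.h>

/*
 * Hypothetical helper: push a MAY_BACKLOG request back to the head of a
 * driver-owned queue after the hardware rejected it, so the original
 * submission order is preserved. crypto_enqueue_request_head() does no
 * locking itself (and accepts only backlog requests), so the caller
 * serializes access with its own lock.
 */
static void example_requeue_on_hw_busy(struct crypto_queue *queue,
				       spinlock_t *lock,
				       struct crypto_async_request *req)
{
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	crypto_enqueue_request_head(queue, req);
	spin_unlock_irqrestore(lock, flags);
}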
Signed-off-by: Iuliana Prodan
---
 crypto/algapi.c         | 11 +++++++++++
 include/crypto/algapi.h |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index 69605e21af92..a8732e8e7843 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -904,6 +904,17 @@ int crypto_enqueue_request(struct crypto_queue *queue,
 }
 EXPORT_SYMBOL_GPL(crypto_enqueue_request);
 
+void crypto_enqueue_request_head(struct crypto_queue *queue,
+				 struct crypto_async_request *request)
+{
+	/* only backlog requests can be enqueued */
+	BUG_ON(!(request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG));
+
+	queue->qlen++;
+	list_add(&request->list, &queue->list);
+}
+EXPORT_SYMBOL_GPL(crypto_enqueue_request_head);
+
 struct crypto_async_request *crypto_dequeue_request(struct crypto_queue *queue)
 {
 	struct list_head *request;
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index e115f9215ed5..00a9cf98debe 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -125,6 +125,8 @@ int crypto_inst_setname(struct crypto_instance *inst, const char *name,
 void crypto_init_queue(struct crypto_queue *queue, unsigned int max_qlen);
 int crypto_enqueue_request(struct crypto_queue *queue,
 			   struct crypto_async_request *request);
+void crypto_enqueue_request_head(struct crypto_queue *queue,
+				 struct crypto_async_request *request);
 struct crypto_async_request *crypto_dequeue_request(struct crypto_queue *queue);
 static inline unsigned int crypto_queue_len(struct crypto_queue *queue)
 {

From patchwork Wed Apr 15 20:26:14 2020
From: Iuliana Prodan
Subject: [PATCH v5 2/3] crypto: engine - support for parallel requests based on retry mechanism
Date: Wed, 15 Apr 2020 23:26:14 +0300
Message-Id: <1586982375-18710-3-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1586982375-18710-1-git-send-email-iuliana.prodan@nxp.com>

Add support for executing multiple requests in parallel in crypto-engine,
based on a retry mechanism: if the hardware was unable to execute a backlog
request, it is enqueued back at the front of the crypto-engine queue, to
keep the order of requests.

A new variable, retry_support, is added (to keep the backward compatibility
of crypto-engine); it tracks whether the hardware supports the retry
mechanism and can therefore run multiple requests.

If do_one_request() returns:
>= 0: the hardware executed the request successfully;
 < 0: this is the old error path. If the hardware supports the retry
      mechanism, the request is put back at the front of the crypto-engine
      queue. Only MAY_BACKLOG requests are enqueued back, since the others
      can be dropped.

The new crypto_engine_alloc_init_and_set() function initializes
crypto-engine, sets the maximum size of the crypto-engine software queue
(no longer hardcoded) and sets the retry_support variable, by default, to
false.

In crypto_pump_requests(), while do_one_request() returns >= 0, a new
request is sent to the hardware, until there is no more space in the
hardware and do_one_request() returns < 0.

By default, retry_support is false and crypto-engine works as before:
requests are sent to the hardware one by one from crypto_pump_requests()
and completed in crypto_finalize_request(). To support multiple requests,
each driver must set retry_support to true, and if do_one_request() returns
an error the request must not be freed, since it will be enqueued back into
the crypto-engine queue. Once all drivers that currently use crypto-engine
have been updated for the retry mechanism, the retry_support variable can
be removed.
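For illustration only (no driver is converted in this series), a sketch of
how a driver could opt into the retry mechanism; the demo_* identifiers,
the queue length of 128 and the use of -ENOSPC for a full hardware queue
are assumptions, not part of the patch (patch 3/3 later adds a batch
callback parameter to crypto_engine_alloc_init_and_set()):

#include <crypto/engine.h>
#include <linux/device.h>
#include <linux/errno.h>

/* Hypothetical hardware hooks, stubbed out for this sketch. */
static bool demo_hw_queue_full(struct crypto_engine *engine) { return false; }
static int demo_hw_submit(struct crypto_engine *engine, void *areq) { return 0; }

/*
 * do_one_request() under the retry model: return a negative value when
 * the hardware cannot accept the request. crypto-engine then puts a
 * MAY_BACKLOG request back at the head of its queue and retries later,
 * so the driver must not free or complete the request here.
 * (Wiring this op into struct crypto_engine_ctx is omitted.)
 */
static int demo_do_one_request(struct crypto_engine *engine, void *areq)
{
	if (demo_hw_queue_full(engine))
		return -ENOSPC;

	return demo_hw_submit(engine, areq);
}

static int demo_probe(struct device *dev)
{
	struct crypto_engine *engine;

	/* retry_support = true, non-realtime, software queue of 128 entries */
	engine = crypto_engine_alloc_init_and_set(dev, true, false, 128);
	if (!engine)
		return -ENOMEM;

	return crypto_engine_start(engine);
}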
Signed-off-by: Iuliana Prodan
---
 crypto/crypto_engine.c  | 140 +++++++++++++++++++++++++++++++---------
 include/crypto/engine.h |  10 ++-
 2 files changed, 118 insertions(+), 32 deletions(-)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index eb029ff1e05a..2e2b879d31bb 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -22,32 +22,36 @@
  * @err: error number
  */
 static void crypto_finalize_request(struct crypto_engine *engine,
-			     struct crypto_async_request *req, int err)
+				    struct crypto_async_request *req, int err)
 {
 	unsigned long flags;
-	bool finalize_cur_req = false;
+	bool finalize_req = false;
 	int ret;
 	struct crypto_engine_ctx *enginectx;
 
-	spin_lock_irqsave(&engine->queue_lock, flags);
-	if (engine->cur_req == req)
-		finalize_cur_req = true;
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
+	/*
+	 * If hardware cannot enqueue more requests
+	 * and retry mechanism is not supported
+	 * make sure we are completing the current request
+	 */
+	if (!engine->retry_support) {
+		spin_lock_irqsave(&engine->queue_lock, flags);
+		if (engine->cur_req == req) {
+			finalize_req = true;
+			engine->cur_req = NULL;
+		}
+		spin_unlock_irqrestore(&engine->queue_lock, flags);
+	}
 
-	if (finalize_cur_req) {
+	if (finalize_req || engine->retry_support) {
 		enginectx = crypto_tfm_ctx(req->tfm);
-		if (engine->cur_req_prepared &&
+		if (enginectx->op.prepare_request &&
 		    enginectx->op.unprepare_request) {
 			ret = enginectx->op.unprepare_request(engine, req);
 			if (ret)
 				dev_err(engine->dev, "failed to unprepare request\n");
 		}
-		spin_lock_irqsave(&engine->queue_lock, flags);
-		engine->cur_req = NULL;
-		engine->cur_req_prepared = false;
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
 	}
-
 	req->complete(req, err);
 
 	kthread_queue_work(engine->kworker, &engine->pump_requests);
@@ -74,7 +78,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 	spin_lock_irqsave(&engine->queue_lock, flags);
 
 	/* Make sure we are not already running a request */
-	if (engine->cur_req)
+	if (!engine->retry_support && engine->cur_req)
 		goto out;
 
 	/* If another context is idling then defer */
@@ -108,13 +112,21 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		goto out;
 	}
 
+start_request:
 	/* Get the fist request from the engine queue to handle */
 	backlog = crypto_get_backlog(&engine->queue);
 	async_req = crypto_dequeue_request(&engine->queue);
 	if (!async_req)
 		goto out;
 
-	engine->cur_req = async_req;
+	/*
+	 * If hardware doesn't support the retry mechanism,
+	 * keep track of the request we are processing now.
+	 * We'll need it on completion (crypto_finalize_request).
+	 */
+	if (!engine->retry_support)
+		engine->cur_req = async_req;
+
 	if (backlog)
 		backlog->complete(backlog, -EINPROGRESS);
 
@@ -130,7 +142,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		ret = engine->prepare_crypt_hardware(engine);
 		if (ret) {
 			dev_err(engine->dev, "failed to prepare crypt hardware\n");
-			goto req_err;
+			goto req_err_2;
 		}
 	}
 
@@ -141,28 +153,75 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		if (ret) {
 			dev_err(engine->dev, "failed to prepare request: %d\n",
 				ret);
-			goto req_err;
+			goto req_err_2;
 		}
-		engine->cur_req_prepared = true;
 	}
 	if (!enginectx->op.do_one_request) {
 		dev_err(engine->dev, "failed to do request\n");
 		ret = -EINVAL;
-		goto req_err;
+		goto req_err_1;
 	}
+
 	ret = enginectx->op.do_one_request(engine, async_req);
-	if (ret) {
-		dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
-		goto req_err;
+
+	/* Request unsuccessfully executed by hardware */
+	if (ret < 0) {
+		/* Non-backlog requests can be dropped */
+		if (!engine->retry_support ||
+		    !(async_req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {
+			dev_err(engine->dev,
+				"Failed to do one request from queue: %d\n",
+				ret);
+			goto req_err_1;
+		}
+		/*
+		 * If retry mechanism is supported,
+		 * unprepare current request and
+		 * enqueue it back into crypto-engine queue.
+		 */
+		if (enginectx->op.unprepare_request) {
+			ret = enginectx->op.unprepare_request(engine,
+							      async_req);
+			if (ret)
+				dev_err(engine->dev,
+					"failed to unprepare request\n");
+		}
+
+		spin_lock_irqsave(&engine->queue_lock, flags);
+		/*
+		 * If hardware was unable to execute request, enqueue it
+		 * back in front of crypto-engine queue, to keep the order
+		 * of requests.
+		 */
+		crypto_enqueue_request_head(&engine->queue, async_req);
+
+		kthread_queue_work(engine->kworker, &engine->pump_requests);
+		goto out;
 	}
-	return;
 
-req_err:
-	crypto_finalize_request(engine, async_req, ret);
+	goto retry;
+
+req_err_1:
+	if (enginectx->op.unprepare_request) {
+		ret = enginectx->op.unprepare_request(engine, async_req);
+		if (ret)
+			dev_err(engine->dev, "failed to unprepare request\n");
+	}
+
+req_err_2:
+	async_req->complete(async_req, ret);
+
+retry:
+	/* If retry mechanism is supported, send new requests to engine */
+	if (engine->retry_support) {
+		spin_lock_irqsave(&engine->queue_lock, flags);
+		goto start_request;
+	}
 	return;
 
 out:
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
+	return;
 }
 
 static void crypto_pump_work(struct kthread_work *work)
@@ -386,15 +445,20 @@ int crypto_engine_stop(struct crypto_engine *engine)
 EXPORT_SYMBOL_GPL(crypto_engine_stop);
 
 /**
- * crypto_engine_alloc_init - allocate crypto hardware engine structure and
- * initialize it.
+ * crypto_engine_alloc_init_and_set - allocate crypto hardware engine structure
+ * and initialize it by setting the maximum number of entries in the software
+ * crypto-engine queue.
  * @dev: the device attached with one hardware engine
+ * @retry_support: whether hardware has support for retry mechanism
  * @rt: whether this queue is set to run as a realtime task
+ * @qlen: maximum size of the crypto-engine queue
  *
 * This must be called from context that can sleep.
 * Return: the crypto engine structure on success, else NULL.
  */
-struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt)
+struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
+						       bool retry_support,
+						       bool rt, int qlen)
 {
 	struct sched_param param = { .sched_priority = MAX_RT_PRIO / 2 };
 	struct crypto_engine *engine;
@@ -411,12 +475,12 @@ struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt)
 	engine->running = false;
 	engine->busy = false;
 	engine->idling = false;
-	engine->cur_req_prepared = false;
+	engine->retry_support = retry_support;
 	engine->priv_data = dev;
 	snprintf(engine->name, sizeof(engine->name),
 		 "%s-engine", dev_name(dev));
 
-	crypto_init_queue(&engine->queue, CRYPTO_ENGINE_MAX_QLEN);
+	crypto_init_queue(&engine->queue, qlen);
 	spin_lock_init(&engine->queue_lock);
 
 	engine->kworker = kthread_create_worker(0, "%s", engine->name);
@@ -433,6 +497,22 @@ struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt)
 
 	return engine;
 }
+EXPORT_SYMBOL_GPL(crypto_engine_alloc_init_and_set);
+
+/**
+ * crypto_engine_alloc_init - allocate crypto hardware engine structure and
+ * initialize it.
+ * @dev: the device attached with one hardware engine
+ * @rt: whether this queue is set to run as a realtime task
+ *
+ * This must be called from context that can sleep.
+ * Return: the crypto engine structure on success, else NULL.
+ */
+struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt)
+{
+	return crypto_engine_alloc_init_and_set(dev, false, rt,
+						CRYPTO_ENGINE_MAX_QLEN);
+}
 EXPORT_SYMBOL_GPL(crypto_engine_alloc_init);
 
 /**
diff --git a/include/crypto/engine.h b/include/crypto/engine.h
index e29cd67f93c7..b92d7ff1528a 100644
--- a/include/crypto/engine.h
+++ b/include/crypto/engine.h
@@ -24,7 +24,9 @@
  * @idling: the engine is entering idle state
  * @busy: request pump is busy
  * @running: the engine is on working
- * @cur_req_prepared: current request is prepared
+ * @retry_support: indication that the hardware allows re-execution
+ * of a failed backlog request
+ * crypto-engine, in head position to keep order
  * @list: link with the global crypto engine list
  * @queue_lock: spinlock to syncronise access to request queue
  * @queue: the crypto queue of the engine
@@ -45,7 +47,8 @@ struct crypto_engine {
 	bool			idling;
 	bool			busy;
 	bool			running;
-	bool			cur_req_prepared;
+
+	bool			retry_support;
 
 	struct list_head	list;
 	spinlock_t		queue_lock;
@@ -102,6 +105,9 @@ void crypto_finalize_skcipher_request(struct crypto_engine *engine,
 int crypto_engine_start(struct crypto_engine *engine);
 int crypto_engine_stop(struct crypto_engine *engine);
 struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
+struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
+						       bool retry_support,
+						       bool rt, int qlen);
 int crypto_engine_exit(struct crypto_engine *engine);
 
 #endif /* _CRYPTO_ENGINE_H */
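As the engine.h hunk above shows, existing drivers keep calling
crypto_engine_alloc_init() and retain the old single-request behaviour.
A minimal sketch of that unchanged legacy path, for illustration only
(the probe function name is hypothetical):

#include <crypto/engine.h>
#include <linux/errno.h>

/*
 * Legacy path: retry_support stays false and the queue length stays
 * CRYPTO_ENGINE_MAX_QLEN, so requests are still processed one at a
 * time, exactly as before this patch.
 */
static int legacy_probe(struct device *dev)
{
	struct crypto_engine *engine = crypto_engine_alloc_init(dev, false);

	if (!engine)
		return -ENOMEM;

	return crypto_engine_start(engine);
}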
From patchwork Wed Apr 15 20:26:15 2020
From: Iuliana Prodan
Subject: [PATCH v5 3/3] crypto: engine - support for batch requests
Date: Wed, 15 Apr 2020 23:26:15 +0300
Message-Id: <1586982375-18710-4-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1586982375-18710-1-git-send-email-iuliana.prodan@nxp.com>

Add support for batch requests, per crypto engine. A new callback,
do_batch_requests, is added; it executes a batch of requests and takes the
crypto_engine structure as argument (for cases when more than one
crypto-engine is used).

The crypto_engine_alloc_init_and_set() function initializes crypto-engine
and also sets the do_batch_requests callback.

In crypto_pump_requests(), if the do_batch_requests callback is implemented
by a driver, it is executed; linking the requests together is done in the
driver, if possible. do_batch_requests is available only if the hardware
has support for multiple requests.

Signed-off-by: Iuliana Prodan
---
 crypto/crypto_engine.c  | 27 ++++++++++++++++++++++++++-
 include/crypto/engine.h |  5 +++++
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 2e2b879d31bb..217eae3e8037 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -221,6 +221,18 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 
 out:
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
+
+	/*
+	 * Batch requests is possible only if
+	 * hardware can enqueue multiple requests
+	 */
+	if (engine->do_batch_requests) {
+		ret = engine->do_batch_requests(engine);
+		if (ret)
+			dev_err(engine->dev, "failed to do batch requests: %d\n",
+				ret);
+	}
+
 	return;
 }
 
@@ -450,6 +462,12 @@ EXPORT_SYMBOL_GPL(crypto_engine_stop);
  * crypto-engine queue.
  * @dev: the device attached with one hardware engine
  * @retry_support: whether hardware has support for retry mechanism
+ * @cbk_do_batch: pointer to a callback function to be invoked when executing a
+ *                a batch of requests.
+ *                This has the form:
+ *                callback(struct crypto_engine *engine)
+ *                where:
+ *                @engine: the crypto engine structure.
  * @rt: whether this queue is set to run as a realtime task
 * @qlen: maximum size of the crypto-engine queue
 *
@@ -458,6 +476,7 @@ EXPORT_SYMBOL_GPL(crypto_engine_stop);
  */
 struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
 						       bool retry_support,
+						       int (*cbk_do_batch)(struct crypto_engine *engine),
 						       bool rt, int qlen)
 {
 	struct sched_param param = { .sched_priority = MAX_RT_PRIO / 2 };
@@ -477,6 +496,12 @@ struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
 	engine->idling = false;
 	engine->retry_support = retry_support;
 	engine->priv_data = dev;
+	/*
+	 * Batch requests is possible only if
+	 * hardware has support for retry mechanism.
+	 */
+	engine->do_batch_requests = retry_support ? cbk_do_batch : NULL;
+
 	snprintf(engine->name, sizeof(engine->name),
 		 "%s-engine", dev_name(dev));
 
@@ -510,7 +535,7 @@ EXPORT_SYMBOL_GPL(crypto_engine_alloc_init_and_set);
  */
 struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt)
 {
-	return crypto_engine_alloc_init_and_set(dev, false, rt,
+	return crypto_engine_alloc_init_and_set(dev, false, NULL, rt,
 						CRYPTO_ENGINE_MAX_QLEN);
 }
 EXPORT_SYMBOL_GPL(crypto_engine_alloc_init);
diff --git a/include/crypto/engine.h b/include/crypto/engine.h
index b92d7ff1528a..3f06e40d063a 100644
--- a/include/crypto/engine.h
+++ b/include/crypto/engine.h
@@ -37,6 +37,8 @@
  * @unprepare_crypt_hardware: there are currently no more requests on the
  * queue so the subsystem notifies the driver that it may relax the
  * hardware by issuing this call
+ * @do_batch_requests: execute a batch of requests. Depends on multiple
+ *		       requests support.
  * @kworker: kthread worker struct for request pump
  * @pump_requests: work struct for scheduling work to the request pump
  * @priv_data: the engine private data
@@ -59,6 +61,8 @@ struct crypto_engine {
 	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
 	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
 
+	int (*do_batch_requests)(struct crypto_engine *engine);
+
 	struct kthread_worker           *kworker;
 	struct kthread_work             pump_requests;
 
@@ -107,6 +111,7 @@ int crypto_engine_stop(struct crypto_engine *engine);
 struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
 struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
 						       bool retry_support,
+						       int (*cbk_do_batch)(struct crypto_engine *engine),
 						       bool rt, int qlen);
 int crypto_engine_exit(struct crypto_engine *engine);
 
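For illustration only, a sketch of how a driver could combine the retry and
batch hooks introduced in this series; every demo_* identifier and the queue
length of 128 are hypothetical, not part of the patches:

#include <crypto/engine.h>
#include <linux/device.h>
#include <linux/errno.h>

/* Hypothetical doorbell write, stubbed out for this sketch. */
static int demo_hw_kick(struct crypto_engine *engine) { return 0; }

/*
 * crypto_pump_requests() invokes this after it stops pushing requests to
 * the hardware, so a driver can ring one doorbell for everything queued
 * so far instead of one per request.
 */
static int demo_do_batch_requests(struct crypto_engine *engine)
{
	return demo_hw_kick(engine);
}

static int demo_batch_probe(struct device *dev)
{
	struct crypto_engine *engine;

	/* retry_support must be true, otherwise the batch callback is ignored */
	engine = crypto_engine_alloc_init_and_set(dev, true,
						  demo_do_batch_requests,
						  false, 128);
	if (!engine)
		return -ENOMEM;

	return crypto_engine_start(engine);
}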