Message ID | 1464615122-16685-2-git-send-email-clabbe.montjoie@gmail.com (mailing list archive)
State | Changes Requested
Delegated to | Herbert Xu
On 30 May 2016 at 21:32, LABBE Corentin <clabbe.montjoie@gmail.com> wrote:
> The current crypto engine allows only ablkcipher_request to be enqueued,
> which denies any use of it for hardware that also handles hash algorithms.
>
> This patch converts all ablkcipher_request references to the
> more general crypto_async_request.
>
> Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
> ---
>  crypto/crypto_engine.c  | 17 +++++++----------
>  include/crypto/algapi.h | 14 +++++++-------
>  2 files changed, 14 insertions(+), 17 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index a55c82d..b658cb8 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -19,7 +19,7 @@
>  #define CRYPTO_ENGINE_MAX_QLEN 10
>
>  void crypto_finalize_request(struct crypto_engine *engine,
> -			     struct ablkcipher_request *req, int err);
> +			     struct crypto_async_request *req, int err);
>
>  /**
>   * crypto_pump_requests - dequeue one request from engine queue to process
> @@ -34,7 +34,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>  				 bool in_kthread)
>  {
>  	struct crypto_async_request *async_req, *backlog;
> -	struct ablkcipher_request *req;
>  	unsigned long flags;
>  	bool was_busy = false;
>  	int ret;
> @@ -82,9 +81,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>  	if (!async_req)
>  		goto out;
>
> -	req = ablkcipher_request_cast(async_req);
> -
> -	engine->cur_req = req;
> +	engine->cur_req = async_req;
>  	if (backlog)
>  		backlog->complete(backlog, -EINPROGRESS);
>
> @@ -142,7 +139,7 @@ static void crypto_pump_work(struct kthread_work *work)
>   * @req: the request need to be listed into the engine queue
>   */
>  int crypto_transfer_request(struct crypto_engine *engine,
> -			    struct ablkcipher_request *req, bool need_pump)
> +			    struct crypto_async_request *req, bool need_pump)
>  {
>  	unsigned long flags;
>  	int ret;
> @@ -154,7 +151,7 @@ int crypto_transfer_request(struct crypto_engine *engine,
>  		return -ESHUTDOWN;
>  	}
>
> -	ret = ablkcipher_enqueue_request(&engine->queue, req);
> +	ret = crypto_enqueue_request(&engine->queue, req);
>
>  	if (!engine->busy && need_pump)
>  		queue_kthread_work(&engine->kworker, &engine->pump_requests);
> @@ -171,7 +168,7 @@ EXPORT_SYMBOL_GPL(crypto_transfer_request);
>   * @req: the request need to be listed into the engine queue
>   */
>  int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> -				      struct ablkcipher_request *req)
> +				      struct crypto_async_request *req)
>  {
>  	return crypto_transfer_request(engine, req, true);
>  }
> @@ -184,7 +181,7 @@ EXPORT_SYMBOL_GPL(crypto_transfer_request_to_engine);
>   * @err: error number
>   */
>  void crypto_finalize_request(struct crypto_engine *engine,
> -			     struct ablkcipher_request *req, int err)
> +			     struct crypto_async_request *req, int err)
>  {
>  	unsigned long flags;
>  	bool finalize_cur_req = false;
> @@ -208,7 +205,7 @@ void crypto_finalize_request(struct crypto_engine *engine,
>  		spin_unlock_irqrestore(&engine->queue_lock, flags);
>  	}
>
> -	req->base.complete(&req->base, err);
> +	req->complete(req, err);
>
>  	queue_kthread_work(&engine->kworker, &engine->pump_requests);
>  }
> diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
> index eeafd21..d720a2a 100644
> --- a/include/crypto/algapi.h
> +++ b/include/crypto/algapi.h
> @@ -173,26 +173,26 @@ struct crypto_engine {
>  	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
>
>  	int (*prepare_request)(struct crypto_engine *engine,
> -			       struct ablkcipher_request *req);
> +			       struct crypto_async_request *req);
>  	int (*unprepare_request)(struct crypto_engine *engine,
> -				 struct ablkcipher_request *req);
> +				 struct crypto_async_request *req);
>  	int (*crypt_one_request)(struct crypto_engine *engine,
> -				 struct ablkcipher_request *req);
> +				 struct crypto_async_request *req);
>
>  	struct kthread_worker kworker;
>  	struct task_struct *kworker_task;
>  	struct kthread_work pump_requests;
>
>  	void *priv_data;
> -	struct ablkcipher_request *cur_req;
> +	struct crypto_async_request *cur_req;
>  };
>
>  int crypto_transfer_request(struct crypto_engine *engine,
> -			    struct ablkcipher_request *req, bool need_pump);
> +			    struct crypto_async_request *req, bool need_pump);
>  int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> -				      struct ablkcipher_request *req);
> +				      struct crypto_async_request *req);
>  void crypto_finalize_request(struct crypto_engine *engine,
> -			     struct ablkcipher_request *req, int err);
> +			     struct crypto_async_request *req, int err);
>  int crypto_engine_start(struct crypto_engine *engine);
>  int crypto_engine_stop(struct crypto_engine *engine);
>  struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
> --
> 2.7.3

Reviewed-by: Baolin Wang <baolin.wang@linaro.org>
On Mon, May 30, 2016 at 03:32:01PM +0200, LABBE Corentin wrote:
> The current crypto engine allows only ablkcipher_request to be enqueued,
> which denies any use of it for hardware that also handles hash algorithms.
>
> This patch converts all ablkcipher_request references to the
> more general crypto_async_request.
>
> Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>

First of all, your patches break bisection, which is unacceptable.

Secondly, you should not be casting generic requests to a specific type.

Assuming a single engine only has to deal with one type of request, what
you could do is create a separate engine type for each crypto type that
you want to support.

Cheers,
On Thu, Jun 02, 2016 at 04:32:59PM +0800, Herbert Xu wrote:
> First of all, your patches break bisection, which is unacceptable.

How do I break bisection?

> Secondly, you should not be casting generic requests to a specific type.

I didn't add any request type check since omap uses the engine only for
ciphers. If both hash and ciphers could be used, my view of
crypt_one_request() is to test crypto_tfm_alg_type(areq->tfm) to check
which algorithm type is in use (CRYPTO_ALG_TYPE_AHASH vs
CRYPTO_ALG_TYPE_ABLKCIPHER).

For example, this is the crypt_one_request function I set:

int handle_request(struct crypto_engine *engine,
		   struct crypto_async_request *areq)
{
	int rtype;
	struct ahash_request *hreq;
	struct ablkcipher_request *breq;
	int err = -EINVAL;

	rtype = crypto_tfm_alg_type(areq->tfm);
	switch (rtype) {
	case CRYPTO_ALG_TYPE_AHASH:
		hreq = ahash_request_cast(areq);
		err = sun4i_ss_hash(hreq);
		break;
	case CRYPTO_ALG_TYPE_ABLKCIPHER:
		breq = ablkcipher_request_cast(areq);
		err = sun4i_ss_cipher(breq);
	}
	crypto_finalize_request(engine, areq, err);
	return 0;
}

> Assuming a single engine only has to deal with one type of request, what
> you could do is create a separate engine type for each crypto type that
> you want to support.

So, if my hardware crypto can handle hash and ciphers, I would need two
engines, and each crypt_one_request()/hash_one_request() would need to
lock the engine. Having only one engine that handles all types avoids
this locking.

Regards

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Thu, Jun 02, 2016 at 11:12:13AM +0200, LABBE Corentin wrote:
> On Thu, Jun 02, 2016 at 04:32:59PM +0800, Herbert Xu wrote:
> > First of all, your patches break bisection, which is unacceptable.
>
> How do I break bisection?

Because the kernel won't compile after your first patch.

Either do it as one single patch or use the more elaborate "new
interface" + "switchover" + "delete old interface" ritual.

> So, if my hardware crypto can handle hash and ciphers, I would need two
> engines, and each crypt_one_request()/hash_one_request() would need to
> lock the engine. Having only one engine that handles all types avoids
> this locking.

OK, then we should add some type-checking as you suggested. What I don't
want is just blind casting by the user of crypto_engine.

Thanks,
On Thu, Jun 02, 2016 at 05:19:40PM +0800, Herbert Xu wrote:
> On Thu, Jun 02, 2016 at 11:12:13AM +0200, LABBE Corentin wrote:
> > How do I break bisection?
>
> Because the kernel won't compile after your first patch.
>
> Either do it as one single patch or use the more elaborate "new
> interface" + "switchover" + "delete old interface" ritual.

Since my patch is small and easy (and only one client is modified), do
you mind if I choose the first one?

> > So, if my hardware crypto can handle hash and ciphers, I would need two
> > engines, and each crypt_one_request()/hash_one_request() would need to
> > lock the engine. Having only one engine that handles all types avoids
> > this locking.
>
> OK, then we should add some type-checking as you suggested. What I don't
> want is just blind casting by the user of crypto_engine.

I will add this type checking in my patch against omap-aes/des.

Regards
On Thu, Jun 02, 2016 at 11:38:35AM +0200, LABBE Corentin wrote:
> Since my patch is small and easy (and only one client is modified), do
> you mind if I choose the first one?

Sure.

> I will add this type checking in my patch against omap-aes/des.

Thanks,
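The type-checked dispatch agreed on in this thread can be sketched in plain userspace C. This is a hedged mock, not the kernel API: every name here (mock_tfm, mock_one_request, the ALG_TYPE_* enum, and so on) is an invented stand-in, and the downcasts are safe only because the generic request is deliberately placed as the first member of each specific request, mirroring how ahash_request and ablkcipher_request embed a crypto_async_request as their base.

```c
/* Userspace sketch of type-checked request dispatch; all names are mocks. */
enum alg_type { ALG_TYPE_AHASH, ALG_TYPE_ABLKCIPHER };

/* Stand-in for crypto_tfm: carries the algorithm type tag. */
struct mock_tfm {
	enum alg_type alg_type;
};

/* Generic request: only what the engine itself needs to know. */
struct mock_async_request {
	struct mock_tfm *tfm;
};

/* Specific requests embed the generic one as their FIRST member,
 * so a pointer to the base is also a valid pointer to the container. */
struct mock_ahash_request {
	struct mock_async_request base;
	int digest_done;
};

struct mock_cipher_request {
	struct mock_async_request base;
	int crypt_done;
};

static struct mock_ahash_request *hash_cast(struct mock_async_request *r)
{
	return (struct mock_ahash_request *)r;
}

static struct mock_cipher_request *cipher_cast(struct mock_async_request *r)
{
	return (struct mock_cipher_request *)r;
}

/* Driver-side one_request handler: inspect the type tag, THEN cast. */
static int mock_one_request(struct mock_async_request *areq)
{
	switch (areq->tfm->alg_type) {
	case ALG_TYPE_AHASH:
		hash_cast(areq)->digest_done = 1;
		return 0;
	case ALG_TYPE_ABLKCIPHER:
		cipher_cast(areq)->crypt_done = 1;
		return 0;
	}
	return -1;	/* unknown type: refuse to blind-cast */
}
```

The point of the sketch is the ordering: the switch on the type tag happens before any cast, which is exactly the check Herbert asks for instead of unconditional casting by every crypto_engine user.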
diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index a55c82d..b658cb8 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -19,7 +19,7 @@
 #define CRYPTO_ENGINE_MAX_QLEN 10
 
 void crypto_finalize_request(struct crypto_engine *engine,
-			     struct ablkcipher_request *req, int err);
+			     struct crypto_async_request *req, int err);
 
 /**
  * crypto_pump_requests - dequeue one request from engine queue to process
@@ -34,7 +34,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 				 bool in_kthread)
 {
 	struct crypto_async_request *async_req, *backlog;
-	struct ablkcipher_request *req;
 	unsigned long flags;
 	bool was_busy = false;
 	int ret;
@@ -82,9 +81,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 	if (!async_req)
 		goto out;
 
-	req = ablkcipher_request_cast(async_req);
-
-	engine->cur_req = req;
+	engine->cur_req = async_req;
 	if (backlog)
 		backlog->complete(backlog, -EINPROGRESS);
 
@@ -142,7 +139,7 @@ static void crypto_pump_work(struct kthread_work *work)
  * @req: the request need to be listed into the engine queue
  */
 int crypto_transfer_request(struct crypto_engine *engine,
-			    struct ablkcipher_request *req, bool need_pump)
+			    struct crypto_async_request *req, bool need_pump)
 {
 	unsigned long flags;
 	int ret;
@@ -154,7 +151,7 @@ int crypto_transfer_request(struct crypto_engine *engine,
 		return -ESHUTDOWN;
 	}
 
-	ret = ablkcipher_enqueue_request(&engine->queue, req);
+	ret = crypto_enqueue_request(&engine->queue, req);
 
 	if (!engine->busy && need_pump)
 		queue_kthread_work(&engine->kworker, &engine->pump_requests);
@@ -171,7 +168,7 @@ EXPORT_SYMBOL_GPL(crypto_transfer_request);
  * @req: the request need to be listed into the engine queue
  */
 int crypto_transfer_request_to_engine(struct crypto_engine *engine,
-				      struct ablkcipher_request *req)
+				      struct crypto_async_request *req)
 {
 	return crypto_transfer_request(engine, req, true);
 }
@@ -184,7 +181,7 @@ EXPORT_SYMBOL_GPL(crypto_transfer_request_to_engine);
  * @err: error number
  */
 void crypto_finalize_request(struct crypto_engine *engine,
-			     struct ablkcipher_request *req, int err)
+			     struct crypto_async_request *req, int err)
 {
 	unsigned long flags;
 	bool finalize_cur_req = false;
@@ -208,7 +205,7 @@ void crypto_finalize_request(struct crypto_engine *engine,
 		spin_unlock_irqrestore(&engine->queue_lock, flags);
 	}
 
-	req->base.complete(&req->base, err);
+	req->complete(req, err);
 
 	queue_kthread_work(&engine->kworker, &engine->pump_requests);
 }
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index eeafd21..d720a2a 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -173,26 +173,26 @@ struct crypto_engine {
 	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
 
 	int (*prepare_request)(struct crypto_engine *engine,
-			       struct ablkcipher_request *req);
+			       struct crypto_async_request *req);
 	int (*unprepare_request)(struct crypto_engine *engine,
-				 struct ablkcipher_request *req);
+				 struct crypto_async_request *req);
 	int (*crypt_one_request)(struct crypto_engine *engine,
-				 struct ablkcipher_request *req);
+				 struct crypto_async_request *req);
 
 	struct kthread_worker kworker;
 	struct task_struct *kworker_task;
 	struct kthread_work pump_requests;
 
 	void *priv_data;
-	struct ablkcipher_request *cur_req;
+	struct crypto_async_request *cur_req;
 };
 
 int crypto_transfer_request(struct crypto_engine *engine,
-			    struct ablkcipher_request *req, bool need_pump);
+			    struct crypto_async_request *req, bool need_pump);
 int crypto_transfer_request_to_engine(struct crypto_engine *engine,
-				      struct ablkcipher_request *req);
+				      struct crypto_async_request *req);
 void crypto_finalize_request(struct crypto_engine *engine,
-			     struct ablkcipher_request *req, int err);
+			     struct crypto_async_request *req, int err);
 int crypto_engine_start(struct crypto_engine *engine);
 int crypto_engine_stop(struct crypto_engine *engine);
 struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
The current crypto engine allows only ablkcipher_request to be enqueued,
which denies any use of it for hardware that also handles hash algorithms.

This patch converts all ablkcipher_request references to the more general
crypto_async_request.

Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
---
 crypto/crypto_engine.c  | 17 +++++++----------
 include/crypto/algapi.h | 14 +++++++-------
 2 files changed, 14 insertions(+), 17 deletions(-)
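The enqueue/pump/finalize flow this patch generalizes can be illustrated with a small userspace mock. This is a hedged sketch under simplified assumptions, not the kernel implementation: the structures below (mock_engine, mock_async_request, etc.) are invented stand-ins with no locking or kthreads, and they only model the part the diff changes, namely that the engine now stores a generic request and completes it directly via req->complete(req, err) instead of reaching through a cipher-specific wrapper.

```c
#include <stddef.h>

struct mock_async_request;
typedef void (*mock_complete_t)(struct mock_async_request *req, int err);

/* Generic request: the completion callback travels with the request,
 * so the engine never needs to know the concrete request type. */
struct mock_async_request {
	mock_complete_t complete;
	struct mock_async_request *next;	/* simple FIFO link */
};

struct mock_engine {
	struct mock_async_request *head, *tail;
	struct mock_async_request *cur_req;	/* generic, as in the patch */
};

/* Example completion that just records the error code it was given. */
static int mock_last_err;
static void mock_record_complete(struct mock_async_request *req, int err)
{
	(void)req;
	mock_last_err = err;
}

/* Analogue of crypto_transfer_request(): enqueue a generic request. */
static void mock_enqueue(struct mock_engine *e, struct mock_async_request *r)
{
	r->next = NULL;
	if (e->tail)
		e->tail->next = r;
	else
		e->head = r;
	e->tail = r;
}

/* Analogue of crypto_pump_requests(): pick one request as cur_req. */
static struct mock_async_request *mock_pump(struct mock_engine *e)
{
	struct mock_async_request *r = e->head;

	if (!r)
		return NULL;
	e->head = r->next;
	if (!e->head)
		e->tail = NULL;
	e->cur_req = r;
	return r;
}

/* Analogue of crypto_finalize_request(): complete the request directly,
 * i.e. req->complete(req, err) rather than req->base.complete(...). */
static void mock_finalize(struct mock_engine *e,
			  struct mock_async_request *r, int err)
{
	if (e->cur_req == r)
		e->cur_req = NULL;
	r->complete(r, err);
}
```

The design point the mock captures: once the queue and the completion callback operate on the base type, the engine core is type-agnostic, and only the driver's one_request handler needs to know (and check) what kind of request it was handed.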