From patchwork Fri Aug 18 11:42:27 2017
X-Patchwork-Submitter: Corentin Labbe
X-Patchwork-Id: 9908391
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Corentin Labbe
To: herbert@gondor.apana.org.au, davem@davemloft.net
Cc: linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, Corentin Labbe
Subject: [PATCH v4 1/1] crypto: engine - Permit to enqueue skcipher request
Date: Fri, 18 Aug 2017 13:42:27 +0200
Message-Id: <20170818114227.15739-2-clabbe.montjoie@gmail.com>
In-Reply-To: <20170818114227.15739-1-clabbe.montjoie@gmail.com>
References: <20170818114227.15739-1-clabbe.montjoie@gmail.com>

The crypto engine can currently only enqueue hash and ablkcipher requests.
This patch permits it to enqueue skcipher requests as well, by adding all
the necessary functions.

Signed-off-by: Corentin Labbe
---
 crypto/crypto_engine.c  | 120 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/crypto/engine.h |  14 ++++++
 2 files changed, 134 insertions(+)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 61e7c4e02fd2..d5ee2a615339 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -36,9 +36,11 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 	struct crypto_async_request *async_req, *backlog;
 	struct ahash_request *hreq;
 	struct ablkcipher_request *breq;
+	struct skcipher_request *skreq;
 	unsigned long flags;
 	bool was_busy = false;
 	int ret, rtype;
+	const struct crypto_type *cratype;
 
 	spin_lock_irqsave(&engine->queue_lock, flags);
 
@@ -123,6 +125,25 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		}
 		return;
 	case CRYPTO_ALG_TYPE_ABLKCIPHER:
+		cratype = engine->cur_req->tfm->__crt_alg->cra_type;
+		if (cratype != &crypto_ablkcipher_type) {
+			skreq = skcipher_request_cast(engine->cur_req);
+			if (engine->prepare_skcipher_request) {
+				ret = engine->prepare_skcipher_request(engine, skreq);
+				if (ret) {
+					dev_err(engine->dev, "failed to prepare request: %d\n",
+						ret);
+					goto req_err;
+				}
+				engine->cur_req_prepared = true;
+			}
+			ret = engine->skcipher_one_request(engine, skreq);
+			if (ret) {
+				dev_err(engine->dev, "failed to cipher one request from queue\n");
+				goto req_err;
+			}
+			return;
+		}
 		breq = ablkcipher_request_cast(engine->cur_req);
 		if (engine->prepare_cipher_request) {
 			ret = engine->prepare_cipher_request(engine, breq);
@@ -151,6 +172,12 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		crypto_finalize_hash_request(engine, hreq, ret);
 		break;
 	case CRYPTO_ALG_TYPE_ABLKCIPHER:
+		cratype = engine->cur_req->tfm->__crt_alg->cra_type;
+		if (cratype != &crypto_ablkcipher_type) {
+			skreq = skcipher_request_cast(engine->cur_req);
+			crypto_finalize_skcipher_request(engine, skreq, ret);
+			break;
+		}
 		breq = ablkcipher_request_cast(engine->cur_req);
 		crypto_finalize_cipher_request(engine, breq, ret);
 		break;
@@ -213,6 +240,49 @@ int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
 EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
 
 /**
+ * crypto_transfer_skcipher_request - transfer the new request into the
+ * engine queue
+ * @engine: the hardware engine
+ * @req: the request that needs to be listed into the engine queue
+ */
+int crypto_transfer_skcipher_request(struct crypto_engine *engine,
+				     struct skcipher_request *req,
+				     bool need_pump)
+{
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&engine->queue_lock, flags);
+
+	if (!engine->running) {
+		spin_unlock_irqrestore(&engine->queue_lock, flags);
+		return -ESHUTDOWN;
+	}
+
+	ret = crypto_enqueue_request(&engine->queue, &req->base);
+
+	if (!engine->busy && need_pump)
+		kthread_queue_work(engine->kworker, &engine->pump_requests);
+
+	spin_unlock_irqrestore(&engine->queue_lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request);
+
+/**
+ * crypto_transfer_skcipher_request_to_engine - transfer one request to list
+ * into the engine queue
+ * @engine: the hardware engine
+ * @req: the request that needs to be listed into the engine queue
+ */
+int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
+					       struct skcipher_request *req)
+{
+	return crypto_transfer_skcipher_request(engine, req, true);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
+
+/**
  * crypto_transfer_hash_request - transfer the new request into the
  * enginequeue
  * @engine: the hardware engine
@@ -292,6 +362,43 @@ void crypto_finalize_cipher_request(struct crypto_engine *engine,
 EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
 
 /**
+ * crypto_finalize_skcipher_request - finalize one request if the request is done
+ * @engine: the hardware engine
+ * @req: the request that needs to be finalized
+ * @err: error number
+ */
+void crypto_finalize_skcipher_request(struct crypto_engine *engine,
+				      struct skcipher_request *req, int err)
+{
+	unsigned long flags;
+	bool finalize_cur_req = false;
+	int ret;
+
+	spin_lock_irqsave(&engine->queue_lock, flags);
+	if (engine->cur_req == &req->base)
+		finalize_cur_req = true;
+	spin_unlock_irqrestore(&engine->queue_lock, flags);
+
+	if (finalize_cur_req) {
+		if (engine->cur_req_prepared &&
+		    engine->unprepare_skcipher_request) {
+			ret = engine->unprepare_skcipher_request(engine, req);
+			if (ret)
+				dev_err(engine->dev, "failed to unprepare request\n");
+		}
+		spin_lock_irqsave(&engine->queue_lock, flags);
+		engine->cur_req = NULL;
+		engine->cur_req_prepared = false;
+		spin_unlock_irqrestore(&engine->queue_lock, flags);
+	}
+
+	req->base.complete(&req->base, err);
+
+	kthread_queue_work(engine->kworker, &engine->pump_requests);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);
+
+/**
  * crypto_finalize_hash_request - finalize one request if the request is done
  * @engine: the hardware engine
  * @req: the request need to be finalized
@@ -345,6 +452,19 @@ int crypto_engine_start(struct crypto_engine *engine)
 		return -EBUSY;
 	}
 
+	if (!engine->skcipher_one_request && !engine->cipher_one_request &&
+	    !engine->hash_one_request) {
+		spin_unlock_irqrestore(&engine->queue_lock, flags);
+		dev_err(engine->dev, "need at least one request type\n");
+		return -EINVAL;
+	}
+
+	if (engine->skcipher_one_request && engine->cipher_one_request) {
+		spin_unlock_irqrestore(&engine->queue_lock, flags);
+		dev_err(engine->dev, "Cannot use both skcipher and ablkcipher\n");
+		return -EINVAL;
+	}
+
 	engine->running = true;
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 
diff --git a/include/crypto/engine.h b/include/crypto/engine.h
index dd04c1699b51..a8f6e6ed377b 100644
--- a/include/crypto/engine.h
+++ b/include/crypto/engine.h
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <crypto/skcipher.h>
 
 #define ENGINE_NAME_LEN	30
 
 /*
@@ -69,12 +70,18 @@ struct crypto_engine {
 			       struct ablkcipher_request *req);
 	int (*unprepare_cipher_request)(struct crypto_engine *engine,
 					struct ablkcipher_request *req);
+	int (*prepare_skcipher_request)(struct crypto_engine *engine,
+					struct skcipher_request *req);
+	int (*unprepare_skcipher_request)(struct crypto_engine *engine,
+					  struct skcipher_request *req);
 	int (*prepare_hash_request)(struct crypto_engine *engine,
 				    struct ahash_request *req);
 	int (*unprepare_hash_request)(struct crypto_engine *engine,
 				      struct ahash_request *req);
 	int (*cipher_one_request)(struct crypto_engine *engine,
 				  struct ablkcipher_request *req);
+	int (*skcipher_one_request)(struct crypto_engine *engine,
+				    struct skcipher_request *req);
 	int (*hash_one_request)(struct crypto_engine *engine,
 				struct ahash_request *req);
 
@@ -90,12 +97,19 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
 				   bool need_pump);
 int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
 					     struct ablkcipher_request *req);
+int crypto_transfer_skcipher_request(struct crypto_engine *engine,
+				     struct skcipher_request *req,
+				     bool need_pump);
+int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
+					       struct skcipher_request *req);
 int crypto_transfer_hash_request(struct crypto_engine *engine,
 				 struct ahash_request *req, bool need_pump);
 int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
 					   struct ahash_request *req);
 void crypto_finalize_cipher_request(struct crypto_engine *engine,
 				    struct ablkcipher_request *req, int err);
+void crypto_finalize_skcipher_request(struct crypto_engine *engine,
+				      struct skcipher_request *req, int err);
 void crypto_finalize_hash_request(struct crypto_engine *engine,
 				  struct ahash_request *req, int err);
 int crypto_engine_start(struct crypto_engine *engine);