From patchwork Tue Oct 4 12:57:20 2016
X-Patchwork-Submitter: Romain Perier
X-Patchwork-Id: 9361815
From: Romain Perier
To: Boris Brezillon, Arnaud Ebalard
Cc: Thomas Petazzoni, Andrew Lunn, Jason Cooper, Ofer Heifetz, Nadav Haklai,
 linux-crypto@vger.kernel.org, Gregory Clement, Herbert Xu,
 "David S. Miller", linux-arm-kernel@lists.infradead.org,
 Sebastian Hesselbarth
Subject: [PATCH v3 2/2] crypto: marvell - Don't break chain for computable
 last ahash requests
Date: Tue, 4 Oct 2016 14:57:20 +0200
Message-Id: <20161004125720.3347-3-romain.perier@free-electrons.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20161004125720.3347-1-romain.perier@free-electrons.com>
References: <20161004125720.3347-1-romain.perier@free-electrons.com>

Currently, the driver breaks the chain for all kinds of hash requests in
order not to override the intermediate states of partial ahash updates.
However, some final ahash requests can be processed directly by the
engine, without any intermediate state. This is typically the case for
most of the HMAC requests processed via IPsec.
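As a rough, stand-alone sketch of the decision described above (not the
marvell/cesa driver code; the names struct hash_req and must_break_chain
are hypothetical), the idea is simply that a one-shot final request has
no intermediate state to preserve, so it does not need to break the DMA
chain:

/* Hypothetical illustration only -- not marvell/cesa driver code. */
#include <stdbool.h>
#include <stdio.h>

struct hash_req {
	bool is_final;       /* no further ->update() expected on this request */
	bool needs_partial;  /* intermediate state must be read back later */
};

/*
 * Only requests that still depend on intermediate state force the
 * chain to be broken; a computable final request (e.g. a one-shot
 * HMAC) can stay chained with the surrounding DMA descriptors.
 */
static bool must_break_chain(const struct hash_req *req)
{
	return !req->is_final || req->needs_partial;
}

int main(void)
{
	struct hash_req hmac_oneshot = { .is_final = true, .needs_partial = false };
	struct hash_req partial_update = { .is_final = false, .needs_partial = true };

	printf("one-shot HMAC breaks chain: %d\n", must_break_chain(&hmac_oneshot));
	printf("partial update breaks chain: %d\n", must_break_chain(&partial_update));
	return 0;
}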
This commit adds a TDMA descriptor to copy the context of such requests
into the "op" DMA pool, which then allows these requests to be chained
at the DMA level. The 'complete' operation is also updated to retrieve
the MAC digest from the right location.

Signed-off-by: Romain Perier
---
Changes in v3:
- Copy the whole context back to RAM and not just the digest. Also fixed
  a rebase issue ^^ (whoops)

Changes in v2:
- Replaced BUG_ON by an error
- Added a variable "break_chain", with "type", to break the chain for
  ahash requests. It improves code readability.

(A simplified, stand-alone sketch of the new completion path is appended
after the diff as an illustration.)

 drivers/crypto/marvell/hash.c | 79 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 64 insertions(+), 15 deletions(-)

diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
index 9f28468..b36f196 100644
--- a/drivers/crypto/marvell/hash.c
+++ b/drivers/crypto/marvell/hash.c
@@ -312,24 +312,53 @@ static void mv_cesa_ahash_complete(struct crypto_async_request *req)
 	int i;
 
 	digsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(ahashreq));
-	for (i = 0; i < digsize / 4; i++)
-		creq->state[i] = readl_relaxed(engine->regs + CESA_IVDIG(i));
 
-	if (creq->last_req) {
+	if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ &&
+	    !(creq->base.chain.last->flags & CESA_TDMA_BREAK_CHAIN)) {
+		struct mv_cesa_tdma_desc *tdma = NULL;
+		__le32 *data = NULL;
+
+		for (tdma = creq->base.chain.first; tdma; tdma = tdma->next) {
+			u32 type = tdma->flags & CESA_TDMA_TYPE_MSK;
+			if (type == CESA_TDMA_RESULT)
+				break;
+		}
+
+		if (!tdma) {
+			dev_err(cesa_dev->dev, "Failed to retrieve tdma "
+				"descriptor for outer data\n");
+			return;
+		}
+
 		/*
-		 * Hardware's MD5 digest is in little endian format, but
-		 * SHA in big endian format
+		 * Result is already in the correct endianess when the SA is
+		 * used
 		 */
-		if (creq->algo_le) {
-			__le32 *result = (void *)ahashreq->result;
+		data = tdma->op->ctx.hash.hash;
+		for (i = 0; i < digsize / 4; i++)
+			creq->state[i] = cpu_to_le32(data[i]);
 
-			for (i = 0; i < digsize / 4; i++)
-				result[i] = cpu_to_le32(creq->state[i]);
-		} else {
-			__be32 *result = (void *)ahashreq->result;
+		memcpy(ahashreq->result, data, digsize);
+	} else {
+		for (i = 0; i < digsize / 4; i++)
+			creq->state[i] = readl_relaxed(engine->regs +
+						       CESA_IVDIG(i));
+		if (creq->last_req) {
+			/*
+			 * Hardware's MD5 digest is in little endian format, but
+			 * SHA in big endian format
+			 */
+			if (creq->algo_le) {
+				__le32 *result = (void *)ahashreq->result;
+
+				for (i = 0; i < digsize / 4; i++)
+					result[i] = cpu_to_le32(creq->state[i]);
+			} else {
+				__be32 *result = (void *)ahashreq->result;
 
-			for (i = 0; i < digsize / 4; i++)
-				result[i] = cpu_to_be32(creq->state[i]);
+				for (i = 0; i < digsize / 4; i++)
+					result[i] = cpu_to_be32(creq->state[i]);
+			}
 		}
 	}
 
@@ -504,6 +533,12 @@ mv_cesa_ahash_dma_last_req(struct mv_cesa_tdma_chain *chain,
 			      CESA_SA_DESC_CFG_LAST_FRAG,
 			      CESA_SA_DESC_CFG_FRAG_MSK);
 
+	ret = mv_cesa_dma_add_result_op(chain,
+					CESA_SA_CFG_SRAM_OFFSET,
+					CESA_SA_DATA_SRAM_OFFSET,
+					CESA_TDMA_SRC_IN_SRAM, flags);
+	if (ret)
+		return ERR_PTR(-ENOMEM);
 	return op;
 }
 
@@ -564,6 +599,8 @@ static int mv_cesa_ahash_dma_req_init(struct ahash_request *req)
 	struct mv_cesa_op_ctx *op = NULL;
 	unsigned int frag_len;
 	int ret;
+	u32 type;
+	bool break_chain = true;
 
 	basereq->chain.first = NULL;
 	basereq->chain.last = NULL;
@@ -635,6 +672,16 @@ static int mv_cesa_ahash_dma_req_init(struct ahash_request *req)
 		goto err_free_tdma;
 	}
 
+	/*
+	 * If results are copied via DMA, this means that this
+	 * request can be directly processed by the engine,
+	 * without partial updates. So we can chain it at the
+	 * DMA level with other requests.
+	 */
+	type = basereq->chain.last->flags & CESA_TDMA_TYPE_MSK;
+	if (type == CESA_TDMA_RESULT)
+		break_chain = false;
+
 	if (op) {
 		/* Add dummy desc to wait for crypto operation end */
 		ret = mv_cesa_dma_add_dummy_end(&basereq->chain, flags);
@@ -648,8 +695,10 @@ static int mv_cesa_ahash_dma_req_init(struct ahash_request *req)
 	else
 		creq->cache_ptr = 0;
 
-	basereq->chain.last->flags |= (CESA_TDMA_END_OF_REQ |
-				       CESA_TDMA_BREAK_CHAIN);
+	basereq->chain.last->flags |= CESA_TDMA_END_OF_REQ;
+
+	if (break_chain)
+		basereq->chain.last->flags |= CESA_TDMA_BREAK_CHAIN;
 
 	return 0;
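As referenced above, here is a simplified, stand-alone sketch of the new
completion path: once a result descriptor has copied the hash context
back to memory, the digest can be taken from that buffer with a plain
memcpy() instead of being read word by word from the IV/digest
registers. The names below (struct op_ctx, complete_from_dma,
DIGEST_WORDS) are hypothetical and only illustrate the idea; they are
not the driver's API.

/* Hypothetical illustration only -- not marvell/cesa driver code. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DIGEST_WORDS 5	/* e.g. SHA-1: 20 bytes */

/* Pretend "op context" that a DMA result descriptor has filled in memory. */
struct op_ctx {
	uint32_t hash[DIGEST_WORDS];
};

/*
 * Completion when the engine already copied the context back via DMA:
 * the digest is taken from memory rather than from MMIO registers.
 */
static void complete_from_dma(const struct op_ctx *ctx, uint8_t *result,
			      size_t digsize)
{
	memcpy(result, ctx->hash, digsize);
}

int main(void)
{
	struct op_ctx ctx = { .hash = { 1, 2, 3, 4, 5 } };
	uint8_t result[DIGEST_WORDS * 4];

	complete_from_dma(&ctx, result, sizeof(result));
	printf("first digest byte: %u\n", result[0]);
	return 0;
}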