From patchwork Sun Oct 18 16:24:16 2015
X-Patchwork-Submitter: Russell King
X-Patchwork-Id: 7430931
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Russell King
To: Boris Brezillon, Arnaud Ebalard, Thomas Petazzoni, Jason Cooper
Cc: Herbert Xu, "David S. Miller", linux-crypto@vger.kernel.org
Subject: [PATCH 10/18] crypto: marvell: move mv_cesa_dma_add_frag() calls
Date: Sun, 18 Oct 2015 17:24:16 +0100
In-Reply-To: <20151018161649.GA6651@n2100.arm.linux.org.uk>
References: <20151018161649.GA6651@n2100.arm.linux.org.uk>

Move the calls to mv_cesa_dma_add_frag() into the parent function,
mv_cesa_ahash_dma_req_init(). This is in preparation for changing when
we generate the operation blocks, as we need to avoid generating a
block for a partial hash block at the end of the user data.
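To illustrate the shape of the change, here is a minimal, compilable C
sketch of moving op-block generation from a helper into its caller.
This is not the driver code: struct chain, add_data_transfer(),
add_frag() and add_cache() are illustrative stand-ins for the real
TDMA helpers, and the lengths are made up.

#include <stdio.h>

struct chain { int nops; };

/* Illustrative stand-in: queue a DMA transfer into SRAM. */
static int add_data_transfer(struct chain *c, unsigned int len)
{
	(void)c;
	printf("queue DMA transfer of %u bytes into SRAM\n", len);
	return 0;
}

/* Illustrative stand-in: emit one operation block. */
static void add_frag(struct chain *c, unsigned int len)
{
	c->nops++;
	printf("emit operation block covering %u bytes\n", len);
}

/*
 * After the change, the helper only queues the cached (left-over)
 * data; it no longer decides whether an operation block is emitted.
 */
static int add_cache(struct chain *c, unsigned int cache_len)
{
	if (!cache_len)
		return 0;
	return add_data_transfer(c, cache_len);
}

int main(void)
{
	struct chain c = { 0 };
	unsigned int cache_len = 16;	/* left-over bytes from before */
	unsigned int op_len = 0;	/* no further data this time */

	if (add_cache(&c, cache_len))
		return 1;

	/*
	 * The parent now owns the decision: only generate a block for
	 * the cache when there is no more data to process, which is
	 * what makes it possible to later skip a block for a trailing
	 * partial hash block.
	 */
	if (cache_len && !op_len)
		add_frag(&c, cache_len);

	return 0;
}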
Signed-off-by: Russell King
---
 drivers/crypto/marvell/hash.c | 71 ++++++++++++++++++-------------------------
 1 file changed, 29 insertions(+), 42 deletions(-)

diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
index f567243da005..787cec1715ed 100644
--- a/drivers/crypto/marvell/hash.c
+++ b/drivers/crypto/marvell/hash.c
@@ -499,51 +499,23 @@ mv_cesa_dma_add_frag(struct mv_cesa_tdma_chain *chain,
 	return op;
 }
 
-static struct mv_cesa_op_ctx *
+static int
 mv_cesa_ahash_dma_add_cache(struct mv_cesa_tdma_chain *chain,
 			    struct mv_cesa_ahash_dma_iter *dma_iter,
 			    struct mv_cesa_ahash_req *creq,
 			    gfp_t flags)
 {
 	struct mv_cesa_ahash_dma_req *ahashdreq = &creq->req.dma;
-	struct mv_cesa_op_ctx *op = NULL;
-	int ret;
 
 	if (!creq->cache_ptr)
-		return NULL;
-
-	ret = mv_cesa_dma_add_data_transfer(chain,
-					    CESA_SA_DATA_SRAM_OFFSET,
-					    ahashdreq->cache_dma,
-					    creq->cache_ptr,
-					    CESA_TDMA_DST_IN_SRAM,
-					    flags);
-	if (ret)
-		return ERR_PTR(ret);
-
-	if (!dma_iter->base.op_len)
-		op = mv_cesa_dma_add_frag(chain, &creq->op_tmpl,
-					  creq->cache_ptr, flags);
-
-	return op;
-}
-
-static struct mv_cesa_op_ctx *
-mv_cesa_ahash_dma_add_data(struct mv_cesa_tdma_chain *chain,
-			   struct mv_cesa_ahash_dma_iter *dma_iter,
-			   struct mv_cesa_ahash_req *creq,
-			   gfp_t flags)
-{
-	int ret;
-
-	/* Add input transfers */
-	ret = mv_cesa_dma_add_op_transfers(chain, &dma_iter->base,
-					   &dma_iter->src, flags);
-	if (ret)
-		return ERR_PTR(ret);
+		return 0;
 
-	return mv_cesa_dma_add_frag(chain, &creq->op_tmpl, dma_iter->base.op_len,
-				    flags);
+	return mv_cesa_dma_add_data_transfer(chain,
+					     CESA_SA_DATA_SRAM_OFFSET,
+					     ahashdreq->cache_dma,
+					     creq->cache_ptr,
+					     CESA_TDMA_DST_IN_SRAM,
+					     flags);
 }
 
 static struct mv_cesa_op_ctx *
@@ -647,19 +619,34 @@ static int mv_cesa_ahash_dma_req_init(struct ahash_request *req)
 	mv_cesa_tdma_desc_iter_init(&chain);
 	mv_cesa_ahash_req_iter_init(&iter, req);
 
-	op = mv_cesa_ahash_dma_add_cache(&chain, &iter,
-					 creq, flags);
-	if (IS_ERR(op)) {
-		ret = PTR_ERR(op);
+	/*
+	 * Add the cache (left-over data from a previous block) first.
+	 * This will never overflow the SRAM size.
+	 */
+	ret = mv_cesa_ahash_dma_add_cache(&chain, &iter, creq, flags);
+	if (ret)
 		goto err_free_tdma;
+
+	if (creq->cache_ptr && !iter.base.op_len) {
+		op = mv_cesa_dma_add_frag(&chain, &creq->op_tmpl,
+					  creq->cache_ptr, flags);
+		if (IS_ERR(op)) {
+			ret = PTR_ERR(op);
+			goto err_free_tdma;
+		}
 	}
 
 	do {
 		if (!iter.base.op_len)
 			break;
 
-		op = mv_cesa_ahash_dma_add_data(&chain, &iter,
-						creq, flags);
+		ret = mv_cesa_dma_add_op_transfers(&chain, &iter.base,
+						   &iter.src, flags);
+		if (ret)
+			goto err_free_tdma;
+
+		op = mv_cesa_dma_add_frag(&chain, &creq->op_tmpl,
+					  iter.base.op_len, flags);
 		if (IS_ERR(op)) {
 			ret = PTR_ERR(op);
 			goto err_free_tdma;
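For reference, a hedged sketch of the resulting control flow in the
main loop of mv_cesa_ahash_dma_req_init(). The struct iter type,
op_len() helper and lengths below are simplified, hypothetical
stand-ins; the real iterators, SRAM offsets and error paths are
omitted.

#include <stdio.h>

/* Hypothetical, simplified model of the request iterator. */
struct iter { unsigned int remaining, sram_step; };

/* Length the next operation covers; 0 once the data is consumed. */
static unsigned int op_len(const struct iter *it)
{
	return it->remaining < it->sram_step ? it->remaining : it->sram_step;
}

int main(void)
{
	struct iter it = { .remaining = 300, .sram_step = 128 };
	unsigned int len;

	/*
	 * With the patch applied, the parent both queues the input
	 * transfers and calls mv_cesa_dma_add_frag() for each chunk,
	 * instead of delegating the frag call to the now-removed
	 * mv_cesa_ahash_dma_add_data() helper.
	 */
	while ((len = op_len(&it)) != 0) {
		printf("add op transfers for %u bytes\n", len);
		printf("add frag op for %u bytes\n", len);
		it.remaining -= len;
	}
	return 0;
}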