From patchwork Wed Sep 23 11:55:28 2015
X-Patchwork-Submitter: Corentin Labbe
X-Patchwork-Id: 7249431
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: LABBE Corentin
To: herbert@gondor.apana.org.au, davem@davemloft.net
Cc: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, LABBE Corentin
Subject: [PATCH 4/4] crypto: sahara: dma_map_sg can handle chained SG
Date: Wed, 23 Sep 2015 13:55:28 +0200
Message-Id: <1443009328-12478-5-git-send-email-clabbe.montjoie@gmail.com>
In-Reply-To: <1443009328-12478-1-git-send-email-clabbe.montjoie@gmail.com>
References: <1443009328-12478-1-git-send-email-clabbe.montjoie@gmail.com>
List-ID: <linux-crypto.vger.kernel.org>

The sahara driver uses two dma_map_sg() paths depending on whether the
scatterlist is chained or not. Since dma_map_sg() can handle both cases,
clean up the code by removing all references to chained SG, which also
removes the sahara_sha_unmap_sg() function.
Signed-off-by: LABBE Corentin
---
 drivers/crypto/sahara.c | 66 ++++++++++---------------------------------------
 1 file changed, 13 insertions(+), 53 deletions(-)

diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
index cea2411..804c0f5 100644
--- a/drivers/crypto/sahara.c
+++ b/drivers/crypto/sahara.c
@@ -173,7 +173,6 @@ struct sahara_aes_reqctx {
  * @sg_in_idx: number of hw links
  * @in_sg: scatterlist for input data
  * @in_sg_chain: scatterlists for chained input data
- * @in_sg_chained: specifies if chained scatterlists are used or not
  * @total: total number of bytes for transfer
  * @last: is this the last block
  * @first: is this the first block
@@ -191,7 +190,6 @@ struct sahara_sha_reqctx {
         unsigned int            sg_in_idx;
         struct scatterlist      *in_sg;
         struct scatterlist      in_sg_chain[2];
-        bool                    in_sg_chained;
         size_t                  total;
         unsigned int            last;
         unsigned int            first;
@@ -801,38 +799,19 @@ static int sahara_sha_hw_links_create(struct sahara_dev *dev,
                 return -EINVAL;
         }
 
-        if (rctx->in_sg_chained) {
-                i = start;
-                sg = dev->in_sg;
-                while (sg) {
-                        ret = dma_map_sg(dev->device, sg, 1,
-                                         DMA_TO_DEVICE);
-                        if (!ret)
-                                return -EFAULT;
-
-                        dev->hw_link[i]->len = sg->length;
-                        dev->hw_link[i]->p = sg->dma_address;
+        sg = dev->in_sg;
+        ret = dma_map_sg(dev->device, dev->in_sg, dev->nb_in_sg, DMA_TO_DEVICE);
+        if (!ret)
+                return -EFAULT;
+
+        for (i = start; i < dev->nb_in_sg + start; i++) {
+                dev->hw_link[i]->len = sg->length;
+                dev->hw_link[i]->p = sg->dma_address;
+                if (i == (dev->nb_in_sg + start - 1)) {
+                        dev->hw_link[i]->next = 0;
+                } else {
                         dev->hw_link[i]->next = dev->hw_phys_link[i + 1];
                         sg = sg_next(sg);
-                        i += 1;
-                }
-                dev->hw_link[i-1]->next = 0;
-        } else {
-                sg = dev->in_sg;
-                ret = dma_map_sg(dev->device, dev->in_sg, dev->nb_in_sg,
-                                 DMA_TO_DEVICE);
-                if (!ret)
-                        return -EFAULT;
-
-                for (i = start; i < dev->nb_in_sg + start; i++) {
-                        dev->hw_link[i]->len = sg->length;
-                        dev->hw_link[i]->p = sg->dma_address;
-                        if (i == (dev->nb_in_sg + start - 1)) {
-                                dev->hw_link[i]->next = 0;
-                        } else {
-                                dev->hw_link[i]->next = dev->hw_phys_link[i + 1];
-                                sg = sg_next(sg);
-                        }
                 }
         }
 
@@ -980,7 +959,6 @@ static int sahara_sha_prepare_request(struct ahash_request *req)
 
                 rctx->total = req->nbytes + rctx->buf_cnt;
                 rctx->in_sg = rctx->in_sg_chain;
-                rctx->in_sg_chained = true;
                 req->src = rctx->in_sg_chain;
         /* only data from previous operation */
         } else if (rctx->buf_cnt) {
@@ -991,13 +969,11 @@ static int sahara_sha_prepare_request(struct ahash_request *req)
                 /* buf was copied into rembuf above */
                 sg_init_one(rctx->in_sg, rctx->rembuf, rctx->buf_cnt);
                 rctx->total = rctx->buf_cnt;
-                rctx->in_sg_chained = false;
         /* no data from previous operation */
         } else {
                 rctx->in_sg = req->src;
                 rctx->total = req->nbytes;
                 req->src = rctx->in_sg;
-                rctx->in_sg_chained = false;
         }
 
         /* on next call, we only have the remaining data in the buffer */
@@ -1006,23 +982,6 @@ static int sahara_sha_prepare_request(struct ahash_request *req)
         return -EINPROGRESS;
 }
 
-static void sahara_sha_unmap_sg(struct sahara_dev *dev,
-                                struct sahara_sha_reqctx *rctx)
-{
-        struct scatterlist *sg;
-
-        if (rctx->in_sg_chained) {
-                sg = dev->in_sg;
-                while (sg) {
-                        dma_unmap_sg(dev->device, sg, 1, DMA_TO_DEVICE);
-                        sg = sg_next(sg);
-                }
-        } else {
-                dma_unmap_sg(dev->device, dev->in_sg, dev->nb_in_sg,
-                             DMA_TO_DEVICE);
-        }
-}
-
 static int sahara_sha_process(struct ahash_request *req)
 {
         struct sahara_dev *dev = dev_ptr;
@@ -1062,7 +1021,8 @@ static int sahara_sha_process(struct ahash_request *req)
         }
 
         if (rctx->sg_in_idx)
-                sahara_sha_unmap_sg(dev, rctx);
+                dma_unmap_sg(dev->device, dev->in_sg, dev->nb_in_sg,
+                             DMA_TO_DEVICE);
 
         memcpy(rctx->context, dev->context_base, rctx->context_size);
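
As the commit message notes, dma_map_sg() copes with chained scatterlists on
its own. For readers less familiar with the scatterlist API, here is a minimal,
illustrative sketch of the single-path pattern the patch switches to; it is not
taken from the driver, and the function name map_and_walk() and its parameters
are invented for the example. The idea is: map once, then walk the mapped
entries with for_each_sg(), which follows chain links via sg_next().

/* Illustrative sketch only -- not part of the patch. */
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/scatterlist.h>

static int map_and_walk(struct device *dev, struct scatterlist *sgl, int nents)
{
        struct scatterlist *sg;
        int i, mapped;

        /* One call handles flat and chained scatterlists alike. */
        mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
        if (!mapped)
                return -EFAULT;

        /* for_each_sg() is built on sg_next(), so chain links are followed. */
        for_each_sg(sgl, sg, mapped, i)
                pr_debug("entry %d: dma %pad len %u\n",
                         i, &sg_dma_address(sg), sg_dma_len(sg));

        /* Unmap with the same nents that was passed to dma_map_sg(). */
        dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
        return 0;
}

sahara_sha_hw_links_create() in the patch follows the same shape, filling the
hardware link descriptors instead of printing, and sahara_sha_process() does
the matching dma_unmap_sg() once the operation completes.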