From patchwork Mon Feb 28 19:40:30 2022
X-Patchwork-Submitter: Corentin LABBE
X-Patchwork-Id: 12763686
From: Corentin Labbe
To: heiko@sntech.de, herbert@gondor.apana.org.au, krzysztof.kozlowski@canonical.com, robh+dt@kernel.org
Cc: devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rockchip@lists.infradead.org, Corentin Labbe
Subject: [PATCH 09/16] crypto: rockchip: remove non-aligned handling
Date: Mon, 28 Feb 2022 19:40:30 +0000
Message-Id: <20220228194037.1600509-10-clabbe@baylibre.com>
In-Reply-To: <20220228194037.1600509-1-clabbe@baylibre.com>
References: <20220228194037.1600509-1-clabbe@baylibre.com>
X-Mailer: git-send-email 2.25.1

Now that the driver has a fallback for unaligned cases, remove all the
code handling those cases.

Fixes: ce0183cb6464b ("crypto: rockchip - switch to skcipher API")
Signed-off-by: Corentin Labbe
---
 drivers/crypto/rockchip/rk3288_crypto.c       | 69 +++++--------------
 drivers/crypto/rockchip/rk3288_crypto.h       |  4 --
 drivers/crypto/rockchip/rk3288_crypto_ahash.c | 23 ++-----
 .../crypto/rockchip/rk3288_crypto_skcipher.c  | 40 +++---------
 4 files changed, 31 insertions(+), 105 deletions(-)

diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c
index 4cff49b82983..b3db096e2ec2 100644
--- a/drivers/crypto/rockchip/rk3288_crypto.c
+++ b/drivers/crypto/rockchip/rk3288_crypto.c
@@ -88,63 +88,26 @@ static int rk_load_data(struct rk_crypto_info *dev,
 {
 	unsigned int count;
 
-	dev->aligned = dev->aligned ?
-		check_alignment(sg_src, sg_dst, dev->align_size) :
-		dev->aligned;
-	if (dev->aligned) {
-		count = min(dev->left_bytes, sg_src->length);
-		dev->left_bytes -= count;
-
-		if (!dma_map_sg(dev->dev, sg_src, 1, DMA_TO_DEVICE)) {
-			dev_err(dev->dev, "[%s:%d] dma_map_sg(src) error\n",
+	count = min(dev->left_bytes, sg_src->length);
+	dev->left_bytes -= count;
+
+	if (!dma_map_sg(dev->dev, sg_src, 1, DMA_TO_DEVICE)) {
+		dev_err(dev->dev, "[%s:%d] dma_map_sg(src) error\n",
 				__func__, __LINE__);
-			return -EINVAL;
-		}
-		dev->addr_in = sg_dma_address(sg_src);
+		return -EINVAL;
+	}
+	dev->addr_in = sg_dma_address(sg_src);
 
-		if (sg_dst) {
-			if (!dma_map_sg(dev->dev, sg_dst, 1, DMA_FROM_DEVICE)) {
-				dev_err(dev->dev,
+	if (sg_dst) {
+		if (!dma_map_sg(dev->dev, sg_dst, 1, DMA_FROM_DEVICE)) {
+			dev_err(dev->dev,
 				"[%s:%d] dma_map_sg(dst) error\n",
 				__func__, __LINE__);
-				dma_unmap_sg(dev->dev, sg_src, 1,
-					     DMA_TO_DEVICE);
-				return -EINVAL;
-			}
-			dev->addr_out = sg_dma_address(sg_dst);
-		}
-	} else {
-		count = (dev->left_bytes > PAGE_SIZE) ?
-			PAGE_SIZE : dev->left_bytes;
-
-		if (!sg_pcopy_to_buffer(dev->first, dev->src_nents,
-					dev->addr_vir, count,
-					dev->total - dev->left_bytes)) {
-			dev_err(dev->dev, "[%s:%d] pcopy err\n",
-				__func__, __LINE__);
+			dma_unmap_sg(dev->dev, sg_src, 1,
+				     DMA_TO_DEVICE);
 			return -EINVAL;
 		}
-		dev->left_bytes -= count;
-		sg_init_one(&dev->sg_tmp, dev->addr_vir, count);
-		if (!dma_map_sg(dev->dev, &dev->sg_tmp, 1, DMA_TO_DEVICE)) {
-			dev_err(dev->dev, "[%s:%d] dma_map_sg(sg_tmp) error\n",
-				__func__, __LINE__);
-			return -ENOMEM;
-		}
-		dev->addr_in = sg_dma_address(&dev->sg_tmp);
-
-		if (sg_dst) {
-			if (!dma_map_sg(dev->dev, &dev->sg_tmp, 1,
-					DMA_FROM_DEVICE)) {
-				dev_err(dev->dev,
-					"[%s:%d] dma_map_sg(sg_tmp) error\n",
-					__func__, __LINE__);
-				dma_unmap_sg(dev->dev, &dev->sg_tmp, 1,
-					     DMA_TO_DEVICE);
-				return -ENOMEM;
-			}
-			dev->addr_out = sg_dma_address(&dev->sg_tmp);
-		}
+		dev->addr_out = sg_dma_address(sg_dst);
 	}
 	dev->count = count;
 	return 0;
@@ -154,11 +117,11 @@ static void rk_unload_data(struct rk_crypto_info *dev)
 {
 	struct scatterlist *sg_in, *sg_out;
 
-	sg_in = dev->aligned ? dev->sg_src : &dev->sg_tmp;
+	sg_in = dev->sg_src;
 	dma_unmap_sg(dev->dev, sg_in, 1, DMA_TO_DEVICE);
 
 	if (dev->sg_dst) {
-		sg_out = dev->aligned ? dev->sg_dst : &dev->sg_tmp;
+		sg_out = dev->sg_dst;
 		dma_unmap_sg(dev->dev, sg_out, 1, DMA_FROM_DEVICE);
 	}
 }
diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h
index 826508e4a0c3..d35b84071935 100644
--- a/drivers/crypto/rockchip/rk3288_crypto.h
+++ b/drivers/crypto/rockchip/rk3288_crypto.h
@@ -204,12 +204,8 @@ struct rk_crypto_info {
 	/* the public variable */
 	struct scatterlist *sg_src;
 	struct scatterlist *sg_dst;
-	struct scatterlist sg_tmp;
 	struct scatterlist *first;
 	unsigned int left_bytes;
-	void *addr_vir;
-	int aligned;
-	int align_size;
 	size_t src_nents;
 	size_t dst_nents;
 	unsigned int total;
diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
index 417cf4487514..6030c03a1726 100644
--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
@@ -236,8 +236,6 @@ static int rk_ahash_start(struct rk_crypto_info *dev)
 
 	dev->total = req->nbytes;
 	dev->left_bytes = req->nbytes;
-	dev->aligned = 0;
-	dev->align_size = 4;
 	dev->sg_dst = NULL;
 	dev->sg_src = req->src;
 	dev->first = req->src;
@@ -272,15 +270,13 @@ static int rk_ahash_crypto_rx(struct rk_crypto_info *dev)
 
 	dev->unload_data(dev);
 	if (dev->left_bytes) {
-		if (dev->aligned) {
-			if (sg_is_last(dev->sg_src)) {
-				dev_warn(dev->dev, "[%s:%d], Lack of data\n",
-					__func__, __LINE__);
-				err = -ENOMEM;
-				goto out_rx;
-			}
-			dev->sg_src = sg_next(dev->sg_src);
+		if (sg_is_last(dev->sg_src)) {
+			dev_warn(dev->dev, "[%s:%d], Lack of data\n",
+				 __func__, __LINE__);
+			err = -ENOMEM;
+			goto out_rx;
 		}
+		dev->sg_src = sg_next(dev->sg_src);
 		err = rk_ahash_set_data_start(dev);
 	} else {
 		/*
@@ -318,11 +314,6 @@ static int rk_cra_hash_init(struct crypto_tfm *tfm)
 	algt = container_of(alg, struct rk_crypto_tmp, alg.hash);
 
 	tctx->dev = algt->dev;
-	tctx->dev->addr_vir = (void *)__get_free_page(GFP_KERNEL);
-	if (!tctx->dev->addr_vir) {
-		dev_err(tctx->dev->dev, "failed to kmalloc for addr_vir\n");
-		return -ENOMEM;
-	}
 	tctx->dev->start = rk_ahash_start;
 	tctx->dev->update = rk_ahash_crypto_rx;
 	tctx->dev->complete = rk_ahash_crypto_complete;
@@ -344,8 +335,6 @@ static int rk_cra_hash_init(struct crypto_tfm *tfm)
 static void rk_cra_hash_exit(struct crypto_tfm *tfm)
 {
 	struct rk_ahash_ctx *tctx = crypto_tfm_ctx(tfm);
-
-	free_page((unsigned long)tctx->dev->addr_vir);
 }
 
 struct rk_crypto_tmp rk_ahash_sha1 = {
diff --git a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
index cc817d361fda..cef9621ac189 100644
--- a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+++ b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
@@ -356,7 +356,6 @@ static int rk_ablk_start(struct rk_crypto_info *dev)
 	dev->src_nents = sg_nents(req->src);
 	dev->sg_dst = req->dst;
 	dev->dst_nents = sg_nents(req->dst);
-	dev->aligned = 1;
 
 	spin_lock_irqsave(&dev->lock, flags);
 	rk_ablk_hw_init(dev);
@@ -376,13 +375,9 @@ static void rk_iv_copyback(struct rk_crypto_info *dev)
 
 	/* Update the IV buffer to contain the next IV for encryption mode. */
 	if (!(rctx->mode & RK_CRYPTO_DEC)) {
-		if (dev->aligned) {
-			memcpy(req->iv, sg_virt(dev->sg_dst) +
-				dev->sg_dst->length - ivsize, ivsize);
-		} else {
-			memcpy(req->iv, dev->addr_vir +
-				dev->count - ivsize, ivsize);
-		}
+		memcpy(req->iv,
+		       sg_virt(dev->sg_dst) + dev->sg_dst->length - ivsize,
+		       ivsize);
 	}
 }
 
@@ -420,27 +415,16 @@ static int rk_ablk_rx(struct rk_crypto_info *dev)
 		skcipher_request_cast(dev->async_req);
 
 	dev->unload_data(dev);
-	if (!dev->aligned) {
-		if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents,
-					  dev->addr_vir, dev->count,
-					  dev->total - dev->left_bytes -
-					  dev->count)) {
-			err = -EINVAL;
-			goto out_rx;
-		}
-	}
 	if (dev->left_bytes) {
 		rk_update_iv(dev);
-		if (dev->aligned) {
-			if (sg_is_last(dev->sg_src)) {
-				dev_err(dev->dev, "[%s:%d] Lack of data\n",
+		if (sg_is_last(dev->sg_src)) {
+			dev_err(dev->dev, "[%s:%d] Lack of data\n",
				__func__, __LINE__);
-				err = -ENOMEM;
-				goto out_rx;
-			}
-			dev->sg_src = sg_next(dev->sg_src);
-			dev->sg_dst = sg_next(dev->sg_dst);
+			err = -ENOMEM;
+			goto out_rx;
 		}
+		dev->sg_src = sg_next(dev->sg_src);
+		dev->sg_dst = sg_next(dev->sg_dst);
 		err = rk_set_data_start(dev);
 	} else {
 		rk_iv_copyback(dev);
@@ -462,13 +446,9 @@ static int rk_ablk_init_tfm(struct crypto_skcipher *tfm)
 	algt = container_of(alg, struct rk_crypto_tmp, alg.skcipher);
 
 	ctx->dev = algt->dev;
-	ctx->dev->align_size = crypto_tfm_alg_alignmask(crypto_skcipher_tfm(tfm)) + 1;
 	ctx->dev->start = rk_ablk_start;
 	ctx->dev->update = rk_ablk_rx;
 	ctx->dev->complete = rk_crypto_complete;
-	ctx->dev->addr_vir = (char *)__get_free_page(GFP_KERNEL);
-	if (!ctx->dev->addr_vir)
-		return -ENOMEM;
 
 	ctx->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ctx->fallback_tfm)) {
@@ -490,8 +470,6 @@ static int rk_ablk_init_tfm(struct crypto_skcipher *tfm)
 static void rk_ablk_exit_tfm(struct crypto_skcipher *tfm)
 {
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	free_page((unsigned long)ctx->dev->addr_vir);
 }
 
 struct rk_crypto_tmp rk_ecb_aes_alg = {
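
Note (illustrative, not part of the patch): the copy-through-a-bounce-page path removed above is only safe to drop because requests the hardware cannot handle are handed to the software skcipher allocated with CRYPTO_ALG_NEED_FALLBACK in rk_ablk_init_tfm(). The sketch below shows the generic crypto API pattern for such a fallback dispatch under assumed names; everything suffixed _sketch (the context struct, the alignment check, the dispatch helper) is hypothetical and not taken from this series.

/*
 * Minimal sketch of a fallback dispatch for a hardware skcipher driver.
 * Requests whose scatterlists are unsuitable for the engine are re-issued
 * on the fallback tfm; suitable ones would go to the hardware queue.
 * All *_sketch names are hypothetical.
 */
#include <crypto/skcipher.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/scatterlist.h>

struct rk_cipher_ctx_sketch {
	struct crypto_skcipher *fallback_tfm;
	struct skcipher_request fallback_req;	/* must be last member */
};

/* True when every segment is 32-bit aligned and a multiple of the block size. */
static bool rk_cipher_sg_ok_sketch(struct scatterlist *sg, unsigned int bs)
{
	while (sg) {
		if (!IS_ALIGNED(sg->offset, sizeof(u32)) || (sg->length % bs))
			return false;
		sg = sg_next(sg);
	}
	return true;
}

static int rk_cipher_handle_req_sketch(struct skcipher_request *req, bool enc)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct rk_cipher_ctx_sketch *ctx = crypto_skcipher_ctx(tfm);
	unsigned int bs = crypto_skcipher_blocksize(tfm);

	/* Hardware-friendly request: a real driver would queue it here. */
	if (rk_cipher_sg_ok_sketch(req->src, bs) &&
	    rk_cipher_sg_ok_sketch(req->dst, bs))
		return -EINPROGRESS;	/* placeholder for the HW path */

	/* Otherwise mirror the request onto the software fallback tfm. */
	skcipher_request_set_tfm(&ctx->fallback_req, ctx->fallback_tfm);
	skcipher_request_set_callback(&ctx->fallback_req, req->base.flags,
				      req->base.complete, req->base.data);
	skcipher_request_set_crypt(&ctx->fallback_req, req->src, req->dst,
				   req->cryptlen, req->iv);

	return enc ? crypto_skcipher_encrypt(&ctx->fallback_req) :
		     crypto_skcipher_decrypt(&ctx->fallback_req);
}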