From patchwork Thu Oct 24 13:23:19 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209655
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, "Michael S. Tsirkin", Jason Wang,
    Eric Biggers, virtualization@lists.linux-foundation.org, Gonglei,
    "David S. Miller", linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 01/27] crypto: virtio - implement missing support for output IVs
Date: Thu, 24 Oct 2019 15:23:19 +0200
Message-Id: <20191024132345.5236-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

In order to allow for CBC to be chained, which is something that the
CTS template relies upon, implementations of CBC need to pass the IV
to be used for subsequent invocations via the IV buffer. This was not
implemented yet for virtio-crypto so implement it now.

Fixes: dbaf0624ffa5 ("crypto: add virtio-crypto driver")
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Gonglei
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/virtio/virtio_crypto_algs.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
index 42d19205166b..65ec10800137 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_algs.c
@@ -437,6 +437,11 @@ __virtio_crypto_ablkcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
 		goto free;
 	}
 	memcpy(iv, req->info, ivsize);
+	if (!vc_sym_req->encrypt)
+		scatterwalk_map_and_copy(req->info, req->src,
+					 req->nbytes - AES_BLOCK_SIZE,
+					 AES_BLOCK_SIZE, 0);
+
 	sg_init_one(&iv_sg, iv, ivsize);
 	sgs[num_out++] = &iv_sg;
 	vc_sym_req->iv = iv;
@@ -563,6 +568,10 @@ static void virtio_crypto_ablkcipher_finalize_req(
 	struct ablkcipher_request *req,
 	int err)
 {
+	if (vc_sym_req->encrypt)
+		scatterwalk_map_and_copy(req->info, req->dst,
+					 req->nbytes - AES_BLOCK_SIZE,
+					 AES_BLOCK_SIZE, 0);
 	crypto_finalize_ablkcipher_request(vc_sym_req->base.dataq->engine,
 					   req, err);
 	kzfree(vc_sym_req->iv);
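
To make the output-IV contract concrete: after a CBC request of
req->nbytes bytes completes, the IV buffer must hold the last
ciphertext block, so that a follow-up request continues the CBC chain
(the CTS template depends on exactly this). The sketch below is
illustrative only and not part of the patch; it uses plain buffers and
invented helper names where the driver operates on scatterlists via
scatterwalk_map_and_copy().

#include <string.h>

#define AES_BLOCK_SIZE 16

/* Encryption: the last ciphertext block ends up in the destination. */
static void cbc_output_iv_after_encrypt(unsigned char *iv,
					const unsigned char *dst,
					size_t nbytes)
{
	memcpy(iv, dst + nbytes - AES_BLOCK_SIZE, AES_BLOCK_SIZE);
}

/*
 * Decryption: the last ciphertext block lives in the *source*, and an
 * in-place operation would overwrite it. That is why the patch copies
 * it out of req->src before the request runs, rather than afterwards.
 */
static void cbc_output_iv_before_decrypt(unsigned char *iv,
					 const unsigned char *src,
					 size_t nbytes)
{
	memcpy(iv, src + nbytes - AES_BLOCK_SIZE, AES_BLOCK_SIZE);
}
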
From patchwork Thu Oct 24 13:23:20 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209659
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, "Michael S. Tsirkin", Jason Wang,
    Eric Biggers, virtualization@lists.linux-foundation.org, Gonglei,
    "David S. Miller", linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 02/27] crypto: virtio - deal with unsupported input sizes
Date: Thu, 24 Oct 2019 15:23:20 +0200
Message-Id: <20191024132345.5236-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

Return -EINVAL for input sizes that are not a multiple of the AES
block size, since they are not supported by our CBC chaining mode.

While at it, remove the pr_err() that reports unsupported key sizes
being used: we shouldn't spam the kernel log with that.

Fixes: dbaf0624ffa5 ("crypto: add virtio-crypto driver")
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Gonglei
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/virtio/virtio_crypto_algs.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
index 65ec10800137..82b316b2f537 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_algs.c
@@ -105,8 +105,6 @@ virtio_crypto_alg_validate_key(int key_len, uint32_t *alg)
 		*alg = VIRTIO_CRYPTO_CIPHER_AES_CBC;
 		break;
 	default:
-		pr_err("virtio_crypto: Unsupported key length: %d\n",
-		       key_len);
 		return -EINVAL;
 	}
 	return 0;
@@ -489,6 +487,11 @@ static int virtio_crypto_ablkcipher_encrypt(struct ablkcipher_request *req)
 	/* Use the first data virtqueue as default */
 	struct data_queue *data_vq = &vcrypto->data_vq[0];
 
+	if (!req->nbytes)
+		return 0;
+	if (req->nbytes % AES_BLOCK_SIZE)
+		return -EINVAL;
+
 	vc_req->dataq = data_vq;
 	vc_req->alg_cb = virtio_crypto_dataq_sym_callback;
 	vc_sym_req->ablkcipher_ctx = ctx;
@@ -509,6 +512,11 @@ static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
 	/* Use the first data virtqueue as default */
 	struct data_queue *data_vq = &vcrypto->data_vq[0];
 
+	if (!req->nbytes)
+		return 0;
+	if (req->nbytes % AES_BLOCK_SIZE)
+		return -EINVAL;
+
 	vc_req->dataq = data_vq;
 	vc_req->alg_cb = virtio_crypto_dataq_sym_callback;
 	vc_sym_req->ablkcipher_ctx = ctx;
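
The checks the patch adds mirror the generic CBC rule: a zero-length
request is trivially complete, and anything else must be a whole number
of AES blocks. A linear-buffer sketch of the same validation, purely
illustrative and not part of the patch:

#include <errno.h>
#include <stddef.h>

#define AES_BLOCK_SIZE 16

static int cbc_validate_len(size_t len)
{
	if (!len)
		return 0;		/* empty request: nothing to do */
	if (len % AES_BLOCK_SIZE)
		return -EINVAL;		/* CBC handles whole blocks only */
	return 0;
}
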
From patchwork Thu Oct 24 13:23:21 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209661
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, "Michael S. Tsirkin", Jason Wang,
    Eric Biggers, virtualization@lists.linux-foundation.org, Gonglei,
    "David S. Miller", linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 03/27] crypto: virtio - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:21 +0200
Message-Id: <20191024132345.5236-4-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher
interface") dated 20 August 2015 introduced the new skcipher API which
is supposed to replace both blkcipher and ablkcipher. While all
consumers of the API have been converted long ago, some producers of
the ablkcipher remain, forcing us to keep the ablkcipher support
routines alive, along with the matching code to expose [a]blkciphers
via the skcipher API.

So switch this driver to the skcipher API, allowing us to finally drop
the blkcipher code in the near future.

Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Gonglei
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/virtio/virtio_crypto_algs.c   | 187 ++++++++++----------
 drivers/crypto/virtio/virtio_crypto_common.h |   2 +-
 2 files changed, 92 insertions(+), 97 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
index 82b316b2f537..4b71e80951b7 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_algs.c
@@ -8,6 +8,7 @@
 #include <linux/scatterlist.h>
 #include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
 #include <linux/err.h>
 #include <crypto/scatterwalk.h>
 #include <linux/atomic.h>
@@ -16,10 +17,10 @@
 #include "virtio_crypto_common.h"
 
 
-struct virtio_crypto_ablkcipher_ctx {
+struct virtio_crypto_skcipher_ctx {
 	struct crypto_engine_ctx enginectx;
 	struct virtio_crypto *vcrypto;
-	struct crypto_tfm *tfm;
+	struct crypto_skcipher *tfm;
 
 	struct virtio_crypto_sym_session_info enc_sess_info;
 	struct virtio_crypto_sym_session_info dec_sess_info;
@@ -30,8 +31,8 @@ struct virtio_crypto_sym_request {
 
 	/* Cipher or aead */
 	uint32_t type;
-	struct virtio_crypto_ablkcipher_ctx *ablkcipher_ctx;
-	struct ablkcipher_request *ablkcipher_req;
+	struct virtio_crypto_skcipher_ctx *skcipher_ctx;
+	struct skcipher_request *skcipher_req;
 	uint8_t *iv;
 	/* Encryption? */
 	bool encrypt;
@@ -41,7 +42,7 @@ struct virtio_crypto_algo {
 	uint32_t algonum;
 	uint32_t service;
 	unsigned int active_devs;
-	struct crypto_alg algo;
+	struct skcipher_alg algo;
 };
 
 /*
@@ -49,9 +50,9 @@ struct virtio_crypto_algo {
  * and crypto algorithms registion.
  */
 static DEFINE_MUTEX(algs_lock);
-static void virtio_crypto_ablkcipher_finalize_req(
+static void virtio_crypto_skcipher_finalize_req(
 	struct virtio_crypto_sym_request *vc_sym_req,
-	struct ablkcipher_request *req,
+	struct skcipher_request *req,
 	int err);
 
 static void virtio_crypto_dataq_sym_callback
@@ -59,7 +60,7 @@ static void virtio_crypto_dataq_sym_callback
 {
 	struct virtio_crypto_sym_request *vc_sym_req =
 		container_of(vc_req, struct virtio_crypto_sym_request, base);
-	struct ablkcipher_request *ablk_req;
+	struct skcipher_request *ablk_req;
 	int error;
 
 	/* Finish the encrypt or decrypt process */
@@ -79,8 +80,8 @@ static void virtio_crypto_dataq_sym_callback
 			error = -EIO;
 			break;
 		}
-		ablk_req = vc_sym_req->ablkcipher_req;
-		virtio_crypto_ablkcipher_finalize_req(vc_sym_req,
+		ablk_req = vc_sym_req->skcipher_req;
+		virtio_crypto_skcipher_finalize_req(vc_sym_req,
 							ablk_req, error);
 	}
 }
@@ -110,8 +111,8 @@ virtio_crypto_alg_validate_key(int key_len, uint32_t *alg)
 	return 0;
 }
 
-static int virtio_crypto_alg_ablkcipher_init_session(
-	struct virtio_crypto_ablkcipher_ctx *ctx,
+static int virtio_crypto_alg_skcipher_init_session(
+	struct virtio_crypto_skcipher_ctx *ctx,
 	uint32_t alg, const uint8_t *key,
 	unsigned int keylen,
 	int encrypt)
@@ -200,8 +201,8 @@ static int virtio_crypto_alg_ablkcipher_init_session(
 	return 0;
 }
 
-static int virtio_crypto_alg_ablkcipher_close_session(
-	struct virtio_crypto_ablkcipher_ctx *ctx,
+static int virtio_crypto_alg_skcipher_close_session(
+	struct virtio_crypto_skcipher_ctx *ctx,
 	int encrypt)
 {
 	struct scatterlist outhdr, status_sg, *sgs[2];
@@ -261,8 +262,8 @@ static int virtio_crypto_alg_ablkcipher_close_session(
 	return 0;
 }
 
-static int virtio_crypto_alg_ablkcipher_init_sessions(
-	struct virtio_crypto_ablkcipher_ctx *ctx,
+static int virtio_crypto_alg_skcipher_init_sessions(
+	struct virtio_crypto_skcipher_ctx *ctx,
 	const uint8_t *key, unsigned int keylen)
 {
 	uint32_t alg;
@@ -278,30 +279,30 @@ static int virtio_crypto_alg_ablkcipher_init_sessions(
 		goto bad_key;
 
 	/* Create encryption session */
-	ret = virtio_crypto_alg_ablkcipher_init_session(ctx,
+	ret = virtio_crypto_alg_skcipher_init_session(ctx,
 			alg, key, keylen, 1);
 	if (ret)
 		return ret;
 	/* Create decryption session */
-	ret = virtio_crypto_alg_ablkcipher_init_session(ctx,
+	ret = virtio_crypto_alg_skcipher_init_session(ctx,
 			alg, key, keylen, 0);
 	if (ret) {
-		virtio_crypto_alg_ablkcipher_close_session(ctx, 1);
+		virtio_crypto_alg_skcipher_close_session(ctx, 1);
 		return ret;
 	}
 	return 0;
 
 bad_key:
-	crypto_tfm_set_flags(ctx->tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+	crypto_skcipher_set_flags(ctx->tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
 	return -EINVAL;
 }
 
 /* Note: kernel crypto API realization */
-static int virtio_crypto_ablkcipher_setkey(struct crypto_ablkcipher *tfm,
+static int virtio_crypto_skcipher_setkey(struct crypto_skcipher *tfm,
 					 const uint8_t *key,
 					 unsigned int keylen)
 {
-	struct virtio_crypto_ablkcipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+	struct virtio_crypto_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 	uint32_t alg;
 	int ret;
 
@@ -323,11 +324,11 @@ static int virtio_crypto_ablkcipher_setkey(struct crypto_ablkcipher *tfm,
 		ctx->vcrypto = vcrypto;
 	} else {
 		/* Rekeying, we should close the created sessions previously */
-		virtio_crypto_alg_ablkcipher_close_session(ctx, 1);
-		virtio_crypto_alg_ablkcipher_close_session(ctx, 0);
+		virtio_crypto_alg_skcipher_close_session(ctx, 1);
+		virtio_crypto_alg_skcipher_close_session(ctx, 0);
 	}
 
-	ret = virtio_crypto_alg_ablkcipher_init_sessions(ctx, key, keylen);
+	ret = virtio_crypto_alg_skcipher_init_sessions(ctx, key, keylen);
 	if (ret) {
 		virtcrypto_dev_put(ctx->vcrypto);
 		ctx->vcrypto = NULL;
@@ -339,14 +340,14 @@ static int virtio_crypto_ablkcipher_setkey(struct crypto_ablkcipher *tfm,
 }
 
 static int
-__virtio_crypto_ablkcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
-		struct ablkcipher_request *req,
+__virtio_crypto_skcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
+		struct skcipher_request *req,
 		struct data_queue *data_vq)
 {
-	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
-	struct virtio_crypto_ablkcipher_ctx *ctx = vc_sym_req->ablkcipher_ctx;
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct virtio_crypto_skcipher_ctx *ctx = vc_sym_req->skcipher_ctx;
 	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
-	unsigned int ivsize = crypto_ablkcipher_ivsize(tfm);
+	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
 	struct virtio_crypto *vcrypto = ctx->vcrypto;
 	struct virtio_crypto_op_data_req *req_data;
 	int src_nents, dst_nents;
@@ -359,7 +360,7 @@ __virtio_crypto_ablkcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
 	int sg_total;
 	uint8_t *iv;
 
-	src_nents = sg_nents_for_len(req->src, req->nbytes);
+	src_nents = sg_nents_for_len(req->src, req->cryptlen);
 	dst_nents = sg_nents(req->dst);
 
 	pr_debug("virtio_crypto: Number of sgs (src_nents: %d, dst_nents: %d)\n",
@@ -396,7 +397,7 @@ __virtio_crypto_ablkcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
 	req_data->u.sym_req.op_type = cpu_to_le32(VIRTIO_CRYPTO_SYM_OP_CIPHER);
 	req_data->u.sym_req.u.cipher.para.iv_len = cpu_to_le32(ivsize);
 	req_data->u.sym_req.u.cipher.para.src_data_len =
-			cpu_to_le32(req->nbytes);
+			cpu_to_le32(req->cryptlen);
 
 	dst_len = virtio_crypto_alg_sg_nents_length(req->dst);
 	if (unlikely(dst_len > U32_MAX)) {
@@ -406,9 +407,9 @@ __virtio_crypto_ablkcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
 	}
 
 	pr_debug("virtio_crypto: src_len: %u, dst_len: %llu\n",
-			req->nbytes, dst_len);
+			req->cryptlen, dst_len);
 
-	if (unlikely(req->nbytes + dst_len + ivsize +
+	if (unlikely(req->cryptlen + dst_len + ivsize +
 		sizeof(vc_req->status) > vcrypto->max_size)) {
 		pr_err("virtio_crypto: The length is too big\n");
 		err = -EINVAL;
@@ -434,10 +435,10 @@ __virtio_crypto_ablkcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
 		err = -ENOMEM;
 		goto free;
 	}
-	memcpy(iv, req->info, ivsize);
+	memcpy(iv, req->iv, ivsize);
 	if (!vc_sym_req->encrypt)
-		scatterwalk_map_and_copy(req->info, req->src,
-					 req->nbytes - AES_BLOCK_SIZE,
+		scatterwalk_map_and_copy(req->iv, req->src,
+					 req->cryptlen - AES_BLOCK_SIZE,
 					 AES_BLOCK_SIZE, 0);
 
 	sg_init_one(&iv_sg, iv, ivsize);
@@ -476,93 +477,93 @@ __virtio_crypto_ablkcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
 	return err;
 }
 
-static int virtio_crypto_ablkcipher_encrypt(struct ablkcipher_request *req)
+static int virtio_crypto_skcipher_encrypt(struct skcipher_request *req)
 {
-	struct crypto_ablkcipher *atfm = crypto_ablkcipher_reqtfm(req);
-	struct virtio_crypto_ablkcipher_ctx *ctx = crypto_ablkcipher_ctx(atfm);
+	struct crypto_skcipher *atfm = crypto_skcipher_reqtfm(req);
+	struct virtio_crypto_skcipher_ctx *ctx = crypto_skcipher_ctx(atfm);
 	struct virtio_crypto_sym_request *vc_sym_req =
-				ablkcipher_request_ctx(req);
+				skcipher_request_ctx(req);
 	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
 	struct virtio_crypto *vcrypto = ctx->vcrypto;
 	/* Use the first data virtqueue as default */
 	struct data_queue *data_vq = &vcrypto->data_vq[0];
 
-	if (!req->nbytes)
+	if (!req->cryptlen)
 		return 0;
-	if (req->nbytes % AES_BLOCK_SIZE)
+	if (req->cryptlen % AES_BLOCK_SIZE)
 		return -EINVAL;
 
 	vc_req->dataq = data_vq;
 	vc_req->alg_cb = virtio_crypto_dataq_sym_callback;
-	vc_sym_req->ablkcipher_ctx = ctx;
-	vc_sym_req->ablkcipher_req = req;
+	vc_sym_req->skcipher_ctx = ctx;
+	vc_sym_req->skcipher_req = req;
 	vc_sym_req->encrypt = true;
 
-	return crypto_transfer_ablkcipher_request_to_engine(data_vq->engine, req);
+	return crypto_transfer_skcipher_request_to_engine(data_vq->engine, req);
 }
 
-static int virtio_crypto_ablkcipher_decrypt(struct ablkcipher_request *req)
+static int virtio_crypto_skcipher_decrypt(struct skcipher_request *req)
 {
-	struct crypto_ablkcipher *atfm = crypto_ablkcipher_reqtfm(req);
-	struct virtio_crypto_ablkcipher_ctx *ctx = crypto_ablkcipher_ctx(atfm);
+	struct crypto_skcipher *atfm = crypto_skcipher_reqtfm(req);
+	struct virtio_crypto_skcipher_ctx *ctx = crypto_skcipher_ctx(atfm);
 	struct virtio_crypto_sym_request *vc_sym_req =
-				ablkcipher_request_ctx(req);
+				skcipher_request_ctx(req);
 	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
 	struct virtio_crypto *vcrypto = ctx->vcrypto;
 	/* Use the first data virtqueue as default */
 	struct data_queue *data_vq = &vcrypto->data_vq[0];
 
-	if (!req->nbytes)
+	if (!req->cryptlen)
 		return 0;
-	if (req->nbytes % AES_BLOCK_SIZE)
+	if (req->cryptlen % AES_BLOCK_SIZE)
 		return -EINVAL;
 
 	vc_req->dataq = data_vq;
 	vc_req->alg_cb = virtio_crypto_dataq_sym_callback;
-	vc_sym_req->ablkcipher_ctx = ctx;
-	vc_sym_req->ablkcipher_req = req;
+	vc_sym_req->skcipher_ctx = ctx;
+	vc_sym_req->skcipher_req = req;
 	vc_sym_req->encrypt = false;
 
-	return crypto_transfer_ablkcipher_request_to_engine(data_vq->engine, req);
+	return crypto_transfer_skcipher_request_to_engine(data_vq->engine, req);
 }
 
-static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm)
+static int virtio_crypto_skcipher_init(struct crypto_skcipher *tfm)
 {
-	struct virtio_crypto_ablkcipher_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct virtio_crypto_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	tfm->crt_ablkcipher.reqsize = sizeof(struct virtio_crypto_sym_request);
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct virtio_crypto_sym_request));
 	ctx->tfm = tfm;
 
-	ctx->enginectx.op.do_one_request = virtio_crypto_ablkcipher_crypt_req;
+	ctx->enginectx.op.do_one_request = virtio_crypto_skcipher_crypt_req;
 	ctx->enginectx.op.prepare_request = NULL;
 	ctx->enginectx.op.unprepare_request = NULL;
 	return 0;
 }
 
-static void virtio_crypto_ablkcipher_exit(struct crypto_tfm *tfm)
+static void virtio_crypto_skcipher_exit(struct crypto_skcipher *tfm)
 {
-	struct virtio_crypto_ablkcipher_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct virtio_crypto_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	if (!ctx->vcrypto)
 		return;
 
-	virtio_crypto_alg_ablkcipher_close_session(ctx, 1);
-	virtio_crypto_alg_ablkcipher_close_session(ctx, 0);
+	virtio_crypto_alg_skcipher_close_session(ctx, 1);
+	virtio_crypto_alg_skcipher_close_session(ctx, 0);
 	virtcrypto_dev_put(ctx->vcrypto);
 	ctx->vcrypto = NULL;
 }
 
-int virtio_crypto_ablkcipher_crypt_req(
+int virtio_crypto_skcipher_crypt_req(
 	struct crypto_engine *engine, void *vreq)
 {
-	struct ablkcipher_request *req = container_of(vreq, struct ablkcipher_request, base);
+	struct skcipher_request *req = container_of(vreq, struct skcipher_request, base);
 	struct virtio_crypto_sym_request *vc_sym_req =
-				ablkcipher_request_ctx(req);
+				skcipher_request_ctx(req);
 	struct virtio_crypto_request *vc_req = &vc_sym_req->base;
 	struct data_queue *data_vq = vc_req->dataq;
 	int ret;
 
-	ret = __virtio_crypto_ablkcipher_do_req(vc_sym_req, req, data_vq);
+	ret = __virtio_crypto_skcipher_do_req(vc_sym_req, req, data_vq);
 	if (ret < 0)
 		return ret;
 
@@ -571,16 +572,16 @@ int virtio_crypto_ablkcipher_crypt_req(
 	return 0;
 }
 
-static void virtio_crypto_ablkcipher_finalize_req(
+static void virtio_crypto_skcipher_finalize_req(
 	struct virtio_crypto_sym_request *vc_sym_req,
-	struct ablkcipher_request *req,
+	struct skcipher_request *req,
 	int err)
 {
 	if (vc_sym_req->encrypt)
-		scatterwalk_map_and_copy(req->info, req->dst,
-					 req->nbytes - AES_BLOCK_SIZE,
+		scatterwalk_map_and_copy(req->iv, req->dst,
+					 req->cryptlen - AES_BLOCK_SIZE,
 					 AES_BLOCK_SIZE, 0);
-	crypto_finalize_ablkcipher_request(vc_sym_req->base.dataq->engine,
+	crypto_finalize_skcipher_request(vc_sym_req->base.dataq->engine,
 					   req, err);
 	kzfree(vc_sym_req->iv);
 	virtcrypto_clear_request(&vc_sym_req->base);
@@ -590,27 +591,21 @@ static struct virtio_crypto_algo virtio_crypto_algs[] = { {
 	.algonum = VIRTIO_CRYPTO_CIPHER_AES_CBC,
 	.service = VIRTIO_CRYPTO_SERVICE_CIPHER,
 	.algo = {
-		.cra_name = "cbc(aes)",
-		.cra_driver_name = "virtio_crypto_aes_cbc",
-		.cra_priority = 150,
-		.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC,
-		.cra_blocksize = AES_BLOCK_SIZE,
-		.cra_ctxsize  = sizeof(struct virtio_crypto_ablkcipher_ctx),
-		.cra_alignmask = 0,
-		.cra_module = THIS_MODULE,
-		.cra_type = &crypto_ablkcipher_type,
-		.cra_init = virtio_crypto_ablkcipher_init,
-		.cra_exit = virtio_crypto_ablkcipher_exit,
-		.cra_u = {
-			.ablkcipher = {
-				.setkey = virtio_crypto_ablkcipher_setkey,
-				.decrypt = virtio_crypto_ablkcipher_decrypt,
-				.encrypt = virtio_crypto_ablkcipher_encrypt,
-				.min_keysize = AES_MIN_KEY_SIZE,
-				.max_keysize = AES_MAX_KEY_SIZE,
-				.ivsize = AES_BLOCK_SIZE,
-			},
-		},
+		.base.cra_name = "cbc(aes)",
+		.base.cra_driver_name = "virtio_crypto_aes_cbc",
+		.base.cra_priority = 150,
+		.base.cra_flags = CRYPTO_ALG_ASYNC,
+		.base.cra_blocksize = AES_BLOCK_SIZE,
+		.base.cra_ctxsize = sizeof(struct virtio_crypto_skcipher_ctx),
+		.base.cra_module = THIS_MODULE,
+		.init = virtio_crypto_skcipher_init,
+		.exit = virtio_crypto_skcipher_exit,
+		.setkey = virtio_crypto_skcipher_setkey,
+		.decrypt = virtio_crypto_skcipher_decrypt,
+		.encrypt = virtio_crypto_skcipher_encrypt,
+		.min_keysize = AES_MIN_KEY_SIZE,
+		.max_keysize = AES_MAX_KEY_SIZE,
+		.ivsize = AES_BLOCK_SIZE,
 	},
 } };
 
@@ -630,14 +625,14 @@ int virtio_crypto_algs_register(struct virtio_crypto *vcrypto)
 			continue;
 
 		if (virtio_crypto_algs[i].active_devs == 0) {
-			ret = crypto_register_alg(&virtio_crypto_algs[i].algo);
+			ret = crypto_register_skcipher(&virtio_crypto_algs[i].algo);
 			if (ret)
 				goto unlock;
 		}
 
 		virtio_crypto_algs[i].active_devs++;
 		dev_info(&vcrypto->vdev->dev, "Registered algo %s\n",
-			 virtio_crypto_algs[i].algo.cra_name);
+			 virtio_crypto_algs[i].algo.base.cra_name);
 	}
 
 unlock:
@@ -661,7 +656,7 @@ void virtio_crypto_algs_unregister(struct virtio_crypto *vcrypto)
 			continue;
 
 		if (virtio_crypto_algs[i].active_devs == 1)
-			crypto_unregister_alg(&virtio_crypto_algs[i].algo);
+			crypto_unregister_skcipher(&virtio_crypto_algs[i].algo);
 
 		virtio_crypto_algs[i].active_devs--;
 	}
diff --git a/drivers/crypto/virtio/virtio_crypto_common.h b/drivers/crypto/virtio/virtio_crypto_common.h
index 1c6e00da5a29..a24f85c589e7 100644
--- a/drivers/crypto/virtio/virtio_crypto_common.h
+++ b/drivers/crypto/virtio/virtio_crypto_common.h
@@ -112,7 +112,7 @@ struct virtio_crypto *virtcrypto_get_dev_node(int node,
 					      uint32_t algo);
 int virtcrypto_dev_start(struct virtio_crypto *vcrypto);
 void virtcrypto_dev_stop(struct virtio_crypto *vcrypto);
-int virtio_crypto_ablkcipher_crypt_req(
+int virtio_crypto_skcipher_crypt_req(
 	struct crypto_engine *engine, void *vreq);
 
 void
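
For reference, this is what the conversion buys consumers: the skcipher
interface is the one callers already use. Below is a hedged sketch of a
kernel caller driving the "cbc(aes)" algorithm this driver now
registers; the function name and GFP/flag choices are the editor's, but
the crypto API calls (crypto_alloc_skcipher(), crypto_skcipher_setkey(),
skcipher_request_set_crypt(), crypto_wait_req(), ...) are the standard
ones, with req->cryptlen and req->iv taking the place of the old
ablkcipher nbytes/info fields:

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

static int example_cbc_aes_encrypt(const u8 *key, unsigned int keylen,
				   u8 *iv, void *buf, unsigned int len)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	/* Picks the highest-priority cbc(aes), possibly this driver. */
	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (err)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, buf, len);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	/* In place: src == dst. On return, iv holds the output IV. */
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);

	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return err;
}
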
From patchwork Thu Oct 24 13:23:22 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209663
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu, Eric Biggers, Ard Biesheuvel, Gary R Hook,
    "David S. Miller", linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 04/27] crypto: ccp - switch from ablkcipher to skcipher
Date: Thu, 24 Oct 2019 15:23:22 +0200
Message-Id: <20191024132345.5236-5-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher
interface") dated 20 August 2015 introduced the new skcipher API which
is supposed to replace both blkcipher and ablkcipher. While all
consumers of the API have been converted long ago, some producers of
the ablkcipher remain, forcing us to keep the ablkcipher support
routines alive, along with the matching code to expose [a]blkciphers
via the skcipher API.

So switch this driver to the skcipher API, allowing us to finally drop
the blkcipher code in the near future.

Reviewed-by: Gary R Hook
Tested-by: Gary R Hook
Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/ccp/ccp-crypto-aes-galois.c |   7 +-
 drivers/crypto/ccp/ccp-crypto-aes-xts.c    |  94 +++++------
 drivers/crypto/ccp/ccp-crypto-aes.c        | 169 +++++++++-----------
 drivers/crypto/ccp/ccp-crypto-des3.c       | 100 ++++++------
 drivers/crypto/ccp/ccp-crypto-main.c       |  14 +-
 drivers/crypto/ccp/ccp-crypto.h            |  13 +-
 6 files changed, 186 insertions(+), 211 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes-galois.c b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
index 94c1ad7eeddf..ff50ee80d223 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-galois.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
@@ -172,14 +172,12 @@ static struct aead_alg ccp_aes_gcm_defaults = {
 	.ivsize = GCM_AES_IV_SIZE,
 	.maxauthsize = AES_BLOCK_SIZE,
 	.base = {
-		.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER |
-			     CRYPTO_ALG_ASYNC |
+		.cra_flags = CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = AES_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct ccp_ctx),
 		.cra_priority = CCP_CRA_PRIORITY,
-		.cra_type = &crypto_ablkcipher_type,
 		.cra_exit = ccp_aes_gcm_cra_exit,
 		.cra_module = THIS_MODULE,
 	},
@@ -229,11 +227,10 @@ static int ccp_register_aes_aead(struct list_head *head,
 	snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
 		 def->driver_name);
 	alg->base.cra_blocksize = def->blocksize;
-	alg->base.cra_ablkcipher.ivsize = def->ivsize;
 
 	ret = crypto_register_aead(alg);
 	if (ret) {
-		pr_err("%s ablkcipher algorithm registration error (%d)\n",
+		pr_err("%s aead algorithm registration error (%d)\n",
 		       alg->base.cra_name, ret);
 		kfree(ccp_aead);
 		return ret;
diff --git a/drivers/crypto/ccp/ccp-crypto-aes-xts.c b/drivers/crypto/ccp/ccp-crypto-aes-xts.c
index 8e4a531f4f70..04b2517df955 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-xts.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-xts.c
@@ -24,7 +24,7 @@ struct ccp_aes_xts_def {
 	const char *drv_name;
 };
 
-static struct ccp_aes_xts_def aes_xts_algs[] = {
+static const struct ccp_aes_xts_def aes_xts_algs[] = {
 	{
 		.name = "xts(aes)",
 		.drv_name = "xts-aes-ccp",
@@ -61,26 +61,25 @@ static struct ccp_unit_size_map xts_unit_sizes[] = {
 static int ccp_aes_xts_complete(struct crypto_async_request *async_req,
 				int ret)
 {
-	struct ablkcipher_request *req = ablkcipher_request_cast(async_req);
-	struct ccp_aes_req_ctx *rctx = ablkcipher_request_ctx(req);
+	struct skcipher_request *req = skcipher_request_cast(async_req);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
 
 	if (ret)
 		return ret;
 
-	memcpy(req->info, rctx->iv, AES_BLOCK_SIZE);
+	memcpy(req->iv, rctx->iv, AES_BLOCK_SIZE);
 
 	return 0;
 }
 
-static int ccp_aes_xts_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+static int ccp_aes_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			      unsigned int key_len)
 {
-	struct crypto_tfm *xfm = crypto_ablkcipher_tfm(tfm);
-	struct ccp_ctx *ctx = crypto_tfm_ctx(xfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
 	unsigned int ccpversion = ccp_version();
 	int ret;
 
-	ret = xts_check_key(xfm, key, key_len);
+	ret = xts_verify_key(tfm, key, key_len);
 	if (ret)
 		return ret;
 
@@ -102,11 +101,12 @@ static int ccp_aes_xts_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 	return crypto_sync_skcipher_setkey(ctx->u.aes.tfm_skcipher, key, key_len);
 }
 
-static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
+static int ccp_aes_xts_crypt(struct skcipher_request *req,
 			     unsigned int encrypt)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
-	struct ccp_aes_req_ctx *rctx = ablkcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
 	unsigned int ccpversion = ccp_version();
 	unsigned int fallback = 0;
 	unsigned int unit;
@@ -116,7 +116,7 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
 	if (!ctx->u.aes.key_len)
 		return -EINVAL;
 
-	if (!req->info)
+	if (!req->iv)
 		return -EINVAL;
 
 	/* Check conditions under which the CCP can fulfill a request. The
@@ -127,7 +127,7 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
 	 */
 	unit_size = CCP_XTS_AES_UNIT_SIZE__LAST;
 	for (unit = 0; unit < ARRAY_SIZE(xts_unit_sizes); unit++) {
-		if (req->nbytes == xts_unit_sizes[unit].size) {
+		if (req->cryptlen == xts_unit_sizes[unit].size) {
 			unit_size = unit;
 			break;
 		}
@@ -155,14 +155,14 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->nbytes, req->info);
+					   req->cryptlen, req->iv);
 		ret = encrypt ? crypto_skcipher_encrypt(subreq) :
 				crypto_skcipher_decrypt(subreq);
 		skcipher_request_zero(subreq);
 		return ret;
 	}
 
-	memcpy(rctx->iv, req->info, AES_BLOCK_SIZE);
+	memcpy(rctx->iv, req->iv, AES_BLOCK_SIZE);
 	sg_init_one(&rctx->iv_sg, rctx->iv, AES_BLOCK_SIZE);
 
 	memset(&rctx->cmd, 0, sizeof(rctx->cmd));
@@ -177,7 +177,7 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
 	rctx->cmd.u.xts.iv = &rctx->iv_sg;
 	rctx->cmd.u.xts.iv_len = AES_BLOCK_SIZE;
 	rctx->cmd.u.xts.src = req->src;
-	rctx->cmd.u.xts.src_len = req->nbytes;
+	rctx->cmd.u.xts.src_len = req->cryptlen;
 	rctx->cmd.u.xts.dst = req->dst;
 
 	ret = ccp_crypto_enqueue_request(&req->base, &rctx->cmd);
@@ -185,19 +185,19 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
 	return ret;
 }
 
-static int ccp_aes_xts_encrypt(struct ablkcipher_request *req)
+static int ccp_aes_xts_encrypt(struct skcipher_request *req)
 {
 	return ccp_aes_xts_crypt(req, 1);
 }
 
-static int ccp_aes_xts_decrypt(struct ablkcipher_request *req)
+static int ccp_aes_xts_decrypt(struct skcipher_request *req)
 {
 	return ccp_aes_xts_crypt(req, 0);
 }
 
-static int ccp_aes_xts_cra_init(struct crypto_tfm *tfm)
+static int ccp_aes_xts_init_tfm(struct crypto_skcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
 	struct crypto_sync_skcipher *fallback_tfm;
 
 	ctx->complete = ccp_aes_xts_complete;
@@ -212,14 +212,14 @@ static int ccp_aes_xts_cra_init(struct crypto_tfm *tfm)
 	}
 	ctx->u.aes.tfm_skcipher = fallback_tfm;
 
-	tfm->crt_ablkcipher.reqsize = sizeof(struct ccp_aes_req_ctx);
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx));
 
 	return 0;
 }
 
-static void ccp_aes_xts_cra_exit(struct crypto_tfm *tfm)
+static void ccp_aes_xts_exit_tfm(struct crypto_skcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	crypto_free_sync_skcipher(ctx->u.aes.tfm_skcipher);
 }
@@ -227,8 +227,8 @@ static void ccp_aes_xts_cra_exit(struct crypto_tfm *tfm)
 static int ccp_register_aes_xts_alg(struct list_head *head,
 				    const struct ccp_aes_xts_def *def)
 {
-	struct ccp_crypto_ablkcipher_alg *ccp_alg;
-	struct crypto_alg *alg;
+	struct ccp_crypto_skcipher_alg *ccp_alg;
+	struct skcipher_alg *alg;
 	int ret;
 
 	ccp_alg = kzalloc(sizeof(*ccp_alg), GFP_KERNEL);
@@ -239,30 +239,30 @@ static int ccp_register_aes_xts_alg(struct list_head *head,
 
 	alg = &ccp_alg->alg;
 
-	snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name);
-	snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+	snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name);
+	snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
 		 def->drv_name);
-	alg->cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC |
-			 CRYPTO_ALG_KERN_DRIVER_ONLY |
-			 CRYPTO_ALG_NEED_FALLBACK;
-	alg->cra_blocksize = AES_BLOCK_SIZE;
-	alg->cra_ctxsize = sizeof(struct ccp_ctx);
-	alg->cra_priority = CCP_CRA_PRIORITY;
-	alg->cra_type = &crypto_ablkcipher_type;
-	alg->cra_ablkcipher.setkey = ccp_aes_xts_setkey;
-	alg->cra_ablkcipher.encrypt = ccp_aes_xts_encrypt;
-	alg->cra_ablkcipher.decrypt = ccp_aes_xts_decrypt;
-	alg->cra_ablkcipher.min_keysize = AES_MIN_KEY_SIZE * 2;
-	alg->cra_ablkcipher.max_keysize = AES_MAX_KEY_SIZE * 2;
-	alg->cra_ablkcipher.ivsize = AES_BLOCK_SIZE;
-	alg->cra_init = ccp_aes_xts_cra_init;
-	alg->cra_exit = ccp_aes_xts_cra_exit;
-	alg->cra_module = THIS_MODULE;
-
-	ret = crypto_register_alg(alg);
+	alg->base.cra_flags = CRYPTO_ALG_ASYNC |
+			      CRYPTO_ALG_KERN_DRIVER_ONLY |
+			      CRYPTO_ALG_NEED_FALLBACK;
+	alg->base.cra_blocksize = AES_BLOCK_SIZE;
+	alg->base.cra_ctxsize = sizeof(struct ccp_ctx);
+	alg->base.cra_priority = CCP_CRA_PRIORITY;
+	alg->base.cra_module = THIS_MODULE;
+
+	alg->setkey = ccp_aes_xts_setkey;
+	alg->encrypt = ccp_aes_xts_encrypt;
+	alg->decrypt = ccp_aes_xts_decrypt;
+	alg->min_keysize = AES_MIN_KEY_SIZE * 2;
+	alg->max_keysize = AES_MAX_KEY_SIZE * 2;
+	alg->ivsize = AES_BLOCK_SIZE;
+	alg->init = ccp_aes_xts_init_tfm;
+	alg->exit = ccp_aes_xts_exit_tfm;
+
+	ret = crypto_register_skcipher(alg);
 	if (ret) {
-		pr_err("%s ablkcipher algorithm registration error (%d)\n",
-		       alg->cra_name, ret);
+		pr_err("%s skcipher algorithm registration error (%d)\n",
+		       alg->base.cra_name, ret);
 		kfree(ccp_alg);
 		return ret;
 	}
diff --git a/drivers/crypto/ccp/ccp-crypto-aes.c b/drivers/crypto/ccp/ccp-crypto-aes.c
index 58c6dddfc5e1..33328a153225 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes.c
@@ -21,25 +21,24 @@
 
 static int ccp_aes_complete(struct crypto_async_request *async_req, int ret)
 {
-	struct ablkcipher_request *req = ablkcipher_request_cast(async_req);
+	struct skcipher_request *req = skcipher_request_cast(async_req);
 	struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
-	struct ccp_aes_req_ctx *rctx = ablkcipher_request_ctx(req);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
 
 	if (ret)
 		return ret;
 
 	if (ctx->u.aes.mode != CCP_AES_MODE_ECB)
-		memcpy(req->info, rctx->iv, AES_BLOCK_SIZE);
+		memcpy(req->iv, rctx->iv, AES_BLOCK_SIZE);
 
 	return 0;
 }
 
-static int ccp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+static int ccp_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			  unsigned int key_len)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(crypto_ablkcipher_tfm(tfm));
-	struct ccp_crypto_ablkcipher_alg *alg =
-		ccp_crypto_ablkcipher_alg(crypto_ablkcipher_tfm(tfm));
+	struct ccp_crypto_skcipher_alg *alg = ccp_crypto_skcipher_alg(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	switch (key_len) {
 	case AES_KEYSIZE_128:
@@ -52,7 +51,7 @@ static int ccp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 		ctx->u.aes.type = CCP_AES_TYPE_256;
 		break;
 	default:
-		crypto_ablkcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
 		return -EINVAL;
 	}
 	ctx->u.aes.mode = alg->mode;
@@ -64,10 +63,11 @@ static int ccp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 	return 0;
 }
 
-static int ccp_aes_crypt(struct ablkcipher_request *req, bool encrypt)
+static int ccp_aes_crypt(struct skcipher_request *req, bool encrypt)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
-	struct ccp_aes_req_ctx *rctx = ablkcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
 	struct scatterlist *iv_sg = NULL;
 	unsigned int iv_len = 0;
 	int ret;
@@ -77,14 +77,14 @@ static int ccp_aes_crypt(struct ablkcipher_request *req, bool encrypt)
 
 	if (((ctx->u.aes.mode == CCP_AES_MODE_ECB) ||
 	     (ctx->u.aes.mode == CCP_AES_MODE_CBC)) &&
-	    (req->nbytes & (AES_BLOCK_SIZE - 1)))
+	    (req->cryptlen & (AES_BLOCK_SIZE - 1)))
 		return -EINVAL;
 
 	if (ctx->u.aes.mode != CCP_AES_MODE_ECB) {
-		if (!req->info)
+		if (!req->iv)
 			return -EINVAL;
 
-		memcpy(rctx->iv, req->info, AES_BLOCK_SIZE);
+		memcpy(rctx->iv, req->iv, AES_BLOCK_SIZE);
 		iv_sg = &rctx->iv_sg;
 		iv_len = AES_BLOCK_SIZE;
 		sg_init_one(iv_sg, rctx->iv, iv_len);
@@ -102,7 +102,7 @@ static int ccp_aes_crypt(struct ablkcipher_request *req, bool encrypt)
 	rctx->cmd.u.aes.iv = iv_sg;
 	rctx->cmd.u.aes.iv_len = iv_len;
 	rctx->cmd.u.aes.src = req->src;
-	rctx->cmd.u.aes.src_len = req->nbytes;
+	rctx->cmd.u.aes.src_len = req->cryptlen;
 	rctx->cmd.u.aes.dst = req->dst;
 
 	ret = ccp_crypto_enqueue_request(&req->base, &rctx->cmd);
@@ -110,48 +110,44 @@ static int ccp_aes_crypt(struct ablkcipher_request *req, bool encrypt)
 	return ret;
 }
 
-static int ccp_aes_encrypt(struct ablkcipher_request *req)
+static int ccp_aes_encrypt(struct skcipher_request *req)
 {
 	return ccp_aes_crypt(req, true);
 }
 
-static int ccp_aes_decrypt(struct ablkcipher_request *req)
+static int ccp_aes_decrypt(struct skcipher_request *req)
 {
 	return ccp_aes_crypt(req, false);
 }
 
-static int ccp_aes_cra_init(struct crypto_tfm *tfm)
+static int ccp_aes_init_tfm(struct crypto_skcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	ctx->complete = ccp_aes_complete;
 	ctx->u.aes.key_len = 0;
 
-	tfm->crt_ablkcipher.reqsize = sizeof(struct ccp_aes_req_ctx);
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx));
 
 	return 0;
 }
 
-static void ccp_aes_cra_exit(struct crypto_tfm *tfm)
-{
-}
-
 static int ccp_aes_rfc3686_complete(struct crypto_async_request *async_req,
 				    int ret)
 {
-	struct ablkcipher_request *req = ablkcipher_request_cast(async_req);
-	struct ccp_aes_req_ctx *rctx = ablkcipher_request_ctx(req);
+	struct skcipher_request *req = skcipher_request_cast(async_req);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
 
 	/* Restore the original pointer */
-	req->info = rctx->rfc3686_info;
+	req->iv = rctx->rfc3686_info;
 
 	return ccp_aes_complete(async_req, ret);
 }
 
-static int ccp_aes_rfc3686_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+static int ccp_aes_rfc3686_setkey(struct crypto_skcipher *tfm, const u8 *key,
 				  unsigned int key_len)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(crypto_ablkcipher_tfm(tfm));
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	if (key_len < CTR_RFC3686_NONCE_SIZE)
 		return -EINVAL;
@@ -162,10 +158,11 @@ static int ccp_aes_rfc3686_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 	return ccp_aes_setkey(tfm, key, key_len);
 }
 
-static int ccp_aes_rfc3686_crypt(struct ablkcipher_request *req, bool encrypt)
+static int ccp_aes_rfc3686_crypt(struct skcipher_request *req, bool encrypt)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
-	struct ccp_aes_req_ctx *rctx = ablkcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
 	u8 *iv;
 
 	/* Initialize the CTR block */
@@ -173,84 +170,72 @@ static int ccp_aes_rfc3686_crypt(struct ablkcipher_request *req, bool encrypt)
 	memcpy(iv, ctx->u.aes.nonce, CTR_RFC3686_NONCE_SIZE);
 
 	iv += CTR_RFC3686_NONCE_SIZE;
-	memcpy(iv, req->info, CTR_RFC3686_IV_SIZE);
+	memcpy(iv, req->iv, CTR_RFC3686_IV_SIZE);
 
 	iv += CTR_RFC3686_IV_SIZE;
 	*(__be32 *)iv = cpu_to_be32(1);
 
 	/* Point to the new IV */
-	rctx->rfc3686_info = req->info;
-	req->info = rctx->rfc3686_iv;
+	rctx->rfc3686_info = req->iv;
+	req->iv = rctx->rfc3686_iv;
 
 	return ccp_aes_crypt(req, encrypt);
 }
 
-static int ccp_aes_rfc3686_encrypt(struct ablkcipher_request *req)
+static int ccp_aes_rfc3686_encrypt(struct skcipher_request *req)
 {
 	return ccp_aes_rfc3686_crypt(req, true);
 }
 
-static int ccp_aes_rfc3686_decrypt(struct ablkcipher_request *req)
+static int ccp_aes_rfc3686_decrypt(struct skcipher_request *req)
 {
 	return ccp_aes_rfc3686_crypt(req, false);
 }
 
-static int ccp_aes_rfc3686_cra_init(struct crypto_tfm *tfm)
+static int ccp_aes_rfc3686_init_tfm(struct crypto_skcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
 
 	ctx->complete = ccp_aes_rfc3686_complete;
 	ctx->u.aes.key_len = 0;
 
-	tfm->crt_ablkcipher.reqsize = sizeof(struct ccp_aes_req_ctx);
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx));
 
 	return 0;
 }
 
-static void ccp_aes_rfc3686_cra_exit(struct crypto_tfm *tfm)
-{
-}
-
-static struct crypto_alg ccp_aes_defaults = {
-	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER |
-		     CRYPTO_ALG_ASYNC |
-		     CRYPTO_ALG_KERN_DRIVER_ONLY |
-		     CRYPTO_ALG_NEED_FALLBACK,
-	.cra_blocksize = AES_BLOCK_SIZE,
-	.cra_ctxsize = sizeof(struct ccp_ctx),
-	.cra_priority = CCP_CRA_PRIORITY,
-	.cra_type = &crypto_ablkcipher_type,
-	.cra_init = ccp_aes_cra_init,
-	.cra_exit = ccp_aes_cra_exit,
-	.cra_module = THIS_MODULE,
-	.cra_ablkcipher = {
-		.setkey = ccp_aes_setkey,
-		.encrypt = ccp_aes_encrypt,
-		.decrypt = ccp_aes_decrypt,
-		.min_keysize = AES_MIN_KEY_SIZE,
-		.max_keysize = AES_MAX_KEY_SIZE,
-	},
+static const struct skcipher_alg ccp_aes_defaults = {
+	.setkey = ccp_aes_setkey,
+	.encrypt = ccp_aes_encrypt,
+	.decrypt = ccp_aes_decrypt,
+	.min_keysize = AES_MIN_KEY_SIZE,
+	.max_keysize = AES_MAX_KEY_SIZE,
+	.init = ccp_aes_init_tfm,
+
+	.base.cra_flags = CRYPTO_ALG_ASYNC |
+			  CRYPTO_ALG_KERN_DRIVER_ONLY |
+			  CRYPTO_ALG_NEED_FALLBACK,
+	.base.cra_blocksize = AES_BLOCK_SIZE,
+	.base.cra_ctxsize = sizeof(struct ccp_ctx),
+	.base.cra_priority = CCP_CRA_PRIORITY,
+	.base.cra_module = THIS_MODULE,
 };
 
-static struct crypto_alg ccp_aes_rfc3686_defaults = {
-	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER |
-		     CRYPTO_ALG_ASYNC |
-		     CRYPTO_ALG_KERN_DRIVER_ONLY |
-		     CRYPTO_ALG_NEED_FALLBACK,
-	.cra_blocksize = CTR_RFC3686_BLOCK_SIZE,
-	.cra_ctxsize = sizeof(struct ccp_ctx),
-	.cra_priority = CCP_CRA_PRIORITY,
-	.cra_type = &crypto_ablkcipher_type,
-	.cra_init = ccp_aes_rfc3686_cra_init,
-	.cra_exit = ccp_aes_rfc3686_cra_exit,
-	.cra_module = THIS_MODULE,
-	.cra_ablkcipher = {
-		.setkey = ccp_aes_rfc3686_setkey,
-		.encrypt = ccp_aes_rfc3686_encrypt,
-		.decrypt = ccp_aes_rfc3686_decrypt,
-		.min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
-		.max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
-	},
+static const struct skcipher_alg ccp_aes_rfc3686_defaults = {
+	.setkey = ccp_aes_rfc3686_setkey,
+	.encrypt =
ccp_aes_rfc3686_encrypt, + .decrypt = ccp_aes_rfc3686_decrypt, + .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, + .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, + .init = ccp_aes_rfc3686_init_tfm, + + .base.cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = CTR_RFC3686_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct ccp_ctx), + .base.cra_priority = CCP_CRA_PRIORITY, + .base.cra_module = THIS_MODULE, }; struct ccp_aes_def { @@ -260,7 +245,7 @@ struct ccp_aes_def { const char *driver_name; unsigned int blocksize; unsigned int ivsize; - struct crypto_alg *alg_defaults; + const struct skcipher_alg *alg_defaults; }; static struct ccp_aes_def aes_algs[] = { @@ -323,8 +308,8 @@ static struct ccp_aes_def aes_algs[] = { static int ccp_register_aes_alg(struct list_head *head, const struct ccp_aes_def *def) { - struct ccp_crypto_ablkcipher_alg *ccp_alg; - struct crypto_alg *alg; + struct ccp_crypto_skcipher_alg *ccp_alg; + struct skcipher_alg *alg; int ret; ccp_alg = kzalloc(sizeof(*ccp_alg), GFP_KERNEL); @@ -338,16 +323,16 @@ static int ccp_register_aes_alg(struct list_head *head, /* Copy the defaults and override as necessary */ alg = &ccp_alg->alg; *alg = *def->alg_defaults; - snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name); - snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", + snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name); + snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", def->driver_name); - alg->cra_blocksize = def->blocksize; - alg->cra_ablkcipher.ivsize = def->ivsize; + alg->base.cra_blocksize = def->blocksize; + alg->ivsize = def->ivsize; - ret = crypto_register_alg(alg); + ret = crypto_register_skcipher(alg); if (ret) { - pr_err("%s ablkcipher algorithm registration error (%d)\n", - alg->cra_name, ret); + pr_err("%s skcipher algorithm registration error (%d)\n", + alg->base.cra_name, ret); kfree(ccp_alg); return ret; } diff --git a/drivers/crypto/ccp/ccp-crypto-des3.c b/drivers/crypto/ccp/ccp-crypto-des3.c index d2c49b2f0323..9c129defdb50 100644 --- a/drivers/crypto/ccp/ccp-crypto-des3.c +++ b/drivers/crypto/ccp/ccp-crypto-des3.c @@ -20,28 +20,27 @@ static int ccp_des3_complete(struct crypto_async_request *async_req, int ret) { - struct ablkcipher_request *req = ablkcipher_request_cast(async_req); + struct skcipher_request *req = skcipher_request_cast(async_req); struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm); - struct ccp_des3_req_ctx *rctx = ablkcipher_request_ctx(req); + struct ccp_des3_req_ctx *rctx = skcipher_request_ctx(req); if (ret) return ret; if (ctx->u.des3.mode != CCP_DES3_MODE_ECB) - memcpy(req->info, rctx->iv, DES3_EDE_BLOCK_SIZE); + memcpy(req->iv, rctx->iv, DES3_EDE_BLOCK_SIZE); return 0; } -static int ccp_des3_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int ccp_des3_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int key_len) { - struct ccp_ctx *ctx = crypto_tfm_ctx(crypto_ablkcipher_tfm(tfm)); - struct ccp_crypto_ablkcipher_alg *alg = - ccp_crypto_ablkcipher_alg(crypto_ablkcipher_tfm(tfm)); + struct ccp_crypto_skcipher_alg *alg = ccp_crypto_skcipher_alg(tfm); + struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm); int err; - err = verify_ablkcipher_des3_key(tfm, key); + err = verify_skcipher_des3_key(tfm, key); if (err) return err; @@ -58,10 +57,11 @@ static int ccp_des3_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return 0; } -static int ccp_des3_crypt(struct ablkcipher_request *req, bool 
encrypt) +static int ccp_des3_crypt(struct skcipher_request *req, bool encrypt) { - struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm); - struct ccp_des3_req_ctx *rctx = ablkcipher_request_ctx(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm); + struct ccp_des3_req_ctx *rctx = skcipher_request_ctx(req); struct scatterlist *iv_sg = NULL; unsigned int iv_len = 0; int ret; @@ -71,14 +71,14 @@ static int ccp_des3_crypt(struct ablkcipher_request *req, bool encrypt) if (((ctx->u.des3.mode == CCP_DES3_MODE_ECB) || (ctx->u.des3.mode == CCP_DES3_MODE_CBC)) && - (req->nbytes & (DES3_EDE_BLOCK_SIZE - 1))) + (req->cryptlen & (DES3_EDE_BLOCK_SIZE - 1))) return -EINVAL; if (ctx->u.des3.mode != CCP_DES3_MODE_ECB) { - if (!req->info) + if (!req->iv) return -EINVAL; - memcpy(rctx->iv, req->info, DES3_EDE_BLOCK_SIZE); + memcpy(rctx->iv, req->iv, DES3_EDE_BLOCK_SIZE); iv_sg = &rctx->iv_sg; iv_len = DES3_EDE_BLOCK_SIZE; sg_init_one(iv_sg, rctx->iv, iv_len); @@ -97,7 +97,7 @@ static int ccp_des3_crypt(struct ablkcipher_request *req, bool encrypt) rctx->cmd.u.des3.iv = iv_sg; rctx->cmd.u.des3.iv_len = iv_len; rctx->cmd.u.des3.src = req->src; - rctx->cmd.u.des3.src_len = req->nbytes; + rctx->cmd.u.des3.src_len = req->cryptlen; rctx->cmd.u.des3.dst = req->dst; ret = ccp_crypto_enqueue_request(&req->base, &rctx->cmd); @@ -105,51 +105,43 @@ static int ccp_des3_crypt(struct ablkcipher_request *req, bool encrypt) return ret; } -static int ccp_des3_encrypt(struct ablkcipher_request *req) +static int ccp_des3_encrypt(struct skcipher_request *req) { return ccp_des3_crypt(req, true); } -static int ccp_des3_decrypt(struct ablkcipher_request *req) +static int ccp_des3_decrypt(struct skcipher_request *req) { return ccp_des3_crypt(req, false); } -static int ccp_des3_cra_init(struct crypto_tfm *tfm) +static int ccp_des3_init_tfm(struct crypto_skcipher *tfm) { - struct ccp_ctx *ctx = crypto_tfm_ctx(tfm); + struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm); ctx->complete = ccp_des3_complete; ctx->u.des3.key_len = 0; - tfm->crt_ablkcipher.reqsize = sizeof(struct ccp_des3_req_ctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_des3_req_ctx)); return 0; } -static void ccp_des3_cra_exit(struct crypto_tfm *tfm) -{ -} - -static struct crypto_alg ccp_des3_defaults = { - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY | - CRYPTO_ALG_NEED_FALLBACK, - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct ccp_ctx), - .cra_priority = CCP_CRA_PRIORITY, - .cra_type = &crypto_ablkcipher_type, - .cra_init = ccp_des3_cra_init, - .cra_exit = ccp_des3_cra_exit, - .cra_module = THIS_MODULE, - .cra_ablkcipher = { - .setkey = ccp_des3_setkey, - .encrypt = ccp_des3_encrypt, - .decrypt = ccp_des3_decrypt, - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - }, +static const struct skcipher_alg ccp_des3_defaults = { + .setkey = ccp_des3_setkey, + .encrypt = ccp_des3_encrypt, + .decrypt = ccp_des3_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .init = ccp_des3_init_tfm, + + .base.cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct ccp_ctx), + .base.cra_priority = CCP_CRA_PRIORITY, + .base.cra_module = THIS_MODULE, }; struct ccp_des3_def { @@ -159,10 +151,10 @@ struct ccp_des3_def { const char *driver_name; unsigned int blocksize; unsigned int ivsize; 
- struct crypto_alg *alg_defaults; + const struct skcipher_alg *alg_defaults; }; -static struct ccp_des3_def des3_algs[] = { +static const struct ccp_des3_def des3_algs[] = { { .mode = CCP_DES3_MODE_ECB, .version = CCP_VERSION(5, 0), @@ -186,8 +178,8 @@ static struct ccp_des3_def des3_algs[] = { static int ccp_register_des3_alg(struct list_head *head, const struct ccp_des3_def *def) { - struct ccp_crypto_ablkcipher_alg *ccp_alg; - struct crypto_alg *alg; + struct ccp_crypto_skcipher_alg *ccp_alg; + struct skcipher_alg *alg; int ret; ccp_alg = kzalloc(sizeof(*ccp_alg), GFP_KERNEL); @@ -201,16 +193,16 @@ static int ccp_register_des3_alg(struct list_head *head, /* Copy the defaults and override as necessary */ alg = &ccp_alg->alg; *alg = *def->alg_defaults; - snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name); - snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", + snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name); + snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", def->driver_name); - alg->cra_blocksize = def->blocksize; - alg->cra_ablkcipher.ivsize = def->ivsize; + alg->base.cra_blocksize = def->blocksize; + alg->ivsize = def->ivsize; - ret = crypto_register_alg(alg); + ret = crypto_register_skcipher(alg); if (ret) { - pr_err("%s ablkcipher algorithm registration error (%d)\n", - alg->cra_name, ret); + pr_err("%s skcipher algorithm registration error (%d)\n", + alg->base.cra_name, ret); kfree(ccp_alg); return ret; } diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c index 8ee4cb45a3f3..88275b4867ea 100644 --- a/drivers/crypto/ccp/ccp-crypto-main.c +++ b/drivers/crypto/ccp/ccp-crypto-main.c @@ -41,7 +41,7 @@ MODULE_PARM_DESC(rsa_disable, "Disable use of RSA - any non-zero value"); /* List heads for the supported algorithms */ static LIST_HEAD(hash_algs); -static LIST_HEAD(cipher_algs); +static LIST_HEAD(skcipher_algs); static LIST_HEAD(aead_algs); static LIST_HEAD(akcipher_algs); @@ -330,7 +330,7 @@ static int ccp_register_algs(void) int ret; if (!aes_disable) { - ret = ccp_register_aes_algs(&cipher_algs); + ret = ccp_register_aes_algs(&skcipher_algs); if (ret) return ret; @@ -338,7 +338,7 @@ static int ccp_register_algs(void) if (ret) return ret; - ret = ccp_register_aes_xts_algs(&cipher_algs); + ret = ccp_register_aes_xts_algs(&skcipher_algs); if (ret) return ret; @@ -348,7 +348,7 @@ static int ccp_register_algs(void) } if (!des3_disable) { - ret = ccp_register_des3_algs(&cipher_algs); + ret = ccp_register_des3_algs(&skcipher_algs); if (ret) return ret; } @@ -371,7 +371,7 @@ static int ccp_register_algs(void) static void ccp_unregister_algs(void) { struct ccp_crypto_ahash_alg *ahash_alg, *ahash_tmp; - struct ccp_crypto_ablkcipher_alg *ablk_alg, *ablk_tmp; + struct ccp_crypto_skcipher_alg *ablk_alg, *ablk_tmp; struct ccp_crypto_aead *aead_alg, *aead_tmp; struct ccp_crypto_akcipher_alg *akc_alg, *akc_tmp; @@ -381,8 +381,8 @@ static void ccp_unregister_algs(void) kfree(ahash_alg); } - list_for_each_entry_safe(ablk_alg, ablk_tmp, &cipher_algs, entry) { - crypto_unregister_alg(&ablk_alg->alg); + list_for_each_entry_safe(ablk_alg, ablk_tmp, &skcipher_algs, entry) { + crypto_unregister_skcipher(&ablk_alg->alg); list_del(&ablk_alg->entry); kfree(ablk_alg); } diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h index 9015b5da6ba3..90a009e6b5c1 100644 --- a/drivers/crypto/ccp/ccp-crypto.h +++ b/drivers/crypto/ccp/ccp-crypto.h @@ -21,6 +21,7 @@ #include #include #include +#include #include 
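/*
 * All of the conversions in this series share the same per-transform
 * boilerplate; a rough sketch with made-up xxx_ names follows. The old
 * cra_init hook that poked tfm->crt_ablkcipher.reqsize becomes an
 * skcipher init hook that declares the request context size through
 * crypto_skcipher_set_reqsize():
 *
 *	static int xxx_init_tfm(struct crypto_skcipher *tfm)
 *	{
 *		struct xxx_ctx *ctx = crypto_skcipher_ctx(tfm);
 *
 *		ctx->key_len = 0;
 *		crypto_skcipher_set_reqsize(tfm, sizeof(struct xxx_reqctx));
 *
 *		return 0;
 *	}
 */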
 /* We want the module name in front of our messages */
@@ -31,12 +32,12 @@

 #define	CCP_CRA_PRIORITY	300

-struct ccp_crypto_ablkcipher_alg {
+struct ccp_crypto_skcipher_alg {
 	struct list_head entry;

 	u32 mode;
-	struct crypto_alg alg;
+	struct skcipher_alg alg;
 };

 struct ccp_crypto_aead {
@@ -66,12 +67,12 @@ struct ccp_crypto_akcipher_alg {
 	struct akcipher_alg alg;
 };

-static inline struct ccp_crypto_ablkcipher_alg *
-	ccp_crypto_ablkcipher_alg(struct crypto_tfm *tfm)
+static inline struct ccp_crypto_skcipher_alg *
+	ccp_crypto_skcipher_alg(struct crypto_skcipher *tfm)
 {
-	struct crypto_alg *alg = tfm->__crt_alg;
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);

-	return container_of(alg, struct ccp_crypto_ablkcipher_alg, alg);
+	return container_of(alg, struct ccp_crypto_skcipher_alg, alg);
 }

 static inline struct ccp_crypto_ahash_alg *

From patchwork Thu Oct 24 13:23:23 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209665
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 05/27] crypto: omap - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:23 +0200
Message-Id: <20191024132345.5236-6-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: Herbert Xu, Eric Biggers, Tony Lindgren, Ard Biesheuvel, linux-omap@vger.kernel.org, "David S.
Miller" , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 august 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future. Cc: Tony Lindgren Cc: linux-omap@vger.kernel.org Signed-off-by: Ard Biesheuvel Reviewed-by: Tero Kristo Tested-by: Tero Kristo --- drivers/crypto/omap-aes.c | 209 +++++++++--------- drivers/crypto/omap-aes.h | 4 +- drivers/crypto/omap-des.c | 232 +++++++++----------- 3 files changed, 207 insertions(+), 238 deletions(-) diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c index 2f53fbb74100..a1fc03ed01f3 100644 --- a/drivers/crypto/omap-aes.c +++ b/drivers/crypto/omap-aes.c @@ -142,8 +142,8 @@ int omap_aes_write_ctrl(struct omap_aes_dev *dd) __le32_to_cpu(dd->ctx->key[i])); } - if ((dd->flags & (FLAGS_CBC | FLAGS_CTR)) && dd->req->info) - omap_aes_write_n(dd, AES_REG_IV(dd, 0), dd->req->info, 4); + if ((dd->flags & (FLAGS_CBC | FLAGS_CTR)) && dd->req->iv) + omap_aes_write_n(dd, AES_REG_IV(dd, 0), (void *)dd->req->iv, 4); if ((dd->flags & (FLAGS_GCM)) && dd->aead_req->iv) { rctx = aead_request_ctx(dd->aead_req); @@ -382,11 +382,11 @@ int omap_aes_crypt_dma_start(struct omap_aes_dev *dd) static void omap_aes_finish_req(struct omap_aes_dev *dd, int err) { - struct ablkcipher_request *req = dd->req; + struct skcipher_request *req = dd->req; pr_debug("err: %d\n", err); - crypto_finalize_ablkcipher_request(dd->engine, req, err); + crypto_finalize_skcipher_request(dd->engine, req, err); pm_runtime_mark_last_busy(dd->dev); pm_runtime_put_autosuspend(dd->dev); @@ -403,10 +403,10 @@ int omap_aes_crypt_dma_stop(struct omap_aes_dev *dd) } static int omap_aes_handle_queue(struct omap_aes_dev *dd, - struct ablkcipher_request *req) + struct skcipher_request *req) { if (req) - return crypto_transfer_ablkcipher_request_to_engine(dd->engine, req); + return crypto_transfer_skcipher_request_to_engine(dd->engine, req); return 0; } @@ -414,10 +414,10 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd, static int omap_aes_prepare_req(struct crypto_engine *engine, void *areq) { - struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base); - struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); - struct omap_aes_reqctx *rctx = ablkcipher_request_ctx(req); + struct skcipher_request *req = container_of(areq, struct skcipher_request, base); + struct omap_aes_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); + struct omap_aes_reqctx *rctx = skcipher_request_ctx(req); struct omap_aes_dev *dd = rctx->dd; int ret; u16 flags; @@ -427,8 +427,8 @@ static int omap_aes_prepare_req(struct crypto_engine *engine, /* assign new request to device */ dd->req = req; - dd->total = req->nbytes; - dd->total_save = req->nbytes; + dd->total = req->cryptlen; + dd->total_save = req->cryptlen; dd->in_sg = req->src; dd->out_sg = req->dst; dd->orig_out = req->dst; @@ -469,8 +469,8 @@ static int omap_aes_prepare_req(struct crypto_engine *engine, static int 
omap_aes_crypt_req(struct crypto_engine *engine, void *areq) { - struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base); - struct omap_aes_reqctx *rctx = ablkcipher_request_ctx(req); + struct skcipher_request *req = container_of(areq, struct skcipher_request, base); + struct omap_aes_reqctx *rctx = skcipher_request_ctx(req); struct omap_aes_dev *dd = rctx->dd; if (!dd) @@ -505,26 +505,26 @@ static void omap_aes_done_task(unsigned long data) pr_debug("exit\n"); } -static int omap_aes_crypt(struct ablkcipher_request *req, unsigned long mode) +static int omap_aes_crypt(struct skcipher_request *req, unsigned long mode) { - struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); - struct omap_aes_reqctx *rctx = ablkcipher_request_ctx(req); + struct omap_aes_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); + struct omap_aes_reqctx *rctx = skcipher_request_ctx(req); struct omap_aes_dev *dd; int ret; - pr_debug("nbytes: %d, enc: %d, cbc: %d\n", req->nbytes, + pr_debug("nbytes: %d, enc: %d, cbc: %d\n", req->cryptlen, !!(mode & FLAGS_ENCRYPT), !!(mode & FLAGS_CBC)); - if (req->nbytes < aes_fallback_sz) { + if (req->cryptlen < aes_fallback_sz) { SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, - req->nbytes, req->info); + req->cryptlen, req->iv); if (mode & FLAGS_ENCRYPT) ret = crypto_skcipher_encrypt(subreq); @@ -545,10 +545,10 @@ static int omap_aes_crypt(struct ablkcipher_request *req, unsigned long mode) /* ********************** ALG API ************************************ */ -static int omap_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int omap_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct omap_aes_ctx *ctx = crypto_skcipher_ctx(tfm); int ret; if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && @@ -571,32 +571,32 @@ static int omap_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return 0; } -static int omap_aes_ecb_encrypt(struct ablkcipher_request *req) +static int omap_aes_ecb_encrypt(struct skcipher_request *req) { return omap_aes_crypt(req, FLAGS_ENCRYPT); } -static int omap_aes_ecb_decrypt(struct ablkcipher_request *req) +static int omap_aes_ecb_decrypt(struct skcipher_request *req) { return omap_aes_crypt(req, 0); } -static int omap_aes_cbc_encrypt(struct ablkcipher_request *req) +static int omap_aes_cbc_encrypt(struct skcipher_request *req) { return omap_aes_crypt(req, FLAGS_ENCRYPT | FLAGS_CBC); } -static int omap_aes_cbc_decrypt(struct ablkcipher_request *req) +static int omap_aes_cbc_decrypt(struct skcipher_request *req) { return omap_aes_crypt(req, FLAGS_CBC); } -static int omap_aes_ctr_encrypt(struct ablkcipher_request *req) +static int omap_aes_ctr_encrypt(struct skcipher_request *req) { return omap_aes_crypt(req, FLAGS_ENCRYPT | FLAGS_CTR); } -static int omap_aes_ctr_decrypt(struct ablkcipher_request *req) +static int omap_aes_ctr_decrypt(struct skcipher_request *req) { return omap_aes_crypt(req, FLAGS_CTR); } @@ -606,10 +606,10 @@ static int omap_aes_prepare_req(struct crypto_engine *engine, static int omap_aes_crypt_req(struct crypto_engine *engine, void *req); -static int omap_aes_cra_init(struct crypto_tfm *tfm) +static int omap_aes_init_tfm(struct crypto_skcipher *tfm) { - const char *name 
= crypto_tfm_alg_name(tfm); - struct omap_aes_ctx *ctx = crypto_tfm_ctx(tfm); + const char *name = crypto_tfm_alg_name(&tfm->base); + struct omap_aes_ctx *ctx = crypto_skcipher_ctx(tfm); struct crypto_sync_skcipher *blk; blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); @@ -618,7 +618,7 @@ static int omap_aes_cra_init(struct crypto_tfm *tfm) ctx->fallback = blk; - tfm->crt_ablkcipher.reqsize = sizeof(struct omap_aes_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct omap_aes_reqctx)); ctx->enginectx.op.prepare_request = omap_aes_prepare_req; ctx->enginectx.op.unprepare_request = NULL; @@ -657,9 +657,9 @@ static int omap_aes_gcm_cra_init(struct crypto_aead *tfm) return 0; } -static void omap_aes_cra_exit(struct crypto_tfm *tfm) +static void omap_aes_exit_tfm(struct crypto_skcipher *tfm) { - struct omap_aes_ctx *ctx = crypto_tfm_ctx(tfm); + struct omap_aes_ctx *ctx = crypto_skcipher_ctx(tfm); if (ctx->fallback) crypto_free_sync_skcipher(ctx->fallback); @@ -671,7 +671,10 @@ static void omap_aes_gcm_cra_exit(struct crypto_aead *tfm) { struct omap_aes_ctx *ctx = crypto_aead_ctx(tfm); - omap_aes_cra_exit(crypto_aead_tfm(tfm)); + if (ctx->fallback) + crypto_free_sync_skcipher(ctx->fallback); + + ctx->fallback = NULL; if (ctx->ctr) crypto_free_skcipher(ctx->ctr); @@ -679,78 +682,69 @@ static void omap_aes_gcm_cra_exit(struct crypto_aead *tfm) /* ********************** ALGS ************************************ */ -static struct crypto_alg algs_ecb_cbc[] = { +static struct skcipher_alg algs_ecb_cbc[] = { { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-omap", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct omap_aes_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = omap_aes_cra_init, - .cra_exit = omap_aes_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = omap_aes_setkey, - .encrypt = omap_aes_ecb_encrypt, - .decrypt = omap_aes_ecb_decrypt, - } + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-omap", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct omap_aes_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = omap_aes_setkey, + .encrypt = omap_aes_ecb_encrypt, + .decrypt = omap_aes_ecb_decrypt, + .init = omap_aes_init_tfm, + .exit = omap_aes_exit_tfm, }, { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-omap", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct omap_aes_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = omap_aes_cra_init, - .cra_exit = omap_aes_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = omap_aes_setkey, - .encrypt = omap_aes_cbc_encrypt, - .decrypt = omap_aes_cbc_decrypt, - } + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-omap", + .base.cra_priority = 300, + .base.cra_flags = 
CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct omap_aes_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = omap_aes_setkey, + .encrypt = omap_aes_cbc_encrypt, + .decrypt = omap_aes_cbc_decrypt, + .init = omap_aes_init_tfm, + .exit = omap_aes_exit_tfm, } }; -static struct crypto_alg algs_ctr[] = { +static struct skcipher_alg algs_ctr[] = { { - .cra_name = "ctr(aes)", - .cra_driver_name = "ctr-aes-omap", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct omap_aes_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = omap_aes_cra_init, - .cra_exit = omap_aes_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = omap_aes_setkey, - .encrypt = omap_aes_ctr_encrypt, - .decrypt = omap_aes_ctr_decrypt, - } -} , + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "ctr-aes-omap", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct omap_aes_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = omap_aes_setkey, + .encrypt = omap_aes_ctr_encrypt, + .decrypt = omap_aes_ctr_decrypt, + .init = omap_aes_init_tfm, + .exit = omap_aes_exit_tfm, +} }; static struct omap_aes_algs_info omap_aes_algs_info_ecb_cbc[] = { @@ -1121,7 +1115,7 @@ static int omap_aes_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct omap_aes_dev *dd; - struct crypto_alg *algp; + struct skcipher_alg *algp; struct aead_alg *aalg; struct resource res; int err = -ENOMEM, i, j, irq = -1; @@ -1215,9 +1209,9 @@ static int omap_aes_probe(struct platform_device *pdev) for (j = 0; j < dd->pdata->algs_info[i].size; j++) { algp = &dd->pdata->algs_info[i].algs_list[j]; - pr_debug("reg alg: %s\n", algp->cra_name); + pr_debug("reg alg: %s\n", algp->base.cra_name); - err = crypto_register_alg(algp); + err = crypto_register_skcipher(algp); if (err) goto err_algs; @@ -1230,9 +1224,8 @@ static int omap_aes_probe(struct platform_device *pdev) !dd->pdata->aead_algs_info->registered) { for (i = 0; i < dd->pdata->aead_algs_info->size; i++) { aalg = &dd->pdata->aead_algs_info->algs_list[i]; - algp = &aalg->base; - pr_debug("reg alg: %s\n", algp->cra_name); + pr_debug("reg alg: %s\n", aalg->base.cra_name); err = crypto_register_aead(aalg); if (err) @@ -1257,7 +1250,7 @@ static int omap_aes_probe(struct platform_device *pdev) err_algs: for (i = dd->pdata->algs_info_size - 1; i >= 0; i--) for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) - crypto_unregister_alg( + crypto_unregister_skcipher( &dd->pdata->algs_info[i].algs_list[j]); err_engine: @@ -1290,7 +1283,7 @@ static int omap_aes_remove(struct platform_device *pdev) for (i = dd->pdata->algs_info_size - 1; i >= 0; i--) for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) - crypto_unregister_alg( + crypto_unregister_skcipher( &dd->pdata->algs_info[i].algs_list[j]); for (i = dd->pdata->aead_algs_info->size - 
1; i >= 0; i--) { diff --git a/drivers/crypto/omap-aes.h b/drivers/crypto/omap-aes.h index 2d4b1f87a1c9..2d3575231e31 100644 --- a/drivers/crypto/omap-aes.h +++ b/drivers/crypto/omap-aes.h @@ -112,7 +112,7 @@ struct omap_aes_reqctx { #define OMAP_AES_CACHE_SIZE 0 struct omap_aes_algs_info { - struct crypto_alg *algs_list; + struct skcipher_alg *algs_list; unsigned int size; unsigned int registered; }; @@ -162,7 +162,7 @@ struct omap_aes_dev { struct aead_queue aead_queue; spinlock_t lock; - struct ablkcipher_request *req; + struct skcipher_request *req; struct aead_request *aead_req; struct crypto_engine *engine; diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c index b19d7e5d55ec..4c4dbc2b377e 100644 --- a/drivers/crypto/omap-des.c +++ b/drivers/crypto/omap-des.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include @@ -98,7 +99,7 @@ struct omap_des_reqctx { #define OMAP_DES_CACHE_SIZE 0 struct omap_des_algs_info { - struct crypto_alg *algs_list; + struct skcipher_alg *algs_list; unsigned int size; unsigned int registered; }; @@ -139,7 +140,7 @@ struct omap_des_dev { struct tasklet_struct done_task; - struct ablkcipher_request *req; + struct skcipher_request *req; struct crypto_engine *engine; /* * total is used by PIO mode for book keeping so introduce @@ -261,8 +262,8 @@ static int omap_des_write_ctrl(struct omap_des_dev *dd) __le32_to_cpu(dd->ctx->key[i])); } - if ((dd->flags & FLAGS_CBC) && dd->req->info) - omap_des_write_n(dd, DES_REG_IV(dd, 0), dd->req->info, 2); + if ((dd->flags & FLAGS_CBC) && dd->req->iv) + omap_des_write_n(dd, DES_REG_IV(dd, 0), (void *)dd->req->iv, 2); if (dd->flags & FLAGS_CBC) val |= DES_REG_CTRL_CBC; @@ -456,8 +457,8 @@ static int omap_des_crypt_dma(struct crypto_tfm *tfm, static int omap_des_crypt_dma_start(struct omap_des_dev *dd) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm( - crypto_ablkcipher_reqtfm(dd->req)); + struct crypto_tfm *tfm = crypto_skcipher_tfm( + crypto_skcipher_reqtfm(dd->req)); int err; pr_debug("total: %d\n", dd->total); @@ -491,11 +492,11 @@ static int omap_des_crypt_dma_start(struct omap_des_dev *dd) static void omap_des_finish_req(struct omap_des_dev *dd, int err) { - struct ablkcipher_request *req = dd->req; + struct skcipher_request *req = dd->req; pr_debug("err: %d\n", err); - crypto_finalize_ablkcipher_request(dd->engine, req, err); + crypto_finalize_skcipher_request(dd->engine, req, err); pm_runtime_mark_last_busy(dd->dev); pm_runtime_put_autosuspend(dd->dev); @@ -514,10 +515,10 @@ static int omap_des_crypt_dma_stop(struct omap_des_dev *dd) } static int omap_des_handle_queue(struct omap_des_dev *dd, - struct ablkcipher_request *req) + struct skcipher_request *req) { if (req) - return crypto_transfer_ablkcipher_request_to_engine(dd->engine, req); + return crypto_transfer_skcipher_request_to_engine(dd->engine, req); return 0; } @@ -525,9 +526,9 @@ static int omap_des_handle_queue(struct omap_des_dev *dd, static int omap_des_prepare_req(struct crypto_engine *engine, void *areq) { - struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base); - struct omap_des_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); + struct skcipher_request *req = container_of(areq, struct skcipher_request, base); + struct omap_des_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); struct omap_des_dev *dd = omap_des_find_dev(ctx); struct omap_des_reqctx *rctx; int ret; @@ -538,8 +539,8 @@ static int omap_des_prepare_req(struct crypto_engine *engine, /* assign 
new request to device */ dd->req = req; - dd->total = req->nbytes; - dd->total_save = req->nbytes; + dd->total = req->cryptlen; + dd->total_save = req->cryptlen; dd->in_sg = req->src; dd->out_sg = req->dst; dd->orig_out = req->dst; @@ -568,8 +569,8 @@ static int omap_des_prepare_req(struct crypto_engine *engine, if (dd->out_sg_len < 0) return dd->out_sg_len; - rctx = ablkcipher_request_ctx(req); - ctx = crypto_ablkcipher_ctx(crypto_ablkcipher_reqtfm(req)); + rctx = skcipher_request_ctx(req); + ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); rctx->mode &= FLAGS_MODE_MASK; dd->flags = (dd->flags & ~FLAGS_MODE_MASK) | rctx->mode; @@ -582,9 +583,9 @@ static int omap_des_prepare_req(struct crypto_engine *engine, static int omap_des_crypt_req(struct crypto_engine *engine, void *areq) { - struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base); - struct omap_des_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); + struct skcipher_request *req = container_of(areq, struct skcipher_request, base); + struct omap_des_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); struct omap_des_dev *dd = omap_des_find_dev(ctx); if (!dd) @@ -619,18 +620,18 @@ static void omap_des_done_task(unsigned long data) pr_debug("exit\n"); } -static int omap_des_crypt(struct ablkcipher_request *req, unsigned long mode) +static int omap_des_crypt(struct skcipher_request *req, unsigned long mode) { - struct omap_des_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); - struct omap_des_reqctx *rctx = ablkcipher_request_ctx(req); + struct omap_des_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); + struct omap_des_reqctx *rctx = skcipher_request_ctx(req); struct omap_des_dev *dd; - pr_debug("nbytes: %d, enc: %d, cbc: %d\n", req->nbytes, + pr_debug("nbytes: %d, enc: %d, cbc: %d\n", req->cryptlen, !!(mode & FLAGS_ENCRYPT), !!(mode & FLAGS_CBC)); - if (!IS_ALIGNED(req->nbytes, DES_BLOCK_SIZE)) { + if (!IS_ALIGNED(req->cryptlen, DES_BLOCK_SIZE)) { pr_err("request size is not exact amount of DES blocks\n"); return -EINVAL; } @@ -646,15 +647,15 @@ static int omap_des_crypt(struct ablkcipher_request *req, unsigned long mode) /* ********************** ALG API ************************************ */ -static int omap_des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int omap_des_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct omap_des_ctx *ctx = crypto_skcipher_ctx(cipher); int err; pr_debug("enter, keylen: %d\n", keylen); - err = verify_ablkcipher_des_key(cipher, key); + err = verify_skcipher_des_key(cipher, key); if (err) return err; @@ -664,15 +665,15 @@ static int omap_des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int omap_des3_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int omap_des3_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct omap_des_ctx *ctx = crypto_skcipher_ctx(cipher); int err; pr_debug("enter, keylen: %d\n", keylen); - err = verify_ablkcipher_des3_key(cipher, key); + err = verify_skcipher_des3_key(cipher, key); if (err) return err; @@ -682,22 +683,22 @@ static int omap_des3_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int omap_des_ecb_encrypt(struct ablkcipher_request *req) +static int omap_des_ecb_encrypt(struct skcipher_request 
*req) { return omap_des_crypt(req, FLAGS_ENCRYPT); } -static int omap_des_ecb_decrypt(struct ablkcipher_request *req) +static int omap_des_ecb_decrypt(struct skcipher_request *req) { return omap_des_crypt(req, 0); } -static int omap_des_cbc_encrypt(struct ablkcipher_request *req) +static int omap_des_cbc_encrypt(struct skcipher_request *req) { return omap_des_crypt(req, FLAGS_ENCRYPT | FLAGS_CBC); } -static int omap_des_cbc_decrypt(struct ablkcipher_request *req) +static int omap_des_cbc_decrypt(struct skcipher_request *req) { return omap_des_crypt(req, FLAGS_CBC); } @@ -707,13 +708,13 @@ static int omap_des_prepare_req(struct crypto_engine *engine, static int omap_des_crypt_req(struct crypto_engine *engine, void *areq); -static int omap_des_cra_init(struct crypto_tfm *tfm) +static int omap_des_init_tfm(struct crypto_skcipher *tfm) { - struct omap_des_ctx *ctx = crypto_tfm_ctx(tfm); + struct omap_des_ctx *ctx = crypto_skcipher_ctx(tfm); pr_debug("enter\n"); - tfm->crt_ablkcipher.reqsize = sizeof(struct omap_des_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct omap_des_reqctx)); ctx->enginectx.op.prepare_request = omap_des_prepare_req; ctx->enginectx.op.unprepare_request = NULL; @@ -722,103 +723,78 @@ static int omap_des_cra_init(struct crypto_tfm *tfm) return 0; } -static void omap_des_cra_exit(struct crypto_tfm *tfm) -{ - pr_debug("enter\n"); -} - /* ********************** ALGS ************************************ */ -static struct crypto_alg algs_ecb_cbc[] = { +static struct skcipher_alg algs_ecb_cbc[] = { { - .cra_name = "ecb(des)", - .cra_driver_name = "ecb-des-omap", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | + .base.cra_name = "ecb(des)", + .base.cra_driver_name = "ecb-des-omap", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct omap_des_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = omap_des_cra_init, - .cra_exit = omap_des_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .setkey = omap_des_setkey, - .encrypt = omap_des_ecb_encrypt, - .decrypt = omap_des_ecb_decrypt, - } + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct omap_des_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .setkey = omap_des_setkey, + .encrypt = omap_des_ecb_encrypt, + .decrypt = omap_des_ecb_decrypt, + .init = omap_des_init_tfm, }, { - .cra_name = "cbc(des)", - .cra_driver_name = "cbc-des-omap", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | + .base.cra_name = "cbc(des)", + .base.cra_driver_name = "cbc-des-omap", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct omap_des_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = omap_des_cra_init, - .cra_exit = omap_des_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = omap_des_setkey, - .encrypt = omap_des_cbc_encrypt, - .decrypt = omap_des_cbc_decrypt, - } + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct omap_des_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = 
DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = omap_des_setkey, + .encrypt = omap_des_cbc_encrypt, + .decrypt = omap_des_cbc_decrypt, + .init = omap_des_init_tfm, }, { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "ecb-des3-omap", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | + .base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "ecb-des3-omap", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct omap_des_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = omap_des_cra_init, - .cra_exit = omap_des_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = 3*DES_KEY_SIZE, - .max_keysize = 3*DES_KEY_SIZE, - .setkey = omap_des3_setkey, - .encrypt = omap_des_ecb_encrypt, - .decrypt = omap_des_ecb_decrypt, - } + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct omap_des_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .setkey = omap_des3_setkey, + .encrypt = omap_des_ecb_encrypt, + .decrypt = omap_des_ecb_decrypt, + .init = omap_des_init_tfm, }, { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "cbc-des3-omap", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | + .base.cra_name = "cbc(des3_ede)", + .base.cra_driver_name = "cbc-des3-omap", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct omap_des_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = omap_des_cra_init, - .cra_exit = omap_des_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = 3*DES_KEY_SIZE, - .max_keysize = 3*DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = omap_des3_setkey, - .encrypt = omap_des_cbc_encrypt, - .decrypt = omap_des_cbc_decrypt, - } + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct omap_des_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + .setkey = omap_des3_setkey, + .encrypt = omap_des_cbc_encrypt, + .decrypt = omap_des_cbc_decrypt, + .init = omap_des_init_tfm, } }; @@ -976,7 +952,7 @@ static int omap_des_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct omap_des_dev *dd; - struct crypto_alg *algp; + struct skcipher_alg *algp; struct resource *res; int err = -ENOMEM, i, j, irq = -1; u32 reg; @@ -1071,9 +1047,9 @@ static int omap_des_probe(struct platform_device *pdev) for (j = 0; j < dd->pdata->algs_info[i].size; j++) { algp = &dd->pdata->algs_info[i].algs_list[j]; - pr_debug("reg alg: %s\n", algp->cra_name); + pr_debug("reg alg: %s\n", algp->base.cra_name); - err = crypto_register_alg(algp); + err = crypto_register_skcipher(algp); if (err) goto err_algs; @@ -1086,7 +1062,7 @@ static int omap_des_probe(struct platform_device *pdev) err_algs: for (i = dd->pdata->algs_info_size - 1; i >= 0; i--) for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) - crypto_unregister_alg( + crypto_unregister_skcipher( &dd->pdata->algs_info[i].algs_list[j]); err_engine: @@ -1119,7 +1095,7 @@ static int omap_des_remove(struct platform_device *pdev) for (i = dd->pdata->algs_info_size - 1; i >= 
0; i--)
 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
-			crypto_unregister_alg(
+			crypto_unregister_skcipher(
 				&dd->pdata->algs_info[i].algs_list[j]);

 	tasklet_kill(&dd->done_task);

From patchwork Thu Oct 24 13:23:24 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209669
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 06/27] crypto: ux500 - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:24 +0200
Message-Id: <20191024132345.5236-7-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: Herbert Xu, Eric Biggers, Linus Walleij, Ard Biesheuvel, "David S. Miller", linux-arm-kernel@lists.infradead.org

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface"),
dated 20 August 2015, introduced the new skcipher API, which is intended to
replace both blkcipher and ablkcipher. While all consumers of the API were
converted long ago, some producers of ablkcipher transforms remain, forcing
us to keep the ablkcipher support routines alive, along with the matching
code that exposes [a]blkciphers via the skcipher API.

So switch this driver to the skcipher API, allowing us to finally drop the
blkcipher code in the near future.
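The mechanical shape of the change is the same in every driver touched by
this series. As a rough before/after sketch (illustrative xxx_ names, not
taken from this driver):

	/* before: an ablkcipher described via struct crypto_alg */
	static struct crypto_alg xxx_alg = {
		.cra_name		= "cbc(aes)",
		.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER |
					  CRYPTO_ALG_ASYNC,
		.cra_type		= &crypto_ablkcipher_type,
		.cra_u.ablkcipher	= {
			.min_keysize	= AES_MIN_KEY_SIZE,
			.max_keysize	= AES_MAX_KEY_SIZE,
			.ivsize		= AES_BLOCK_SIZE,
			.setkey		= xxx_setkey,
			.encrypt	= xxx_encrypt,
			.decrypt	= xxx_decrypt,
		},
	};

	/* after: a struct skcipher_alg, generic fields moved into .base */
	static struct skcipher_alg xxx_alg = {
		.base.cra_name		= "cbc(aes)",
		.base.cra_flags		= CRYPTO_ALG_ASYNC,

		.min_keysize		= AES_MIN_KEY_SIZE,
		.max_keysize		= AES_MAX_KEY_SIZE,
		.ivsize			= AES_BLOCK_SIZE,
		.setkey			= xxx_setkey,
		.encrypt		= xxx_encrypt,
		.decrypt		= xxx_decrypt,
	};

crypto_register_alg()/crypto_unregister_alg() become
crypto_register_skcipher()/crypto_unregister_skcipher(), and on the request
side struct ablkcipher_request's ->nbytes and ->info turn into struct
skcipher_request's ->cryptlen and ->iv.
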
Reviewed-by: Linus Walleij Signed-off-by: Ard Biesheuvel --- drivers/crypto/ux500/cryp/cryp_core.c | 371 ++++++++------------ 1 file changed, 156 insertions(+), 215 deletions(-) diff --git a/drivers/crypto/ux500/cryp/cryp_core.c b/drivers/crypto/ux500/cryp/cryp_core.c index 1628ae7a1467..95fb694a2667 100644 --- a/drivers/crypto/ux500/cryp/cryp_core.c +++ b/drivers/crypto/ux500/cryp/cryp_core.c @@ -30,6 +30,7 @@ #include #include #include +#include #include #include @@ -828,10 +829,10 @@ static int get_nents(struct scatterlist *sg, int nbytes) return nents; } -static int ablk_dma_crypt(struct ablkcipher_request *areq) +static int ablk_dma_crypt(struct skcipher_request *areq) { - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); - struct cryp_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(areq); + struct cryp_ctx *ctx = crypto_skcipher_ctx(cipher); struct cryp_device_data *device_data; int bytes_written = 0; @@ -840,8 +841,8 @@ static int ablk_dma_crypt(struct ablkcipher_request *areq) pr_debug(DEV_DBG_NAME " [%s]", __func__); - ctx->datalen = areq->nbytes; - ctx->outlen = areq->nbytes; + ctx->datalen = areq->cryptlen; + ctx->outlen = areq->cryptlen; ret = cryp_get_device_data(ctx, &device_data); if (ret) @@ -885,11 +886,11 @@ static int ablk_dma_crypt(struct ablkcipher_request *areq) return 0; } -static int ablk_crypt(struct ablkcipher_request *areq) +static int ablk_crypt(struct skcipher_request *areq) { - struct ablkcipher_walk walk; - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); - struct cryp_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct skcipher_walk walk; + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(areq); + struct cryp_ctx *ctx = crypto_skcipher_ctx(cipher); struct cryp_device_data *device_data; unsigned long src_paddr; unsigned long dst_paddr; @@ -902,21 +903,20 @@ static int ablk_crypt(struct ablkcipher_request *areq) if (ret) goto out; - ablkcipher_walk_init(&walk, areq->dst, areq->src, areq->nbytes); - ret = ablkcipher_walk_phys(areq, &walk); + ret = skcipher_walk_async(&walk, areq); if (ret) { - pr_err(DEV_DBG_NAME "[%s]: ablkcipher_walk_phys() failed!", + pr_err(DEV_DBG_NAME "[%s]: skcipher_walk_async() failed!", __func__); goto out; } while ((nbytes = walk.nbytes) > 0) { ctx->iv = walk.iv; - src_paddr = (page_to_phys(walk.src.page) + walk.src.offset); + src_paddr = (page_to_phys(walk.src.phys.page) + walk.src.phys.offset); ctx->indata = phys_to_virt(src_paddr); - dst_paddr = (page_to_phys(walk.dst.page) + walk.dst.offset); + dst_paddr = (page_to_phys(walk.dst.phys.page) + walk.dst.phys.offset); ctx->outdata = phys_to_virt(dst_paddr); ctx->datalen = nbytes - (nbytes % ctx->blocksize); @@ -926,11 +926,10 @@ static int ablk_crypt(struct ablkcipher_request *areq) goto out; nbytes -= ctx->datalen; - ret = ablkcipher_walk_done(areq, &walk, nbytes); + ret = skcipher_walk_done(&walk, nbytes); if (ret) goto out; } - ablkcipher_walk_complete(&walk); out: /* Release the device */ @@ -948,10 +947,10 @@ static int ablk_crypt(struct ablkcipher_request *areq) return ret; } -static int aes_ablkcipher_setkey(struct crypto_ablkcipher *cipher, +static int aes_skcipher_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct cryp_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct cryp_ctx *ctx = crypto_skcipher_ctx(cipher); u32 *flags = &cipher->base.crt_flags; pr_debug(DEV_DBG_NAME " [%s]", __func__); @@ -983,15 +982,15 @@ static int 
aes_ablkcipher_setkey(struct crypto_ablkcipher *cipher, return 0; } -static int des_ablkcipher_setkey(struct crypto_ablkcipher *cipher, +static int des_skcipher_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct cryp_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct cryp_ctx *ctx = crypto_skcipher_ctx(cipher); int err; pr_debug(DEV_DBG_NAME " [%s]", __func__); - err = verify_ablkcipher_des_key(cipher, key); + err = verify_skcipher_des_key(cipher, key); if (err) return err; @@ -1002,15 +1001,15 @@ static int des_ablkcipher_setkey(struct crypto_ablkcipher *cipher, return 0; } -static int des3_ablkcipher_setkey(struct crypto_ablkcipher *cipher, +static int des3_skcipher_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct cryp_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct cryp_ctx *ctx = crypto_skcipher_ctx(cipher); int err; pr_debug(DEV_DBG_NAME " [%s]", __func__); - err = verify_ablkcipher_des3_key(cipher, key); + err = verify_skcipher_des3_key(cipher, key); if (err) return err; @@ -1021,10 +1020,10 @@ static int des3_ablkcipher_setkey(struct crypto_ablkcipher *cipher, return 0; } -static int cryp_blk_encrypt(struct ablkcipher_request *areq) +static int cryp_blk_encrypt(struct skcipher_request *areq) { - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); - struct cryp_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(areq); + struct cryp_ctx *ctx = crypto_skcipher_ctx(cipher); pr_debug(DEV_DBG_NAME " [%s]", __func__); @@ -1039,10 +1038,10 @@ static int cryp_blk_encrypt(struct ablkcipher_request *areq) return ablk_crypt(areq); } -static int cryp_blk_decrypt(struct ablkcipher_request *areq) +static int cryp_blk_decrypt(struct skcipher_request *areq) { - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); - struct cryp_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(areq); + struct cryp_ctx *ctx = crypto_skcipher_ctx(cipher); pr_debug(DEV_DBG_NAME " [%s]", __func__); @@ -1058,19 +1057,19 @@ static int cryp_blk_decrypt(struct ablkcipher_request *areq) struct cryp_algo_template { enum cryp_algo_mode algomode; - struct crypto_alg crypto; + struct skcipher_alg skcipher; }; -static int cryp_cra_init(struct crypto_tfm *tfm) +static int cryp_init_tfm(struct crypto_skcipher *tfm) { - struct cryp_ctx *ctx = crypto_tfm_ctx(tfm); - struct crypto_alg *alg = tfm->__crt_alg; + struct cryp_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct cryp_algo_template *cryp_alg = container_of(alg, struct cryp_algo_template, - crypto); + skcipher); ctx->config.algomode = cryp_alg->algomode; - ctx->blocksize = crypto_tfm_alg_blocksize(tfm); + ctx->blocksize = crypto_skcipher_blocksize(tfm); return 0; } @@ -1078,205 +1077,147 @@ static int cryp_cra_init(struct crypto_tfm *tfm) static struct cryp_algo_template cryp_algs[] = { { .algomode = CRYP_ALGO_AES_ECB, - .crypto = { - .cra_name = "aes", - .cra_driver_name = "aes-ux500", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cryp_ctx), - .cra_alignmask = 3, - .cra_type = &crypto_ablkcipher_type, - .cra_init = cryp_cra_init, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = aes_ablkcipher_setkey, - .encrypt = cryp_blk_encrypt, - 
.decrypt = cryp_blk_decrypt - } - } - } - }, - { - .algomode = CRYP_ALGO_AES_ECB, - .crypto = { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-ux500", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cryp_ctx), - .cra_alignmask = 3, - .cra_type = &crypto_ablkcipher_type, - .cra_init = cryp_cra_init, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = aes_ablkcipher_setkey, - .encrypt = cryp_blk_encrypt, - .decrypt = cryp_blk_decrypt, - } - } + .skcipher = { + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-ux500", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cryp_ctx), + .base.cra_alignmask = 3, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = aes_skcipher_setkey, + .encrypt = cryp_blk_encrypt, + .decrypt = cryp_blk_decrypt, + .init = cryp_init_tfm, } }, { .algomode = CRYP_ALGO_AES_CBC, - .crypto = { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-ux500", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cryp_ctx), - .cra_alignmask = 3, - .cra_type = &crypto_ablkcipher_type, - .cra_init = cryp_cra_init, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = aes_ablkcipher_setkey, - .encrypt = cryp_blk_encrypt, - .decrypt = cryp_blk_decrypt, - .ivsize = AES_BLOCK_SIZE, - } - } + .skcipher = { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-ux500", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cryp_ctx), + .base.cra_alignmask = 3, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = aes_skcipher_setkey, + .encrypt = cryp_blk_encrypt, + .decrypt = cryp_blk_decrypt, + .init = cryp_init_tfm, + .ivsize = AES_BLOCK_SIZE, } }, { .algomode = CRYP_ALGO_AES_CTR, - .crypto = { - .cra_name = "ctr(aes)", - .cra_driver_name = "ctr-aes-ux500", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cryp_ctx), - .cra_alignmask = 3, - .cra_type = &crypto_ablkcipher_type, - .cra_init = cryp_cra_init, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = aes_ablkcipher_setkey, - .encrypt = cryp_blk_encrypt, - .decrypt = cryp_blk_decrypt, - .ivsize = AES_BLOCK_SIZE, - } - } + .skcipher = { + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "ctr-aes-ux500", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct cryp_ctx), + .base.cra_alignmask = 3, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = aes_skcipher_setkey, + .encrypt = cryp_blk_encrypt, + .decrypt = cryp_blk_decrypt, + .init = cryp_init_tfm, + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, } }, { .algomode = CRYP_ALGO_DES_ECB, - .crypto = { - .cra_name = "ecb(des)", 
- .cra_driver_name = "ecb-des-ux500", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cryp_ctx), - .cra_alignmask = 3, - .cra_type = &crypto_ablkcipher_type, - .cra_init = cryp_cra_init, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .setkey = des_ablkcipher_setkey, - .encrypt = cryp_blk_encrypt, - .decrypt = cryp_blk_decrypt, - } - } + .skcipher = { + .base.cra_name = "ecb(des)", + .base.cra_driver_name = "ecb-des-ux500", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cryp_ctx), + .base.cra_alignmask = 3, + .base.cra_module = THIS_MODULE, + + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .setkey = des_skcipher_setkey, + .encrypt = cryp_blk_encrypt, + .decrypt = cryp_blk_decrypt, + .init = cryp_init_tfm, } }, { .algomode = CRYP_ALGO_TDES_ECB, - .crypto = { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "ecb-des3_ede-ux500", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cryp_ctx), - .cra_alignmask = 3, - .cra_type = &crypto_ablkcipher_type, - .cra_init = cryp_cra_init, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .setkey = des3_ablkcipher_setkey, - .encrypt = cryp_blk_encrypt, - .decrypt = cryp_blk_decrypt, - } - } + .skcipher = { + .base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "ecb-des3_ede-ux500", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cryp_ctx), + .base.cra_alignmask = 3, + .base.cra_module = THIS_MODULE, + + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .setkey = des3_skcipher_setkey, + .encrypt = cryp_blk_encrypt, + .decrypt = cryp_blk_decrypt, + .init = cryp_init_tfm, } }, { .algomode = CRYP_ALGO_DES_CBC, - .crypto = { - .cra_name = "cbc(des)", - .cra_driver_name = "cbc-des-ux500", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cryp_ctx), - .cra_alignmask = 3, - .cra_type = &crypto_ablkcipher_type, - .cra_init = cryp_cra_init, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .setkey = des_ablkcipher_setkey, - .encrypt = cryp_blk_encrypt, - .decrypt = cryp_blk_decrypt, - } - } + .skcipher = { + .base.cra_name = "cbc(des)", + .base.cra_driver_name = "cbc-des-ux500", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cryp_ctx), + .base.cra_alignmask = 3, + .base.cra_module = THIS_MODULE, + + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .setkey = des_skcipher_setkey, + .encrypt = cryp_blk_encrypt, + .decrypt = cryp_blk_decrypt, + .ivsize = DES_BLOCK_SIZE, + .init = cryp_init_tfm, } }, { .algomode = CRYP_ALGO_TDES_CBC, - .crypto = { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "cbc-des3_ede-ux500", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cryp_ctx), 
- .cra_alignmask = 3, - .cra_type = &crypto_ablkcipher_type, - .cra_init = cryp_cra_init, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .setkey = des3_ablkcipher_setkey, - .encrypt = cryp_blk_encrypt, - .decrypt = cryp_blk_decrypt, - .ivsize = DES3_EDE_BLOCK_SIZE, - } - } + .skcipher = { + .base.cra_name = "cbc(des3_ede)", + .base.cra_driver_name = "cbc-des3_ede-ux500", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cryp_ctx), + .base.cra_alignmask = 3, + .base.cra_module = THIS_MODULE, + + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .setkey = des3_skcipher_setkey, + .encrypt = cryp_blk_encrypt, + .decrypt = cryp_blk_decrypt, + .ivsize = DES3_EDE_BLOCK_SIZE, + .init = cryp_init_tfm, } } }; @@ -1293,18 +1234,18 @@ static int cryp_algs_register_all(void) pr_debug("[%s]", __func__); for (i = 0; i < ARRAY_SIZE(cryp_algs); i++) { - ret = crypto_register_alg(&cryp_algs[i].crypto); + ret = crypto_register_skcipher(&cryp_algs[i].skcipher); if (ret) { count = i; pr_err("[%s] alg registration failed", - cryp_algs[i].crypto.cra_driver_name); + cryp_algs[i].skcipher.base.cra_driver_name); goto unreg; } } return 0; unreg: for (i = 0; i < count; i++) - crypto_unregister_alg(&cryp_algs[i].crypto); + crypto_unregister_skcipher(&cryp_algs[i].skcipher); return ret; } @@ -1318,7 +1259,7 @@ static void cryp_algs_unregister_all(void) pr_debug(DEV_DBG_NAME " [%s]", __func__); for (i = 0; i < ARRAY_SIZE(cryp_algs); i++) - crypto_unregister_alg(&cryp_algs[i].crypto); + crypto_unregister_skcipher(&cryp_algs[i].skcipher); } static int ux500_cryp_probe(struct platform_device *pdev)
From patchwork Thu Oct 24 13:23:25 2019
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu, Eric Biggers, Kamil Konieczny, Ard Biesheuvel, Krzysztof Kozlowski, "David S. Miller", linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 07/27] crypto: s5p - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:25 +0200
Message-Id: <20191024132345.5236-8-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 August 2015 introduced the new skcipher API, which is meant to replace both blkcipher and ablkcipher. While all consumers of the API were converted long ago, some producers of ablkciphers remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
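The field renames in this diff recur throughout the series, so here is a compact sketch of the request-handling side, under the same caveat as before: the bar_* names are placeholders, not s5p code. req->nbytes becomes req->cryptlen, req->info becomes req->iv, and the per-request context size is set with crypto_skcipher_set_reqsize() from the .init hook instead of through tfm->crt_ablkcipher.reqsize.

/*
 * Sketch of the request-side renames; bar_* names are placeholders.
 */
#include <linux/kernel.h>
#include <crypto/internal/skcipher.h>

struct bar_reqctx {
	unsigned long mode;
};

static int bar_init_tfm(struct crypto_skcipher *tfm)
{
	/* replaces: tfm->crt_ablkcipher.reqsize = sizeof(...) */
	crypto_skcipher_set_reqsize(tfm, sizeof(struct bar_reqctx));
	return 0;
}

static int bar_crypt(struct skcipher_request *req, unsigned long mode)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct bar_reqctx *rctx = skcipher_request_ctx(req);

	if (!req->cryptlen)	/* formerly req->nbytes */
		return 0;

	if (!IS_ALIGNED(req->cryptlen, crypto_skcipher_blocksize(tfm)))
		return -EINVAL;

	rctx->mode = mode;

	/* req->iv (formerly req->info) holds the IV or counter block;
	 * a real driver would enqueue the request here. */
	return 0;
}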
Reviewed-by: Kamil Konieczny Tested-by: Kamil Konieczny Acked-by: Krzysztof Kozlowski Signed-off-by: Ard Biesheuvel --- drivers/crypto/s5p-sss.c | 187 ++++++++++---------- 1 file changed, 89 insertions(+), 98 deletions(-) diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c index 010f1bb20dad..d66e20a2f54c 100644 --- a/drivers/crypto/s5p-sss.c +++ b/drivers/crypto/s5p-sss.c @@ -303,7 +303,7 @@ struct s5p_aes_dev { void __iomem *aes_ioaddr; int irq_fc; - struct ablkcipher_request *req; + struct skcipher_request *req; struct s5p_aes_ctx *ctx; struct scatterlist *sg_src; struct scatterlist *sg_dst; @@ -456,7 +456,7 @@ static void s5p_free_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist **sg) if (!*sg) return; - len = ALIGN(dev->req->nbytes, AES_BLOCK_SIZE); + len = ALIGN(dev->req->cryptlen, AES_BLOCK_SIZE); free_pages((unsigned long)sg_virt(*sg), get_order(len)); kfree(*sg); @@ -478,27 +478,27 @@ static void s5p_sg_copy_buf(void *buf, struct scatterlist *sg, static void s5p_sg_done(struct s5p_aes_dev *dev) { - struct ablkcipher_request *req = dev->req; - struct s5p_aes_reqctx *reqctx = ablkcipher_request_ctx(req); + struct skcipher_request *req = dev->req; + struct s5p_aes_reqctx *reqctx = skcipher_request_ctx(req); if (dev->sg_dst_cpy) { dev_dbg(dev->dev, "Copying %d bytes of output data back to original place\n", - dev->req->nbytes); + dev->req->cryptlen); s5p_sg_copy_buf(sg_virt(dev->sg_dst_cpy), dev->req->dst, - dev->req->nbytes, 1); + dev->req->cryptlen, 1); } s5p_free_sg_cpy(dev, &dev->sg_src_cpy); s5p_free_sg_cpy(dev, &dev->sg_dst_cpy); if (reqctx->mode & FLAGS_AES_CBC) - memcpy_fromio(req->info, dev->aes_ioaddr + SSS_REG_AES_IV_DATA(0), AES_BLOCK_SIZE); + memcpy_fromio(req->iv, dev->aes_ioaddr + SSS_REG_AES_IV_DATA(0), AES_BLOCK_SIZE); else if (reqctx->mode & FLAGS_AES_CTR) - memcpy_fromio(req->info, dev->aes_ioaddr + SSS_REG_AES_CNT_DATA(0), AES_BLOCK_SIZE); + memcpy_fromio(req->iv, dev->aes_ioaddr + SSS_REG_AES_CNT_DATA(0), AES_BLOCK_SIZE); } /* Calls the completion. Cannot be called with dev->lock hold. 
*/ -static void s5p_aes_complete(struct ablkcipher_request *req, int err) +static void s5p_aes_complete(struct skcipher_request *req, int err) { req->base.complete(&req->base, err); } @@ -523,7 +523,7 @@ static int s5p_make_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist *src, if (!*dst) return -ENOMEM; - len = ALIGN(dev->req->nbytes, AES_BLOCK_SIZE); + len = ALIGN(dev->req->cryptlen, AES_BLOCK_SIZE); pages = (void *)__get_free_pages(GFP_ATOMIC, get_order(len)); if (!pages) { kfree(*dst); @@ -531,7 +531,7 @@ static int s5p_make_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist *src, return -ENOMEM; } - s5p_sg_copy_buf(pages, src, dev->req->nbytes, 0); + s5p_sg_copy_buf(pages, src, dev->req->cryptlen, 0); sg_init_table(*dst, 1); sg_set_buf(*dst, pages, len); @@ -660,7 +660,7 @@ static irqreturn_t s5p_aes_interrupt(int irq, void *dev_id) { struct platform_device *pdev = dev_id; struct s5p_aes_dev *dev = platform_get_drvdata(pdev); - struct ablkcipher_request *req; + struct skcipher_request *req; int err_dma_tx = 0; int err_dma_rx = 0; int err_dma_hx = 0; @@ -1870,7 +1870,7 @@ static bool s5p_is_sg_aligned(struct scatterlist *sg) } static int s5p_set_indata_start(struct s5p_aes_dev *dev, - struct ablkcipher_request *req) + struct skcipher_request *req) { struct scatterlist *sg; int err; @@ -1897,7 +1897,7 @@ static int s5p_set_indata_start(struct s5p_aes_dev *dev, } static int s5p_set_outdata_start(struct s5p_aes_dev *dev, - struct ablkcipher_request *req) + struct skcipher_request *req) { struct scatterlist *sg; int err; @@ -1925,7 +1925,7 @@ static int s5p_set_outdata_start(struct s5p_aes_dev *dev, static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode) { - struct ablkcipher_request *req = dev->req; + struct skcipher_request *req = dev->req; u32 aes_control; unsigned long flags; int err; @@ -1938,12 +1938,12 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode) if ((mode & FLAGS_AES_MODE_MASK) == FLAGS_AES_CBC) { aes_control |= SSS_AES_CHAIN_MODE_CBC; - iv = req->info; + iv = req->iv; ctr = NULL; } else if ((mode & FLAGS_AES_MODE_MASK) == FLAGS_AES_CTR) { aes_control |= SSS_AES_CHAIN_MODE_CTR; iv = NULL; - ctr = req->info; + ctr = req->iv; } else { iv = NULL; /* AES_ECB */ ctr = NULL; @@ -2021,21 +2021,21 @@ static void s5p_tasklet_cb(unsigned long data) if (backlog) backlog->complete(backlog, -EINPROGRESS); - dev->req = ablkcipher_request_cast(async_req); + dev->req = skcipher_request_cast(async_req); dev->ctx = crypto_tfm_ctx(dev->req->base.tfm); - reqctx = ablkcipher_request_ctx(dev->req); + reqctx = skcipher_request_ctx(dev->req); s5p_aes_crypt_start(dev, reqctx->mode); } static int s5p_aes_handle_req(struct s5p_aes_dev *dev, - struct ablkcipher_request *req) + struct skcipher_request *req) { unsigned long flags; int err; spin_lock_irqsave(&dev->lock, flags); - err = ablkcipher_enqueue_request(&dev->queue, req); + err = crypto_enqueue_request(&dev->queue, &req->base); if (dev->busy) { spin_unlock_irqrestore(&dev->lock, flags); return err; @@ -2049,17 +2049,17 @@ static int s5p_aes_handle_req(struct s5p_aes_dev *dev, return err; } -static int s5p_aes_crypt(struct ablkcipher_request *req, unsigned long mode) +static int s5p_aes_crypt(struct skcipher_request *req, unsigned long mode) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct s5p_aes_reqctx *reqctx = ablkcipher_request_ctx(req); - struct s5p_aes_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct 
s5p_aes_reqctx *reqctx = skcipher_request_ctx(req); + struct s5p_aes_ctx *ctx = crypto_skcipher_ctx(tfm); struct s5p_aes_dev *dev = ctx->dev; - if (!req->nbytes) + if (!req->cryptlen) return 0; - if (!IS_ALIGNED(req->nbytes, AES_BLOCK_SIZE) && + if (!IS_ALIGNED(req->cryptlen, AES_BLOCK_SIZE) && ((mode & FLAGS_AES_MODE_MASK) != FLAGS_AES_CTR)) { dev_dbg(dev->dev, "request size is not exact amount of AES blocks\n"); return -EINVAL; @@ -2070,10 +2070,10 @@ static int s5p_aes_crypt(struct ablkcipher_request *req, unsigned long mode) return s5p_aes_handle_req(dev, req); } -static int s5p_aes_setkey(struct crypto_ablkcipher *cipher, +static int s5p_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); struct s5p_aes_ctx *ctx = crypto_tfm_ctx(tfm); if (keylen != AES_KEYSIZE_128 && @@ -2087,106 +2087,97 @@ static int s5p_aes_setkey(struct crypto_ablkcipher *cipher, return 0; } -static int s5p_aes_ecb_encrypt(struct ablkcipher_request *req) +static int s5p_aes_ecb_encrypt(struct skcipher_request *req) { return s5p_aes_crypt(req, 0); } -static int s5p_aes_ecb_decrypt(struct ablkcipher_request *req) +static int s5p_aes_ecb_decrypt(struct skcipher_request *req) { return s5p_aes_crypt(req, FLAGS_AES_DECRYPT); } -static int s5p_aes_cbc_encrypt(struct ablkcipher_request *req) +static int s5p_aes_cbc_encrypt(struct skcipher_request *req) { return s5p_aes_crypt(req, FLAGS_AES_CBC); } -static int s5p_aes_cbc_decrypt(struct ablkcipher_request *req) +static int s5p_aes_cbc_decrypt(struct skcipher_request *req) { return s5p_aes_crypt(req, FLAGS_AES_DECRYPT | FLAGS_AES_CBC); } -static int s5p_aes_ctr_crypt(struct ablkcipher_request *req) +static int s5p_aes_ctr_crypt(struct skcipher_request *req) { return s5p_aes_crypt(req, FLAGS_AES_CTR); } -static int s5p_aes_cra_init(struct crypto_tfm *tfm) +static int s5p_aes_init_tfm(struct crypto_skcipher *tfm) { - struct s5p_aes_ctx *ctx = crypto_tfm_ctx(tfm); + struct s5p_aes_ctx *ctx = crypto_skcipher_ctx(tfm); ctx->dev = s5p_dev; - tfm->crt_ablkcipher.reqsize = sizeof(struct s5p_aes_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct s5p_aes_reqctx)); return 0; } -static struct crypto_alg algs[] = { +static struct skcipher_alg algs[] = { { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-s5p", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-s5p", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct s5p_aes_ctx), - .cra_alignmask = 0x0f, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = s5p_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = s5p_aes_setkey, - .encrypt = s5p_aes_ecb_encrypt, - .decrypt = s5p_aes_ecb_decrypt, - } + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct s5p_aes_ctx), + .base.cra_alignmask = 0x0f, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = s5p_aes_setkey, + .encrypt = s5p_aes_ecb_encrypt, + .decrypt = s5p_aes_ecb_decrypt, + .init = s5p_aes_init_tfm, }, { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-s5p", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - 
CRYPTO_ALG_ASYNC | + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-s5p", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct s5p_aes_ctx), - .cra_alignmask = 0x0f, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = s5p_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = s5p_aes_setkey, - .encrypt = s5p_aes_cbc_encrypt, - .decrypt = s5p_aes_cbc_decrypt, - } + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct s5p_aes_ctx), + .base.cra_alignmask = 0x0f, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = s5p_aes_setkey, + .encrypt = s5p_aes_cbc_encrypt, + .decrypt = s5p_aes_cbc_decrypt, + .init = s5p_aes_init_tfm, }, { - .cra_name = "ctr(aes)", - .cra_driver_name = "ctr-aes-s5p", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "ctr-aes-s5p", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct s5p_aes_ctx), - .cra_alignmask = 0x0f, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = s5p_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = s5p_aes_setkey, - .encrypt = s5p_aes_ctr_crypt, - .decrypt = s5p_aes_ctr_crypt, - } + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct s5p_aes_ctx), + .base.cra_alignmask = 0x0f, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = s5p_aes_setkey, + .encrypt = s5p_aes_ctr_crypt, + .decrypt = s5p_aes_ctr_crypt, + .init = s5p_aes_init_tfm, }, }; @@ -2297,7 +2288,7 @@ static int s5p_aes_probe(struct platform_device *pdev) crypto_init_queue(&pdata->queue, CRYPTO_QUEUE_LEN); for (i = 0; i < ARRAY_SIZE(algs); i++) { - err = crypto_register_alg(&algs[i]); + err = crypto_register_skcipher(&algs[i]); if (err) goto err_algs; } @@ -2334,11 +2325,11 @@ static int s5p_aes_probe(struct platform_device *pdev) err_algs: if (i < ARRAY_SIZE(algs)) - dev_err(dev, "can't register '%s': %d\n", algs[i].cra_name, + dev_err(dev, "can't register '%s': %d\n", algs[i].base.cra_name, err); for (j = 0; j < i; j++) - crypto_unregister_alg(&algs[j]); + crypto_unregister_skcipher(&algs[j]); tasklet_kill(&pdata->tasklet); @@ -2362,7 +2353,7 @@ static int s5p_aes_remove(struct platform_device *pdev) return -ENODEV; for (i = 0; i < ARRAY_SIZE(algs); i++) - crypto_unregister_alg(&algs[i]); + crypto_unregister_skcipher(&algs[i]); tasklet_kill(&pdata->tasklet); if (pdata->use_hash) {
From patchwork Thu Oct 24 13:23:26 2019
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Alexandre Belloni, Herbert Xu, Eric Biggers, Ard Biesheuvel, Ludovic Desroches, "David S. Miller", linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 08/27] crypto: atmel-aes - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:26 +0200
Message-Id: <20191024132345.5236-9-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 August 2015 introduced the new skcipher API, which is meant to replace both blkcipher and ablkcipher. While all consumers of the API were converted long ago, some producers of ablkciphers remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
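The registration changes at the tail of each of these diffs are equally mechanical. Sketched generically (baz_* is a placeholder, not atmel-aes code), crypto_register_alg() and crypto_unregister_alg() simply become their skcipher counterparts, with the same unwind-on-failure structure:

#include <crypto/internal/skcipher.h>

static int baz_register_algs(struct skcipher_alg *algs, int count)
{
	int i, err;

	for (i = 0; i < count; i++) {
		err = crypto_register_skcipher(&algs[i]);
		if (err)
			goto unwind;
	}
	return 0;

unwind:
	/* unregister everything registered so far, newest first */
	while (--i >= 0)
		crypto_unregister_skcipher(&algs[i]);
	return err;
}

Where all the algorithms share a lifetime, the batch helpers crypto_register_skciphers() and crypto_unregister_skciphers() can replace the open-coded loop.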
Cc: Nicolas Ferre Cc: Alexandre Belloni Cc: Ludovic Desroches Signed-off-by: Ard Biesheuvel --- drivers/crypto/atmel-aes.c | 509 ++++++++++---------- 1 file changed, 245 insertions(+), 264 deletions(-) diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c index 00920a2b95ce..28e420ffd4f5 100644 --- a/drivers/crypto/atmel-aes.c +++ b/drivers/crypto/atmel-aes.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include #include "atmel-aes-regs.h" @@ -492,23 +493,23 @@ static void atmel_aes_authenc_complete(struct atmel_aes_dev *dd, int err); static void atmel_aes_set_iv_as_last_ciphertext_block(struct atmel_aes_dev *dd) { - struct ablkcipher_request *req = ablkcipher_request_cast(dd->areq); - struct atmel_aes_reqctx *rctx = ablkcipher_request_ctx(req); - struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); - unsigned int ivsize = crypto_ablkcipher_ivsize(ablkcipher); + struct skcipher_request *req = skcipher_request_cast(dd->areq); + struct atmel_aes_reqctx *rctx = skcipher_request_ctx(req); + struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); + unsigned int ivsize = crypto_skcipher_ivsize(skcipher); - if (req->nbytes < ivsize) + if (req->cryptlen < ivsize) return; if (rctx->mode & AES_FLAGS_ENCRYPT) { - scatterwalk_map_and_copy(req->info, req->dst, - req->nbytes - ivsize, ivsize, 0); + scatterwalk_map_and_copy(req->iv, req->dst, + req->cryptlen - ivsize, ivsize, 0); } else { if (req->src == req->dst) - memcpy(req->info, rctx->lastc, ivsize); + memcpy(req->iv, rctx->lastc, ivsize); else - scatterwalk_map_and_copy(req->info, req->src, - req->nbytes - ivsize, + scatterwalk_map_and_copy(req->iv, req->src, + req->cryptlen - ivsize, ivsize, 0); } } @@ -981,9 +982,9 @@ static int atmel_aes_transfer_complete(struct atmel_aes_dev *dd) static int atmel_aes_start(struct atmel_aes_dev *dd) { - struct ablkcipher_request *req = ablkcipher_request_cast(dd->areq); - struct atmel_aes_reqctx *rctx = ablkcipher_request_ctx(req); - bool use_dma = (req->nbytes >= ATMEL_AES_DMA_THRESHOLD || + struct skcipher_request *req = skcipher_request_cast(dd->areq); + struct atmel_aes_reqctx *rctx = skcipher_request_ctx(req); + bool use_dma = (req->cryptlen >= ATMEL_AES_DMA_THRESHOLD || dd->ctx->block_size != AES_BLOCK_SIZE); int err; @@ -993,12 +994,12 @@ static int atmel_aes_start(struct atmel_aes_dev *dd) if (err) return atmel_aes_complete(dd, err); - atmel_aes_write_ctrl(dd, use_dma, req->info); + atmel_aes_write_ctrl(dd, use_dma, (void *)req->iv); if (use_dma) - return atmel_aes_dma_start(dd, req->src, req->dst, req->nbytes, + return atmel_aes_dma_start(dd, req->src, req->dst, req->cryptlen, atmel_aes_transfer_complete); - return atmel_aes_cpu_start(dd, req->src, req->dst, req->nbytes, + return atmel_aes_cpu_start(dd, req->src, req->dst, req->cryptlen, atmel_aes_transfer_complete); } @@ -1011,7 +1012,7 @@ atmel_aes_ctr_ctx_cast(struct atmel_aes_base_ctx *ctx) static int atmel_aes_ctr_transfer(struct atmel_aes_dev *dd) { struct atmel_aes_ctr_ctx *ctx = atmel_aes_ctr_ctx_cast(dd->ctx); - struct ablkcipher_request *req = ablkcipher_request_cast(dd->areq); + struct skcipher_request *req = skcipher_request_cast(dd->areq); struct scatterlist *src, *dst; u32 ctr, blocks; size_t datalen; @@ -1019,11 +1020,11 @@ static int atmel_aes_ctr_transfer(struct atmel_aes_dev *dd) /* Check for transfer completion. */ ctx->offset += dd->total; - if (ctx->offset >= req->nbytes) + if (ctx->offset >= req->cryptlen) return atmel_aes_transfer_complete(dd); /* Compute data length. 
*/ - datalen = req->nbytes - ctx->offset; + datalen = req->cryptlen - ctx->offset; blocks = DIV_ROUND_UP(datalen, AES_BLOCK_SIZE); ctr = be32_to_cpu(ctx->iv[3]); if (dd->caps.has_ctr32) { @@ -1076,8 +1077,8 @@ static int atmel_aes_ctr_transfer(struct atmel_aes_dev *dd) static int atmel_aes_ctr_start(struct atmel_aes_dev *dd) { struct atmel_aes_ctr_ctx *ctx = atmel_aes_ctr_ctx_cast(dd->ctx); - struct ablkcipher_request *req = ablkcipher_request_cast(dd->areq); - struct atmel_aes_reqctx *rctx = ablkcipher_request_ctx(req); + struct skcipher_request *req = skcipher_request_cast(dd->areq); + struct atmel_aes_reqctx *rctx = skcipher_request_ctx(req); int err; atmel_aes_set_mode(dd, rctx); @@ -1086,16 +1087,16 @@ static int atmel_aes_ctr_start(struct atmel_aes_dev *dd) if (err) return atmel_aes_complete(dd, err); - memcpy(ctx->iv, req->info, AES_BLOCK_SIZE); + memcpy(ctx->iv, req->iv, AES_BLOCK_SIZE); ctx->offset = 0; dd->total = 0; return atmel_aes_ctr_transfer(dd); } -static int atmel_aes_crypt(struct ablkcipher_request *req, unsigned long mode) +static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode) { - struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); - struct atmel_aes_base_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher); + struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); + struct atmel_aes_base_ctx *ctx = crypto_skcipher_ctx(skcipher); struct atmel_aes_reqctx *rctx; struct atmel_aes_dev *dd; @@ -1126,30 +1127,30 @@ static int atmel_aes_crypt(struct ablkcipher_request *req, unsigned long mode) if (!dd) return -ENODEV; - rctx = ablkcipher_request_ctx(req); + rctx = skcipher_request_ctx(req); rctx->mode = mode; if (!(mode & AES_FLAGS_ENCRYPT) && (req->src == req->dst)) { - unsigned int ivsize = crypto_ablkcipher_ivsize(ablkcipher); + unsigned int ivsize = crypto_skcipher_ivsize(skcipher); - if (req->nbytes >= ivsize) + if (req->cryptlen >= ivsize) scatterwalk_map_and_copy(rctx->lastc, req->src, - req->nbytes - ivsize, + req->cryptlen - ivsize, ivsize, 0); } return atmel_aes_handle_queue(dd, &req->base); } -static int atmel_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int atmel_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - struct atmel_aes_base_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct atmel_aes_base_ctx *ctx = crypto_skcipher_ctx(tfm); if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && keylen != AES_KEYSIZE_256) { - crypto_ablkcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } @@ -1159,297 +1160,279 @@ static int atmel_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return 0; } -static int atmel_aes_ecb_encrypt(struct ablkcipher_request *req) +static int atmel_aes_ecb_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_ECB | AES_FLAGS_ENCRYPT); } -static int atmel_aes_ecb_decrypt(struct ablkcipher_request *req) +static int atmel_aes_ecb_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_ECB); } -static int atmel_aes_cbc_encrypt(struct ablkcipher_request *req) +static int atmel_aes_cbc_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CBC | AES_FLAGS_ENCRYPT); } -static int atmel_aes_cbc_decrypt(struct ablkcipher_request *req) +static int atmel_aes_cbc_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CBC); } -static int atmel_aes_ofb_encrypt(struct ablkcipher_request *req) 
+static int atmel_aes_ofb_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_OFB | AES_FLAGS_ENCRYPT); } -static int atmel_aes_ofb_decrypt(struct ablkcipher_request *req) +static int atmel_aes_ofb_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_OFB); } -static int atmel_aes_cfb_encrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB128 | AES_FLAGS_ENCRYPT); } -static int atmel_aes_cfb_decrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB128); } -static int atmel_aes_cfb64_encrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb64_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB64 | AES_FLAGS_ENCRYPT); } -static int atmel_aes_cfb64_decrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb64_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB64); } -static int atmel_aes_cfb32_encrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb32_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB32 | AES_FLAGS_ENCRYPT); } -static int atmel_aes_cfb32_decrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb32_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB32); } -static int atmel_aes_cfb16_encrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb16_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB16 | AES_FLAGS_ENCRYPT); } -static int atmel_aes_cfb16_decrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb16_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB16); } -static int atmel_aes_cfb8_encrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb8_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB8 | AES_FLAGS_ENCRYPT); } -static int atmel_aes_cfb8_decrypt(struct ablkcipher_request *req) +static int atmel_aes_cfb8_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CFB8); } -static int atmel_aes_ctr_encrypt(struct ablkcipher_request *req) +static int atmel_aes_ctr_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CTR | AES_FLAGS_ENCRYPT); } -static int atmel_aes_ctr_decrypt(struct ablkcipher_request *req) +static int atmel_aes_ctr_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_CTR); } -static int atmel_aes_cra_init(struct crypto_tfm *tfm) +static int atmel_aes_init_tfm(struct crypto_skcipher *tfm) { - struct atmel_aes_ctx *ctx = crypto_tfm_ctx(tfm); + struct atmel_aes_ctx *ctx = crypto_skcipher_ctx(tfm); - tfm->crt_ablkcipher.reqsize = sizeof(struct atmel_aes_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx)); ctx->base.start = atmel_aes_start; return 0; } -static int atmel_aes_ctr_cra_init(struct crypto_tfm *tfm) +static int atmel_aes_ctr_init_tfm(struct crypto_skcipher *tfm) { - struct atmel_aes_ctx *ctx = crypto_tfm_ctx(tfm); + struct atmel_aes_ctx *ctx = crypto_skcipher_ctx(tfm); - tfm->crt_ablkcipher.reqsize = sizeof(struct atmel_aes_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx)); ctx->base.start = atmel_aes_ctr_start; return 0; } -static struct crypto_alg aes_algs[] = { -{ - .cra_name = "ecb(aes)", - .cra_driver_name = "atmel-ecb-aes", - 
.cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_aes_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = atmel_aes_setkey, - .encrypt = atmel_aes_ecb_encrypt, - .decrypt = atmel_aes_ecb_decrypt, - } +static struct skcipher_alg aes_algs[] = { +{ + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "atmel-ecb-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = atmel_aes_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = atmel_aes_setkey, + .encrypt = atmel_aes_ecb_encrypt, + .decrypt = atmel_aes_ecb_decrypt, }, { - .cra_name = "cbc(aes)", - .cra_driver_name = "atmel-cbc-aes", - .cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_aes_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = atmel_aes_setkey, - .encrypt = atmel_aes_cbc_encrypt, - .decrypt = atmel_aes_cbc_decrypt, - } + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "atmel-cbc-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = atmel_aes_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = atmel_aes_setkey, + .encrypt = atmel_aes_cbc_encrypt, + .decrypt = atmel_aes_cbc_decrypt, + .ivsize = AES_BLOCK_SIZE, }, { - .cra_name = "ofb(aes)", - .cra_driver_name = "atmel-ofb-aes", - .cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_aes_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = atmel_aes_setkey, - .encrypt = atmel_aes_ofb_encrypt, - .decrypt = atmel_aes_ofb_decrypt, - } + .base.cra_name = "ofb(aes)", + .base.cra_driver_name = "atmel-ofb-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = atmel_aes_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = atmel_aes_setkey, + .encrypt = atmel_aes_ofb_encrypt, + .decrypt = atmel_aes_ofb_decrypt, + .ivsize = AES_BLOCK_SIZE, }, { - .cra_name = "cfb(aes)", - .cra_driver_name = "atmel-cfb-aes", - .cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = 
AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_aes_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = atmel_aes_setkey, - .encrypt = atmel_aes_cfb_encrypt, - .decrypt = atmel_aes_cfb_decrypt, - } + .base.cra_name = "cfb(aes)", + .base.cra_driver_name = "atmel-cfb-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = atmel_aes_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = atmel_aes_setkey, + .encrypt = atmel_aes_cfb_encrypt, + .decrypt = atmel_aes_cfb_decrypt, + .ivsize = AES_BLOCK_SIZE, }, { - .cra_name = "cfb32(aes)", - .cra_driver_name = "atmel-cfb32-aes", - .cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = CFB32_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_aes_ctx), - .cra_alignmask = 0x3, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = atmel_aes_setkey, - .encrypt = atmel_aes_cfb32_encrypt, - .decrypt = atmel_aes_cfb32_decrypt, - } + .base.cra_name = "cfb32(aes)", + .base.cra_driver_name = "atmel-cfb32-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = CFB32_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = atmel_aes_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = atmel_aes_setkey, + .encrypt = atmel_aes_cfb32_encrypt, + .decrypt = atmel_aes_cfb32_decrypt, + .ivsize = AES_BLOCK_SIZE, }, { - .cra_name = "cfb16(aes)", - .cra_driver_name = "atmel-cfb16-aes", - .cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = CFB16_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_aes_ctx), - .cra_alignmask = 0x1, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = atmel_aes_setkey, - .encrypt = atmel_aes_cfb16_encrypt, - .decrypt = atmel_aes_cfb16_decrypt, - } + .base.cra_name = "cfb16(aes)", + .base.cra_driver_name = "atmel-cfb16-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = CFB16_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = atmel_aes_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = atmel_aes_setkey, + .encrypt = atmel_aes_cfb16_encrypt, + .decrypt = atmel_aes_cfb16_decrypt, + .ivsize = AES_BLOCK_SIZE, }, { - .cra_name = "cfb8(aes)", - .cra_driver_name = "atmel-cfb8-aes", - .cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = CFB8_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_aes_ctx), - 
.cra_alignmask = 0x0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = atmel_aes_setkey, - .encrypt = atmel_aes_cfb8_encrypt, - .decrypt = atmel_aes_cfb8_decrypt, - } + .base.cra_name = "cfb8(aes)", + .base.cra_driver_name = "atmel-cfb8-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = CFB8_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = atmel_aes_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = atmel_aes_setkey, + .encrypt = atmel_aes_cfb8_encrypt, + .decrypt = atmel_aes_cfb8_decrypt, + .ivsize = AES_BLOCK_SIZE, }, { - .cra_name = "ctr(aes)", - .cra_driver_name = "atmel-ctr-aes", - .cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct atmel_aes_ctr_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_ctr_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = atmel_aes_setkey, - .encrypt = atmel_aes_ctr_encrypt, - .decrypt = atmel_aes_ctr_decrypt, - } + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "atmel-ctr-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct atmel_aes_ctr_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = atmel_aes_ctr_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = atmel_aes_setkey, + .encrypt = atmel_aes_ctr_encrypt, + .decrypt = atmel_aes_ctr_decrypt, + .ivsize = AES_BLOCK_SIZE, }, }; -static struct crypto_alg aes_cfb64_alg = { - .cra_name = "cfb64(aes)", - .cra_driver_name = "atmel-cfb64-aes", - .cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = CFB64_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_aes_ctx), - .cra_alignmask = 0x7, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = atmel_aes_setkey, - .encrypt = atmel_aes_cfb64_encrypt, - .decrypt = atmel_aes_cfb64_decrypt, - } +static struct skcipher_alg aes_cfb64_alg = { + .base.cra_name = "cfb64(aes)", + .base.cra_driver_name = "atmel-cfb64-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = CFB64_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = atmel_aes_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = atmel_aes_setkey, + .encrypt = atmel_aes_cfb64_encrypt, + .decrypt = atmel_aes_cfb64_decrypt, + .ivsize = AES_BLOCK_SIZE, }; @@ -1864,8 +1847,8 @@ static int atmel_aes_xts_process_data(struct atmel_aes_dev *dd); static int atmel_aes_xts_start(struct atmel_aes_dev *dd) { struct atmel_aes_xts_ctx *ctx = atmel_aes_xts_ctx_cast(dd->ctx); - struct ablkcipher_request *req = 
ablkcipher_request_cast(dd->areq); - struct atmel_aes_reqctx *rctx = ablkcipher_request_ctx(req); + struct skcipher_request *req = skcipher_request_cast(dd->areq); + struct atmel_aes_reqctx *rctx = skcipher_request_ctx(req); unsigned long flags; int err; @@ -1875,7 +1858,7 @@ static int atmel_aes_xts_start(struct atmel_aes_dev *dd) if (err) return atmel_aes_complete(dd, err); - /* Compute the tweak value from req->info with ecb(aes). */ + /* Compute the tweak value from req->iv with ecb(aes). */ flags = dd->flags; dd->flags &= ~AES_FLAGS_MODE_MASK; dd->flags |= (AES_FLAGS_ECB | AES_FLAGS_ENCRYPT); @@ -1883,14 +1866,14 @@ static int atmel_aes_xts_start(struct atmel_aes_dev *dd) ctx->key2, ctx->base.keylen); dd->flags = flags; - atmel_aes_write_block(dd, AES_IDATAR(0), req->info); + atmel_aes_write_block(dd, AES_IDATAR(0), (void *)req->iv); return atmel_aes_wait_for_data_ready(dd, atmel_aes_xts_process_data); } static int atmel_aes_xts_process_data(struct atmel_aes_dev *dd) { - struct ablkcipher_request *req = ablkcipher_request_cast(dd->areq); - bool use_dma = (req->nbytes >= ATMEL_AES_DMA_THRESHOLD); + struct skcipher_request *req = skcipher_request_cast(dd->areq); + bool use_dma = (req->cryptlen >= ATMEL_AES_DMA_THRESHOLD); u32 tweak[AES_BLOCK_SIZE / sizeof(u32)]; static const u32 one[AES_BLOCK_SIZE / sizeof(u32)] = {cpu_to_le32(1), }; u8 *tweak_bytes = (u8 *)tweak; @@ -1915,20 +1898,20 @@ static int atmel_aes_xts_process_data(struct atmel_aes_dev *dd) atmel_aes_write_block(dd, AES_TWR(0), tweak); atmel_aes_write_block(dd, AES_ALPHAR(0), one); if (use_dma) - return atmel_aes_dma_start(dd, req->src, req->dst, req->nbytes, + return atmel_aes_dma_start(dd, req->src, req->dst, req->cryptlen, atmel_aes_transfer_complete); - return atmel_aes_cpu_start(dd, req->src, req->dst, req->nbytes, + return atmel_aes_cpu_start(dd, req->src, req->dst, req->cryptlen, atmel_aes_transfer_complete); } -static int atmel_aes_xts_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int atmel_aes_xts_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - struct atmel_aes_xts_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct atmel_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm); int err; - err = xts_check_key(crypto_ablkcipher_tfm(tfm), key, keylen); + err = xts_check_key(crypto_skcipher_tfm(tfm), key, keylen); if (err) return err; @@ -1939,45 +1922,43 @@ static int atmel_aes_xts_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return 0; } -static int atmel_aes_xts_encrypt(struct ablkcipher_request *req) +static int atmel_aes_xts_encrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_XTS | AES_FLAGS_ENCRYPT); } -static int atmel_aes_xts_decrypt(struct ablkcipher_request *req) +static int atmel_aes_xts_decrypt(struct skcipher_request *req) { return atmel_aes_crypt(req, AES_FLAGS_XTS); } -static int atmel_aes_xts_cra_init(struct crypto_tfm *tfm) +static int atmel_aes_xts_init_tfm(struct crypto_skcipher *tfm) { - struct atmel_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm); + struct atmel_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - tfm->crt_ablkcipher.reqsize = sizeof(struct atmel_aes_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx)); ctx->base.start = atmel_aes_xts_start; return 0; } -static struct crypto_alg aes_xts_alg = { - .cra_name = "xts(aes)", - .cra_driver_name = "atmel-xts-aes", - .cra_priority = ATMEL_AES_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = 
sizeof(struct atmel_aes_xts_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_aes_xts_cra_init, - .cra_u.ablkcipher = { - .min_keysize = 2 * AES_MIN_KEY_SIZE, - .max_keysize = 2 * AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = atmel_aes_xts_setkey, - .encrypt = atmel_aes_xts_encrypt, - .decrypt = atmel_aes_xts_decrypt, - } +static struct skcipher_alg aes_xts_alg = { + .base.cra_name = "xts(aes)", + .base.cra_driver_name = "atmel-xts-aes", + .base.cra_priority = ATMEL_AES_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_aes_xts_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .min_keysize = 2 * AES_MIN_KEY_SIZE, + .max_keysize = 2 * AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = atmel_aes_xts_setkey, + .encrypt = atmel_aes_xts_encrypt, + .decrypt = atmel_aes_xts_decrypt, + .init = atmel_aes_xts_init_tfm, }; #ifdef CONFIG_CRYPTO_DEV_ATMEL_AUTHENC @@ -2474,16 +2455,16 @@ static void atmel_aes_unregister_algs(struct atmel_aes_dev *dd) #endif if (dd->caps.has_xts) - crypto_unregister_alg(&aes_xts_alg); + crypto_unregister_skcipher(&aes_xts_alg); if (dd->caps.has_gcm) crypto_unregister_aead(&aes_gcm_alg); if (dd->caps.has_cfb64) - crypto_unregister_alg(&aes_cfb64_alg); + crypto_unregister_skcipher(&aes_cfb64_alg); for (i = 0; i < ARRAY_SIZE(aes_algs); i++) - crypto_unregister_alg(&aes_algs[i]); + crypto_unregister_skcipher(&aes_algs[i]); } static int atmel_aes_register_algs(struct atmel_aes_dev *dd) @@ -2491,13 +2472,13 @@ int err, i, j; for (i = 0; i < ARRAY_SIZE(aes_algs); i++) { - err = crypto_register_alg(&aes_algs[i]); + err = crypto_register_skcipher(&aes_algs[i]); if (err) goto err_aes_algs; } if (dd->caps.has_cfb64) { - err = crypto_register_alg(&aes_cfb64_alg); + err = crypto_register_skcipher(&aes_cfb64_alg); if (err) goto err_aes_cfb64_alg; } @@ -2509,7 +2490,7 @@ if (dd->caps.has_xts) { - err = crypto_register_alg(&aes_xts_alg); + err = crypto_register_skcipher(&aes_xts_alg); if (err) goto err_aes_xts_alg; } @@ -2531,17 +2512,17 @@ err_aes_authenc_alg: for (j = 0; j < i; j++) crypto_unregister_aead(&aes_authenc_algs[j]); - crypto_unregister_alg(&aes_xts_alg); + crypto_unregister_skcipher(&aes_xts_alg); #endif err_aes_xts_alg: crypto_unregister_aead(&aes_gcm_alg); err_aes_gcm_alg: - crypto_unregister_alg(&aes_cfb64_alg); + crypto_unregister_skcipher(&aes_cfb64_alg); err_aes_cfb64_alg: i = ARRAY_SIZE(aes_algs); err_aes_algs: for (j = 0; j < i; j++) - crypto_unregister_alg(&aes_algs[j]); + crypto_unregister_skcipher(&aes_algs[j]); return err; }

From patchwork Thu Oct 24 13:23:27 2019 X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11209671
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 09/27] crypto: atmel-tdes - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:27 +0200
Message-Id: <20191024132345.5236-10-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: Alexandre Belloni, Herbert Xu, Eric Biggers, Ard Biesheuvel, Ludovic Desroches, "David S. Miller", linux-arm-kernel@lists.infradead.org

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface"), dated 20 August 2015, introduced the new skcipher API, which is intended to replace both blkcipher and ablkcipher. While all consumers of the API were converted long ago, some producers of ablkcipher algorithms remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API.

So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
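To make the shape of this conversion concrete, here is a minimal, self-contained sketch of the pattern the drivers in this series follow. It is not code from any of these patches; every my_* identifier is a hypothetical placeholder, and the stubbed encrypt/decrypt paths stand in for the driver-specific hardware handling.

#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/string.h>

struct my_ctx {				/* per-tfm context, sized via base.cra_ctxsize */
	u32 key[AES_MAX_KEY_SIZE / sizeof(u32)];
	unsigned int keylen;
};

struct my_reqctx {			/* per-request context, sized from the init hook */
	unsigned long mode;
};

/* setkey now takes a crypto_skcipher instead of a crypto_ablkcipher */
static int my_setkey(struct crypto_skcipher *tfm, const u8 *key,
		     unsigned int keylen)
{
	struct my_ctx *ctx = crypto_skcipher_ctx(tfm);	/* was crypto_ablkcipher_ctx() */

	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
	    keylen != AES_KEYSIZE_256)
		return -EINVAL;
	memcpy(ctx->key, key, keylen);
	ctx->keylen = keylen;
	return 0;
}

/* encrypt/decrypt now take a skcipher_request: req->cryptlen replaces
 * req->nbytes and req->iv replaces req->info */
static int my_encrypt(struct skcipher_request *req)
{
	return -EINPROGRESS;	/* a real driver queues the request to hardware here */
}

static int my_decrypt(struct skcipher_request *req)
{
	return -EINPROGRESS;
}

/* the old cra_init assignment to tfm->crt_ablkcipher.reqsize becomes a
 * crypto_skcipher_set_reqsize() call from the skcipher init hook */
static int my_init_tfm(struct crypto_skcipher *tfm)
{
	crypto_skcipher_set_reqsize(tfm, sizeof(struct my_reqctx));
	return 0;
}

/* the common crypto_alg fields move into ".base"; the cra_u.ablkcipher
 * union, cra_type and CRYPTO_ALG_TYPE_ABLKCIPHER all go away */
static struct skcipher_alg my_alg = {
	.base.cra_name		= "cbc(aes)",
	.base.cra_driver_name	= "my-cbc-aes",
	.base.cra_priority	= 100,
	.base.cra_flags		= CRYPTO_ALG_ASYNC,
	.base.cra_blocksize	= AES_BLOCK_SIZE,
	.base.cra_ctxsize	= sizeof(struct my_ctx),
	.base.cra_module	= THIS_MODULE,

	.init			= my_init_tfm,
	.min_keysize		= AES_MIN_KEY_SIZE,
	.max_keysize		= AES_MAX_KEY_SIZE,
	.ivsize			= AES_BLOCK_SIZE,
	.setkey			= my_setkey,
	.encrypt		= my_encrypt,
	.decrypt		= my_decrypt,
};

static int __init my_mod_init(void)
{
	/* crypto_register_alg() becomes crypto_register_skcipher() */
	return crypto_register_skcipher(&my_alg);
}

static void __exit my_mod_exit(void)
{
	crypto_unregister_skcipher(&my_alg);
}

module_init(my_mod_init);
module_exit(my_mod_exit);
MODULE_LICENSE("GPL");

The sketch illustrates the four mechanical steps each patch below performs: the common fields move into .base, cra_type and the cra_u.ablkcipher union disappear, the per-request context size is set from the init hook via crypto_skcipher_set_reqsize(), and registration switches to crypto_register_skcipher().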
Cc: Nicolas Ferre Cc: Alexandre Belloni Cc: Ludovic Desroches Signed-off-by: Ard Biesheuvel --- drivers/crypto/atmel-tdes.c | 433 ++++++++++---------- 1 file changed, 207 insertions(+), 226 deletions(-) diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c index 1a6c86ae6148..bb7c0a387c04 100644 --- a/drivers/crypto/atmel-tdes.c +++ b/drivers/crypto/atmel-tdes.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include "atmel-tdes-regs.h" @@ -72,7 +73,7 @@ struct atmel_tdes_ctx { struct atmel_tdes_dev *dd; int keylen; - u32 key[3*DES_KEY_SIZE / sizeof(u32)]; + u32 key[DES3_EDE_KEY_SIZE / sizeof(u32)]; unsigned long flags; u16 block_size; @@ -106,7 +107,7 @@ struct atmel_tdes_dev { struct tasklet_struct done_task; struct tasklet_struct queue_task; - struct ablkcipher_request *req; + struct skcipher_request *req; size_t total; struct scatterlist *in_sg; @@ -307,8 +308,8 @@ static int atmel_tdes_write_ctrl(struct atmel_tdes_dev *dd) dd->ctx->keylen >> 2); if (((dd->flags & TDES_FLAGS_CBC) || (dd->flags & TDES_FLAGS_CFB) || - (dd->flags & TDES_FLAGS_OFB)) && dd->req->info) { - atmel_tdes_write_n(dd, TDES_IV1R, dd->req->info, 2); + (dd->flags & TDES_FLAGS_OFB)) && dd->req->iv) { + atmel_tdes_write_n(dd, TDES_IV1R, (void *)dd->req->iv, 2); } return 0; @@ -502,8 +503,8 @@ static int atmel_tdes_crypt_dma(struct crypto_tfm *tfm, dma_addr_t dma_addr_in, static int atmel_tdes_crypt_start(struct atmel_tdes_dev *dd) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm( - crypto_ablkcipher_reqtfm(dd->req)); + struct crypto_tfm *tfm = crypto_skcipher_tfm( + crypto_skcipher_reqtfm(dd->req)); int err, fast = 0, in, out; size_t count; dma_addr_t addr_in, addr_out; @@ -573,7 +574,7 @@ static int atmel_tdes_crypt_start(struct atmel_tdes_dev *dd) static void atmel_tdes_finish_req(struct atmel_tdes_dev *dd, int err) { - struct ablkcipher_request *req = dd->req; + struct skcipher_request *req = dd->req; clk_disable_unprepare(dd->iclk); @@ -583,7 +584,7 @@ static void atmel_tdes_finish_req(struct atmel_tdes_dev *dd, int err) } static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd, - struct ablkcipher_request *req) + struct skcipher_request *req) { struct crypto_async_request *async_req, *backlog; struct atmel_tdes_ctx *ctx; @@ -593,7 +594,7 @@ static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd, spin_lock_irqsave(&dd->lock, flags); if (req) - ret = ablkcipher_enqueue_request(&dd->queue, req); + ret = crypto_enqueue_request(&dd->queue, &req->base); if (dd->flags & TDES_FLAGS_BUSY) { spin_unlock_irqrestore(&dd->lock, flags); return ret; @@ -610,18 +611,18 @@ static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd, if (backlog) backlog->complete(backlog, -EINPROGRESS); - req = ablkcipher_request_cast(async_req); + req = skcipher_request_cast(async_req); /* assign new request to device */ dd->req = req; - dd->total = req->nbytes; + dd->total = req->cryptlen; dd->in_offset = 0; dd->in_sg = req->src; dd->out_offset = 0; dd->out_sg = req->dst; - rctx = ablkcipher_request_ctx(req); - ctx = crypto_ablkcipher_ctx(crypto_ablkcipher_reqtfm(req)); + rctx = skcipher_request_ctx(req); + ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); rctx->mode &= TDES_FLAGS_MODE_MASK; dd->flags = (dd->flags & ~TDES_FLAGS_MODE_MASK) | rctx->mode; dd->ctx = ctx; @@ -665,32 +666,32 @@ static int atmel_tdes_crypt_dma_stop(struct atmel_tdes_dev *dd) return err; } -static int atmel_tdes_crypt(struct ablkcipher_request *req, unsigned long mode) +static int atmel_tdes_crypt(struct 
skcipher_request *req, unsigned long mode) { - struct atmel_tdes_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); - struct atmel_tdes_reqctx *rctx = ablkcipher_request_ctx(req); + struct atmel_tdes_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); + struct atmel_tdes_reqctx *rctx = skcipher_request_ctx(req); if (mode & TDES_FLAGS_CFB8) { - if (!IS_ALIGNED(req->nbytes, CFB8_BLOCK_SIZE)) { + if (!IS_ALIGNED(req->cryptlen, CFB8_BLOCK_SIZE)) { pr_err("request size is not exact amount of CFB8 blocks\n"); return -EINVAL; } ctx->block_size = CFB8_BLOCK_SIZE; } else if (mode & TDES_FLAGS_CFB16) { - if (!IS_ALIGNED(req->nbytes, CFB16_BLOCK_SIZE)) { + if (!IS_ALIGNED(req->cryptlen, CFB16_BLOCK_SIZE)) { pr_err("request size is not exact amount of CFB16 blocks\n"); return -EINVAL; } ctx->block_size = CFB16_BLOCK_SIZE; } else if (mode & TDES_FLAGS_CFB32) { - if (!IS_ALIGNED(req->nbytes, CFB32_BLOCK_SIZE)) { + if (!IS_ALIGNED(req->cryptlen, CFB32_BLOCK_SIZE)) { pr_err("request size is not exact amount of CFB32 blocks\n"); return -EINVAL; } ctx->block_size = CFB32_BLOCK_SIZE; } else { - if (!IS_ALIGNED(req->nbytes, DES_BLOCK_SIZE)) { + if (!IS_ALIGNED(req->cryptlen, DES_BLOCK_SIZE)) { pr_err("request size is not exact amount of DES blocks\n"); return -EINVAL; } @@ -770,13 +771,13 @@ static void atmel_tdes_dma_cleanup(struct atmel_tdes_dev *dd) dma_release_channel(dd->dma_lch_out.chan); } -static int atmel_des_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int atmel_des_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - struct atmel_tdes_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct atmel_tdes_ctx *ctx = crypto_skcipher_ctx(tfm); int err; - err = verify_ablkcipher_des_key(tfm, key); + err = verify_skcipher_des_key(tfm, key); if (err) return err; @@ -786,13 +787,13 @@ static int atmel_des_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return 0; } -static int atmel_tdes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int atmel_tdes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - struct atmel_tdes_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct atmel_tdes_ctx *ctx = crypto_skcipher_ctx(tfm); int err; - err = verify_ablkcipher_des3_key(tfm, key); + err = verify_skcipher_des3_key(tfm, key); if (err) return err; @@ -802,84 +803,84 @@ static int atmel_tdes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return 0; } -static int atmel_tdes_ecb_encrypt(struct ablkcipher_request *req) +static int atmel_tdes_ecb_encrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_ENCRYPT); } -static int atmel_tdes_ecb_decrypt(struct ablkcipher_request *req) +static int atmel_tdes_ecb_decrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, 0); } -static int atmel_tdes_cbc_encrypt(struct ablkcipher_request *req) +static int atmel_tdes_cbc_encrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_ENCRYPT | TDES_FLAGS_CBC); } -static int atmel_tdes_cbc_decrypt(struct ablkcipher_request *req) +static int atmel_tdes_cbc_decrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_CBC); } -static int atmel_tdes_cfb_encrypt(struct ablkcipher_request *req) +static int atmel_tdes_cfb_encrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_ENCRYPT | TDES_FLAGS_CFB); } -static int atmel_tdes_cfb_decrypt(struct ablkcipher_request *req) +static int atmel_tdes_cfb_decrypt(struct skcipher_request *req) { return 
atmel_tdes_crypt(req, TDES_FLAGS_CFB); } -static int atmel_tdes_cfb8_encrypt(struct ablkcipher_request *req) +static int atmel_tdes_cfb8_encrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_ENCRYPT | TDES_FLAGS_CFB | TDES_FLAGS_CFB8); } -static int atmel_tdes_cfb8_decrypt(struct ablkcipher_request *req) +static int atmel_tdes_cfb8_decrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_CFB | TDES_FLAGS_CFB8); } -static int atmel_tdes_cfb16_encrypt(struct ablkcipher_request *req) +static int atmel_tdes_cfb16_encrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_ENCRYPT | TDES_FLAGS_CFB | TDES_FLAGS_CFB16); } -static int atmel_tdes_cfb16_decrypt(struct ablkcipher_request *req) +static int atmel_tdes_cfb16_decrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_CFB | TDES_FLAGS_CFB16); } -static int atmel_tdes_cfb32_encrypt(struct ablkcipher_request *req) +static int atmel_tdes_cfb32_encrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_ENCRYPT | TDES_FLAGS_CFB | TDES_FLAGS_CFB32); } -static int atmel_tdes_cfb32_decrypt(struct ablkcipher_request *req) +static int atmel_tdes_cfb32_decrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_CFB | TDES_FLAGS_CFB32); } -static int atmel_tdes_ofb_encrypt(struct ablkcipher_request *req) +static int atmel_tdes_ofb_encrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_ENCRYPT | TDES_FLAGS_OFB); } -static int atmel_tdes_ofb_decrypt(struct ablkcipher_request *req) +static int atmel_tdes_ofb_decrypt(struct skcipher_request *req) { return atmel_tdes_crypt(req, TDES_FLAGS_OFB); } -static int atmel_tdes_cra_init(struct crypto_tfm *tfm) +static int atmel_tdes_init_tfm(struct crypto_skcipher *tfm) { - struct atmel_tdes_ctx *ctx = crypto_tfm_ctx(tfm); + struct atmel_tdes_ctx *ctx = crypto_skcipher_ctx(tfm); struct atmel_tdes_dev *dd; - tfm->crt_ablkcipher.reqsize = sizeof(struct atmel_tdes_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_tdes_reqctx)); dd = atmel_tdes_find_dev(ctx); if (!dd) @@ -888,204 +889,184 @@ static int atmel_tdes_cra_init(struct crypto_tfm *tfm) return 0; } -static struct crypto_alg tdes_algs[] = { +static struct skcipher_alg tdes_algs[] = { { - .cra_name = "ecb(des)", - .cra_driver_name = "atmel-ecb-des", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0x7, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .setkey = atmel_des_setkey, - .encrypt = atmel_tdes_ecb_encrypt, - .decrypt = atmel_tdes_ecb_decrypt, - } + .base.cra_name = "ecb(des)", + .base.cra_driver_name = "atmel-ecb-des", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0x7, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .setkey = atmel_des_setkey, + .encrypt = atmel_tdes_ecb_encrypt, + .decrypt = atmel_tdes_ecb_decrypt, }, { - .cra_name = "cbc(des)", - .cra_driver_name = "atmel-cbc-des", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = 
DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0x7, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = atmel_des_setkey, - .encrypt = atmel_tdes_cbc_encrypt, - .decrypt = atmel_tdes_cbc_decrypt, - } + .base.cra_name = "cbc(des)", + .base.cra_driver_name = "atmel-cbc-des", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0x7, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = atmel_des_setkey, + .encrypt = atmel_tdes_cbc_encrypt, + .decrypt = atmel_tdes_cbc_decrypt, }, { - .cra_name = "cfb(des)", - .cra_driver_name = "atmel-cfb-des", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0x7, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = atmel_des_setkey, - .encrypt = atmel_tdes_cfb_encrypt, - .decrypt = atmel_tdes_cfb_decrypt, - } + .base.cra_name = "cfb(des)", + .base.cra_driver_name = "atmel-cfb-des", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0x7, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = atmel_des_setkey, + .encrypt = atmel_tdes_cfb_encrypt, + .decrypt = atmel_tdes_cfb_decrypt, }, { - .cra_name = "cfb8(des)", - .cra_driver_name = "atmel-cfb8-des", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = CFB8_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = atmel_des_setkey, - .encrypt = atmel_tdes_cfb8_encrypt, - .decrypt = atmel_tdes_cfb8_decrypt, - } + .base.cra_name = "cfb8(des)", + .base.cra_driver_name = "atmel-cfb8-des", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = CFB8_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = atmel_des_setkey, + .encrypt = atmel_tdes_cfb8_encrypt, + .decrypt = atmel_tdes_cfb8_decrypt, }, { - .cra_name = "cfb16(des)", - .cra_driver_name = "atmel-cfb16-des", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = CFB16_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0x1, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { 
- .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = atmel_des_setkey, - .encrypt = atmel_tdes_cfb16_encrypt, - .decrypt = atmel_tdes_cfb16_decrypt, - } + .base.cra_name = "cfb16(des)", + .base.cra_driver_name = "atmel-cfb16-des", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = CFB16_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0x1, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = atmel_des_setkey, + .encrypt = atmel_tdes_cfb16_encrypt, + .decrypt = atmel_tdes_cfb16_decrypt, }, { - .cra_name = "cfb32(des)", - .cra_driver_name = "atmel-cfb32-des", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = CFB32_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0x3, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = atmel_des_setkey, - .encrypt = atmel_tdes_cfb32_encrypt, - .decrypt = atmel_tdes_cfb32_decrypt, - } + .base.cra_name = "cfb32(des)", + .base.cra_driver_name = "atmel-cfb32-des", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = CFB32_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0x3, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = atmel_des_setkey, + .encrypt = atmel_tdes_cfb32_encrypt, + .decrypt = atmel_tdes_cfb32_decrypt, }, { - .cra_name = "ofb(des)", - .cra_driver_name = "atmel-ofb-des", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0x7, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = atmel_des_setkey, - .encrypt = atmel_tdes_ofb_encrypt, - .decrypt = atmel_tdes_ofb_decrypt, - } + .base.cra_name = "ofb(des)", + .base.cra_driver_name = "atmel-ofb-des", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0x7, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = atmel_des_setkey, + .encrypt = atmel_tdes_ofb_encrypt, + .decrypt = atmel_tdes_ofb_decrypt, }, { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "atmel-ecb-tdes", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0x7, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = 3 * DES_KEY_SIZE, - .max_keysize = 3 * DES_KEY_SIZE, - .setkey = atmel_tdes_setkey, - .encrypt = atmel_tdes_ecb_encrypt, - .decrypt = atmel_tdes_ecb_decrypt, - } + 
.base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "atmel-ecb-tdes", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0x7, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .setkey = atmel_tdes_setkey, + .encrypt = atmel_tdes_ecb_encrypt, + .decrypt = atmel_tdes_ecb_decrypt, }, { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "atmel-cbc-tdes", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0x7, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = 3*DES_KEY_SIZE, - .max_keysize = 3*DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = atmel_tdes_setkey, - .encrypt = atmel_tdes_cbc_encrypt, - .decrypt = atmel_tdes_cbc_decrypt, - } + .base.cra_name = "cbc(des3_ede)", + .base.cra_driver_name = "atmel-cbc-tdes", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0x7, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .setkey = atmel_tdes_setkey, + .encrypt = atmel_tdes_cbc_encrypt, + .decrypt = atmel_tdes_cbc_decrypt, + .ivsize = DES_BLOCK_SIZE, }, { - .cra_name = "ofb(des3_ede)", - .cra_driver_name = "atmel-ofb-tdes", - .cra_priority = 100, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct atmel_tdes_ctx), - .cra_alignmask = 0x7, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = atmel_tdes_cra_init, - .cra_u.ablkcipher = { - .min_keysize = 3*DES_KEY_SIZE, - .max_keysize = 3*DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = atmel_tdes_setkey, - .encrypt = atmel_tdes_ofb_encrypt, - .decrypt = atmel_tdes_ofb_decrypt, - } + .base.cra_name = "ofb(des3_ede)", + .base.cra_driver_name = "atmel-ofb-tdes", + .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct atmel_tdes_ctx), + .base.cra_alignmask = 0x7, + .base.cra_module = THIS_MODULE, + + .init = atmel_tdes_init_tfm, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .setkey = atmel_tdes_setkey, + .encrypt = atmel_tdes_ofb_encrypt, + .decrypt = atmel_tdes_ofb_decrypt, + .ivsize = DES_BLOCK_SIZE, }, }; @@ -1148,7 +1129,7 @@ static void atmel_tdes_unregister_algs(struct atmel_tdes_dev *dd) int i; for (i = 0; i < ARRAY_SIZE(tdes_algs); i++) - crypto_unregister_alg(&tdes_algs[i]); + crypto_unregister_skcipher(&tdes_algs[i]); } static int atmel_tdes_register_algs(struct atmel_tdes_dev *dd) @@ -1156,7 +1137,7 @@ static int atmel_tdes_register_algs(struct atmel_tdes_dev *dd) int err, i, j; for (i = 0; i < ARRAY_SIZE(tdes_algs); i++) { - err = crypto_register_alg(&tdes_algs[i]); + err = crypto_register_skcipher(&tdes_algs[i]); if (err) goto err_tdes_algs; } @@ -1165,7 +1146,7 @@ static int atmel_tdes_register_algs(struct atmel_tdes_dev *dd) err_tdes_algs: for (j = 0; j < i; j++) - crypto_unregister_alg(&tdes_algs[j]); + 
crypto_unregister_skcipher(&tdes_algs[j]); return err; }

From patchwork Thu Oct 24 13:23:28 2019 X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11209679
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 10/27] crypto: bcm-spu - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:28 +0200
Message-Id: <20191024132345.5236-11-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: "David S. Miller", Eric Biggers, Herbert Xu, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface"), dated 20 August 2015, introduced the new skcipher API, which is intended to replace both blkcipher and ablkcipher. While all consumers of the API were converted long ago, some producers of ablkcipher algorithms remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API.

So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
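The request-path side of the conversion is equally mechanical. The following sketch shows the field mapping a driver performs when it pulls parameters out of a skcipher_request that it used to take from an ablkcipher_request; the my_* names are again hypothetical and not taken from this driver.

#include <crypto/internal/skcipher.h>
#include <linux/string.h>
#include <linux/types.h>

struct my_reqctx {
	unsigned int total;	/* bytes left to process */
	u8 iv[16];		/* local copy of the IV/counter */
};

static int my_enqueue(struct skcipher_request *req, bool encrypt)
{
	/* crypto_ablkcipher_reqtfm() -> crypto_skcipher_reqtfm() */
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	/* ablkcipher_request_ctx() -> skcipher_request_ctx() */
	struct my_reqctx *rctx = skcipher_request_ctx(req);
	/* crypto_ablkcipher_ivsize() -> crypto_skcipher_ivsize() */
	unsigned int ivsize = crypto_skcipher_ivsize(tfm);

	rctx->total = req->cryptlen;		/* was req->nbytes */
	if (ivsize)
		memcpy(rctx->iv, req->iv, ivsize);	/* was req->info */

	/* req->src, req->dst and req->base keep their names, so e.g.
	 * crypto_enqueue_request(&queue, &req->base) is unchanged */
	return -EINPROGRESS;	/* a real driver submits to hardware here */
}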
Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 2 +- drivers/crypto/bcm/cipher.c | 373 +++++++++----------- drivers/crypto/bcm/cipher.h | 10 +- drivers/crypto/bcm/spu2.c | 6 +- 4 files changed, 186 insertions(+), 205 deletions(-) diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 3e51bae191ec..e8ed9f00c1a8 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -731,7 +731,7 @@ config CRYPTO_DEV_BCM_SPU select CRYPTO_SHA512 help This driver provides support for Broadcom crypto acceleration using the - Secure Processing Unit (SPU). The SPU driver registers ablkcipher, + Secure Processing Unit (SPU). The SPU driver registers skcipher, ahash, and aead algorithms with the kernel cryptographic API. source "drivers/crypto/stm32/Kconfig" diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c index f85356a48e7e..1564a6f8c9cb 100644 --- a/drivers/crypto/bcm/cipher.c +++ b/drivers/crypto/bcm/cipher.c @@ -110,8 +110,8 @@ static u8 select_channel(void) } /** - * spu_ablkcipher_rx_sg_create() - Build up the scatterlist of buffers used to - * receive a SPU response message for an ablkcipher request. Includes buffers to + * spu_skcipher_rx_sg_create() - Build up the scatterlist of buffers used to + * receive a SPU response message for an skcipher request. Includes buffers to * catch SPU message headers and the response data. * @mssg: mailbox message containing the receive sg * @rctx: crypto request context @@ -130,7 +130,7 @@ static u8 select_channel(void) * < 0 if an error */ static int -spu_ablkcipher_rx_sg_create(struct brcm_message *mssg, +spu_skcipher_rx_sg_create(struct brcm_message *mssg, struct iproc_reqctx_s *rctx, u8 rx_frag_num, unsigned int chunksize, u32 stat_pad_len) @@ -179,8 +179,8 @@ spu_ablkcipher_rx_sg_create(struct brcm_message *mssg, } /** - * spu_ablkcipher_tx_sg_create() - Build up the scatterlist of buffers used to - * send a SPU request message for an ablkcipher request. Includes SPU message + * spu_skcipher_tx_sg_create() - Build up the scatterlist of buffers used to + * send a SPU request message for an skcipher request. Includes SPU message * headers and the request data. * @mssg: mailbox message containing the transmit sg * @rctx: crypto request context @@ -198,7 +198,7 @@ spu_ablkcipher_rx_sg_create(struct brcm_message *mssg, * < 0 if an error */ static int -spu_ablkcipher_tx_sg_create(struct brcm_message *mssg, +spu_skcipher_tx_sg_create(struct brcm_message *mssg, struct iproc_reqctx_s *rctx, u8 tx_frag_num, unsigned int chunksize, u32 pad_len) { @@ -283,7 +283,7 @@ static int mailbox_send_message(struct brcm_message *mssg, u32 flags, } /** - * handle_ablkcipher_req() - Submit as much of a block cipher request as fits in + * handle_skcipher_req() - Submit as much of a block cipher request as fits in * a single SPU request message, starting at the current position in the request * data. 
* @rctx: Crypto request context @@ -300,12 +300,12 @@ static int mailbox_send_message(struct brcm_message *mssg, u32 flags, * asynchronously * Any other value indicates an error */ -static int handle_ablkcipher_req(struct iproc_reqctx_s *rctx) +static int handle_skcipher_req(struct iproc_reqctx_s *rctx) { struct spu_hw *spu = &iproc_priv.spu; struct crypto_async_request *areq = rctx->parent; - struct ablkcipher_request *req = - container_of(areq, struct ablkcipher_request, base); + struct skcipher_request *req = + container_of(areq, struct skcipher_request, base); struct iproc_ctx_s *ctx = rctx->ctx; struct spu_cipher_parms cipher_parms; int err = 0; @@ -468,7 +468,7 @@ static int handle_ablkcipher_req(struct iproc_reqctx_s *rctx) spu->spu_xts_tweak_in_payload()) rx_frag_num++; /* extra sg to insert tweak */ - err = spu_ablkcipher_rx_sg_create(mssg, rctx, rx_frag_num, chunksize, + err = spu_skcipher_rx_sg_create(mssg, rctx, rx_frag_num, chunksize, stat_pad_len); if (err) return err; @@ -482,7 +482,7 @@ static int handle_ablkcipher_req(struct iproc_reqctx_s *rctx) spu->spu_xts_tweak_in_payload()) tx_frag_num++; /* extra sg to insert tweak */ - err = spu_ablkcipher_tx_sg_create(mssg, rctx, tx_frag_num, chunksize, + err = spu_skcipher_tx_sg_create(mssg, rctx, tx_frag_num, chunksize, pad_len); if (err) return err; @@ -495,16 +495,16 @@ static int handle_ablkcipher_req(struct iproc_reqctx_s *rctx) } /** - * handle_ablkcipher_resp() - Process a block cipher SPU response. Updates the + * handle_skcipher_resp() - Process a block cipher SPU response. Updates the * total received count for the request and updates global stats. * @rctx: Crypto request context */ -static void handle_ablkcipher_resp(struct iproc_reqctx_s *rctx) +static void handle_skcipher_resp(struct iproc_reqctx_s *rctx) { struct spu_hw *spu = &iproc_priv.spu; #ifdef DEBUG struct crypto_async_request *areq = rctx->parent; - struct ablkcipher_request *req = ablkcipher_request_cast(areq); + struct skcipher_request *req = skcipher_request_cast(areq); #endif struct iproc_ctx_s *ctx = rctx->ctx; u32 payload_len; @@ -1685,8 +1685,8 @@ static void spu_rx_callback(struct mbox_client *cl, void *msg) /* Process the SPU response message */ switch (rctx->ctx->alg->type) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: - handle_ablkcipher_resp(rctx); + case CRYPTO_ALG_TYPE_SKCIPHER: + handle_skcipher_resp(rctx); break; case CRYPTO_ALG_TYPE_AHASH: handle_ahash_resp(rctx); @@ -1708,8 +1708,8 @@ static void spu_rx_callback(struct mbox_client *cl, void *msg) spu_chunk_cleanup(rctx); switch (rctx->ctx->alg->type) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: - err = handle_ablkcipher_req(rctx); + case CRYPTO_ALG_TYPE_SKCIPHER: + err = handle_skcipher_req(rctx); break; case CRYPTO_ALG_TYPE_AHASH: err = handle_ahash_req(rctx); @@ -1739,7 +1739,7 @@ static void spu_rx_callback(struct mbox_client *cl, void *msg) /* ==================== Kernel Cryptographic API ==================== */ /** - * ablkcipher_enqueue() - Handle ablkcipher encrypt or decrypt request. + * skcipher_enqueue() - Handle skcipher encrypt or decrypt request. 
* @req: Crypto API request * @encrypt: true if encrypting; false if decrypting * @@ -1747,11 +1747,11 @@ static void spu_rx_callback(struct mbox_client *cl, void *msg) * asynchronously * < 0 if an error */ -static int ablkcipher_enqueue(struct ablkcipher_request *req, bool encrypt) +static int skcipher_enqueue(struct skcipher_request *req, bool encrypt) { - struct iproc_reqctx_s *rctx = ablkcipher_request_ctx(req); + struct iproc_reqctx_s *rctx = skcipher_request_ctx(req); struct iproc_ctx_s *ctx = - crypto_ablkcipher_ctx(crypto_ablkcipher_reqtfm(req)); + crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); int err; flow_log("%s() enc:%u\n", __func__, encrypt); @@ -1761,7 +1761,7 @@ static int ablkcipher_enqueue(struct ablkcipher_request *req, bool encrypt) rctx->parent = &req->base; rctx->is_encrypt = encrypt; rctx->bd_suppress = false; - rctx->total_todo = req->nbytes; + rctx->total_todo = req->cryptlen; rctx->src_sent = 0; rctx->total_sent = 0; rctx->total_received = 0; @@ -1782,15 +1782,15 @@ static int ablkcipher_enqueue(struct ablkcipher_request *req, bool encrypt) ctx->cipher.mode == CIPHER_MODE_GCM || ctx->cipher.mode == CIPHER_MODE_CCM) { rctx->iv_ctr_len = - crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(req)); - memcpy(rctx->msg_buf.iv_ctr, req->info, rctx->iv_ctr_len); + crypto_skcipher_ivsize(crypto_skcipher_reqtfm(req)); + memcpy(rctx->msg_buf.iv_ctr, req->iv, rctx->iv_ctr_len); } else { rctx->iv_ctr_len = 0; } /* Choose a SPU to process this request */ rctx->chan_idx = select_channel(); - err = handle_ablkcipher_req(rctx); + err = handle_skcipher_req(rctx); if (err != -EINPROGRESS) /* synchronous result */ spu_chunk_cleanup(rctx); @@ -1798,13 +1798,13 @@ static int ablkcipher_enqueue(struct ablkcipher_request *req, bool encrypt) return err; } -static int des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int des_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct iproc_ctx_s *ctx = crypto_ablkcipher_ctx(cipher); + struct iproc_ctx_s *ctx = crypto_skcipher_ctx(cipher); int err; - err = verify_ablkcipher_des_key(cipher, key); + err = verify_skcipher_des_key(cipher, key); if (err) return err; @@ -1812,13 +1812,13 @@ static int des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int threedes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int threedes_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct iproc_ctx_s *ctx = crypto_ablkcipher_ctx(cipher); + struct iproc_ctx_s *ctx = crypto_skcipher_ctx(cipher); int err; - err = verify_ablkcipher_des3_key(cipher, key); + err = verify_skcipher_des3_key(cipher, key); if (err) return err; @@ -1826,10 +1826,10 @@ static int threedes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int aes_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct iproc_ctx_s *ctx = crypto_ablkcipher_ctx(cipher); + struct iproc_ctx_s *ctx = crypto_skcipher_ctx(cipher); if (ctx->cipher.mode == CIPHER_MODE_XTS) /* XTS includes two keys of equal length */ @@ -1846,7 +1846,7 @@ static int aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, ctx->cipher_type = CIPHER_TYPE_AES256; break; default: - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } WARN_ON((ctx->max_payload != SPU_MAX_PAYLOAD_INF) && @@ 
-1854,10 +1854,10 @@ static int aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int rc4_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int rc4_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct iproc_ctx_s *ctx = crypto_ablkcipher_ctx(cipher); + struct iproc_ctx_s *ctx = crypto_skcipher_ctx(cipher); int i; ctx->enckeylen = ARC4_MAX_KEY_SIZE + ARC4_STATE_SIZE; @@ -1874,16 +1874,16 @@ static int rc4_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int ablkcipher_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int skcipher_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { struct spu_hw *spu = &iproc_priv.spu; - struct iproc_ctx_s *ctx = crypto_ablkcipher_ctx(cipher); + struct iproc_ctx_s *ctx = crypto_skcipher_ctx(cipher); struct spu_cipher_parms cipher_parms; u32 alloc_len = 0; int err; - flow_log("ablkcipher_setkey() keylen: %d\n", keylen); + flow_log("skcipher_setkey() keylen: %d\n", keylen); flow_dump(" key: ", key, keylen); switch (ctx->cipher.alg) { @@ -1926,7 +1926,7 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *cipher, const u8 *key, alloc_len = BCM_HDR_LEN + SPU2_HEADER_ALLOC_LEN; memset(ctx->bcm_spu_req_hdr, 0, alloc_len); cipher_parms.iv_buf = NULL; - cipher_parms.iv_len = crypto_ablkcipher_ivsize(cipher); + cipher_parms.iv_len = crypto_skcipher_ivsize(cipher); flow_log("%s: iv_len %u\n", __func__, cipher_parms.iv_len); cipher_parms.alg = ctx->cipher.alg; @@ -1950,17 +1950,17 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int ablkcipher_encrypt(struct ablkcipher_request *req) +static int skcipher_encrypt(struct skcipher_request *req) { - flow_log("ablkcipher_encrypt() nbytes:%u\n", req->nbytes); + flow_log("skcipher_encrypt() nbytes:%u\n", req->cryptlen); - return ablkcipher_enqueue(req, true); + return skcipher_enqueue(req, true); } -static int ablkcipher_decrypt(struct ablkcipher_request *req) +static int skcipher_decrypt(struct skcipher_request *req) { - flow_log("ablkcipher_decrypt() nbytes:%u\n", req->nbytes); - return ablkcipher_enqueue(req, false); + flow_log("skcipher_decrypt() nbytes:%u\n", req->cryptlen); + return skcipher_enqueue(req, false); } static int ahash_enqueue(struct ahash_request *req) @@ -3585,18 +3585,16 @@ static struct iproc_alg_s driver_algs[] = { .auth_first = 0, }, -/* ABLKCIPHER algorithms. */ +/* SKCIPHER algorithms. 
*/ { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ecb(arc4)", - .cra_driver_name = "ecb-arc4-iproc", - .cra_blocksize = ARC4_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = ARC4_MIN_KEY_SIZE, - .max_keysize = ARC4_MAX_KEY_SIZE, - .ivsize = 0, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(arc4)", + .base.cra_driver_name = "ecb-arc4-iproc", + .base.cra_blocksize = ARC4_BLOCK_SIZE, + .min_keysize = ARC4_MIN_KEY_SIZE, + .max_keysize = ARC4_MAX_KEY_SIZE, + .ivsize = 0, }, .cipher_info = { .alg = CIPHER_ALG_RC4, @@ -3608,16 +3606,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ofb(des)", - .cra_driver_name = "ofb-des-iproc", - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ofb(des)", + .base.cra_driver_name = "ofb-des-iproc", + .base.cra_blocksize = DES_BLOCK_SIZE, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, }, .cipher_info = { .alg = CIPHER_ALG_DES, @@ -3629,16 +3625,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "cbc(des)", - .cra_driver_name = "cbc-des-iproc", - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(des)", + .base.cra_driver_name = "cbc-des-iproc", + .base.cra_blocksize = DES_BLOCK_SIZE, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, }, .cipher_info = { .alg = CIPHER_ALG_DES, @@ -3650,16 +3644,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ecb(des)", - .cra_driver_name = "ecb-des-iproc", - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = 0, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(des)", + .base.cra_driver_name = "ecb-des-iproc", + .base.cra_blocksize = DES_BLOCK_SIZE, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = 0, }, .cipher_info = { .alg = CIPHER_ALG_DES, @@ -3671,16 +3663,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ofb(des3_ede)", - .cra_driver_name = "ofb-des3-iproc", - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES3_EDE_BLOCK_SIZE, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ofb(des3_ede)", + .base.cra_driver_name = "ofb-des3-iproc", + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, }, .cipher_info = { .alg = CIPHER_ALG_3DES, @@ -3692,16 +3682,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "cbc-des3-iproc", - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = 
DES3_EDE_BLOCK_SIZE, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(des3_ede)", + .base.cra_driver_name = "cbc-des3-iproc", + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, }, .cipher_info = { .alg = CIPHER_ALG_3DES, @@ -3713,16 +3701,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "ecb-des3-iproc", - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = 0, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "ecb-des3-iproc", + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = 0, }, .cipher_info = { .alg = CIPHER_ALG_3DES, @@ -3734,16 +3720,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ofb(aes)", - .cra_driver_name = "ofb-aes-iproc", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ofb(aes)", + .base.cra_driver_name = "ofb-aes-iproc", + .base.cra_blocksize = AES_BLOCK_SIZE, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, }, .cipher_info = { .alg = CIPHER_ALG_AES, @@ -3755,16 +3739,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-iproc", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-iproc", + .base.cra_blocksize = AES_BLOCK_SIZE, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, }, .cipher_info = { .alg = CIPHER_ALG_AES, @@ -3776,16 +3758,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-iproc", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = 0, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-iproc", + .base.cra_blocksize = AES_BLOCK_SIZE, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = 0, }, .cipher_info = { .alg = CIPHER_ALG_AES, @@ -3797,16 +3777,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ctr(aes)", - .cra_driver_name = "ctr-aes-iproc", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "ctr-aes-iproc", + .base.cra_blocksize = AES_BLOCK_SIZE, + .min_keysize = AES_MIN_KEY_SIZE, 
+ .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, }, .cipher_info = { .alg = CIPHER_ALG_AES, @@ -3818,16 +3796,14 @@ static struct iproc_alg_s driver_algs[] = { }, }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "xts(aes)", - .cra_driver_name = "xts-aes-iproc", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ablkcipher = { - .min_keysize = 2 * AES_MIN_KEY_SIZE, - .max_keysize = 2 * AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - } + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "xts(aes)", + .base.cra_driver_name = "xts-aes-iproc", + .base.cra_blocksize = AES_BLOCK_SIZE, + .min_keysize = 2 * AES_MIN_KEY_SIZE, + .max_keysize = 2 * AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, }, .cipher_info = { .alg = CIPHER_ALG_AES, @@ -4282,16 +4258,17 @@ static int generic_cra_init(struct crypto_tfm *tfm, return 0; } -static int ablkcipher_cra_init(struct crypto_tfm *tfm) +static int skcipher_init_tfm(struct crypto_skcipher *skcipher) { - struct crypto_alg *alg = tfm->__crt_alg; + struct crypto_tfm *tfm = crypto_skcipher_tfm(skcipher); + struct skcipher_alg *alg = crypto_skcipher_alg(skcipher); struct iproc_alg_s *cipher_alg; flow_log("%s()\n", __func__); - tfm->crt_ablkcipher.reqsize = sizeof(struct iproc_reqctx_s); + crypto_skcipher_set_reqsize(skcipher, sizeof(struct iproc_reqctx_s)); - cipher_alg = container_of(alg, struct iproc_alg_s, alg.crypto); + cipher_alg = container_of(alg, struct iproc_alg_s, alg.skcipher); return generic_cra_init(tfm, cipher_alg); } @@ -4363,6 +4340,11 @@ static void generic_cra_exit(struct crypto_tfm *tfm) atomic_dec(&iproc_priv.session_count); } +static void skcipher_exit_tfm(struct crypto_skcipher *tfm) +{ + generic_cra_exit(crypto_skcipher_tfm(tfm)); +} + static void aead_cra_exit(struct crypto_aead *aead) { struct crypto_tfm *tfm = crypto_aead_tfm(aead); @@ -4524,10 +4506,10 @@ static void spu_counters_init(void) atomic_set(&iproc_priv.bad_icv, 0); } -static int spu_register_ablkcipher(struct iproc_alg_s *driver_alg) +static int spu_register_skcipher(struct iproc_alg_s *driver_alg) { struct spu_hw *spu = &iproc_priv.spu; - struct crypto_alg *crypto = &driver_alg->alg.crypto; + struct skcipher_alg *crypto = &driver_alg->alg.skcipher; int err; /* SPU2 does not support RC4 */ @@ -4535,26 +4517,23 @@ static int spu_register_ablkcipher(struct iproc_alg_s *driver_alg) (spu->spu_type == SPU_TYPE_SPU2)) return 0; - crypto->cra_module = THIS_MODULE; - crypto->cra_priority = cipher_pri; - crypto->cra_alignmask = 0; - crypto->cra_ctxsize = sizeof(struct iproc_ctx_s); - - crypto->cra_init = ablkcipher_cra_init; - crypto->cra_exit = generic_cra_exit; - crypto->cra_type = &crypto_ablkcipher_type; - crypto->cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY; + crypto->base.cra_module = THIS_MODULE; + crypto->base.cra_priority = cipher_pri; + crypto->base.cra_alignmask = 0; + crypto->base.cra_ctxsize = sizeof(struct iproc_ctx_s); + crypto->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY; - crypto->cra_ablkcipher.setkey = ablkcipher_setkey; - crypto->cra_ablkcipher.encrypt = ablkcipher_encrypt; - crypto->cra_ablkcipher.decrypt = ablkcipher_decrypt; + crypto->init = skcipher_init_tfm; + crypto->exit = skcipher_exit_tfm; + crypto->setkey = skcipher_setkey; + crypto->encrypt = skcipher_encrypt; + crypto->decrypt = skcipher_decrypt; - err = crypto_register_alg(crypto); + err = crypto_register_skcipher(crypto); /* Mark alg as having been registered, if successful */ if (err == 
0) driver_alg->registered = true; - pr_debug(" registered ablkcipher %s\n", crypto->cra_driver_name); + pr_debug(" registered skcipher %s\n", crypto->base.cra_driver_name); return err; } @@ -4649,8 +4628,8 @@ static int spu_algs_register(struct device *dev) for (i = 0; i < ARRAY_SIZE(driver_algs); i++) { switch (driver_algs[i].type) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: - err = spu_register_ablkcipher(&driver_algs[i]); + case CRYPTO_ALG_TYPE_SKCIPHER: + err = spu_register_skcipher(&driver_algs[i]); break; case CRYPTO_ALG_TYPE_AHASH: err = spu_register_ahash(&driver_algs[i]); @@ -4680,8 +4659,8 @@ static int spu_algs_register(struct device *dev) if (!driver_algs[j].registered) continue; switch (driver_algs[j].type) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: - crypto_unregister_alg(&driver_algs[j].alg.crypto); + case CRYPTO_ALG_TYPE_SKCIPHER: + crypto_unregister_skcipher(&driver_algs[j].alg.skcipher); driver_algs[j].registered = false; break; case CRYPTO_ALG_TYPE_AHASH: @@ -4837,10 +4816,10 @@ static int bcm_spu_remove(struct platform_device *pdev) continue; switch (driver_algs[i].type) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: - crypto_unregister_alg(&driver_algs[i].alg.crypto); + case CRYPTO_ALG_TYPE_SKCIPHER: + crypto_unregister_skcipher(&driver_algs[i].alg.skcipher); dev_dbg(dev, " unregistered cipher %s\n", - driver_algs[i].alg.crypto.cra_driver_name); + driver_algs[i].alg.skcipher.base.cra_driver_name); driver_algs[i].registered = false; break; case CRYPTO_ALG_TYPE_AHASH: diff --git a/drivers/crypto/bcm/cipher.h b/drivers/crypto/bcm/cipher.h index 766452b24d0a..b6d83e3aa46c 100644 --- a/drivers/crypto/bcm/cipher.h +++ b/drivers/crypto/bcm/cipher.h @@ -1,3 +1,4 @@ + /* SPDX-License-Identifier: GPL-2.0-only */ /* * Copyright 2016 Broadcom @@ -11,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -102,7 +104,7 @@ struct auth_op { struct iproc_alg_s { u32 type; union { - struct crypto_alg crypto; + struct skcipher_alg skcipher; struct ahash_alg hash; struct aead_alg aead; } alg; @@ -149,7 +151,7 @@ struct spu_msg_buf { u8 rx_stat[ALIGN(SPU_RX_STATUS_LEN, SPU_MSG_ALIGN)]; union { - /* Buffers only used for ablkcipher */ + /* Buffers only used for skcipher */ struct { /* * Field used for either SUPDT when RC4 is used @@ -214,7 +216,7 @@ struct iproc_ctx_s { /* * Buffer to hold SPU message header template. Template is created at - * setkey time for ablkcipher requests, since most of the fields in the + * setkey time for skcipher requests, since most of the fields in the * header are known at that time. At request time, just fill in a few * missing pieces related to length of data in the request and IVs, etc. */ @@ -256,7 +258,7 @@ struct iproc_reqctx_s { /* total todo, rx'd, and sent for this request */ unsigned int total_todo; - unsigned int total_received; /* only valid for ablkcipher */ + unsigned int total_received; /* only valid for skcipher */ unsigned int total_sent; /* diff --git a/drivers/crypto/bcm/spu2.c b/drivers/crypto/bcm/spu2.c index 2add51024575..59abb5ecefa4 100644 --- a/drivers/crypto/bcm/spu2.c +++ b/drivers/crypto/bcm/spu2.c @@ -542,7 +542,7 @@ void spu2_dump_msg_hdr(u8 *buf, unsigned int buf_len) /** * spu2_fmd_init() - At setkey time, initialize the fixed meta data for - * subsequent ablkcipher requests for this context. + * subsequent skcipher requests for this context. 
* @spu2_cipher_type: Cipher algorithm * @spu2_mode: Cipher mode * @cipher_key_len: Length of cipher key, in bytes @@ -1107,13 +1107,13 @@ u32 spu2_create_request(u8 *spu_hdr, } /** - * spu_cipher_req_init() - Build an ablkcipher SPU2 request message header, + * spu_cipher_req_init() - Build an skcipher SPU2 request message header, * including FMD and OMD. * @spu_hdr: Location of start of SPU request (FMD field) * @cipher_parms: Parameters describing cipher request * * Called at setkey time to initialize a msg header that can be reused for all - * subsequent ablkcipher requests. Construct the message starting at spu_hdr. + * subsequent skcipher requests. Construct the message starting at spu_hdr. * Caller should allocate this buffer in DMA-able memory at least * SPU_HEADER_ALLOC_LEN bytes long. *
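The iproc conversion above is representative of the whole series: the ablkcipher fields move into a struct skcipher_alg whose common properties live under .base, the per-request context size is declared through crypto_skcipher_set_reqsize() in a skcipher init hook, and registration switches from crypto_register_alg() to crypto_register_skcipher(). Below is a minimal sketch of the converted shape; the mydrv_* names, context structures, and stubbed bodies are hypothetical illustrations, not code from this patch:

#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>
#include <linux/module.h>
#include <linux/string.h>

struct mydrv_ctx {			/* per-tfm context, hypothetical */
	u8 key[AES_MAX_KEY_SIZE];
	unsigned int keylen;
};

struct mydrv_reqctx {			/* per-request context, hypothetical */
	unsigned int processed;
};

static int mydrv_setkey(struct crypto_skcipher *tfm, const u8 *key,
			unsigned int keylen)
{
	struct mydrv_ctx *ctx = crypto_skcipher_ctx(tfm);

	if (keylen > AES_MAX_KEY_SIZE)
		return -EINVAL;
	memcpy(ctx->key, key, keylen);
	ctx->keylen = keylen;
	return 0;
}

static int mydrv_crypt(struct skcipher_request *req)
{
	/* hardware submission would go here; stubbed out in this sketch */
	return -EINPROGRESS;
}

static int mydrv_init_tfm(struct crypto_skcipher *tfm)
{
	/* replaces the old tfm->crt_ablkcipher.reqsize assignment */
	crypto_skcipher_set_reqsize(tfm, sizeof(struct mydrv_reqctx));
	return 0;
}

static struct skcipher_alg mydrv_alg = {
	.base.cra_name		= "cbc(aes)",
	.base.cra_driver_name	= "cbc-aes-mydrv",
	.base.cra_blocksize	= AES_BLOCK_SIZE,
	.base.cra_ctxsize	= sizeof(struct mydrv_ctx),
	.base.cra_module	= THIS_MODULE,
	.base.cra_flags		= CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY,

	.min_keysize		= AES_MIN_KEY_SIZE,
	.max_keysize		= AES_MAX_KEY_SIZE,
	.ivsize			= AES_BLOCK_SIZE,
	.init			= mydrv_init_tfm,
	.setkey			= mydrv_setkey,
	.encrypt		= mydrv_crypt,
	.decrypt		= mydrv_crypt,
};

static int __init mydrv_module_init(void)
{
	/* was crypto_register_alg() in the ablkcipher days */
	return crypto_register_skcipher(&mydrv_alg);
}

static void __exit mydrv_module_exit(void)
{
	crypto_unregister_skcipher(&mydrv_alg);
}

module_init(mydrv_module_init);
module_exit(mydrv_module_exit);
MODULE_LICENSE("GPL");

Compared with the old nested cra_u.ablkcipher layout, the flattened skcipher_alg fields are what let the iproc patch above delete one level of nesting from every driver_algs[] entry.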
From patchwork Thu Oct 24 13:23:29 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209675
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 11/27] crypto: nitrox - remove cra_type reference to ablkcipher
Date: Thu, 24 Oct 2019 15:23:29 +0200
Message-Id: <20191024132345.5236-12-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Miller" , Eric Biggers , Herbert Xu , linux-arm-kernel@lists.infradead.org, Ard Biesheuvel Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org Setting the cra_type field is not necessary for skciphers, and ablkcipher will be removed, so drop the assignment from the nitrox driver. Signed-off-by: Ard Biesheuvel --- drivers/crypto/cavium/nitrox/nitrox_skcipher.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c index ec3aaadc6fd7..97af4d50d003 100644 --- a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c +++ b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c @@ -493,7 +493,6 @@ static struct skcipher_alg nitrox_skciphers[] = { { .cra_blocksize = AES_BLOCK_SIZE, .cra_ctxsize = sizeof(struct nitrox_crypto_ctx), .cra_alignmask = 0, - .cra_type = &crypto_ablkcipher_type, .cra_module = THIS_MODULE, }, .min_keysize = AES_MIN_KEY_SIZE, From patchwork Thu Oct 24 13:23:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11209683 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 63045112B for ; Thu, 24 Oct 2019 13:28:27 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id F1E782084C for ; Thu, 24 Oct 2019 13:28:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="bdJTY8ZA"; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="K0HTc66R" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F1E782084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=WLSJqZv4JCD+2JE52b9l/aEohvSoonZ0n/qRr5OXpP4=; b=bdJTY8ZAoLmrnr KucQoiUYttAkG0DzsSMxdZxeaa1iMCW6CKrTP48xcpnTkYChWDHziY3RnF41MkcGPD/R9NyLIb27A VwHxBPnnTomRSnAQ7rXyttlI4c6NlUQdWous7RtezspJGfPFEcn/wZeEC/gBCBAoODRcfn3gQSFic /HVUAsFysK2/E7ed0KUy/K5yFNpOy5pfWx4tDTB+iFVgQI1URRTl3H4YkXubUIiq7+XQDe1JMOd4U WGPZHiSiLMHk7hf2TfNZTwIyclyDP5h1OQoVhWSV1yJWYFw2gjPguGNs+oBpRWOXTSz4QxeDI++dp uX9iaktgrwj8MJkxqipQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iNdA9-0004ry-Us; Thu, 24 Oct 2019 13:28:26 +0000 Received: from mail-wr1-x443.google.com ([2a00:1450:4864:20::443]) by bombadil.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1iNd64-0000bG-77 for linux-arm-kernel@lists.infradead.org; Thu, 24 Oct 2019 13:24:20 +0000 Received: by 
From patchwork Thu Oct 24 13:23:30 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209683
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 12/27] crypto: cavium/cpt - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:30 +0200
Message-Id: <20191024132345.5236-13-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Miller" , Eric Biggers , Herbert Xu , linux-arm-kernel@lists.infradead.org, Ard Biesheuvel Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 august 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future. Signed-off-by: Ard Biesheuvel --- drivers/crypto/cavium/cpt/cptvf_algs.c | 292 +++++++++----------- 1 file changed, 134 insertions(+), 158 deletions(-) diff --git a/drivers/crypto/cavium/cpt/cptvf_algs.c b/drivers/crypto/cavium/cpt/cptvf_algs.c index 596ce28b957d..1ad66677d88e 100644 --- a/drivers/crypto/cavium/cpt/cptvf_algs.c +++ b/drivers/crypto/cavium/cpt/cptvf_algs.c @@ -92,15 +92,15 @@ static inline void update_output_data(struct cpt_request_info *req_info, } } -static inline u32 create_ctx_hdr(struct ablkcipher_request *req, u32 enc, +static inline u32 create_ctx_hdr(struct skcipher_request *req, u32 enc, u32 *argcnt) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct cvm_enc_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct cvm_req_ctx *rctx = ablkcipher_request_ctx(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct cvm_enc_ctx *ctx = crypto_skcipher_ctx(tfm); + struct cvm_req_ctx *rctx = skcipher_request_ctx(req); struct fc_context *fctx = &rctx->fctx; u64 *offset_control = &rctx->control_word; - u32 enc_iv_len = crypto_ablkcipher_ivsize(tfm); + u32 enc_iv_len = crypto_skcipher_ivsize(tfm); struct cpt_request_info *req_info = &rctx->cpt_req; u64 *ctrl_flags = NULL; @@ -115,7 +115,7 @@ static inline u32 create_ctx_hdr(struct ablkcipher_request *req, u32 enc, else req_info->req.opcode.s.minor = 3; - req_info->req.param1 = req->nbytes; /* Encryption Data length */ + req_info->req.param1 = req->cryptlen; /* Encryption Data length */ req_info->req.param2 = 0; /*Auth data length */ fctx->enc.enc_ctrl.e.enc_cipher = ctx->cipher_type; @@ -147,32 +147,32 @@ static inline u32 create_ctx_hdr(struct ablkcipher_request *req, u32 enc, return 0; } -static inline u32 create_input_list(struct ablkcipher_request *req, u32 enc, +static inline u32 create_input_list(struct skcipher_request *req, u32 enc, u32 enc_iv_len) { - struct cvm_req_ctx *rctx = ablkcipher_request_ctx(req); + struct cvm_req_ctx *rctx = skcipher_request_ctx(req); struct cpt_request_info *req_info = &rctx->cpt_req; u32 argcnt = 0; create_ctx_hdr(req, enc, &argcnt); - update_input_iv(req_info, req->info, enc_iv_len, &argcnt); - update_input_data(req_info, req->src, req->nbytes, &argcnt); + update_input_iv(req_info, req->iv, enc_iv_len, &argcnt); + update_input_data(req_info, req->src, req->cryptlen, &argcnt); req_info->incnt = argcnt; return 0; } -static inline void store_cb_info(struct ablkcipher_request *req, +static inline void store_cb_info(struct skcipher_request *req, struct cpt_request_info *req_info) { req_info->callback = (void *)cvm_callback; req_info->callback_arg = (void *)&req->base; } -static inline void create_output_list(struct ablkcipher_request *req, +static inline void create_output_list(struct skcipher_request *req, u32 enc_iv_len) 
{ - struct cvm_req_ctx *rctx = ablkcipher_request_ctx(req); + struct cvm_req_ctx *rctx = skcipher_request_ctx(req); struct cpt_request_info *req_info = &rctx->cpt_req; u32 argcnt = 0; @@ -184,16 +184,16 @@ static inline void create_output_list(struct ablkcipher_request *req, * [ 16 Bytes/ [ Request Enc/Dec/ DATA Len AES CBC ] */ /* Reading IV information */ - update_output_iv(req_info, req->info, enc_iv_len, &argcnt); - update_output_data(req_info, req->dst, req->nbytes, &argcnt); + update_output_iv(req_info, req->iv, enc_iv_len, &argcnt); + update_output_data(req_info, req->dst, req->cryptlen, &argcnt); req_info->outcnt = argcnt; } -static inline int cvm_enc_dec(struct ablkcipher_request *req, u32 enc) +static inline int cvm_enc_dec(struct skcipher_request *req, u32 enc) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct cvm_req_ctx *rctx = ablkcipher_request_ctx(req); - u32 enc_iv_len = crypto_ablkcipher_ivsize(tfm); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct cvm_req_ctx *rctx = skcipher_request_ctx(req); + u32 enc_iv_len = crypto_skcipher_ivsize(tfm); struct fc_context *fctx = &rctx->fctx; struct cpt_request_info *req_info = &rctx->cpt_req; void *cdev = NULL; @@ -217,20 +217,20 @@ static inline int cvm_enc_dec(struct ablkcipher_request *req, u32 enc) return -EINPROGRESS; } -static int cvm_encrypt(struct ablkcipher_request *req) +static int cvm_encrypt(struct skcipher_request *req) { return cvm_enc_dec(req, true); } -static int cvm_decrypt(struct ablkcipher_request *req) +static int cvm_decrypt(struct skcipher_request *req) { return cvm_enc_dec(req, false); } -static int cvm_xts_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int cvm_xts_setkey(struct crypto_skcipher *cipher, const u8 *key, u32 keylen) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); struct cvm_enc_ctx *ctx = crypto_tfm_ctx(tfm); int err; const u8 *key1 = key; @@ -284,10 +284,10 @@ static int cvm_validate_keylen(struct cvm_enc_ctx *ctx, u32 keylen) return -EINVAL; } -static int cvm_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int cvm_setkey(struct crypto_skcipher *cipher, const u8 *key, u32 keylen, u8 cipher_type) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); struct cvm_enc_ctx *ctx = crypto_tfm_ctx(tfm); ctx->cipher_type = cipher_type; @@ -295,183 +295,159 @@ static int cvm_setkey(struct crypto_ablkcipher *cipher, const u8 *key, memcpy(ctx->enc_key, key, keylen); return 0; } else { - crypto_ablkcipher_set_flags(cipher, + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } } -static int cvm_cbc_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int cvm_cbc_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, u32 keylen) { return cvm_setkey(cipher, key, keylen, AES_CBC); } -static int cvm_ecb_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int cvm_ecb_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, u32 keylen) { return cvm_setkey(cipher, key, keylen, AES_ECB); } -static int cvm_cfb_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int cvm_cfb_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, u32 keylen) { return cvm_setkey(cipher, key, keylen, AES_CFB); } -static int cvm_cbc_des3_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int cvm_cbc_des3_setkey(struct crypto_skcipher *cipher, 
const u8 *key, u32 keylen) { - return verify_ablkcipher_des3_key(cipher, key) ?: + return verify_skcipher_des3_key(cipher, key) ?: cvm_setkey(cipher, key, keylen, DES3_CBC); } -static int cvm_ecb_des3_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int cvm_ecb_des3_setkey(struct crypto_skcipher *cipher, const u8 *key, u32 keylen) { - return verify_ablkcipher_des3_key(cipher, key) ?: + return verify_skcipher_des3_key(cipher, key) ?: cvm_setkey(cipher, key, keylen, DES3_ECB); } -static int cvm_enc_dec_init(struct crypto_tfm *tfm) +static int cvm_enc_dec_init(struct crypto_skcipher *tfm) { - tfm->crt_ablkcipher.reqsize = sizeof(struct cvm_req_ctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct cvm_req_ctx)); + return 0; } -static struct crypto_alg algs[] = { { - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cvm_enc_ctx), - .cra_alignmask = 7, - .cra_priority = 4001, - .cra_name = "xts(aes)", - .cra_driver_name = "cavium-xts-aes", - .cra_type = &crypto_ablkcipher_type, - .cra_u = { - .ablkcipher = { - .ivsize = AES_BLOCK_SIZE, - .min_keysize = 2 * AES_MIN_KEY_SIZE, - .max_keysize = 2 * AES_MAX_KEY_SIZE, - .setkey = cvm_xts_setkey, - .encrypt = cvm_encrypt, - .decrypt = cvm_decrypt, - }, - }, - .cra_init = cvm_enc_dec_init, - .cra_module = THIS_MODULE, +static struct skcipher_alg algs[] = { { + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cvm_enc_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_name = "xts(aes)", + .base.cra_driver_name = "cavium-xts-aes", + .base.cra_module = THIS_MODULE, + + .ivsize = AES_BLOCK_SIZE, + .min_keysize = 2 * AES_MIN_KEY_SIZE, + .max_keysize = 2 * AES_MAX_KEY_SIZE, + .setkey = cvm_xts_setkey, + .encrypt = cvm_encrypt, + .decrypt = cvm_decrypt, + .init = cvm_enc_dec_init, }, { - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cvm_enc_ctx), - .cra_alignmask = 7, - .cra_priority = 4001, - .cra_name = "cbc(aes)", - .cra_driver_name = "cavium-cbc-aes", - .cra_type = &crypto_ablkcipher_type, - .cra_u = { - .ablkcipher = { - .ivsize = AES_BLOCK_SIZE, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = cvm_cbc_aes_setkey, - .encrypt = cvm_encrypt, - .decrypt = cvm_decrypt, - }, - }, - .cra_init = cvm_enc_dec_init, - .cra_module = THIS_MODULE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cvm_enc_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cavium-cbc-aes", + .base.cra_module = THIS_MODULE, + + .ivsize = AES_BLOCK_SIZE, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = cvm_cbc_aes_setkey, + .encrypt = cvm_encrypt, + .decrypt = cvm_decrypt, + .init = cvm_enc_dec_init, }, { - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cvm_enc_ctx), - .cra_alignmask = 7, - .cra_priority = 4001, - .cra_name = "ecb(aes)", - .cra_driver_name = "cavium-ecb-aes", - .cra_type = &crypto_ablkcipher_type, - .cra_u = { - .ablkcipher = { - .ivsize = AES_BLOCK_SIZE, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = cvm_ecb_aes_setkey, - .encrypt = cvm_encrypt, - .decrypt = cvm_decrypt, - }, - }, - .cra_init = cvm_enc_dec_init, 
- .cra_module = THIS_MODULE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cvm_enc_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "cavium-ecb-aes", + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = cvm_ecb_aes_setkey, + .encrypt = cvm_encrypt, + .decrypt = cvm_decrypt, + .init = cvm_enc_dec_init, }, { - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cvm_enc_ctx), - .cra_alignmask = 7, - .cra_priority = 4001, - .cra_name = "cfb(aes)", - .cra_driver_name = "cavium-cfb-aes", - .cra_type = &crypto_ablkcipher_type, - .cra_u = { - .ablkcipher = { - .ivsize = AES_BLOCK_SIZE, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = cvm_cfb_aes_setkey, - .encrypt = cvm_encrypt, - .decrypt = cvm_decrypt, - }, - }, - .cra_init = cvm_enc_dec_init, - .cra_module = THIS_MODULE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cvm_enc_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_name = "cfb(aes)", + .base.cra_driver_name = "cavium-cfb-aes", + .base.cra_module = THIS_MODULE, + + .ivsize = AES_BLOCK_SIZE, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = cvm_cfb_aes_setkey, + .encrypt = cvm_encrypt, + .decrypt = cvm_decrypt, + .init = cvm_enc_dec_init, }, { - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cvm_des3_ctx), - .cra_alignmask = 7, - .cra_priority = 4001, - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "cavium-cbc-des3_ede", - .cra_type = &crypto_ablkcipher_type, - .cra_u = { - .ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = cvm_cbc_des3_setkey, - .encrypt = cvm_encrypt, - .decrypt = cvm_decrypt, - }, - }, - .cra_init = cvm_enc_dec_init, - .cra_module = THIS_MODULE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cvm_des3_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_name = "cbc(des3_ede)", + .base.cra_driver_name = "cavium-cbc-des3_ede", + .base.cra_module = THIS_MODULE, + + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = cvm_cbc_des3_setkey, + .encrypt = cvm_encrypt, + .decrypt = cvm_decrypt, + .init = cvm_enc_dec_init, }, { - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC, - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct cvm_des3_ctx), - .cra_alignmask = 7, - .cra_priority = 4001, - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "cavium-ecb-des3_ede", - .cra_type = &crypto_ablkcipher_type, - .cra_u = { - .ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = cvm_ecb_des3_setkey, - .encrypt = cvm_encrypt, - .decrypt = cvm_decrypt, - }, - }, - .cra_init = cvm_enc_dec_init, - .cra_module = THIS_MODULE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct cvm_des3_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_name = "ecb(des3_ede)", + 
.base.cra_driver_name = "cavium-ecb-des3_ede", + .base.cra_module = THIS_MODULE, + + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = cvm_ecb_des3_setkey, + .encrypt = cvm_encrypt, + .decrypt = cvm_decrypt, + .init = cvm_enc_dec_init, } }; static inline int cav_register_algs(void) { int err = 0; - err = crypto_register_algs(algs, ARRAY_SIZE(algs)); + err = crypto_register_skciphers(algs, ARRAY_SIZE(algs)); if (err) return err; @@ -480,7 +456,7 @@ static inline int cav_register_algs(void) static inline void cav_unregister_algs(void) { - crypto_unregister_algs(algs, ARRAY_SIZE(algs)); + crypto_unregister_skciphers(algs, ARRAY_SIZE(algs)); } int cvm_crypto_init(struct cpt_vf *cptvf) From patchwork Thu Oct 24 13:23:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11209687 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 335EF13B1 for ; Thu, 24 Oct 2019 13:28:44 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0AA112084C for ; Thu, 24 Oct 2019 13:28:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="UQDmVWGG"; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="gnEP8NDu" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0AA112084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=+lKPm3QC4run9nhiPUcU2Ws3HTr/4xKwaJRzLV+TPGg=; b=UQDmVWGGcxUgfU y5xpVmStcjOI3vfustTq3zzVhpZ3gjL8hj/lINJ/PWF4m5oVGARo9gYDSgcC3uGrCInX9dLRtMRPb f93wH0K4ow53+4TQ9umnzmm6nto/dxUYAboSWkaPJ3ltL9HpT6QlqHNJISFz2+HMD6oHqVoFCMnfg inIj43AHODThw3+385al9Dsg2z/OgEK5v1s37vAbotgXQTVYASrDudDbJa3Tfo9dfomR+XeblIpyp D4RVCFnLRGbq8kkfeujMX9Cp/EVLgNhFKazKU4IB6dTP3a9o17YW7bV8EH0fg0nsfHiV6wKik8JG6 90nff9lPHirwGagdKT7A==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iNdAQ-00055y-7c; Thu, 24 Oct 2019 13:28:42 +0000 Received: from mail-wm1-x341.google.com ([2a00:1450:4864:20::341]) by bombadil.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1iNd65-0000cR-K6 for linux-arm-kernel@lists.infradead.org; Thu, 24 Oct 2019 13:24:24 +0000 Received: by mail-wm1-x341.google.com with SMTP id q70so2853372wme.1 for ; Thu, 24 Oct 2019 06:24:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references 
From patchwork Thu Oct 24 13:23:31 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209687
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 13/27] crypto: chelsio - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:31 +0200
Message-Id: <20191024132345.5236-14-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Miller" , Atul Gupta , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 august 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future. Cc: Atul Gupta Signed-off-by: Ard Biesheuvel --- drivers/crypto/chelsio/chcr_algo.c | 334 ++++++++++---------- drivers/crypto/chelsio/chcr_algo.h | 2 +- drivers/crypto/chelsio/chcr_crypto.h | 16 +- 3 files changed, 173 insertions(+), 179 deletions(-) diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c index 38ee38b37ae6..1b4a5664e604 100644 --- a/drivers/crypto/chelsio/chcr_algo.c +++ b/drivers/crypto/chelsio/chcr_algo.c @@ -93,7 +93,7 @@ static u32 round_constant[11] = { 0x1B000000, 0x36000000, 0x6C000000 }; -static int chcr_handle_cipher_resp(struct ablkcipher_request *req, +static int chcr_handle_cipher_resp(struct skcipher_request *req, unsigned char *input, int err); static inline struct chcr_aead_ctx *AEAD_CTX(struct chcr_context *ctx) @@ -568,11 +568,11 @@ static void ulptx_walk_add_sg(struct ulptx_walk *walk, } } -static inline int get_cryptoalg_subtype(struct crypto_tfm *tfm) +static inline int get_cryptoalg_subtype(struct crypto_skcipher *tfm) { - struct crypto_alg *alg = tfm->__crt_alg; + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct chcr_alg_template *chcr_crypto_alg = - container_of(alg, struct chcr_alg_template, alg.crypto); + container_of(alg, struct chcr_alg_template, alg.skcipher); return chcr_crypto_alg->type & CRYPTO_ALG_SUB_TYPE_MASK; } @@ -757,14 +757,14 @@ static inline void create_wreq(struct chcr_context *ctx, */ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(wrparam->req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(wrparam->req); struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(tfm)); struct sk_buff *skb = NULL; struct chcr_wr *chcr_req; struct cpl_rx_phys_dsgl *phys_cpl; struct ulptx_sgl *ulptx; - struct chcr_blkcipher_req_ctx *reqctx = - ablkcipher_request_ctx(wrparam->req); + struct chcr_skcipher_req_ctx *reqctx = + skcipher_request_ctx(wrparam->req); unsigned int temp = 0, transhdr_len, dst_size; int error; int nents; @@ -807,9 +807,9 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam) chcr_req->key_ctx.ctx_hdr = ablkctx->key_ctx_hdr; if ((reqctx->op == CHCR_DECRYPT_OP) && - (!(get_cryptoalg_subtype(crypto_ablkcipher_tfm(tfm)) == + (!(get_cryptoalg_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_CTR)) && - (!(get_cryptoalg_subtype(crypto_ablkcipher_tfm(tfm)) == + (!(get_cryptoalg_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686))) { generate_copy_rrkey(ablkctx, &chcr_req->key_ctx); } else { @@ -843,7 +843,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam) if (reqctx->op && (ablkctx->ciph_mode == CHCR_SCMD_CIPHER_MODE_AES_CBC)) sg_pcopy_to_buffer(wrparam->req->src, - sg_nents(wrparam->req->src), wrparam->req->info, 16, + sg_nents(wrparam->req->src), wrparam->req->iv, 16, 
reqctx->processed + wrparam->bytes - AES_BLOCK_SIZE); return skb; @@ -866,11 +866,11 @@ static inline int chcr_keyctx_ck_size(unsigned int keylen) return ck_size; } -static int chcr_cipher_fallback_setkey(struct crypto_ablkcipher *cipher, +static int chcr_cipher_fallback_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(cipher)); int err = 0; @@ -886,7 +886,7 @@ static int chcr_cipher_fallback_setkey(struct crypto_ablkcipher *cipher, return err; } -static int chcr_aes_cbc_setkey(struct crypto_ablkcipher *cipher, +static int chcr_aes_cbc_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { @@ -912,13 +912,13 @@ static int chcr_aes_cbc_setkey(struct crypto_ablkcipher *cipher, ablkctx->ciph_mode = CHCR_SCMD_CIPHER_MODE_AES_CBC; return 0; badkey_err: - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); ablkctx->enckey_len = 0; return err; } -static int chcr_aes_ctr_setkey(struct crypto_ablkcipher *cipher, +static int chcr_aes_ctr_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { @@ -943,13 +943,13 @@ static int chcr_aes_ctr_setkey(struct crypto_ablkcipher *cipher, return 0; badkey_err: - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); ablkctx->enckey_len = 0; return err; } -static int chcr_aes_rfc3686_setkey(struct crypto_ablkcipher *cipher, +static int chcr_aes_rfc3686_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { @@ -981,7 +981,7 @@ static int chcr_aes_rfc3686_setkey(struct crypto_ablkcipher *cipher, return 0; badkey_err: - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); ablkctx->enckey_len = 0; return err; @@ -1017,12 +1017,12 @@ static unsigned int adjust_ctr_overflow(u8 *iv, u32 bytes) return bytes; } -static int chcr_update_tweak(struct ablkcipher_request *req, u8 *iv, +static int chcr_update_tweak(struct skcipher_request *req, u8 *iv, u32 isfinal) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(tfm)); - struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req); + struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req); struct crypto_aes_ctx aes; int ret, i; u8 *key; @@ -1051,16 +1051,16 @@ static int chcr_update_tweak(struct ablkcipher_request *req, u8 *iv, return 0; } -static int chcr_update_cipher_iv(struct ablkcipher_request *req, +static int chcr_update_cipher_iv(struct skcipher_request *req, struct cpl_fw6_pld *fw6_pld, u8 *iv) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req); - int subtype = get_cryptoalg_subtype(crypto_ablkcipher_tfm(tfm)); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req); + int subtype = get_cryptoalg_subtype(tfm); int ret = 0; if (subtype == CRYPTO_ALG_SUB_TYPE_CTR) - ctr_add_iv(iv, req->info, (reqctx->processed / + ctr_add_iv(iv, req->iv, (reqctx->processed / AES_BLOCK_SIZE)); else if (subtype == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686) *(__be32 *)(reqctx->iv + 
CTR_RFC3686_NONCE_SIZE + @@ -1071,7 +1071,7 @@ static int chcr_update_cipher_iv(struct ablkcipher_request *req, else if (subtype == CRYPTO_ALG_SUB_TYPE_CBC) { if (reqctx->op) /*Updated before sending last WR*/ - memcpy(iv, req->info, AES_BLOCK_SIZE); + memcpy(iv, req->iv, AES_BLOCK_SIZE); else memcpy(iv, &fw6_pld->data[2], AES_BLOCK_SIZE); } @@ -1085,16 +1085,16 @@ static int chcr_update_cipher_iv(struct ablkcipher_request *req, * for subsequent update requests */ -static int chcr_final_cipher_iv(struct ablkcipher_request *req, +static int chcr_final_cipher_iv(struct skcipher_request *req, struct cpl_fw6_pld *fw6_pld, u8 *iv) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req); - int subtype = get_cryptoalg_subtype(crypto_ablkcipher_tfm(tfm)); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req); + int subtype = get_cryptoalg_subtype(tfm); int ret = 0; if (subtype == CRYPTO_ALG_SUB_TYPE_CTR) - ctr_add_iv(iv, req->info, DIV_ROUND_UP(reqctx->processed, + ctr_add_iv(iv, req->iv, DIV_ROUND_UP(reqctx->processed, AES_BLOCK_SIZE)); else if (subtype == CRYPTO_ALG_SUB_TYPE_XTS) ret = chcr_update_tweak(req, iv, 1); @@ -1108,25 +1108,25 @@ static int chcr_final_cipher_iv(struct ablkcipher_request *req, } -static int chcr_handle_cipher_resp(struct ablkcipher_request *req, +static int chcr_handle_cipher_resp(struct skcipher_request *req, unsigned char *input, int err) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct uld_ctx *u_ctx = ULD_CTX(c_ctx(tfm)); struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(tfm)); struct sk_buff *skb; struct cpl_fw6_pld *fw6_pld = (struct cpl_fw6_pld *)input; - struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req); + struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req); struct cipher_wr_param wrparam; struct chcr_dev *dev = c_ctx(tfm)->dev; int bytes; if (err) goto unmap; - if (req->nbytes == reqctx->processed) { + if (req->cryptlen == reqctx->processed) { chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, req); - err = chcr_final_cipher_iv(req, fw6_pld, req->info); + err = chcr_final_cipher_iv(req, fw6_pld, req->iv); goto complete; } @@ -1134,13 +1134,13 @@ static int chcr_handle_cipher_resp(struct ablkcipher_request *req, bytes = chcr_sg_ent_in_wr(reqctx->srcsg, reqctx->dstsg, 0, CIP_SPACE_LEFT(ablkctx->enckey_len), reqctx->src_ofst, reqctx->dst_ofst); - if ((bytes + reqctx->processed) >= req->nbytes) - bytes = req->nbytes - reqctx->processed; + if ((bytes + reqctx->processed) >= req->cryptlen) + bytes = req->cryptlen - reqctx->processed; else bytes = rounddown(bytes, 16); } else { /*CTR mode counter overfloa*/ - bytes = req->nbytes - reqctx->processed; + bytes = req->cryptlen - reqctx->processed; } err = chcr_update_cipher_iv(req, fw6_pld, reqctx->iv); if (err) @@ -1153,13 +1153,13 @@ static int chcr_handle_cipher_resp(struct ablkcipher_request *req, req->base.flags, req->src, req->dst, - req->nbytes, - req->info, + req->cryptlen, + req->iv, reqctx->op); goto complete; } - if (get_cryptoalg_subtype(crypto_ablkcipher_tfm(tfm)) == + if (get_cryptoalg_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_CTR) bytes = adjust_ctr_overflow(reqctx->iv, bytes); wrparam.qid = u_ctx->lldi.rxq_ids[c_ctx(tfm)->rx_qidx]; @@ -1185,33 +1185,33 @@ static int chcr_handle_cipher_resp(struct ablkcipher_request *req, return err; } -static 
int process_cipher(struct ablkcipher_request *req, +static int process_cipher(struct skcipher_request *req, unsigned short qid, struct sk_buff **skb, unsigned short op_type) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - unsigned int ivsize = crypto_ablkcipher_ivsize(tfm); - struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + unsigned int ivsize = crypto_skcipher_ivsize(tfm); + struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req); struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(tfm)); struct cipher_wr_param wrparam; int bytes, err = -EINVAL; reqctx->processed = 0; - if (!req->info) + if (!req->iv) goto error; if ((ablkctx->enckey_len == 0) || (ivsize > AES_BLOCK_SIZE) || - (req->nbytes == 0) || - (req->nbytes % crypto_ablkcipher_blocksize(tfm))) { + (req->cryptlen == 0) || + (req->cryptlen % crypto_skcipher_blocksize(tfm))) { pr_err("AES: Invalid value of Key Len %d nbytes %d IV Len %d\n", - ablkctx->enckey_len, req->nbytes, ivsize); + ablkctx->enckey_len, req->cryptlen, ivsize); goto error; } err = chcr_cipher_dma_map(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, req); if (err) goto error; - if (req->nbytes < (SGE_MAX_WR_LEN - (sizeof(struct chcr_wr) + + if (req->cryptlen < (SGE_MAX_WR_LEN - (sizeof(struct chcr_wr) + AES_MIN_KEY_SIZE + sizeof(struct cpl_rx_phys_dsgl) + /*Min dsgl size*/ @@ -1219,14 +1219,14 @@ static int process_cipher(struct ablkcipher_request *req, /* Can be sent as Imm*/ unsigned int dnents = 0, transhdr_len, phys_dsgl, kctx_len; - dnents = sg_nents_xlen(req->dst, req->nbytes, + dnents = sg_nents_xlen(req->dst, req->cryptlen, CHCR_DST_SG_SIZE, 0); phys_dsgl = get_space_for_phys_dsgl(dnents); kctx_len = roundup(ablkctx->enckey_len, 16); transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, phys_dsgl); - reqctx->imm = (transhdr_len + IV + req->nbytes) <= + reqctx->imm = (transhdr_len + IV + req->cryptlen) <= SGE_MAX_WR_LEN; - bytes = IV + req->nbytes; + bytes = IV + req->cryptlen; } else { reqctx->imm = 0; @@ -1236,21 +1236,21 @@ static int process_cipher(struct ablkcipher_request *req, bytes = chcr_sg_ent_in_wr(req->src, req->dst, 0, CIP_SPACE_LEFT(ablkctx->enckey_len), 0, 0); - if ((bytes + reqctx->processed) >= req->nbytes) - bytes = req->nbytes - reqctx->processed; + if ((bytes + reqctx->processed) >= req->cryptlen) + bytes = req->cryptlen - reqctx->processed; else bytes = rounddown(bytes, 16); } else { - bytes = req->nbytes; + bytes = req->cryptlen; } - if (get_cryptoalg_subtype(crypto_ablkcipher_tfm(tfm)) == + if (get_cryptoalg_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_CTR) { - bytes = adjust_ctr_overflow(req->info, bytes); + bytes = adjust_ctr_overflow(req->iv, bytes); } - if (get_cryptoalg_subtype(crypto_ablkcipher_tfm(tfm)) == + if (get_cryptoalg_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686) { memcpy(reqctx->iv, ablkctx->nonce, CTR_RFC3686_NONCE_SIZE); - memcpy(reqctx->iv + CTR_RFC3686_NONCE_SIZE, req->info, + memcpy(reqctx->iv + CTR_RFC3686_NONCE_SIZE, req->iv, CTR_RFC3686_IV_SIZE); /* initialize counter portion of counter block */ @@ -1259,7 +1259,7 @@ static int process_cipher(struct ablkcipher_request *req, } else { - memcpy(reqctx->iv, req->info, IV); + memcpy(reqctx->iv, req->iv, IV); } if (unlikely(bytes == 0)) { chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, @@ -1268,7 +1268,7 @@ static int process_cipher(struct ablkcipher_request *req, req->base.flags, req->src, req->dst, - req->nbytes, + req->cryptlen, reqctx->iv, op_type); goto error; @@ -1296,9 
+1296,9 @@ static int process_cipher(struct ablkcipher_request *req, return err; } -static int chcr_aes_encrypt(struct ablkcipher_request *req) +static int chcr_aes_encrypt(struct skcipher_request *req) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct chcr_dev *dev = c_ctx(tfm)->dev; struct sk_buff *skb = NULL; int err, isfull = 0; @@ -1329,9 +1329,9 @@ static int chcr_aes_encrypt(struct ablkcipher_request *req) return err; } -static int chcr_aes_decrypt(struct ablkcipher_request *req) +static int chcr_aes_decrypt(struct skcipher_request *req) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct uld_ctx *u_ctx = ULD_CTX(c_ctx(tfm)); struct chcr_dev *dev = c_ctx(tfm)->dev; struct sk_buff *skb = NULL; @@ -1398,27 +1398,28 @@ static int chcr_device_init(struct chcr_context *ctx) return err; } -static int chcr_cra_init(struct crypto_tfm *tfm) +static int chcr_init_tfm(struct crypto_skcipher *tfm) { - struct crypto_alg *alg = tfm->__crt_alg; - struct chcr_context *ctx = crypto_tfm_ctx(tfm); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct chcr_context *ctx = crypto_skcipher_ctx(tfm); struct ablk_ctx *ablkctx = ABLK_CTX(ctx); - ablkctx->sw_cipher = crypto_alloc_sync_skcipher(alg->cra_name, 0, + ablkctx->sw_cipher = crypto_alloc_sync_skcipher(alg->base.cra_name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ablkctx->sw_cipher)) { - pr_err("failed to allocate fallback for %s\n", alg->cra_name); + pr_err("failed to allocate fallback for %s\n", alg->base.cra_name); return PTR_ERR(ablkctx->sw_cipher); } - tfm->crt_ablkcipher.reqsize = sizeof(struct chcr_blkcipher_req_ctx); - return chcr_device_init(crypto_tfm_ctx(tfm)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx)); + + return chcr_device_init(ctx); } -static int chcr_rfc3686_init(struct crypto_tfm *tfm) +static int chcr_rfc3686_init(struct crypto_skcipher *tfm) { - struct crypto_alg *alg = tfm->__crt_alg; - struct chcr_context *ctx = crypto_tfm_ctx(tfm); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct chcr_context *ctx = crypto_skcipher_ctx(tfm); struct ablk_ctx *ablkctx = ABLK_CTX(ctx); /*RFC3686 initialises IV counter value to 1, rfc3686(ctr(aes)) @@ -1427,17 +1428,17 @@ static int chcr_rfc3686_init(struct crypto_tfm *tfm) ablkctx->sw_cipher = crypto_alloc_sync_skcipher("ctr(aes)", 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ablkctx->sw_cipher)) { - pr_err("failed to allocate fallback for %s\n", alg->cra_name); + pr_err("failed to allocate fallback for %s\n", alg->base.cra_name); return PTR_ERR(ablkctx->sw_cipher); } - tfm->crt_ablkcipher.reqsize = sizeof(struct chcr_blkcipher_req_ctx); - return chcr_device_init(crypto_tfm_ctx(tfm)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx)); + return chcr_device_init(ctx); } -static void chcr_cra_exit(struct crypto_tfm *tfm) +static void chcr_exit_tfm(struct crypto_skcipher *tfm) { - struct chcr_context *ctx = crypto_tfm_ctx(tfm); + struct chcr_context *ctx = crypto_skcipher_ctx(tfm); struct ablk_ctx *ablkctx = ABLK_CTX(ctx); crypto_free_sync_skcipher(ablkctx->sw_cipher); @@ -2056,8 +2057,8 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input, err = chcr_handle_aead_resp(aead_request_cast(req), input, err); break; - case CRYPTO_ALG_TYPE_ABLKCIPHER: - chcr_handle_cipher_resp(ablkcipher_request_cast(req), + case CRYPTO_ALG_TYPE_SKCIPHER: + 
chcr_handle_cipher_resp(skcipher_request_cast(req), input, err); break; case CRYPTO_ALG_TYPE_AHASH: @@ -2148,7 +2149,7 @@ static int chcr_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, return err; } -static int chcr_aes_xts_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int chcr_aes_xts_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int key_len) { struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(cipher)); @@ -2172,7 +2173,7 @@ static int chcr_aes_xts_setkey(struct crypto_ablkcipher *cipher, const u8 *key, ablkctx->ciph_mode = CHCR_SCMD_CIPHER_MODE_AES_XTS; return 0; badkey_err: - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); ablkctx->enckey_len = 0; return err; @@ -2576,12 +2577,12 @@ void chcr_add_aead_dst_ent(struct aead_request *req, dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id); } -void chcr_add_cipher_src_ent(struct ablkcipher_request *req, +void chcr_add_cipher_src_ent(struct skcipher_request *req, void *ulptx, struct cipher_wr_param *wrparam) { struct ulptx_walk ulp_walk; - struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req); + struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req); u8 *buf = ulptx; memcpy(buf, reqctx->iv, IV); @@ -2599,13 +2600,13 @@ void chcr_add_cipher_src_ent(struct ablkcipher_request *req, } } -void chcr_add_cipher_dst_ent(struct ablkcipher_request *req, +void chcr_add_cipher_dst_ent(struct skcipher_request *req, struct cpl_rx_phys_dsgl *phys_cpl, struct cipher_wr_param *wrparam, unsigned short qid) { - struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req); - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(wrparam->req); + struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(wrparam->req); struct chcr_context *ctx = c_ctx(tfm); struct dsgl_walk dsgl_walk; @@ -2680,7 +2681,7 @@ void chcr_hash_dma_unmap(struct device *dev, } int chcr_cipher_dma_map(struct device *dev, - struct ablkcipher_request *req) + struct skcipher_request *req) { int error; @@ -2709,7 +2710,7 @@ int chcr_cipher_dma_map(struct device *dev, } void chcr_cipher_dma_unmap(struct device *dev, - struct ablkcipher_request *req) + struct skcipher_request *req) { if (req->src == req->dst) { dma_unmap_sg(dev, req->src, sg_nents(req->src), @@ -3712,82 +3713,76 @@ static int chcr_aead_decrypt(struct aead_request *req) static struct chcr_alg_template driver_algs[] = { /* AES-CBC */ { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_SUB_TYPE_CBC, + .type = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_SUB_TYPE_CBC, .is_registered = 0, - .alg.crypto = { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-chcr", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_init = chcr_cra_init, - .cra_exit = chcr_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = chcr_aes_cbc_setkey, - .encrypt = chcr_aes_encrypt, - .decrypt = chcr_aes_decrypt, + .alg.skcipher = { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-chcr", + .base.cra_blocksize = AES_BLOCK_SIZE, + + .init = chcr_init_tfm, + .exit = chcr_exit_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = chcr_aes_cbc_setkey, + .encrypt = chcr_aes_encrypt, + .decrypt = chcr_aes_decrypt, } - } }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_SUB_TYPE_XTS, + .type = 
CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_SUB_TYPE_XTS, .is_registered = 0, - .alg.crypto = { - .cra_name = "xts(aes)", - .cra_driver_name = "xts-aes-chcr", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_init = chcr_cra_init, - .cra_exit = NULL, - .cra_u .ablkcipher = { - .min_keysize = 2 * AES_MIN_KEY_SIZE, - .max_keysize = 2 * AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = chcr_aes_xts_setkey, - .encrypt = chcr_aes_encrypt, - .decrypt = chcr_aes_decrypt, - } + .alg.skcipher = { + .base.cra_name = "xts(aes)", + .base.cra_driver_name = "xts-aes-chcr", + .base.cra_blocksize = AES_BLOCK_SIZE, + + .init = chcr_init_tfm, + .exit = chcr_exit_tfm, + .min_keysize = 2 * AES_MIN_KEY_SIZE, + .max_keysize = 2 * AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = chcr_aes_xts_setkey, + .encrypt = chcr_aes_encrypt, + .decrypt = chcr_aes_decrypt, } }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_SUB_TYPE_CTR, + .type = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_SUB_TYPE_CTR, .is_registered = 0, - .alg.crypto = { - .cra_name = "ctr(aes)", - .cra_driver_name = "ctr-aes-chcr", - .cra_blocksize = 1, - .cra_init = chcr_cra_init, - .cra_exit = chcr_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = chcr_aes_ctr_setkey, - .encrypt = chcr_aes_encrypt, - .decrypt = chcr_aes_decrypt, - } + .alg.skcipher = { + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "ctr-aes-chcr", + .base.cra_blocksize = 1, + + .init = chcr_init_tfm, + .exit = chcr_exit_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = chcr_aes_ctr_setkey, + .encrypt = chcr_aes_encrypt, + .decrypt = chcr_aes_decrypt, } }, { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER | + .type = CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_SUB_TYPE_CTR_RFC3686, .is_registered = 0, - .alg.crypto = { - .cra_name = "rfc3686(ctr(aes))", - .cra_driver_name = "rfc3686-ctr-aes-chcr", - .cra_blocksize = 1, - .cra_init = chcr_rfc3686_init, - .cra_exit = chcr_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE + - CTR_RFC3686_NONCE_SIZE, - .max_keysize = AES_MAX_KEY_SIZE + - CTR_RFC3686_NONCE_SIZE, - .ivsize = CTR_RFC3686_IV_SIZE, - .setkey = chcr_aes_rfc3686_setkey, - .encrypt = chcr_aes_encrypt, - .decrypt = chcr_aes_decrypt, - } + .alg.skcipher = { + .base.cra_name = "rfc3686(ctr(aes))", + .base.cra_driver_name = "rfc3686-ctr-aes-chcr", + .base.cra_blocksize = 1, + + .init = chcr_rfc3686_init, + .exit = chcr_exit_tfm, + .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, + .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE, + .ivsize = CTR_RFC3686_IV_SIZE, + .setkey = chcr_aes_rfc3686_setkey, + .encrypt = chcr_aes_encrypt, + .decrypt = chcr_aes_decrypt, } }, /* SHA */ @@ -4254,10 +4249,10 @@ static int chcr_unregister_alg(void) for (i = 0; i < ARRAY_SIZE(driver_algs); i++) { switch (driver_algs[i].type & CRYPTO_ALG_TYPE_MASK) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: + case CRYPTO_ALG_TYPE_SKCIPHER: if (driver_algs[i].is_registered) - crypto_unregister_alg( - &driver_algs[i].alg.crypto); + crypto_unregister_skcipher( + &driver_algs[i].alg.skcipher); break; case CRYPTO_ALG_TYPE_AEAD: if (driver_algs[i].is_registered) @@ -4293,21 +4288,20 @@ static int chcr_register_alg(void) if (driver_algs[i].is_registered) continue; switch (driver_algs[i].type & CRYPTO_ALG_TYPE_MASK) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: - driver_algs[i].alg.crypto.cra_priority = + case CRYPTO_ALG_TYPE_SKCIPHER: + 
driver_algs[i].alg.skcipher.base.cra_priority = CHCR_CRA_PRIORITY; - driver_algs[i].alg.crypto.cra_module = THIS_MODULE; - driver_algs[i].alg.crypto.cra_flags = - CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC | + driver_algs[i].alg.skcipher.base.cra_module = THIS_MODULE; + driver_algs[i].alg.skcipher.base.cra_flags = + CRYPTO_ALG_TYPE_SKCIPHER | CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK; - driver_algs[i].alg.crypto.cra_ctxsize = + driver_algs[i].alg.skcipher.base.cra_ctxsize = sizeof(struct chcr_context) + sizeof(struct ablk_ctx); - driver_algs[i].alg.crypto.cra_alignmask = 0; - driver_algs[i].alg.crypto.cra_type = - &crypto_ablkcipher_type; - err = crypto_register_alg(&driver_algs[i].alg.crypto); - name = driver_algs[i].alg.crypto.cra_driver_name; + driver_algs[i].alg.skcipher.base.cra_alignmask = 0; + + err = crypto_register_skcipher(&driver_algs[i].alg.skcipher); + name = driver_algs[i].alg.skcipher.base.cra_driver_name; break; case CRYPTO_ALG_TYPE_AEAD: driver_algs[i].alg.aead.base.cra_flags = diff --git a/drivers/crypto/chelsio/chcr_algo.h b/drivers/crypto/chelsio/chcr_algo.h index d1e6b51df0ce..f58c2b5c7fc5 100644 --- a/drivers/crypto/chelsio/chcr_algo.h +++ b/drivers/crypto/chelsio/chcr_algo.h @@ -287,7 +287,7 @@ struct hash_wr_param { }; struct cipher_wr_param { - struct ablkcipher_request *req; + struct skcipher_request *req; char *iv; int bytes; unsigned short qid; diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h index 993c97e70565..6db2df8c8a05 100644 --- a/drivers/crypto/chelsio/chcr_crypto.h +++ b/drivers/crypto/chelsio/chcr_crypto.h @@ -160,9 +160,9 @@ static inline struct chcr_context *a_ctx(struct crypto_aead *tfm) return crypto_aead_ctx(tfm); } -static inline struct chcr_context *c_ctx(struct crypto_ablkcipher *tfm) +static inline struct chcr_context *c_ctx(struct crypto_skcipher *tfm) { - return crypto_ablkcipher_ctx(tfm); + return crypto_skcipher_ctx(tfm); } static inline struct chcr_context *h_ctx(struct crypto_ahash *tfm) @@ -285,7 +285,7 @@ struct chcr_ahash_req_ctx { u8 bfr2[CHCR_HASH_MAX_BLOCK_SIZE_128]; }; -struct chcr_blkcipher_req_ctx { +struct chcr_skcipher_req_ctx { struct sk_buff *skb; struct scatterlist *dstsg; unsigned int processed; @@ -302,7 +302,7 @@ struct chcr_alg_template { u32 type; u32 is_registered; union { - struct crypto_alg crypto; + struct skcipher_alg skcipher; struct ahash_alg hash; struct aead_alg aead; } alg; @@ -321,12 +321,12 @@ void chcr_add_aead_dst_ent(struct aead_request *req, struct cpl_rx_phys_dsgl *phys_cpl, unsigned short qid); void chcr_add_aead_src_ent(struct aead_request *req, struct ulptx_sgl *ulptx); -void chcr_add_cipher_src_ent(struct ablkcipher_request *req, +void chcr_add_cipher_src_ent(struct skcipher_request *req, void *ulptx, struct cipher_wr_param *wrparam); -int chcr_cipher_dma_map(struct device *dev, struct ablkcipher_request *req); -void chcr_cipher_dma_unmap(struct device *dev, struct ablkcipher_request *req); -void chcr_add_cipher_dst_ent(struct ablkcipher_request *req, +int chcr_cipher_dma_map(struct device *dev, struct skcipher_request *req); +void chcr_cipher_dma_unmap(struct device *dev, struct skcipher_request *req); +void chcr_add_cipher_dst_ent(struct skcipher_request *req, struct cpl_rx_phys_dsgl *phys_cpl, struct cipher_wr_param *wrparam, unsigned short qid); From patchwork Thu Oct 24 13:23:32 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11209691 Return-Path: 
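To make the shape of the chcr conversion above easier to see in isolation, here is a condensed, self-contained sketch of the pattern the patch lands on: each table entry embeds a struct skcipher_alg with the per-mode fields set in the initializer, and the common base.cra_* boilerplate is filled once at registration time. All demo_* identifiers below are illustrative stand-ins, not names from the driver.

#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>
#include <linux/kernel.h>
#include <linux/module.h>

struct demo_ctx {
        u8 key[AES_MAX_KEY_SIZE];
        unsigned int keylen;
};

static int demo_setkey(struct crypto_skcipher *tfm, const u8 *key,
                       unsigned int keylen)
{
        return 0;       /* stash the key in crypto_skcipher_ctx(tfm) */
}

static int demo_encrypt(struct skcipher_request *req)
{
        return 0;       /* hand req->src/req->dst to the hardware */
}

static int demo_decrypt(struct skcipher_request *req)
{
        return 0;
}

static struct skcipher_alg demo_algs[] = { {
        .base.cra_name          = "cbc(aes)",
        .base.cra_driver_name   = "cbc-aes-demo",
        .base.cra_blocksize     = AES_BLOCK_SIZE,
        .min_keysize            = AES_MIN_KEY_SIZE,
        .max_keysize            = AES_MAX_KEY_SIZE,
        .ivsize                 = AES_BLOCK_SIZE,
        .setkey                 = demo_setkey,
        .encrypt                = demo_encrypt,
        .decrypt                = demo_decrypt,
} };

static int demo_register_algs(void)
{
        int i, err;

        for (i = 0; i < ARRAY_SIZE(demo_algs); i++) {
                /* common boilerplate, set once for every entry */
                demo_algs[i].base.cra_priority = 300;
                demo_algs[i].base.cra_module = THIS_MODULE;
                demo_algs[i].base.cra_flags = CRYPTO_ALG_ASYNC;
                demo_algs[i].base.cra_ctxsize = sizeof(struct demo_ctx);

                err = crypto_register_skcipher(&demo_algs[i]);
                if (err) {
                        while (i--)
                                crypto_unregister_skcipher(&demo_algs[i]);
                        return err;
                }
        }
        return 0;
}

Note that nothing sets .cra_type or a CRYPTO_ALG_TYPE_* flag here: crypto_register_skcipher() marks the algorithm as a skcipher itself, which is why these patches can drop the explicit &crypto_ablkcipher_type assignments.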
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: "David S. Miller", Eric Biggers, Herbert Xu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 14/27] crypto: hifn - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:32 +0200
Message-Id: <20191024132345.5236-15-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 august 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API.

So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
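Before the diff itself, a note on what the conversion means mechanically. At the request level the old and new types carry the same information under different names: req->nbytes becomes req->cryptlen and req->info becomes req->iv, while the tfm and per-request-context accessors gain skcipher-flavoured names. A minimal sketch with illustrative demo_* names (not hifn's own):

#include <crypto/internal/skcipher.h>
#include <linux/string.h>

struct demo_req_ctx {
        u8 iv[16];      /* per-request state lives in the request ctx */
};

static int demo_encrypt(struct skcipher_request *req)
{
        struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
        struct demo_req_ctx *rctx = skcipher_request_ctx(req);
        unsigned int ivsize = crypto_skcipher_ivsize(tfm);

        if (!req->cryptlen)                             /* was req->nbytes */
                return 0;
        if (req->iv && ivsize <= sizeof(rctx->iv))      /* was req->info */
                memcpy(rctx->iv, req->iv, ivsize);

        /* hand the req->src/req->dst scatterlists to the engine here */
        return 0;
}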
Signed-off-by: Ard Biesheuvel --- drivers/crypto/hifn_795x.c | 183 ++++++++++---------- 1 file changed, 92 insertions(+), 91 deletions(-) diff --git a/drivers/crypto/hifn_795x.c b/drivers/crypto/hifn_795x.c index a18e62df68d9..4e7323884ae3 100644 --- a/drivers/crypto/hifn_795x.c +++ b/drivers/crypto/hifn_795x.c @@ -22,6 +22,7 @@ #include #include +#include static char hifn_pll_ref[sizeof("extNNN")] = "ext"; module_param_string(hifn_pll_ref, hifn_pll_ref, sizeof(hifn_pll_ref), 0444); @@ -596,7 +597,7 @@ struct hifn_crypt_result { struct hifn_crypto_alg { struct list_head entry; - struct crypto_alg alg; + struct skcipher_alg alg; struct hifn_device *dev; }; @@ -1404,7 +1405,7 @@ static void hifn_cipher_walk_exit(struct hifn_cipher_walk *w) w->num = 0; } -static int ablkcipher_add(unsigned int *drestp, struct scatterlist *dst, +static int skcipher_add(unsigned int *drestp, struct scatterlist *dst, unsigned int size, unsigned int *nbytesp) { unsigned int copy, drest = *drestp, nbytes = *nbytesp; @@ -1433,11 +1434,11 @@ static int ablkcipher_add(unsigned int *drestp, struct scatterlist *dst, return idx; } -static int hifn_cipher_walk(struct ablkcipher_request *req, +static int hifn_cipher_walk(struct skcipher_request *req, struct hifn_cipher_walk *w) { struct scatterlist *dst, *t; - unsigned int nbytes = req->nbytes, offset, copy, diff; + unsigned int nbytes = req->cryptlen, offset, copy, diff; int idx, tidx, err; tidx = idx = 0; @@ -1459,7 +1460,7 @@ static int hifn_cipher_walk(struct ablkcipher_request *req, t = &w->cache[idx]; - err = ablkcipher_add(&dlen, dst, slen, &nbytes); + err = skcipher_add(&dlen, dst, slen, &nbytes); if (err < 0) return err; @@ -1498,7 +1499,7 @@ static int hifn_cipher_walk(struct ablkcipher_request *req, dst = &req->dst[idx]; - err = ablkcipher_add(&dlen, dst, nbytes, &nbytes); + err = skcipher_add(&dlen, dst, nbytes, &nbytes); if (err < 0) return err; @@ -1518,13 +1519,13 @@ static int hifn_cipher_walk(struct ablkcipher_request *req, return tidx; } -static int hifn_setup_session(struct ablkcipher_request *req) +static int hifn_setup_session(struct skcipher_request *req) { struct hifn_context *ctx = crypto_tfm_ctx(req->base.tfm); - struct hifn_request_context *rctx = ablkcipher_request_ctx(req); + struct hifn_request_context *rctx = skcipher_request_ctx(req); struct hifn_device *dev = ctx->dev; unsigned long dlen, flags; - unsigned int nbytes = req->nbytes, idx = 0; + unsigned int nbytes = req->cryptlen, idx = 0; int err = -EINVAL, sg_num; struct scatterlist *dst; @@ -1563,7 +1564,7 @@ static int hifn_setup_session(struct ablkcipher_request *req) goto err_out; } - err = hifn_setup_dma(dev, ctx, rctx, req->src, req->dst, req->nbytes, req); + err = hifn_setup_dma(dev, ctx, rctx, req->src, req->dst, req->cryptlen, req); if (err) goto err_out; @@ -1610,7 +1611,7 @@ static int hifn_start_device(struct hifn_device *dev) return 0; } -static int ablkcipher_get(void *saddr, unsigned int *srestp, unsigned int offset, +static int skcipher_get(void *saddr, unsigned int *srestp, unsigned int offset, struct scatterlist *dst, unsigned int size, unsigned int *nbytesp) { unsigned int srest = *srestp, nbytes = *nbytesp, copy; @@ -1660,12 +1661,12 @@ static inline void hifn_complete_sa(struct hifn_device *dev, int i) BUG_ON(dev->started < 0); } -static void hifn_process_ready(struct ablkcipher_request *req, int error) +static void hifn_process_ready(struct skcipher_request *req, int error) { - struct hifn_request_context *rctx = ablkcipher_request_ctx(req); + struct 
hifn_request_context *rctx = skcipher_request_ctx(req); if (rctx->walk.flags & ASYNC_FLAGS_MISALIGNED) { - unsigned int nbytes = req->nbytes; + unsigned int nbytes = req->cryptlen; int idx = 0, err; struct scatterlist *dst, *t; void *saddr; @@ -1688,7 +1689,7 @@ static void hifn_process_ready(struct ablkcipher_request *req, int error) saddr = kmap_atomic(sg_page(t)); - err = ablkcipher_get(saddr, &t->length, t->offset, + err = skcipher_get(saddr, &t->length, t->offset, dst, nbytes, &nbytes); if (err < 0) { kunmap_atomic(saddr); @@ -1910,7 +1911,7 @@ static void hifn_flush(struct hifn_device *dev) { unsigned long flags; struct crypto_async_request *async_req; - struct ablkcipher_request *req; + struct skcipher_request *req; struct hifn_dma *dma = (struct hifn_dma *)dev->desc_virt; int i; @@ -1926,7 +1927,7 @@ static void hifn_flush(struct hifn_device *dev) spin_lock_irqsave(&dev->lock, flags); while ((async_req = crypto_dequeue_request(&dev->queue))) { - req = ablkcipher_request_cast(async_req); + req = skcipher_request_cast(async_req); spin_unlock_irqrestore(&dev->lock, flags); hifn_process_ready(req, -ENODEV); @@ -1936,14 +1937,14 @@ static void hifn_flush(struct hifn_device *dev) spin_unlock_irqrestore(&dev->lock, flags); } -static int hifn_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int hifn_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int len) { - struct hifn_context *ctx = crypto_ablkcipher_ctx(cipher); + struct hifn_context *ctx = crypto_skcipher_ctx(cipher); struct hifn_device *dev = ctx->dev; int err; - err = verify_ablkcipher_des_key(cipher, key); + err = verify_skcipher_des_key(cipher, key); if (err) return err; @@ -1955,14 +1956,14 @@ static int hifn_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int hifn_des3_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int hifn_des3_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int len) { - struct hifn_context *ctx = crypto_ablkcipher_ctx(cipher); + struct hifn_context *ctx = crypto_skcipher_ctx(cipher); struct hifn_device *dev = ctx->dev; int err; - err = verify_ablkcipher_des3_key(cipher, key); + err = verify_skcipher_des3_key(cipher, key); if (err) return err; @@ -1974,36 +1975,36 @@ static int hifn_des3_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int hifn_handle_req(struct ablkcipher_request *req) +static int hifn_handle_req(struct skcipher_request *req) { struct hifn_context *ctx = crypto_tfm_ctx(req->base.tfm); struct hifn_device *dev = ctx->dev; int err = -EAGAIN; - if (dev->started + DIV_ROUND_UP(req->nbytes, PAGE_SIZE) <= HIFN_QUEUE_LENGTH) + if (dev->started + DIV_ROUND_UP(req->cryptlen, PAGE_SIZE) <= HIFN_QUEUE_LENGTH) err = hifn_setup_session(req); if (err == -EAGAIN) { unsigned long flags; spin_lock_irqsave(&dev->lock, flags); - err = ablkcipher_enqueue_request(&dev->queue, req); + err = crypto_enqueue_request(&dev->queue, &req->base); spin_unlock_irqrestore(&dev->lock, flags); } return err; } -static int hifn_setup_crypto_req(struct ablkcipher_request *req, u8 op, +static int hifn_setup_crypto_req(struct skcipher_request *req, u8 op, u8 type, u8 mode) { struct hifn_context *ctx = crypto_tfm_ctx(req->base.tfm); - struct hifn_request_context *rctx = ablkcipher_request_ctx(req); + struct hifn_request_context *rctx = skcipher_request_ctx(req); unsigned ivsize; - ivsize = crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(req)); + ivsize = crypto_skcipher_ivsize(crypto_skcipher_reqtfm(req)); - if 
(req->info && mode != ACRYPTO_MODE_ECB) { + if (req->iv && mode != ACRYPTO_MODE_ECB) { if (type == ACRYPTO_TYPE_AES_128) ivsize = HIFN_AES_IV_LENGTH; else if (type == ACRYPTO_TYPE_DES) @@ -2022,7 +2023,7 @@ static int hifn_setup_crypto_req(struct ablkcipher_request *req, u8 op, rctx->op = op; rctx->mode = mode; rctx->type = type; - rctx->iv = req->info; + rctx->iv = req->iv; rctx->ivsize = ivsize; /* @@ -2037,7 +2038,7 @@ static int hifn_setup_crypto_req(struct ablkcipher_request *req, u8 op, static int hifn_process_queue(struct hifn_device *dev) { struct crypto_async_request *async_req, *backlog; - struct ablkcipher_request *req; + struct skcipher_request *req; unsigned long flags; int err = 0; @@ -2053,7 +2054,7 @@ static int hifn_process_queue(struct hifn_device *dev) if (backlog) backlog->complete(backlog, -EINPROGRESS); - req = ablkcipher_request_cast(async_req); + req = skcipher_request_cast(async_req); err = hifn_handle_req(req); if (err) @@ -2063,7 +2064,7 @@ static int hifn_process_queue(struct hifn_device *dev) return err; } -static int hifn_setup_crypto(struct ablkcipher_request *req, u8 op, +static int hifn_setup_crypto(struct skcipher_request *req, u8 op, u8 type, u8 mode) { int err; @@ -2083,22 +2084,22 @@ static int hifn_setup_crypto(struct ablkcipher_request *req, u8 op, /* * AES ecryption functions. */ -static inline int hifn_encrypt_aes_ecb(struct ablkcipher_request *req) +static inline int hifn_encrypt_aes_ecb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_AES_128, ACRYPTO_MODE_ECB); } -static inline int hifn_encrypt_aes_cbc(struct ablkcipher_request *req) +static inline int hifn_encrypt_aes_cbc(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_AES_128, ACRYPTO_MODE_CBC); } -static inline int hifn_encrypt_aes_cfb(struct ablkcipher_request *req) +static inline int hifn_encrypt_aes_cfb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_AES_128, ACRYPTO_MODE_CFB); } -static inline int hifn_encrypt_aes_ofb(struct ablkcipher_request *req) +static inline int hifn_encrypt_aes_ofb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_AES_128, ACRYPTO_MODE_OFB); @@ -2107,22 +2108,22 @@ static inline int hifn_encrypt_aes_ofb(struct ablkcipher_request *req) /* * AES decryption functions. */ -static inline int hifn_decrypt_aes_ecb(struct ablkcipher_request *req) +static inline int hifn_decrypt_aes_ecb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_AES_128, ACRYPTO_MODE_ECB); } -static inline int hifn_decrypt_aes_cbc(struct ablkcipher_request *req) +static inline int hifn_decrypt_aes_cbc(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_AES_128, ACRYPTO_MODE_CBC); } -static inline int hifn_decrypt_aes_cfb(struct ablkcipher_request *req) +static inline int hifn_decrypt_aes_cfb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_AES_128, ACRYPTO_MODE_CFB); } -static inline int hifn_decrypt_aes_ofb(struct ablkcipher_request *req) +static inline int hifn_decrypt_aes_ofb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_AES_128, ACRYPTO_MODE_OFB); @@ -2131,22 +2132,22 @@ static inline int hifn_decrypt_aes_ofb(struct ablkcipher_request *req) /* * DES ecryption functions. 
*/ -static inline int hifn_encrypt_des_ecb(struct ablkcipher_request *req) +static inline int hifn_encrypt_des_ecb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_DES, ACRYPTO_MODE_ECB); } -static inline int hifn_encrypt_des_cbc(struct ablkcipher_request *req) +static inline int hifn_encrypt_des_cbc(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_DES, ACRYPTO_MODE_CBC); } -static inline int hifn_encrypt_des_cfb(struct ablkcipher_request *req) +static inline int hifn_encrypt_des_cfb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_DES, ACRYPTO_MODE_CFB); } -static inline int hifn_encrypt_des_ofb(struct ablkcipher_request *req) +static inline int hifn_encrypt_des_ofb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_DES, ACRYPTO_MODE_OFB); @@ -2155,22 +2156,22 @@ static inline int hifn_encrypt_des_ofb(struct ablkcipher_request *req) /* * DES decryption functions. */ -static inline int hifn_decrypt_des_ecb(struct ablkcipher_request *req) +static inline int hifn_decrypt_des_ecb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_DES, ACRYPTO_MODE_ECB); } -static inline int hifn_decrypt_des_cbc(struct ablkcipher_request *req) +static inline int hifn_decrypt_des_cbc(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_DES, ACRYPTO_MODE_CBC); } -static inline int hifn_decrypt_des_cfb(struct ablkcipher_request *req) +static inline int hifn_decrypt_des_cfb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_DES, ACRYPTO_MODE_CFB); } -static inline int hifn_decrypt_des_ofb(struct ablkcipher_request *req) +static inline int hifn_decrypt_des_ofb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_DES, ACRYPTO_MODE_OFB); @@ -2179,44 +2180,44 @@ static inline int hifn_decrypt_des_ofb(struct ablkcipher_request *req) /* * 3DES ecryption functions. */ -static inline int hifn_encrypt_3des_ecb(struct ablkcipher_request *req) +static inline int hifn_encrypt_3des_ecb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_3DES, ACRYPTO_MODE_ECB); } -static inline int hifn_encrypt_3des_cbc(struct ablkcipher_request *req) +static inline int hifn_encrypt_3des_cbc(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_3DES, ACRYPTO_MODE_CBC); } -static inline int hifn_encrypt_3des_cfb(struct ablkcipher_request *req) +static inline int hifn_encrypt_3des_cfb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_3DES, ACRYPTO_MODE_CFB); } -static inline int hifn_encrypt_3des_ofb(struct ablkcipher_request *req) +static inline int hifn_encrypt_3des_ofb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_ENCRYPT, ACRYPTO_TYPE_3DES, ACRYPTO_MODE_OFB); } /* 3DES decryption functions. 
*/ -static inline int hifn_decrypt_3des_ecb(struct ablkcipher_request *req) +static inline int hifn_decrypt_3des_ecb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_3DES, ACRYPTO_MODE_ECB); } -static inline int hifn_decrypt_3des_cbc(struct ablkcipher_request *req) +static inline int hifn_decrypt_3des_cbc(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_3DES, ACRYPTO_MODE_CBC); } -static inline int hifn_decrypt_3des_cfb(struct ablkcipher_request *req) +static inline int hifn_decrypt_3des_cfb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_3DES, ACRYPTO_MODE_CFB); } -static inline int hifn_decrypt_3des_ofb(struct ablkcipher_request *req) +static inline int hifn_decrypt_3des_ofb(struct skcipher_request *req) { return hifn_setup_crypto(req, ACRYPTO_OP_DECRYPT, ACRYPTO_TYPE_3DES, ACRYPTO_MODE_OFB); @@ -2226,16 +2227,16 @@ struct hifn_alg_template { char name[CRYPTO_MAX_ALG_NAME]; char drv_name[CRYPTO_MAX_ALG_NAME]; unsigned int bsize; - struct ablkcipher_alg ablkcipher; + struct skcipher_alg skcipher; }; -static struct hifn_alg_template hifn_alg_templates[] = { +static const struct hifn_alg_template hifn_alg_templates[] = { /* * 3DES ECB, CBC, CFB and OFB modes. */ { .name = "cfb(des3_ede)", .drv_name = "cfb-3des", .bsize = 8, - .ablkcipher = { + .skcipher = { .min_keysize = HIFN_3DES_KEY_LENGTH, .max_keysize = HIFN_3DES_KEY_LENGTH, .setkey = hifn_des3_setkey, @@ -2245,7 +2246,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, { .name = "ofb(des3_ede)", .drv_name = "ofb-3des", .bsize = 8, - .ablkcipher = { + .skcipher = { .min_keysize = HIFN_3DES_KEY_LENGTH, .max_keysize = HIFN_3DES_KEY_LENGTH, .setkey = hifn_des3_setkey, @@ -2255,7 +2256,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, { .name = "cbc(des3_ede)", .drv_name = "cbc-3des", .bsize = 8, - .ablkcipher = { + .skcipher = { .ivsize = HIFN_IV_LENGTH, .min_keysize = HIFN_3DES_KEY_LENGTH, .max_keysize = HIFN_3DES_KEY_LENGTH, @@ -2266,7 +2267,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, { .name = "ecb(des3_ede)", .drv_name = "ecb-3des", .bsize = 8, - .ablkcipher = { + .skcipher = { .min_keysize = HIFN_3DES_KEY_LENGTH, .max_keysize = HIFN_3DES_KEY_LENGTH, .setkey = hifn_des3_setkey, @@ -2280,7 +2281,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { */ { .name = "cfb(des)", .drv_name = "cfb-des", .bsize = 8, - .ablkcipher = { + .skcipher = { .min_keysize = HIFN_DES_KEY_LENGTH, .max_keysize = HIFN_DES_KEY_LENGTH, .setkey = hifn_setkey, @@ -2290,7 +2291,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, { .name = "ofb(des)", .drv_name = "ofb-des", .bsize = 8, - .ablkcipher = { + .skcipher = { .min_keysize = HIFN_DES_KEY_LENGTH, .max_keysize = HIFN_DES_KEY_LENGTH, .setkey = hifn_setkey, @@ -2300,7 +2301,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, { .name = "cbc(des)", .drv_name = "cbc-des", .bsize = 8, - .ablkcipher = { + .skcipher = { .ivsize = HIFN_IV_LENGTH, .min_keysize = HIFN_DES_KEY_LENGTH, .max_keysize = HIFN_DES_KEY_LENGTH, @@ -2311,7 +2312,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, { .name = "ecb(des)", .drv_name = "ecb-des", .bsize = 8, - .ablkcipher = { + .skcipher = { .min_keysize = HIFN_DES_KEY_LENGTH, .max_keysize = HIFN_DES_KEY_LENGTH, .setkey = hifn_setkey, @@ -2325,7 +2326,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { */ { .name = "ecb(aes)", .drv_name = 
"ecb-aes", .bsize = 16, - .ablkcipher = { + .skcipher = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .setkey = hifn_setkey, @@ -2335,7 +2336,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, { .name = "cbc(aes)", .drv_name = "cbc-aes", .bsize = 16, - .ablkcipher = { + .skcipher = { .ivsize = HIFN_AES_IV_LENGTH, .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, @@ -2346,7 +2347,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, { .name = "cfb(aes)", .drv_name = "cfb-aes", .bsize = 16, - .ablkcipher = { + .skcipher = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .setkey = hifn_setkey, @@ -2356,7 +2357,7 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, { .name = "ofb(aes)", .drv_name = "ofb-aes", .bsize = 16, - .ablkcipher = { + .skcipher = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .setkey = hifn_setkey, @@ -2366,18 +2367,19 @@ static struct hifn_alg_template hifn_alg_templates[] = { }, }; -static int hifn_cra_init(struct crypto_tfm *tfm) +static int hifn_init_tfm(struct crypto_skcipher *tfm) { - struct crypto_alg *alg = tfm->__crt_alg; + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct hifn_crypto_alg *ha = crypto_alg_to_hifn(alg); - struct hifn_context *ctx = crypto_tfm_ctx(tfm); + struct hifn_context *ctx = crypto_skcipher_ctx(tfm); ctx->dev = ha->dev; - tfm->crt_ablkcipher.reqsize = sizeof(struct hifn_request_context); + crypto_skcipher_set_reqsize(tfm, sizeof(struct hifn_request_context)); + return 0; } -static int hifn_alg_alloc(struct hifn_device *dev, struct hifn_alg_template *t) +static int hifn_alg_alloc(struct hifn_device *dev, const struct hifn_alg_template *t) { struct hifn_crypto_alg *alg; int err; @@ -2386,26 +2388,25 @@ static int hifn_alg_alloc(struct hifn_device *dev, struct hifn_alg_template *t) if (!alg) return -ENOMEM; - snprintf(alg->alg.cra_name, CRYPTO_MAX_ALG_NAME, "%s", t->name); - snprintf(alg->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s-%s", + alg->alg = t->skcipher; + alg->alg.init = hifn_init_tfm; + + snprintf(alg->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", t->name); + snprintf(alg->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s-%s", t->drv_name, dev->name); - alg->alg.cra_priority = 300; - alg->alg.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC; - alg->alg.cra_blocksize = t->bsize; - alg->alg.cra_ctxsize = sizeof(struct hifn_context); - alg->alg.cra_alignmask = 0; - alg->alg.cra_type = &crypto_ablkcipher_type; - alg->alg.cra_module = THIS_MODULE; - alg->alg.cra_u.ablkcipher = t->ablkcipher; - alg->alg.cra_init = hifn_cra_init; + alg->alg.base.cra_priority = 300; + alg->alg.base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC; + alg->alg.base.cra_blocksize = t->bsize; + alg->alg.base.cra_ctxsize = sizeof(struct hifn_context); + alg->alg.base.cra_alignmask = 0; + alg->alg.base.cra_module = THIS_MODULE; alg->dev = dev; list_add_tail(&alg->entry, &dev->alg_list); - err = crypto_register_alg(&alg->alg); + err = crypto_register_skcipher(&alg->alg); if (err) { list_del(&alg->entry); kfree(alg); @@ -2420,7 +2421,7 @@ static void hifn_unregister_alg(struct hifn_device *dev) list_for_each_entry_safe(a, n, &dev->alg_list, entry) { list_del(&a->entry); - crypto_unregister_alg(&a->alg); + crypto_unregister_skcipher(&a->alg); kfree(a); } } From patchwork Thu Oct 24 13:23:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu, Eric Biggers, Linus Walleij, Ard Biesheuvel, "David S. Miller", linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 15/27] crypto: ixp4xx - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:33 +0200
Message-Id: <20191024132345.5236-16-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 august 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API.

So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
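Before the ixp4xx diff, one pattern worth calling out: this driver fills in default setkey/encrypt/decrypt handlers for table entries that leave them unset, and does so in its registration loop rather than in every initializer. A sketch of the post-conversion shape, with demo_* stand-ins for the driver's names:

#include <crypto/internal/skcipher.h>
#include <linux/module.h>

struct demo_ctx {
        int state;
};

static int demo_setkey(struct crypto_skcipher *tfm, const u8 *key,
                       unsigned int keylen)
{
        return 0;
}

static int demo_encrypt(struct skcipher_request *req)
{
        return 0;
}

static int demo_decrypt(struct skcipher_request *req)
{
        return 0;
}

static int demo_init_tfm(struct crypto_skcipher *tfm)
{
        return 0;
}

static void demo_exit_tfm(struct crypto_skcipher *tfm)
{
}

static int demo_register_one(struct skcipher_alg *cra)
{
        cra->base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC;

        /* entries that set no op of their own get the common handlers */
        if (!cra->setkey)
                cra->setkey = demo_setkey;
        if (!cra->encrypt)
                cra->encrypt = demo_encrypt;
        if (!cra->decrypt)
                cra->decrypt = demo_decrypt;
        cra->init = demo_init_tfm;
        cra->exit = demo_exit_tfm;

        cra->base.cra_ctxsize = sizeof(struct demo_ctx);
        cra->base.cra_module = THIS_MODULE;
        cra->base.cra_alignmask = 3;
        cra->base.cra_priority = 300;

        return crypto_register_skcipher(cra);
}

Note also that, while in the area, the patch corrects base.cra_blocksize for the CTR modes from AES_BLOCK_SIZE to 1, since CTR behaves as a stream cipher.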
Reviewed-by: Linus Walleij Signed-off-by: Ard Biesheuvel --- drivers/crypto/ixp4xx_crypto.c | 228 ++++++++++---------- 1 file changed, 108 insertions(+), 120 deletions(-) diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c index 9181523ba760..391e3b4df364 100644 --- a/drivers/crypto/ixp4xx_crypto.c +++ b/drivers/crypto/ixp4xx_crypto.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include @@ -137,7 +138,7 @@ struct crypt_ctl { /* Used by Host: 4*4 bytes*/ unsigned ctl_flags; union { - struct ablkcipher_request *ablk_req; + struct skcipher_request *ablk_req; struct aead_request *aead_req; struct crypto_tfm *tfm; } data; @@ -186,7 +187,7 @@ struct ixp_ctx { }; struct ixp_alg { - struct crypto_alg crypto; + struct skcipher_alg crypto; const struct ix_hash_algo *hash; u32 cfg_enc; u32 cfg_dec; @@ -239,17 +240,17 @@ static inline struct crypt_ctl *crypt_phys2virt(dma_addr_t phys) static inline u32 cipher_cfg_enc(struct crypto_tfm *tfm) { - return container_of(tfm->__crt_alg, struct ixp_alg,crypto)->cfg_enc; + return container_of(tfm->__crt_alg, struct ixp_alg,crypto.base)->cfg_enc; } static inline u32 cipher_cfg_dec(struct crypto_tfm *tfm) { - return container_of(tfm->__crt_alg, struct ixp_alg,crypto)->cfg_dec; + return container_of(tfm->__crt_alg, struct ixp_alg,crypto.base)->cfg_dec; } static inline const struct ix_hash_algo *ix_hash(struct crypto_tfm *tfm) { - return container_of(tfm->__crt_alg, struct ixp_alg, crypto)->hash; + return container_of(tfm->__crt_alg, struct ixp_alg, crypto.base)->hash; } static int setup_crypt_desc(void) @@ -378,8 +379,8 @@ static void one_packet(dma_addr_t phys) break; } case CTL_FLAG_PERFORM_ABLK: { - struct ablkcipher_request *req = crypt->data.ablk_req; - struct ablk_ctx *req_ctx = ablkcipher_request_ctx(req); + struct skcipher_request *req = crypt->data.ablk_req; + struct ablk_ctx *req_ctx = skcipher_request_ctx(req); if (req_ctx->dst) { free_buf_chain(dev, req_ctx->dst, crypt->dst_buf); @@ -571,10 +572,10 @@ static int init_tfm(struct crypto_tfm *tfm) return ret; } -static int init_tfm_ablk(struct crypto_tfm *tfm) +static int init_tfm_ablk(struct crypto_skcipher *tfm) { - tfm->crt_ablkcipher.reqsize = sizeof(struct ablk_ctx); - return init_tfm(tfm); + crypto_skcipher_set_reqsize(tfm, sizeof(struct ablk_ctx)); + return init_tfm(crypto_skcipher_tfm(tfm)); } static int init_tfm_aead(struct crypto_aead *tfm) @@ -590,6 +591,11 @@ static void exit_tfm(struct crypto_tfm *tfm) free_sa_dir(&ctx->decrypt); } +static void exit_tfm_ablk(struct crypto_skcipher *tfm) +{ + exit_tfm(crypto_skcipher_tfm(tfm)); +} + static void exit_tfm_aead(struct crypto_aead *tfm) { exit_tfm(crypto_aead_tfm(tfm)); @@ -809,10 +815,10 @@ static struct buffer_desc *chainup_buffers(struct device *dev, return buf; } -static int ablk_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int ablk_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int key_len) { - struct ixp_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct ixp_ctx *ctx = crypto_skcipher_ctx(tfm); u32 *flags = &tfm->base.crt_flags; int ret; @@ -845,17 +851,17 @@ static int ablk_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return ret; } -static int ablk_des3_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int ablk_des3_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int key_len) { - return verify_ablkcipher_des3_key(tfm, key) ?: + return verify_skcipher_des3_key(tfm, key) ?: ablk_setkey(tfm, key, key_len); } -static int 
ablk_rfc3686_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int ablk_rfc3686_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int key_len) { - struct ixp_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct ixp_ctx *ctx = crypto_skcipher_ctx(tfm); /* the nonce is stored in bytes at end of key */ if (key_len < CTR_RFC3686_NONCE_SIZE) @@ -868,16 +874,16 @@ static int ablk_rfc3686_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return ablk_setkey(tfm, key, key_len); } -static int ablk_perform(struct ablkcipher_request *req, int encrypt) +static int ablk_perform(struct skcipher_request *req, int encrypt) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct ixp_ctx *ctx = crypto_ablkcipher_ctx(tfm); - unsigned ivsize = crypto_ablkcipher_ivsize(tfm); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct ixp_ctx *ctx = crypto_skcipher_ctx(tfm); + unsigned ivsize = crypto_skcipher_ivsize(tfm); struct ix_sa_dir *dir; struct crypt_ctl *crypt; - unsigned int nbytes = req->nbytes; + unsigned int nbytes = req->cryptlen; enum dma_data_direction src_direction = DMA_BIDIRECTIONAL; - struct ablk_ctx *req_ctx = ablkcipher_request_ctx(req); + struct ablk_ctx *req_ctx = skcipher_request_ctx(req); struct buffer_desc src_hook; struct device *dev = &pdev->dev; gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? @@ -902,8 +908,8 @@ static int ablk_perform(struct ablkcipher_request *req, int encrypt) crypt->crypt_offs = 0; crypt->crypt_len = nbytes; - BUG_ON(ivsize && !req->info); - memcpy(crypt->iv, req->info, ivsize); + BUG_ON(ivsize && !req->iv); + memcpy(crypt->iv, req->iv, ivsize); if (req->src != req->dst) { struct buffer_desc dst_hook; crypt->mode |= NPE_OP_NOT_IN_PLACE; @@ -941,22 +947,22 @@ static int ablk_perform(struct ablkcipher_request *req, int encrypt) return -ENOMEM; } -static int ablk_encrypt(struct ablkcipher_request *req) +static int ablk_encrypt(struct skcipher_request *req) { return ablk_perform(req, 1); } -static int ablk_decrypt(struct ablkcipher_request *req) +static int ablk_decrypt(struct skcipher_request *req) { return ablk_perform(req, 0); } -static int ablk_rfc3686_crypt(struct ablkcipher_request *req) +static int ablk_rfc3686_crypt(struct skcipher_request *req) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct ixp_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct ixp_ctx *ctx = crypto_skcipher_ctx(tfm); u8 iv[CTR_RFC3686_BLOCK_SIZE]; - u8 *info = req->info; + u8 *info = req->iv; int ret; /* set up counter block */ @@ -967,9 +973,9 @@ static int ablk_rfc3686_crypt(struct ablkcipher_request *req) *(__be32 *)(iv + CTR_RFC3686_NONCE_SIZE + CTR_RFC3686_IV_SIZE) = cpu_to_be32(1); - req->info = iv; + req->iv = iv; ret = ablk_perform(req, 1); - req->info = info; + req->iv = info; return ret; } @@ -1212,107 +1218,91 @@ static int aead_decrypt(struct aead_request *req) static struct ixp_alg ixp4xx_algos[] = { { .crypto = { - .cra_name = "cbc(des)", - .cra_blocksize = DES_BLOCK_SIZE, - .cra_u = { .ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - } - } + .base.cra_name = "cbc(des)", + .base.cra_blocksize = DES_BLOCK_SIZE, + + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, }, .cfg_enc = CIPH_ENCR | MOD_DES | MOD_CBC_ENC | KEYLEN_192, .cfg_dec = CIPH_DECR | MOD_DES | MOD_CBC_DEC | KEYLEN_192, }, { .crypto = { - .cra_name = "ecb(des)", - 
.cra_blocksize = DES_BLOCK_SIZE, - .cra_u = { .ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - } - } + .base.cra_name = "ecb(des)", + .base.cra_blocksize = DES_BLOCK_SIZE, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, }, .cfg_enc = CIPH_ENCR | MOD_DES | MOD_ECB | KEYLEN_192, .cfg_dec = CIPH_DECR | MOD_DES | MOD_ECB | KEYLEN_192, }, { .crypto = { - .cra_name = "cbc(des3_ede)", - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_u = { .ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES3_EDE_BLOCK_SIZE, - .setkey = ablk_des3_setkey, - } - } + .base.cra_name = "cbc(des3_ede)", + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + .setkey = ablk_des3_setkey, }, .cfg_enc = CIPH_ENCR | MOD_3DES | MOD_CBC_ENC | KEYLEN_192, .cfg_dec = CIPH_DECR | MOD_3DES | MOD_CBC_DEC | KEYLEN_192, }, { .crypto = { - .cra_name = "ecb(des3_ede)", - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_u = { .ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .setkey = ablk_des3_setkey, - } - } + .base.cra_name = "ecb(des3_ede)", + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .setkey = ablk_des3_setkey, }, .cfg_enc = CIPH_ENCR | MOD_3DES | MOD_ECB | KEYLEN_192, .cfg_dec = CIPH_DECR | MOD_3DES | MOD_ECB | KEYLEN_192, }, { .crypto = { - .cra_name = "cbc(aes)", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_u = { .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - } - } + .base.cra_name = "cbc(aes)", + .base.cra_blocksize = AES_BLOCK_SIZE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, }, .cfg_enc = CIPH_ENCR | MOD_AES | MOD_CBC_ENC, .cfg_dec = CIPH_DECR | MOD_AES | MOD_CBC_DEC, }, { .crypto = { - .cra_name = "ecb(aes)", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_u = { .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - } - } + .base.cra_name = "ecb(aes)", + .base.cra_blocksize = AES_BLOCK_SIZE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, }, .cfg_enc = CIPH_ENCR | MOD_AES | MOD_ECB, .cfg_dec = CIPH_DECR | MOD_AES | MOD_ECB, }, { .crypto = { - .cra_name = "ctr(aes)", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_u = { .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - } - } + .base.cra_name = "ctr(aes)", + .base.cra_blocksize = 1, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, }, .cfg_enc = CIPH_ENCR | MOD_AES | MOD_CTR, .cfg_dec = CIPH_ENCR | MOD_AES | MOD_CTR, }, { .crypto = { - .cra_name = "rfc3686(ctr(aes))", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_u = { .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = ablk_rfc3686_setkey, - .encrypt = ablk_rfc3686_crypt, - .decrypt = ablk_rfc3686_crypt } - } + .base.cra_name = "rfc3686(ctr(aes))", + .base.cra_blocksize = 1, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = ablk_rfc3686_setkey, + .encrypt = ablk_rfc3686_crypt, + .decrypt = ablk_rfc3686_crypt, }, .cfg_enc = CIPH_ENCR | MOD_AES | MOD_CTR, .cfg_dec = CIPH_ENCR | MOD_AES | MOD_CTR, @@ -1421,10 +1411,10 @@ 
static int __init ixp_module_init(void) return err; } for (i=0; i< num; i++) { - struct crypto_alg *cra = &ixp4xx_algos[i].crypto; + struct skcipher_alg *cra = &ixp4xx_algos[i].crypto; - if (snprintf(cra->cra_driver_name, CRYPTO_MAX_ALG_NAME, - "%s"IXP_POSTFIX, cra->cra_name) >= + if (snprintf(cra->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, + "%s"IXP_POSTFIX, cra->base.cra_name) >= CRYPTO_MAX_ALG_NAME) { continue; @@ -1434,26 +1424,24 @@ static int __init ixp_module_init(void) } /* block ciphers */ - cra->cra_type = &crypto_ablkcipher_type; - cra->cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | - CRYPTO_ALG_ASYNC; - if (!cra->cra_ablkcipher.setkey) - cra->cra_ablkcipher.setkey = ablk_setkey; - if (!cra->cra_ablkcipher.encrypt) - cra->cra_ablkcipher.encrypt = ablk_encrypt; - if (!cra->cra_ablkcipher.decrypt) - cra->cra_ablkcipher.decrypt = ablk_decrypt; - cra->cra_init = init_tfm_ablk; - - cra->cra_ctxsize = sizeof(struct ixp_ctx); - cra->cra_module = THIS_MODULE; - cra->cra_alignmask = 3; - cra->cra_priority = 300; - cra->cra_exit = exit_tfm; - if (crypto_register_alg(cra)) + cra->base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC; + if (!cra->setkey) + cra->setkey = ablk_setkey; + if (!cra->encrypt) + cra->encrypt = ablk_encrypt; + if (!cra->decrypt) + cra->decrypt = ablk_decrypt; + cra->init = init_tfm_ablk; + cra->exit = exit_tfm_ablk; + + cra->base.cra_ctxsize = sizeof(struct ixp_ctx); + cra->base.cra_module = THIS_MODULE; + cra->base.cra_alignmask = 3; + cra->base.cra_priority = 300; + if (crypto_register_skcipher(cra)) printk(KERN_ERR "Failed to register '%s'\n", - cra->cra_name); + cra->base.cra_name); else ixp4xx_algos[i].registered = 1; } @@ -1504,7 +1492,7 @@ static void __exit ixp_module_exit(void) for (i=0; i< num; i++) { if (ixp4xx_algos[i].registered) - crypto_unregister_alg(&ixp4xx_algos[i].crypto); + crypto_unregister_skcipher(&ixp4xx_algos[i].crypto); } release_ixp_crypto(&pdev->dev); platform_device_unregister(pdev); From patchwork Thu Oct 24 13:23:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11209693 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F1B72112B for ; Thu, 24 Oct 2019 13:29:37 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id BE4852084C for ; Thu, 24 Oct 2019 13:29:37 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="FWPFQgOB"; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="qpmcwaJU" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BE4852084C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: 
From patchwork Thu Oct 24 13:23:34 2019
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 16/27] crypto: mxs - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:34 +0200
Message-Id: <20191024132345.5236-17-ard.biesheuvel@linaro.org>
Cc: Herbert Xu, Horia Geantă, Eric Biggers, Ard Biesheuvel, "David S. Miller", linux-arm-kernel@lists.infradead.org

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 August 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
Tested-by: Horia Geantă Signed-off-by: Ard Biesheuvel --- drivers/crypto/mxs-dcp.c | 140 +++++++++----------- 1 file changed, 65 insertions(+), 75 deletions(-) diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c index bf8d2197bc11..f438b425c655 100644 --- a/drivers/crypto/mxs-dcp.c +++ b/drivers/crypto/mxs-dcp.c @@ -211,11 +211,11 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx) * Encryption (AES128) */ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx, - struct ablkcipher_request *req, int init) + struct skcipher_request *req, int init) { struct dcp *sdcp = global_sdcp; struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan]; - struct dcp_aes_req_ctx *rctx = ablkcipher_request_ctx(req); + struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req); int ret; dma_addr_t key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key, @@ -274,9 +274,9 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq) { struct dcp *sdcp = global_sdcp; - struct ablkcipher_request *req = ablkcipher_request_cast(arq); + struct skcipher_request *req = skcipher_request_cast(arq); struct dcp_async_ctx *actx = crypto_tfm_ctx(arq->tfm); - struct dcp_aes_req_ctx *rctx = ablkcipher_request_ctx(req); + struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req); struct scatterlist *dst = req->dst; struct scatterlist *src = req->src; @@ -305,7 +305,7 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq) if (!rctx->ecb) { /* Copy the CBC IV just past the key. */ - memcpy(key + AES_KEYSIZE_128, req->info, AES_KEYSIZE_128); + memcpy(key + AES_KEYSIZE_128, req->iv, AES_KEYSIZE_128); /* CBC needs the INIT set. */ init = 1; } else { @@ -316,10 +316,10 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq) src_buf = sg_virt(src); len = sg_dma_len(src); tlen += len; - limit_hit = tlen > req->nbytes; + limit_hit = tlen > req->cryptlen; if (limit_hit) - len = req->nbytes - (tlen - len); + len = req->cryptlen - (tlen - len); do { if (actx->fill + len > out_off) @@ -375,10 +375,10 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq) /* Copy the IV for CBC for chaining */ if (!rctx->ecb) { if (rctx->enc) - memcpy(req->info, out_buf+(last_out_len-AES_BLOCK_SIZE), + memcpy(req->iv, out_buf+(last_out_len-AES_BLOCK_SIZE), AES_BLOCK_SIZE); else - memcpy(req->info, in_buf+(last_out_len-AES_BLOCK_SIZE), + memcpy(req->iv, in_buf+(last_out_len-AES_BLOCK_SIZE), AES_BLOCK_SIZE); } @@ -422,17 +422,17 @@ static int dcp_chan_thread_aes(void *data) return 0; } -static int mxs_dcp_block_fallback(struct ablkcipher_request *req, int enc) +static int mxs_dcp_block_fallback(struct skcipher_request *req, int enc) { - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct dcp_async_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct dcp_async_ctx *ctx = crypto_skcipher_ctx(tfm); SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); int ret; skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, - req->nbytes, req->info); + req->cryptlen, req->iv); if (enc) ret = crypto_skcipher_encrypt(subreq); @@ -444,12 +444,12 @@ static int mxs_dcp_block_fallback(struct ablkcipher_request *req, int enc) return ret; } -static int mxs_dcp_aes_enqueue(struct ablkcipher_request *req, int enc, int ecb) +static int mxs_dcp_aes_enqueue(struct skcipher_request *req, int enc, int ecb) { struct 
dcp *sdcp = global_sdcp; struct crypto_async_request *arq = &req->base; struct dcp_async_ctx *actx = crypto_tfm_ctx(arq->tfm); - struct dcp_aes_req_ctx *rctx = ablkcipher_request_ctx(req); + struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req); int ret; if (unlikely(actx->key_len != AES_KEYSIZE_128)) @@ -468,30 +468,30 @@ static int mxs_dcp_aes_enqueue(struct ablkcipher_request *req, int enc, int ecb) return ret; } -static int mxs_dcp_aes_ecb_decrypt(struct ablkcipher_request *req) +static int mxs_dcp_aes_ecb_decrypt(struct skcipher_request *req) { return mxs_dcp_aes_enqueue(req, 0, 1); } -static int mxs_dcp_aes_ecb_encrypt(struct ablkcipher_request *req) +static int mxs_dcp_aes_ecb_encrypt(struct skcipher_request *req) { return mxs_dcp_aes_enqueue(req, 1, 1); } -static int mxs_dcp_aes_cbc_decrypt(struct ablkcipher_request *req) +static int mxs_dcp_aes_cbc_decrypt(struct skcipher_request *req) { return mxs_dcp_aes_enqueue(req, 0, 0); } -static int mxs_dcp_aes_cbc_encrypt(struct ablkcipher_request *req) +static int mxs_dcp_aes_cbc_encrypt(struct skcipher_request *req) { return mxs_dcp_aes_enqueue(req, 1, 0); } -static int mxs_dcp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int mxs_dcp_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int len) { - struct dcp_async_ctx *actx = crypto_ablkcipher_ctx(tfm); + struct dcp_async_ctx *actx = crypto_skcipher_ctx(tfm); unsigned int ret; /* @@ -525,10 +525,10 @@ static int mxs_dcp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return ret; } -static int mxs_dcp_aes_fallback_init(struct crypto_tfm *tfm) +static int mxs_dcp_aes_fallback_init_tfm(struct crypto_skcipher *tfm) { - const char *name = crypto_tfm_alg_name(tfm); - struct dcp_async_ctx *actx = crypto_tfm_ctx(tfm); + const char *name = crypto_tfm_alg_name(crypto_skcipher_tfm(tfm)); + struct dcp_async_ctx *actx = crypto_skcipher_ctx(tfm); struct crypto_sync_skcipher *blk; blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); @@ -536,13 +536,13 @@ static int mxs_dcp_aes_fallback_init(struct crypto_tfm *tfm) return PTR_ERR(blk); actx->fallback = blk; - tfm->crt_ablkcipher.reqsize = sizeof(struct dcp_aes_req_ctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct dcp_aes_req_ctx)); return 0; } -static void mxs_dcp_aes_fallback_exit(struct crypto_tfm *tfm) +static void mxs_dcp_aes_fallback_exit_tfm(struct crypto_skcipher *tfm) { - struct dcp_async_ctx *actx = crypto_tfm_ctx(tfm); + struct dcp_async_ctx *actx = crypto_skcipher_ctx(tfm); crypto_free_sync_skcipher(actx->fallback); } @@ -854,54 +854,44 @@ static void dcp_sha_cra_exit(struct crypto_tfm *tfm) } /* AES 128 ECB and AES 128 CBC */ -static struct crypto_alg dcp_aes_algs[] = { +static struct skcipher_alg dcp_aes_algs[] = { { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-dcp", - .cra_priority = 400, - .cra_alignmask = 15, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-dcp", + .base.cra_priority = 400, + .base.cra_alignmask = 15, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, - .cra_init = mxs_dcp_aes_fallback_init, - .cra_exit = mxs_dcp_aes_fallback_exit, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct dcp_async_ctx), - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = mxs_dcp_aes_setkey, - .encrypt = mxs_dcp_aes_ecb_encrypt, - 
.decrypt = mxs_dcp_aes_ecb_decrypt - }, - }, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct dcp_async_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = mxs_dcp_aes_setkey, + .encrypt = mxs_dcp_aes_ecb_encrypt, + .decrypt = mxs_dcp_aes_ecb_decrypt, + .init = mxs_dcp_aes_fallback_init_tfm, + .exit = mxs_dcp_aes_fallback_exit_tfm, }, { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-dcp", - .cra_priority = 400, - .cra_alignmask = 15, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-dcp", + .base.cra_priority = 400, + .base.cra_alignmask = 15, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, - .cra_init = mxs_dcp_aes_fallback_init, - .cra_exit = mxs_dcp_aes_fallback_exit, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct dcp_async_ctx), - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_u = { - .ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = mxs_dcp_aes_setkey, - .encrypt = mxs_dcp_aes_cbc_encrypt, - .decrypt = mxs_dcp_aes_cbc_decrypt, - .ivsize = AES_BLOCK_SIZE, - }, - }, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct dcp_async_ctx), + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = mxs_dcp_aes_setkey, + .encrypt = mxs_dcp_aes_cbc_encrypt, + .decrypt = mxs_dcp_aes_cbc_decrypt, + .ivsize = AES_BLOCK_SIZE, + .init = mxs_dcp_aes_fallback_init_tfm, + .exit = mxs_dcp_aes_fallback_exit_tfm, }, }; @@ -1104,8 +1094,8 @@ static int mxs_dcp_probe(struct platform_device *pdev) sdcp->caps = readl(sdcp->base + MXS_DCP_CAPABILITY1); if (sdcp->caps & MXS_DCP_CAPABILITY1_AES128) { - ret = crypto_register_algs(dcp_aes_algs, - ARRAY_SIZE(dcp_aes_algs)); + ret = crypto_register_skciphers(dcp_aes_algs, + ARRAY_SIZE(dcp_aes_algs)); if (ret) { /* Failed to register algorithm. 
*/ dev_err(dev, "Failed to register AES crypto!\n"); @@ -1139,7 +1129,7 @@ static int mxs_dcp_probe(struct platform_device *pdev) err_unregister_aes: if (sdcp->caps & MXS_DCP_CAPABILITY1_AES128) - crypto_unregister_algs(dcp_aes_algs, ARRAY_SIZE(dcp_aes_algs)); + crypto_unregister_skciphers(dcp_aes_algs, ARRAY_SIZE(dcp_aes_algs)); err_destroy_aes_thread: kthread_stop(sdcp->thread[DCP_CHAN_CRYPTO]); @@ -1164,7 +1154,7 @@ static int mxs_dcp_remove(struct platform_device *pdev) crypto_unregister_ahash(&dcp_sha1_alg); if (sdcp->caps & MXS_DCP_CAPABILITY1_AES128) - crypto_unregister_algs(dcp_aes_algs, ARRAY_SIZE(dcp_aes_algs)); + crypto_unregister_skciphers(dcp_aes_algs, ARRAY_SIZE(dcp_aes_algs)); kthread_stop(sdcp->thread[DCP_CHAN_HASH_SHA]); kthread_stop(sdcp->thread[DCP_CHAN_CRYPTO]);
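Beyond the algorithm template, two request-level renames recur in the mxs-dcp hunks above and in every other patch of the series: ablkcipher's req->nbytes becomes req->cryptlen and req->info becomes req->iv. The software fallback this driver keeps is also representative of the new API. A condensed sketch of that path, assuming only a context that stores a crypto_sync_skcipher pointer the way dcp_async_ctx does (the example_* names are illustrative):

    #include <crypto/skcipher.h>
    #include <crypto/internal/skcipher.h>

    struct example_ctx {
            struct crypto_sync_skcipher *fallback;  /* as in dcp_async_ctx */
    };

    static int example_fallback(struct skcipher_request *req, bool enc)
    {
            struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
            struct example_ctx *ctx = crypto_skcipher_ctx(tfm);
            SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
            int ret;

            skcipher_request_set_sync_tfm(subreq, ctx->fallback);
            skcipher_request_set_callback(subreq, req->base.flags,
                                          NULL, NULL);
            /* ablkcipher's req->nbytes and req->info are now
             * req->cryptlen and req->iv */
            skcipher_request_set_crypt(subreq, req->src, req->dst,
                                       req->cryptlen, req->iv);

            ret = enc ? crypto_skcipher_encrypt(subreq)
                      : crypto_skcipher_decrypt(subreq);
            skcipher_request_zero(subreq);
            return ret;
    }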
From patchwork Thu Oct 24 13:23:35 2019
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 17/27] crypto: mediatek - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:35 +0200
Message-Id: <20191024132345.5236-18-ard.biesheuvel@linaro.org>
Cc: Herbert Xu, Eric Biggers, Ard Biesheuvel, linux-mediatek@lists.infradead.org, Matthias Brugger, "David S. Miller", linux-arm-kernel@lists.infradead.org
Miller" , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 august 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future. Cc: Matthias Brugger Cc: linux-mediatek@lists.infradead.org Signed-off-by: Ard Biesheuvel --- drivers/crypto/mediatek/mtk-aes.c | 248 +++++++++----------- 1 file changed, 116 insertions(+), 132 deletions(-) diff --git a/drivers/crypto/mediatek/mtk-aes.c b/drivers/crypto/mediatek/mtk-aes.c index 90c9644fb8a8..d3416020669f 100644 --- a/drivers/crypto/mediatek/mtk-aes.c +++ b/drivers/crypto/mediatek/mtk-aes.c @@ -11,6 +11,7 @@ #include #include +#include #include "mtk-platform.h" #define AES_QUEUE_SIZE 512 @@ -414,7 +415,7 @@ static int mtk_aes_map(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) static void mtk_aes_info_init(struct mtk_cryp *cryp, struct mtk_aes_rec *aes, size_t len) { - struct ablkcipher_request *req = ablkcipher_request_cast(aes->areq); + struct skcipher_request *req = skcipher_request_cast(aes->areq); struct mtk_aes_base_ctx *ctx = aes->ctx; struct mtk_aes_info *info = &ctx->info; u32 cnt = 0; @@ -450,7 +451,7 @@ static void mtk_aes_info_init(struct mtk_cryp *cryp, struct mtk_aes_rec *aes, return; } - mtk_aes_write_state_le(info->state + ctx->keylen, req->info, + mtk_aes_write_state_le(info->state + ctx->keylen, (void *)req->iv, AES_BLOCK_SIZE); ctr: info->tfm[0] += AES_TFM_SIZE(SIZE_IN_WORDS(AES_BLOCK_SIZE)); @@ -552,13 +553,13 @@ static int mtk_aes_transfer_complete(struct mtk_cryp *cryp, static int mtk_aes_start(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) { - struct ablkcipher_request *req = ablkcipher_request_cast(aes->areq); - struct mtk_aes_reqctx *rctx = ablkcipher_request_ctx(req); + struct skcipher_request *req = skcipher_request_cast(aes->areq); + struct mtk_aes_reqctx *rctx = skcipher_request_ctx(req); mtk_aes_set_mode(aes, rctx); aes->resume = mtk_aes_transfer_complete; - return mtk_aes_dma(cryp, aes, req->src, req->dst, req->nbytes); + return mtk_aes_dma(cryp, aes, req->src, req->dst, req->cryptlen); } static inline struct mtk_aes_ctr_ctx * @@ -571,7 +572,7 @@ static int mtk_aes_ctr_transfer(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) { struct mtk_aes_base_ctx *ctx = aes->ctx; struct mtk_aes_ctr_ctx *cctx = mtk_aes_ctr_ctx_cast(ctx); - struct ablkcipher_request *req = ablkcipher_request_cast(aes->areq); + struct skcipher_request *req = skcipher_request_cast(aes->areq); struct scatterlist *src, *dst; u32 start, end, ctr, blocks; size_t datalen; @@ -579,11 +580,11 @@ static int mtk_aes_ctr_transfer(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) /* Check for transfer completion. */ cctx->offset += aes->total; - if (cctx->offset >= req->nbytes) + if (cctx->offset >= req->cryptlen) return mtk_aes_transfer_complete(cryp, aes); /* Compute data length. 
*/ - datalen = req->nbytes - cctx->offset; + datalen = req->cryptlen - cctx->offset; blocks = DIV_ROUND_UP(datalen, AES_BLOCK_SIZE); ctr = be32_to_cpu(cctx->iv[3]); @@ -620,12 +621,12 @@ static int mtk_aes_ctr_transfer(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) static int mtk_aes_ctr_start(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) { struct mtk_aes_ctr_ctx *cctx = mtk_aes_ctr_ctx_cast(aes->ctx); - struct ablkcipher_request *req = ablkcipher_request_cast(aes->areq); - struct mtk_aes_reqctx *rctx = ablkcipher_request_ctx(req); + struct skcipher_request *req = skcipher_request_cast(aes->areq); + struct mtk_aes_reqctx *rctx = skcipher_request_ctx(req); mtk_aes_set_mode(aes, rctx); - memcpy(cctx->iv, req->info, AES_BLOCK_SIZE); + memcpy(cctx->iv, req->iv, AES_BLOCK_SIZE); cctx->offset = 0; aes->total = 0; aes->resume = mtk_aes_ctr_transfer; @@ -634,10 +635,10 @@ static int mtk_aes_ctr_start(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) } /* Check and set the AES key to transform state buffer */ -static int mtk_aes_setkey(struct crypto_ablkcipher *tfm, +static int mtk_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, u32 keylen) { - struct mtk_aes_base_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct mtk_aes_base_ctx *ctx = crypto_skcipher_ctx(tfm); switch (keylen) { case AES_KEYSIZE_128: @@ -651,7 +652,7 @@ static int mtk_aes_setkey(struct crypto_ablkcipher *tfm, break; default: - crypto_ablkcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } @@ -661,10 +662,10 @@ static int mtk_aes_setkey(struct crypto_ablkcipher *tfm, return 0; } -static int mtk_aes_crypt(struct ablkcipher_request *req, u64 mode) +static int mtk_aes_crypt(struct skcipher_request *req, u64 mode) { - struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); - struct mtk_aes_base_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher); + struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); + struct mtk_aes_base_ctx *ctx = crypto_skcipher_ctx(skcipher); struct mtk_aes_reqctx *rctx; struct mtk_cryp *cryp; @@ -672,185 +673,168 @@ static int mtk_aes_crypt(struct ablkcipher_request *req, u64 mode) if (!cryp) return -ENODEV; - rctx = ablkcipher_request_ctx(req); + rctx = skcipher_request_ctx(req); rctx->mode = mode; return mtk_aes_handle_queue(cryp, !(mode & AES_FLAGS_ENCRYPT), &req->base); } -static int mtk_aes_ecb_encrypt(struct ablkcipher_request *req) +static int mtk_aes_ecb_encrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_ECB); } -static int mtk_aes_ecb_decrypt(struct ablkcipher_request *req) +static int mtk_aes_ecb_decrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_ECB); } -static int mtk_aes_cbc_encrypt(struct ablkcipher_request *req) +static int mtk_aes_cbc_encrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_CBC); } -static int mtk_aes_cbc_decrypt(struct ablkcipher_request *req) +static int mtk_aes_cbc_decrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_CBC); } -static int mtk_aes_ctr_encrypt(struct ablkcipher_request *req) +static int mtk_aes_ctr_encrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_CTR); } -static int mtk_aes_ctr_decrypt(struct ablkcipher_request *req) +static int mtk_aes_ctr_decrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_CTR); } -static int mtk_aes_ofb_encrypt(struct ablkcipher_request *req) 
+static int mtk_aes_ofb_encrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_OFB); } -static int mtk_aes_ofb_decrypt(struct ablkcipher_request *req) +static int mtk_aes_ofb_decrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_OFB); } -static int mtk_aes_cfb_encrypt(struct ablkcipher_request *req) +static int mtk_aes_cfb_encrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_CFB128); } -static int mtk_aes_cfb_decrypt(struct ablkcipher_request *req) +static int mtk_aes_cfb_decrypt(struct skcipher_request *req) { return mtk_aes_crypt(req, AES_FLAGS_CFB128); } -static int mtk_aes_cra_init(struct crypto_tfm *tfm) +static int mtk_aes_init_tfm(struct crypto_skcipher *tfm) { - struct mtk_aes_ctx *ctx = crypto_tfm_ctx(tfm); + struct mtk_aes_ctx *ctx = crypto_skcipher_ctx(tfm); - tfm->crt_ablkcipher.reqsize = sizeof(struct mtk_aes_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct mtk_aes_reqctx)); ctx->base.start = mtk_aes_start; return 0; } -static int mtk_aes_ctr_cra_init(struct crypto_tfm *tfm) +static int mtk_aes_ctr_init_tfm(struct crypto_skcipher *tfm) { - struct mtk_aes_ctx *ctx = crypto_tfm_ctx(tfm); + struct mtk_aes_ctx *ctx = crypto_skcipher_ctx(tfm); - tfm->crt_ablkcipher.reqsize = sizeof(struct mtk_aes_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct mtk_aes_reqctx)); ctx->base.start = mtk_aes_ctr_start; return 0; } -static struct crypto_alg aes_algs[] = { +static struct skcipher_alg aes_algs[] = { { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-mtk", - .cra_priority = 400, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_init = mtk_aes_cra_init, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct mtk_aes_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_cbc_encrypt, - .decrypt = mtk_aes_cbc_decrypt, - .ivsize = AES_BLOCK_SIZE, - } + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-mtk", + .base.cra_priority = 400, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct mtk_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = mtk_aes_setkey, + .encrypt = mtk_aes_cbc_encrypt, + .decrypt = mtk_aes_cbc_decrypt, + .ivsize = AES_BLOCK_SIZE, + .init = mtk_aes_init_tfm, }, { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-mtk", - .cra_priority = 400, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_init = mtk_aes_cra_init, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct mtk_aes_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_ecb_encrypt, - .decrypt = mtk_aes_ecb_decrypt, - } + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-mtk", + .base.cra_priority = 400, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct mtk_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, 
+ .setkey = mtk_aes_setkey, + .encrypt = mtk_aes_ecb_encrypt, + .decrypt = mtk_aes_ecb_decrypt, + .init = mtk_aes_init_tfm, }, { - .cra_name = "ctr(aes)", - .cra_driver_name = "ctr-aes-mtk", - .cra_priority = 400, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_init = mtk_aes_ctr_cra_init, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct mtk_aes_ctr_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_ctr_encrypt, - .decrypt = mtk_aes_ctr_decrypt, - } + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "ctr-aes-mtk", + .base.cra_priority = 400, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct mtk_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = mtk_aes_setkey, + .encrypt = mtk_aes_ctr_encrypt, + .decrypt = mtk_aes_ctr_decrypt, + .init = mtk_aes_ctr_init_tfm, }, { - .cra_name = "ofb(aes)", - .cra_driver_name = "ofb-aes-mtk", - .cra_priority = 400, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_init = mtk_aes_cra_init, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct mtk_aes_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_ofb_encrypt, - .decrypt = mtk_aes_ofb_decrypt, - } + .base.cra_name = "ofb(aes)", + .base.cra_driver_name = "ofb-aes-mtk", + .base.cra_priority = 400, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct mtk_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = mtk_aes_setkey, + .encrypt = mtk_aes_ofb_encrypt, + .decrypt = mtk_aes_ofb_decrypt, }, { - .cra_name = "cfb(aes)", - .cra_driver_name = "cfb-aes-mtk", - .cra_priority = 400, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_init = mtk_aes_cra_init, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct mtk_aes_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_cfb_encrypt, - .decrypt = mtk_aes_cfb_decrypt, - } + .base.cra_name = "cfb(aes)", + .base.cra_driver_name = "cfb-aes-mtk", + .base.cra_priority = 400, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct mtk_aes_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = mtk_aes_setkey, + .encrypt = mtk_aes_cfb_encrypt, + .decrypt = mtk_aes_cfb_decrypt, }, }; @@ -1259,7 +1243,7 @@ static void mtk_aes_unregister_algs(void) crypto_unregister_aead(&aes_gcm_alg); for (i = 0; i < ARRAY_SIZE(aes_algs); i++) - crypto_unregister_alg(&aes_algs[i]); + crypto_unregister_skcipher(&aes_algs[i]); } static int 
mtk_aes_register_algs(void) @@ -1267,7 +1251,7 @@ static int mtk_aes_register_algs(void) int err, i; for (i = 0; i < ARRAY_SIZE(aes_algs); i++) { - err = crypto_register_alg(&aes_algs[i]); + err = crypto_register_skcipher(&aes_algs[i]); if (err) goto err_aes_algs; } @@ -1280,7 +1264,7 @@ static int mtk_aes_register_algs(void) err_aes_algs: for (; i--; ) - crypto_unregister_alg(&aes_algs[i]); + crypto_unregister_skcipher(&aes_algs[i]); return err; }
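One more recurring detail is visible in the mediatek patch: the old cra_init hook, which took a bare struct crypto_tfm and set tfm->crt_ablkcipher.reqsize by hand, becomes an .init hook that receives a struct crypto_skcipher and declares its per-request context size through crypto_skcipher_set_reqsize(), as mtk_aes_init_tfm() does above. In sketch form (the example_* names and the request context are illustrative):

    #include <crypto/internal/skcipher.h>

    struct example_reqctx {
            u64 mode;       /* illustrative per-request state */
    };

    static int example_init_tfm(struct crypto_skcipher *tfm)
    {
            /* Reserve room for the driver's per-request context; the
             * core allocates it alongside each skcipher_request, so
             * skcipher_request_ctx(req) returns usable storage. */
            crypto_skcipher_set_reqsize(tfm,
                                        sizeof(struct example_reqctx));
            return 0;
    }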
From patchwork Thu Oct 24 13:23:36 2019
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 18/27] crypto: sahara - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:36 +0200
Message-Id: <20191024132345.5236-19-ard.biesheuvel@linaro.org>
Cc: "David S. Miller", Eric Biggers, Herbert Xu, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 August 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API.
So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future. Signed-off-by: Ard Biesheuvel --- drivers/crypto/sahara.c | 156 ++++++++++---------- 1 file changed, 75 insertions(+), 81 deletions(-) diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c index 8ac8ec6decd5..d4ea2f11ca68 100644 --- a/drivers/crypto/sahara.c +++ b/drivers/crypto/sahara.c @@ -547,7 +547,7 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev) return -EINVAL; } -static int sahara_aes_process(struct ablkcipher_request *req) +static int sahara_aes_process(struct skcipher_request *req) { struct sahara_dev *dev = dev_ptr; struct sahara_ctx *ctx; @@ -558,20 +558,20 @@ static int sahara_aes_process(struct ablkcipher_request *req) /* Request is ready to be dispatched by the device */ dev_dbg(dev->device, "dispatch request (nbytes=%d, src=%p, dst=%p)\n", - req->nbytes, req->src, req->dst); + req->cryptlen, req->src, req->dst); /* assign new request to device */ - dev->total = req->nbytes; + dev->total = req->cryptlen; dev->in_sg = req->src; dev->out_sg = req->dst; - rctx = ablkcipher_request_ctx(req); - ctx = crypto_ablkcipher_ctx(crypto_ablkcipher_reqtfm(req)); + rctx = skcipher_request_ctx(req); + ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); rctx->mode &= FLAGS_MODE_MASK; dev->flags = (dev->flags & ~FLAGS_MODE_MASK) | rctx->mode; - if ((dev->flags & FLAGS_CBC) && req->info) - memcpy(dev->iv_base, req->info, AES_KEYSIZE_128); + if ((dev->flags & FLAGS_CBC) && req->iv) + memcpy(dev->iv_base, req->iv, AES_KEYSIZE_128); /* assign new context to device */ dev->ctx = ctx; @@ -597,10 +597,10 @@ static int sahara_aes_process(struct ablkcipher_request *req) return 0; } -static int sahara_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int sahara_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - struct sahara_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm); int ret; ctx->keylen = keylen; @@ -630,16 +630,16 @@ static int sahara_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return ret; } -static int sahara_aes_crypt(struct ablkcipher_request *req, unsigned long mode) +static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode) { - struct sahara_aes_reqctx *rctx = ablkcipher_request_ctx(req); + struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req); struct sahara_dev *dev = dev_ptr; int err = 0; dev_dbg(dev->device, "nbytes: %d, enc: %d, cbc: %d\n", - req->nbytes, !!(mode & FLAGS_ENCRYPT), !!(mode & FLAGS_CBC)); + req->cryptlen, !!(mode & FLAGS_ENCRYPT), !!(mode & FLAGS_CBC)); - if (!IS_ALIGNED(req->nbytes, AES_BLOCK_SIZE)) { + if (!IS_ALIGNED(req->cryptlen, AES_BLOCK_SIZE)) { dev_err(dev->device, "request size is not exact amount of AES blocks\n"); return -EINVAL; @@ -648,7 +648,7 @@ static int sahara_aes_crypt(struct ablkcipher_request *req, unsigned long mode) rctx->mode = mode; mutex_lock(&dev->queue_mutex); - err = ablkcipher_enqueue_request(&dev->queue, req); + err = crypto_enqueue_request(&dev->queue, &req->base); mutex_unlock(&dev->queue_mutex); wake_up_process(dev->kthread); @@ -656,10 +656,10 @@ static int sahara_aes_crypt(struct ablkcipher_request *req, unsigned long mode) return err; } -static int sahara_aes_ecb_encrypt(struct ablkcipher_request *req) +static int sahara_aes_ecb_encrypt(struct skcipher_request *req) { - struct sahara_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); + struct sahara_ctx 
*ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { @@ -669,7 +669,7 @@ static int sahara_aes_ecb_encrypt(struct ablkcipher_request *req) skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, - req->nbytes, req->info); + req->cryptlen, req->iv); err = crypto_skcipher_encrypt(subreq); skcipher_request_zero(subreq); return err; @@ -678,10 +678,10 @@ static int sahara_aes_ecb_encrypt(struct ablkcipher_request *req) return sahara_aes_crypt(req, FLAGS_ENCRYPT); } -static int sahara_aes_ecb_decrypt(struct ablkcipher_request *req) +static int sahara_aes_ecb_decrypt(struct skcipher_request *req) { - struct sahara_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); + struct sahara_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { @@ -691,7 +691,7 @@ static int sahara_aes_ecb_decrypt(struct ablkcipher_request *req) skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, - req->nbytes, req->info); + req->cryptlen, req->iv); err = crypto_skcipher_decrypt(subreq); skcipher_request_zero(subreq); return err; @@ -700,10 +700,10 @@ static int sahara_aes_ecb_decrypt(struct ablkcipher_request *req) return sahara_aes_crypt(req, 0); } -static int sahara_aes_cbc_encrypt(struct ablkcipher_request *req) +static int sahara_aes_cbc_encrypt(struct skcipher_request *req) { - struct sahara_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); + struct sahara_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { @@ -713,7 +713,7 @@ static int sahara_aes_cbc_encrypt(struct ablkcipher_request *req) skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, - req->nbytes, req->info); + req->cryptlen, req->iv); err = crypto_skcipher_encrypt(subreq); skcipher_request_zero(subreq); return err; @@ -722,10 +722,10 @@ static int sahara_aes_cbc_encrypt(struct ablkcipher_request *req) return sahara_aes_crypt(req, FLAGS_ENCRYPT | FLAGS_CBC); } -static int sahara_aes_cbc_decrypt(struct ablkcipher_request *req) +static int sahara_aes_cbc_decrypt(struct skcipher_request *req) { - struct sahara_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); + struct sahara_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { @@ -735,7 +735,7 @@ static int sahara_aes_cbc_decrypt(struct ablkcipher_request *req) skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, - req->nbytes, req->info); + req->cryptlen, req->iv); err = crypto_skcipher_decrypt(subreq); skcipher_request_zero(subreq); return err; @@ -744,10 +744,10 @@ static int sahara_aes_cbc_decrypt(struct ablkcipher_request *req) return sahara_aes_crypt(req, FLAGS_CBC); } -static int sahara_aes_cra_init(struct crypto_tfm *tfm) +static int sahara_aes_init_tfm(struct crypto_skcipher *tfm) { - const char *name = crypto_tfm_alg_name(tfm); - struct sahara_ctx *ctx = crypto_tfm_ctx(tfm); + const char *name = crypto_tfm_alg_name(&tfm->base); + struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm); ctx->fallback = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); @@ -756,14 +756,14 @@ static int 
sahara_aes_cra_init(struct crypto_tfm *tfm) return PTR_ERR(ctx->fallback); } - tfm->crt_ablkcipher.reqsize = sizeof(struct sahara_aes_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct sahara_aes_reqctx)); return 0; } -static void sahara_aes_cra_exit(struct crypto_tfm *tfm) +static void sahara_aes_exit_tfm(struct crypto_skcipher *tfm) { - struct sahara_ctx *ctx = crypto_tfm_ctx(tfm); + struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm); crypto_free_sync_skcipher(ctx->fallback); } @@ -1071,8 +1071,8 @@ static int sahara_queue_manage(void *data) ret = sahara_sha_process(req); } else { - struct ablkcipher_request *req = - ablkcipher_request_cast(async_req); + struct skcipher_request *req = + skcipher_request_cast(async_req); ret = sahara_aes_process(req); } @@ -1189,48 +1189,42 @@ static int sahara_sha_cra_init(struct crypto_tfm *tfm) return 0; } -static struct crypto_alg aes_algs[] = { +static struct skcipher_alg aes_algs[] = { { - .cra_name = "ecb(aes)", - .cra_driver_name = "sahara-ecb-aes", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct sahara_ctx), - .cra_alignmask = 0x0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = sahara_aes_cra_init, - .cra_exit = sahara_aes_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE , - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = sahara_aes_setkey, - .encrypt = sahara_aes_ecb_encrypt, - .decrypt = sahara_aes_ecb_decrypt, - } + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "sahara-ecb-aes", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct sahara_ctx), + .base.cra_alignmask = 0x0, + .base.cra_module = THIS_MODULE, + + .init = sahara_aes_init_tfm, + .exit = sahara_aes_exit_tfm, + .min_keysize = AES_MIN_KEY_SIZE , + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = sahara_aes_setkey, + .encrypt = sahara_aes_ecb_encrypt, + .decrypt = sahara_aes_ecb_decrypt, }, { - .cra_name = "cbc(aes)", - .cra_driver_name = "sahara-cbc-aes", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct sahara_ctx), - .cra_alignmask = 0x0, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = sahara_aes_cra_init, - .cra_exit = sahara_aes_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE , - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = sahara_aes_setkey, - .encrypt = sahara_aes_cbc_encrypt, - .decrypt = sahara_aes_cbc_decrypt, - } + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "sahara-cbc-aes", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct sahara_ctx), + .base.cra_alignmask = 0x0, + .base.cra_module = THIS_MODULE, + + .init = sahara_aes_init_tfm, + .exit = sahara_aes_exit_tfm, + .min_keysize = AES_MIN_KEY_SIZE , + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = sahara_aes_setkey, + .encrypt = sahara_aes_cbc_encrypt, + .decrypt = sahara_aes_cbc_decrypt, } }; @@ -1318,7 +1312,7 @@ static int sahara_register_algs(struct sahara_dev *dev) unsigned int i, j, k, l; for (i = 0; i < ARRAY_SIZE(aes_algs); i++) { - err = 
crypto_register_alg(&aes_algs[i]); + err = crypto_register_skcipher(&aes_algs[i]); if (err) goto err_aes_algs; } @@ -1348,7 +1342,7 @@ static int sahara_register_algs(struct sahara_dev *dev) err_aes_algs: for (j = 0; j < i; j++) - crypto_unregister_alg(&aes_algs[j]); + crypto_unregister_skcipher(&aes_algs[j]); return err; } @@ -1358,7 +1352,7 @@ static void sahara_unregister_algs(struct sahara_dev *dev) unsigned int i; for (i = 0; i < ARRAY_SIZE(aes_algs); i++) - crypto_unregister_alg(&aes_algs[i]); + crypto_unregister_skcipher(&aes_algs[i]); for (i = 0; i < ARRAY_SIZE(sha_v3_algs); i++) crypto_unregister_ahash(&sha_v3_algs[i]);
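The sahara patch also shows how queue handling changes: with no skcipher counterpart to ablkcipher_enqueue_request(), requests go onto the generic crypto_queue via their embedded base request and are cast back with skcipher_request_cast() on the dequeue side, as sahara_queue_manage() does above. A rough sketch under those assumptions (the example_* names, including the example_handle() worker, are hypothetical):

    #include <crypto/algapi.h>
    #include <crypto/internal/skcipher.h>

    static int example_handle(struct skcipher_request *req)
    {
            return 0;       /* stub: a real driver programs the engine here */
    }

    /* Enqueue side: the generic queue takes the embedded base request. */
    static int example_enqueue(struct crypto_queue *queue,
                               struct skcipher_request *req)
    {
            return crypto_enqueue_request(queue, &req->base);
    }

    /* Dequeue side: the async request is cast back to an skcipher
     * request before processing. */
    static int example_process_one(struct crypto_queue *queue)
    {
            struct crypto_async_request *async;

            async = crypto_dequeue_request(queue);
            if (!async)
                    return -ENOENT;
            return example_handle(skcipher_request_cast(async));
    }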
From patchwork Thu Oct 24 13:23:37 2019
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 19/27] crypto: picoxcell - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:37 +0200
Message-Id: <20191024132345.5236-20-ard.biesheuvel@linaro.org>
Cc: Herbert Xu, Eric Biggers, Ard Biesheuvel, Jamie Iles, "David S. Miller", linux-arm-kernel@lists.infradead.org

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 August 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher.
While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future. Cc: Jamie Iles Signed-off-by: Ard Biesheuvel --- drivers/crypto/picoxcell_crypto.c | 386 ++++++++++---------- 1 file changed, 184 insertions(+), 202 deletions(-) diff --git a/drivers/crypto/picoxcell_crypto.c b/drivers/crypto/picoxcell_crypto.c index 3cbefb41b099..29da449b3e9e 100644 --- a/drivers/crypto/picoxcell_crypto.c +++ b/drivers/crypto/picoxcell_crypto.c @@ -134,7 +134,7 @@ struct spacc_engine { struct spacc_alg { unsigned long ctrl_default; unsigned long type; - struct crypto_alg alg; + struct skcipher_alg alg; struct spacc_engine *engine; struct list_head entry; int key_offs; @@ -173,7 +173,7 @@ struct spacc_aead_ctx { static int spacc_ablk_submit(struct spacc_req *req); -static inline struct spacc_alg *to_spacc_alg(struct crypto_alg *alg) +static inline struct spacc_alg *to_spacc_skcipher(struct skcipher_alg *alg) { return alg ? container_of(alg, struct spacc_alg, alg) : NULL; } @@ -733,13 +733,13 @@ static void spacc_aead_cra_exit(struct crypto_aead *tfm) * Set the DES key for a block cipher transform. This also performs weak key * checking if the transform has requested it. */ -static int spacc_des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int spacc_des_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int len) { - struct spacc_ablk_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(cipher); int err; - err = verify_ablkcipher_des_key(cipher, key); + err = verify_skcipher_des_key(cipher, key); if (err) return err; @@ -753,13 +753,13 @@ static int spacc_des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, * Set the 3DES key for a block cipher transform. This also performs weak key * checking if the transform has requested it. */ -static int spacc_des3_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int spacc_des3_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int len) { - struct spacc_ablk_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(cipher); int err; - err = verify_ablkcipher_des3_key(cipher, key); + err = verify_skcipher_des3_key(cipher, key); if (err) return err; @@ -773,15 +773,15 @@ static int spacc_des3_setkey(struct crypto_ablkcipher *cipher, const u8 *key, * Set the key for an AES block cipher. Some key lengths are not supported in * hardware so this must also check whether a fallback is needed. 
*/ -static int spacc_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int spacc_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int len) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(tfm); int err = 0; if (len > AES_MAX_KEY_SIZE) { - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } @@ -822,15 +822,15 @@ static int spacc_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return err; } -static int spacc_kasumi_f8_setkey(struct crypto_ablkcipher *cipher, +static int spacc_kasumi_f8_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int len) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(tfm); int err = 0; if (len > AES_MAX_KEY_SIZE) { - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); err = -EINVAL; goto out; } @@ -844,12 +844,12 @@ static int spacc_kasumi_f8_setkey(struct crypto_ablkcipher *cipher, static int spacc_ablk_need_fallback(struct spacc_req *req) { + struct skcipher_request *ablk_req = skcipher_request_cast(req->req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(ablk_req); + struct spacc_alg *spacc_alg = to_spacc_skcipher(crypto_skcipher_alg(tfm)); struct spacc_ablk_ctx *ctx; - struct crypto_tfm *tfm = req->req->tfm; - struct crypto_alg *alg = req->req->tfm->__crt_alg; - struct spacc_alg *spacc_alg = to_spacc_alg(alg); - ctx = crypto_tfm_ctx(tfm); + ctx = crypto_skcipher_ctx(tfm); return (spacc_alg->ctrl_default & SPACC_CRYPTO_ALG_MASK) == SPA_CTRL_CIPH_ALG_AES && @@ -859,39 +859,39 @@ static int spacc_ablk_need_fallback(struct spacc_req *req) static void spacc_ablk_complete(struct spacc_req *req) { - struct ablkcipher_request *ablk_req = ablkcipher_request_cast(req->req); + struct skcipher_request *ablk_req = skcipher_request_cast(req->req); if (ablk_req->src != ablk_req->dst) { spacc_free_ddt(req, req->src_ddt, req->src_addr, ablk_req->src, - ablk_req->nbytes, DMA_TO_DEVICE); + ablk_req->cryptlen, DMA_TO_DEVICE); spacc_free_ddt(req, req->dst_ddt, req->dst_addr, ablk_req->dst, - ablk_req->nbytes, DMA_FROM_DEVICE); + ablk_req->cryptlen, DMA_FROM_DEVICE); } else spacc_free_ddt(req, req->dst_ddt, req->dst_addr, ablk_req->dst, - ablk_req->nbytes, DMA_BIDIRECTIONAL); + ablk_req->cryptlen, DMA_BIDIRECTIONAL); req->req->complete(req->req, req->result); } static int spacc_ablk_submit(struct spacc_req *req) { - struct crypto_tfm *tfm = req->req->tfm; - struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(tfm); - struct ablkcipher_request *ablk_req = ablkcipher_request_cast(req->req); - struct crypto_alg *alg = req->req->tfm->__crt_alg; - struct spacc_alg *spacc_alg = to_spacc_alg(alg); + struct skcipher_request *ablk_req = skcipher_request_cast(req->req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(ablk_req); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct spacc_alg *spacc_alg = to_spacc_skcipher(alg); + struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(tfm); struct spacc_engine *engine = ctx->generic.engine; u32 ctrl; req->ctx_id = spacc_load_ctx(&ctx->generic, ctx->key, - ctx->key_len, ablk_req->info, alg->cra_ablkcipher.ivsize, + ctx->key_len, ablk_req->iv, alg->ivsize, NULL, 0); writel(req->src_addr, 
engine->regs + SPA_SRC_PTR_REG_OFFSET); writel(req->dst_addr, engine->regs + SPA_DST_PTR_REG_OFFSET); writel(0, engine->regs + SPA_OFFSET_REG_OFFSET); - writel(ablk_req->nbytes, engine->regs + SPA_PROC_LEN_REG_OFFSET); + writel(ablk_req->cryptlen, engine->regs + SPA_PROC_LEN_REG_OFFSET); writel(0, engine->regs + SPA_ICV_OFFSET_REG_OFFSET); writel(0, engine->regs + SPA_AUX_INFO_REG_OFFSET); writel(0, engine->regs + SPA_AAD_LEN_REG_OFFSET); @@ -907,11 +907,11 @@ static int spacc_ablk_submit(struct spacc_req *req) return -EINPROGRESS; } -static int spacc_ablk_do_fallback(struct ablkcipher_request *req, +static int spacc_ablk_do_fallback(struct skcipher_request *req, unsigned alg_type, bool is_encrypt) { struct crypto_tfm *old_tfm = - crypto_ablkcipher_tfm(crypto_ablkcipher_reqtfm(req)); + crypto_skcipher_tfm(crypto_skcipher_reqtfm(req)); struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(old_tfm); SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher); int err; @@ -924,7 +924,7 @@ static int spacc_ablk_do_fallback(struct ablkcipher_request *req, skcipher_request_set_sync_tfm(subreq, ctx->sw_cipher); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, - req->nbytes, req->info); + req->cryptlen, req->iv); err = is_encrypt ? crypto_skcipher_encrypt(subreq) : crypto_skcipher_decrypt(subreq); skcipher_request_zero(subreq); @@ -932,12 +932,13 @@ static int spacc_ablk_do_fallback(struct ablkcipher_request *req, return err; } -static int spacc_ablk_setup(struct ablkcipher_request *req, unsigned alg_type, +static int spacc_ablk_setup(struct skcipher_request *req, unsigned alg_type, bool is_encrypt) { - struct crypto_alg *alg = req->base.tfm->__crt_alg; - struct spacc_engine *engine = to_spacc_alg(alg)->engine; - struct spacc_req *dev_req = ablkcipher_request_ctx(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct spacc_engine *engine = to_spacc_skcipher(alg)->engine; + struct spacc_req *dev_req = skcipher_request_ctx(req); unsigned long flags; int err = -ENOMEM; @@ -956,17 +957,17 @@ static int spacc_ablk_setup(struct ablkcipher_request *req, unsigned alg_type, */ if (req->src != req->dst) { dev_req->src_ddt = spacc_sg_to_ddt(engine, req->src, - req->nbytes, DMA_TO_DEVICE, &dev_req->src_addr); + req->cryptlen, DMA_TO_DEVICE, &dev_req->src_addr); if (!dev_req->src_ddt) goto out; dev_req->dst_ddt = spacc_sg_to_ddt(engine, req->dst, - req->nbytes, DMA_FROM_DEVICE, &dev_req->dst_addr); + req->cryptlen, DMA_FROM_DEVICE, &dev_req->dst_addr); if (!dev_req->dst_ddt) goto out_free_src; } else { dev_req->dst_ddt = spacc_sg_to_ddt(engine, req->dst, - req->nbytes, DMA_BIDIRECTIONAL, &dev_req->dst_addr); + req->cryptlen, DMA_BIDIRECTIONAL, &dev_req->dst_addr); if (!dev_req->dst_ddt) goto out; @@ -999,65 +1000,65 @@ static int spacc_ablk_setup(struct ablkcipher_request *req, unsigned alg_type, out_free_ddts: spacc_free_ddt(dev_req, dev_req->dst_ddt, dev_req->dst_addr, req->dst, - req->nbytes, req->src == req->dst ? + req->cryptlen, req->src == req->dst ? 
DMA_BIDIRECTIONAL : DMA_FROM_DEVICE); out_free_src: if (req->src != req->dst) spacc_free_ddt(dev_req, dev_req->src_ddt, dev_req->src_addr, - req->src, req->nbytes, DMA_TO_DEVICE); + req->src, req->cryptlen, DMA_TO_DEVICE); out: return err; } -static int spacc_ablk_cra_init(struct crypto_tfm *tfm) +static int spacc_ablk_init_tfm(struct crypto_skcipher *tfm) { - struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(tfm); - struct crypto_alg *alg = tfm->__crt_alg; - struct spacc_alg *spacc_alg = to_spacc_alg(alg); + struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct spacc_alg *spacc_alg = to_spacc_skcipher(alg); struct spacc_engine *engine = spacc_alg->engine; ctx->generic.flags = spacc_alg->type; ctx->generic.engine = engine; - if (alg->cra_flags & CRYPTO_ALG_NEED_FALLBACK) { + if (alg->base.cra_flags & CRYPTO_ALG_NEED_FALLBACK) { ctx->sw_cipher = crypto_alloc_sync_skcipher( - alg->cra_name, 0, CRYPTO_ALG_NEED_FALLBACK); + alg->base.cra_name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ctx->sw_cipher)) { dev_warn(engine->dev, "failed to allocate fallback for %s\n", - alg->cra_name); + alg->base.cra_name); return PTR_ERR(ctx->sw_cipher); } } ctx->generic.key_offs = spacc_alg->key_offs; ctx->generic.iv_offs = spacc_alg->iv_offs; - tfm->crt_ablkcipher.reqsize = sizeof(struct spacc_req); + crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req)); return 0; } -static void spacc_ablk_cra_exit(struct crypto_tfm *tfm) +static void spacc_ablk_exit_tfm(struct crypto_skcipher *tfm) { - struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(tfm); + struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(tfm); crypto_free_sync_skcipher(ctx->sw_cipher); } -static int spacc_ablk_encrypt(struct ablkcipher_request *req) +static int spacc_ablk_encrypt(struct skcipher_request *req) { - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(req); - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct spacc_alg *alg = to_spacc_alg(tfm->__crt_alg); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req); + struct skcipher_alg *alg = crypto_skcipher_alg(cipher); + struct spacc_alg *spacc_alg = to_spacc_skcipher(alg); - return spacc_ablk_setup(req, alg->type, 1); + return spacc_ablk_setup(req, spacc_alg->type, 1); } -static int spacc_ablk_decrypt(struct ablkcipher_request *req) +static int spacc_ablk_decrypt(struct skcipher_request *req) { - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(req); - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct spacc_alg *alg = to_spacc_alg(tfm->__crt_alg); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req); + struct skcipher_alg *alg = crypto_skcipher_alg(cipher); + struct spacc_alg *spacc_alg = to_spacc_skcipher(alg); - return spacc_ablk_setup(req, alg->type, 0); + return spacc_ablk_setup(req, spacc_alg->type, 0); } static inline int spacc_fifo_stat_empty(struct spacc_engine *engine) @@ -1233,27 +1234,24 @@ static struct spacc_alg ipsec_engine_algs[] = { .key_offs = 0, .iv_offs = AES_MAX_KEY_SIZE, .alg = { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-picoxcell", - .cra_priority = SPACC_CRYPTO_ALG_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_NEED_FALLBACK, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_ablkcipher = { - .setkey = spacc_aes_setkey, - .encrypt = spacc_ablk_encrypt, - 
.decrypt = spacc_ablk_decrypt, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - }, - .cra_init = spacc_ablk_cra_init, - .cra_exit = spacc_ablk_cra_exit, + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-picoxcell", + .base.cra_priority = SPACC_CRYPTO_ALG_PRIORITY, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), + .base.cra_module = THIS_MODULE, + + .setkey = spacc_aes_setkey, + .encrypt = spacc_ablk_encrypt, + .decrypt = spacc_ablk_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .init = spacc_ablk_init_tfm, + .exit = spacc_ablk_exit_tfm, }, }, { @@ -1261,25 +1259,23 @@ static struct spacc_alg ipsec_engine_algs[] = { .iv_offs = AES_MAX_KEY_SIZE, .ctrl_default = SPA_CTRL_CIPH_ALG_AES | SPA_CTRL_CIPH_MODE_ECB, .alg = { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-picoxcell", - .cra_priority = SPACC_CRYPTO_ALG_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_ablkcipher = { - .setkey = spacc_aes_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - }, - .cra_init = spacc_ablk_cra_init, - .cra_exit = spacc_ablk_cra_exit, + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-picoxcell", + .base.cra_priority = SPACC_CRYPTO_ALG_PRIORITY, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), + .base.cra_module = THIS_MODULE, + + .setkey = spacc_aes_setkey, + .encrypt = spacc_ablk_encrypt, + .decrypt = spacc_ablk_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .init = spacc_ablk_init_tfm, + .exit = spacc_ablk_exit_tfm, }, }, { @@ -1287,26 +1283,23 @@ static struct spacc_alg ipsec_engine_algs[] = { .iv_offs = 0, .ctrl_default = SPA_CTRL_CIPH_ALG_DES | SPA_CTRL_CIPH_MODE_CBC, .alg = { - .cra_name = "cbc(des)", - .cra_driver_name = "cbc-des-picoxcell", - .cra_priority = SPACC_CRYPTO_ALG_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_ablkcipher = { - .setkey = spacc_des_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - }, - .cra_init = spacc_ablk_cra_init, - .cra_exit = spacc_ablk_cra_exit, + .base.cra_name = "cbc(des)", + .base.cra_driver_name = "cbc-des-picoxcell", + .base.cra_priority = SPACC_CRYPTO_ALG_PRIORITY, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), + .base.cra_module = THIS_MODULE, + + .setkey = spacc_des_setkey, + .encrypt = spacc_ablk_encrypt, + .decrypt = spacc_ablk_decrypt, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, 
+ .init = spacc_ablk_init_tfm, + .exit = spacc_ablk_exit_tfm, }, }, { @@ -1314,25 +1307,22 @@ static struct spacc_alg ipsec_engine_algs[] = { .iv_offs = 0, .ctrl_default = SPA_CTRL_CIPH_ALG_DES | SPA_CTRL_CIPH_MODE_ECB, .alg = { - .cra_name = "ecb(des)", - .cra_driver_name = "ecb-des-picoxcell", - .cra_priority = SPACC_CRYPTO_ALG_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_ablkcipher = { - .setkey = spacc_des_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - }, - .cra_init = spacc_ablk_cra_init, - .cra_exit = spacc_ablk_cra_exit, + .base.cra_name = "ecb(des)", + .base.cra_driver_name = "ecb-des-picoxcell", + .base.cra_priority = SPACC_CRYPTO_ALG_PRIORITY, + .base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), + .base.cra_module = THIS_MODULE, + + .setkey = spacc_des_setkey, + .encrypt = spacc_ablk_encrypt, + .decrypt = spacc_ablk_decrypt, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .init = spacc_ablk_init_tfm, + .exit = spacc_ablk_exit_tfm, }, }, { @@ -1340,26 +1330,23 @@ static struct spacc_alg ipsec_engine_algs[] = { .iv_offs = 0, .ctrl_default = SPA_CTRL_CIPH_ALG_DES | SPA_CTRL_CIPH_MODE_CBC, .alg = { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "cbc-des3-ede-picoxcell", - .cra_priority = SPACC_CRYPTO_ALG_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_ablkcipher = { - .setkey = spacc_des3_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES3_EDE_BLOCK_SIZE, - }, - .cra_init = spacc_ablk_cra_init, - .cra_exit = spacc_ablk_cra_exit, + .base.cra_name = "cbc(des3_ede)", + .base.cra_driver_name = "cbc-des3-ede-picoxcell", + .base.cra_priority = SPACC_CRYPTO_ALG_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_KERN_DRIVER_ONLY, + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), + .base.cra_module = THIS_MODULE, + + .setkey = spacc_des3_setkey, + .encrypt = spacc_ablk_encrypt, + .decrypt = spacc_ablk_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + .init = spacc_ablk_init_tfm, + .exit = spacc_ablk_exit_tfm, }, }, { @@ -1367,25 +1354,22 @@ static struct spacc_alg ipsec_engine_algs[] = { .iv_offs = 0, .ctrl_default = SPA_CTRL_CIPH_ALG_DES | SPA_CTRL_CIPH_MODE_ECB, .alg = { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "ecb-des3-ede-picoxcell", - .cra_priority = SPACC_CRYPTO_ALG_PRIORITY, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_ablkcipher = { - .setkey = spacc_des3_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = 
DES3_EDE_KEY_SIZE, - }, - .cra_init = spacc_ablk_cra_init, - .cra_exit = spacc_ablk_cra_exit, + .base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "ecb-des3-ede-picoxcell", + .base.cra_priority = SPACC_CRYPTO_ALG_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_KERN_DRIVER_ONLY, + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), + .base.cra_module = THIS_MODULE, + + .setkey = spacc_des3_setkey, + .encrypt = spacc_ablk_encrypt, + .decrypt = spacc_ablk_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .init = spacc_ablk_init_tfm, + .exit = spacc_ablk_exit_tfm, }, }, }; @@ -1581,25 +1565,23 @@ static struct spacc_alg l2_engine_algs[] = { .ctrl_default = SPA_CTRL_CIPH_ALG_KASUMI | SPA_CTRL_CIPH_MODE_F8, .alg = { - .cra_name = "f8(kasumi)", - .cra_driver_name = "f8-kasumi-picoxcell", - .cra_priority = SPACC_CRYPTO_ALG_PRIORITY, - .cra_flags = CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = 8, - .cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_ablkcipher = { - .setkey = spacc_kasumi_f8_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = 16, - .max_keysize = 16, - .ivsize = 8, - }, - .cra_init = spacc_ablk_cra_init, - .cra_exit = spacc_ablk_cra_exit, + .base.cra_name = "f8(kasumi)", + .base.cra_driver_name = "f8-kasumi-picoxcell", + .base.cra_priority = SPACC_CRYPTO_ALG_PRIORITY, + .base.cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_KERN_DRIVER_ONLY, + .base.cra_blocksize = 8, + .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), + .base.cra_module = THIS_MODULE, + + .setkey = spacc_kasumi_f8_setkey, + .encrypt = spacc_ablk_encrypt, + .decrypt = spacc_ablk_decrypt, + .min_keysize = 16, + .max_keysize = 16, + .ivsize = 8, + .init = spacc_ablk_init_tfm, + .exit = spacc_ablk_exit_tfm, }, }, }; @@ -1721,7 +1703,7 @@ static int spacc_probe(struct platform_device *pdev) INIT_LIST_HEAD(&engine->registered_algs); for (i = 0; i < engine->num_algs; ++i) { engine->algs[i].engine = engine; - err = crypto_register_alg(&engine->algs[i].alg); + err = crypto_register_skcipher(&engine->algs[i].alg); if (!err) { list_add_tail(&engine->algs[i].entry, &engine->registered_algs); @@ -1729,10 +1711,10 @@ static int spacc_probe(struct platform_device *pdev) } if (err) dev_err(engine->dev, "failed to register alg \"%s\"\n", - engine->algs[i].alg.cra_name); + engine->algs[i].alg.base.cra_name); else dev_dbg(engine->dev, "registered alg \"%s\"\n", - engine->algs[i].alg.cra_name); + engine->algs[i].alg.base.cra_name); } INIT_LIST_HEAD(&engine->registered_aeads); @@ -1781,7 +1763,7 @@ static int spacc_remove(struct platform_device *pdev) list_for_each_entry_safe(alg, next, &engine->registered_algs, entry) { list_del(&alg->entry); - crypto_unregister_alg(&alg->alg); + crypto_unregister_skcipher(&alg->alg); } clk_disable_unprepare(engine->clk);
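
The picoxcell diff above also illustrates the software-fallback idiom that survives the conversion: when spacc_ablk_need_fallback() fires, the request is re-issued on a sync skcipher allocated at init time (see spacc_ablk_do_fallback()). Below is a condensed sketch of that pattern, assuming a hypothetical my_ctx that owns the fallback transform; none of the my_* names come from the driver.

#include <crypto/internal/skcipher.h>
#include <linux/err.h>

struct my_ctx {
	struct crypto_sync_skcipher *sw_cipher;	/* software fallback */
};

static int my_init_tfm(struct crypto_skcipher *tfm)
{
	struct my_ctx *ctx = crypto_skcipher_ctx(tfm);

	/* ask for a software implementation of the same algorithm */
	ctx->sw_cipher = crypto_alloc_sync_skcipher(
			crypto_tfm_alg_name(crypto_skcipher_tfm(tfm)),
			0, CRYPTO_ALG_NEED_FALLBACK);
	return PTR_ERR_OR_ZERO(ctx->sw_cipher);
}

static int my_do_fallback(struct skcipher_request *req, bool is_encrypt)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct my_ctx *ctx = crypto_skcipher_ctx(tfm);
	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher);
	int err;

	skcipher_request_set_sync_tfm(subreq, ctx->sw_cipher);
	skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL);
	skcipher_request_set_crypt(subreq, req->src, req->dst,
				   req->cryptlen, req->iv);
	err = is_encrypt ? crypto_skcipher_encrypt(subreq) :
			   crypto_skcipher_decrypt(subreq);
	skcipher_request_zero(subreq);	/* wipe the on-stack request */
	return err;
}
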
From patchwork Thu Oct 24 13:23:38 2019
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 20/27] crypto: qce - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:38 +0200
Message-Id: <20191024132345.5236-21-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: Herbert Xu, Eric Biggers, Ard Biesheuvel, Stanimir Varbanov, "David S. Miller", linux-arm-kernel@lists.infradead.org

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface"), dated 20 August 2015, introduced the new skcipher API, which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
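
As with picoxcell, the mechanical core of the qce conversion below is a field-for-field mapping: ablkcipher_request's ->nbytes and ->info become skcipher_request's ->cryptlen and ->iv, and the per-request context size is declared via crypto_skcipher_set_reqsize() rather than by assigning tfm->crt_ablkcipher.reqsize. A hypothetical sketch of the converted shape; my_submit_to_hw() and the other my_* names are assumptions, not qce code.

#include <crypto/internal/skcipher.h>
#include <linux/scatterlist.h>

struct my_reqctx {
	unsigned long flags;		/* per-request driver state */
};

/* hypothetical hardware-submission helper, assumed to exist elsewhere */
int my_submit_to_hw(struct scatterlist *src, struct scatterlist *dst,
		    unsigned int cryptlen, u8 *iv,
		    struct my_reqctx *rctx, bool is_encrypt);

static int my_init_tfm(struct crypto_skcipher *tfm)
{
	/* replaces the old tfm->crt_ablkcipher.reqsize assignment */
	crypto_skcipher_set_reqsize(tfm, sizeof(struct my_reqctx));
	return 0;
}

static int my_crypt(struct skcipher_request *req, bool is_encrypt)
{
	struct my_reqctx *rctx = skcipher_request_ctx(req);

	/* ->cryptlen and ->iv take over from ablkcipher's ->nbytes/->info */
	return my_submit_to_hw(req->src, req->dst, req->cryptlen, req->iv,
			       rctx, is_encrypt);
}
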
Cc: Stanimir Varbanov Signed-off-by: Ard Biesheuvel Reviewed-by: Stanimir Varbanov --- drivers/crypto/qce/Makefile | 2 +- drivers/crypto/qce/cipher.h | 8 +- drivers/crypto/qce/common.c | 12 +- drivers/crypto/qce/common.h | 3 +- drivers/crypto/qce/core.c | 2 +- drivers/crypto/qce/{ablkcipher.c => skcipher.c} | 172 ++++++++++---------- 6 files changed, 100 insertions(+), 99 deletions(-) diff --git a/drivers/crypto/qce/Makefile b/drivers/crypto/qce/Makefile index 19a7f899acff..8caa04e1ec43 100644 --- a/drivers/crypto/qce/Makefile +++ b/drivers/crypto/qce/Makefile @@ -4,4 +4,4 @@ qcrypto-objs := core.o \ common.o \ dma.o \ sha.o \ - ablkcipher.o + skcipher.o diff --git a/drivers/crypto/qce/cipher.h b/drivers/crypto/qce/cipher.h index 5cab8f0706a8..7770660bc853 100644 --- a/drivers/crypto/qce/cipher.h +++ b/drivers/crypto/qce/cipher.h @@ -45,12 +45,12 @@ struct qce_cipher_reqctx { unsigned int cryptlen; }; -static inline struct qce_alg_template *to_cipher_tmpl(struct crypto_tfm *tfm) +static inline struct qce_alg_template *to_cipher_tmpl(struct crypto_skcipher *tfm) { - struct crypto_alg *alg = tfm->__crt_alg; - return container_of(alg, struct qce_alg_template, alg.crypto); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + return container_of(alg, struct qce_alg_template, alg.skcipher); } -extern const struct qce_algo_ops ablkcipher_ops; +extern const struct qce_algo_ops skcipher_ops; #endif /* _CIPHER_H_ */ diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c index 3fb510164326..da1188abc9ba 100644 --- a/drivers/crypto/qce/common.c +++ b/drivers/crypto/qce/common.c @@ -304,13 +304,13 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req, return 0; } -static int qce_setup_regs_ablkcipher(struct crypto_async_request *async_req, +static int qce_setup_regs_skcipher(struct crypto_async_request *async_req, u32 totallen, u32 offset) { - struct ablkcipher_request *req = ablkcipher_request_cast(async_req); - struct qce_cipher_reqctx *rctx = ablkcipher_request_ctx(req); + struct skcipher_request *req = skcipher_request_cast(async_req); + struct qce_cipher_reqctx *rctx = skcipher_request_ctx(req); struct qce_cipher_ctx *ctx = crypto_tfm_ctx(async_req->tfm); - struct qce_alg_template *tmpl = to_cipher_tmpl(async_req->tfm); + struct qce_alg_template *tmpl = to_cipher_tmpl(crypto_skcipher_reqtfm(req)); struct qce_device *qce = tmpl->qce; __be32 enckey[QCE_MAX_CIPHER_KEY_SIZE / sizeof(__be32)] = {0}; __be32 enciv[QCE_MAX_IV_SIZE / sizeof(__be32)] = {0}; @@ -389,8 +389,8 @@ int qce_start(struct crypto_async_request *async_req, u32 type, u32 totallen, u32 offset) { switch (type) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: - return qce_setup_regs_ablkcipher(async_req, totallen, offset); + case CRYPTO_ALG_TYPE_SKCIPHER: + return qce_setup_regs_skcipher(async_req, totallen, offset); case CRYPTO_ALG_TYPE_AHASH: return qce_setup_regs_ahash(async_req, totallen, offset); default: diff --git a/drivers/crypto/qce/common.h b/drivers/crypto/qce/common.h index 47fb523357ac..282d4317470d 100644 --- a/drivers/crypto/qce/common.h +++ b/drivers/crypto/qce/common.h @@ -10,6 +10,7 @@ #include #include #include +#include /* key size in bytes */ #define QCE_SHA_HMAC_KEY_SIZE 64 @@ -79,7 +80,7 @@ struct qce_alg_template { unsigned long alg_flags; const u32 *std_iv; union { - struct crypto_alg crypto; + struct skcipher_alg skcipher; struct ahash_alg ahash; } alg; struct qce_device *qce; diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c index 08d4ce3bfddf..0a44a6eeacf5 100644 
--- a/drivers/crypto/qce/core.c +++ b/drivers/crypto/qce/core.c @@ -22,7 +22,7 @@ #define QCE_QUEUE_LENGTH 1 static const struct qce_algo_ops *qce_ops[] = { - &ablkcipher_ops, + &skcipher_ops, &ahash_ops, }; diff --git a/drivers/crypto/qce/ablkcipher.c b/drivers/crypto/qce/skcipher.c similarity index 61% rename from drivers/crypto/qce/ablkcipher.c rename to drivers/crypto/qce/skcipher.c index f0b59a8bbed0..fee07323f8f9 100644 --- a/drivers/crypto/qce/ablkcipher.c +++ b/drivers/crypto/qce/skcipher.c @@ -12,14 +12,14 @@ #include "cipher.h" -static LIST_HEAD(ablkcipher_algs); +static LIST_HEAD(skcipher_algs); -static void qce_ablkcipher_done(void *data) +static void qce_skcipher_done(void *data) { struct crypto_async_request *async_req = data; - struct ablkcipher_request *req = ablkcipher_request_cast(async_req); - struct qce_cipher_reqctx *rctx = ablkcipher_request_ctx(req); - struct qce_alg_template *tmpl = to_cipher_tmpl(async_req->tfm); + struct skcipher_request *req = skcipher_request_cast(async_req); + struct qce_cipher_reqctx *rctx = skcipher_request_ctx(req); + struct qce_alg_template *tmpl = to_cipher_tmpl(crypto_skcipher_reqtfm(req)); struct qce_device *qce = tmpl->qce; enum dma_data_direction dir_src, dir_dst; u32 status; @@ -32,7 +32,7 @@ static void qce_ablkcipher_done(void *data) error = qce_dma_terminate_all(&qce->dma); if (error) - dev_dbg(qce->dev, "ablkcipher dma termination error (%d)\n", + dev_dbg(qce->dev, "skcipher dma termination error (%d)\n", error); if (diff_dst) @@ -43,18 +43,18 @@ static void qce_ablkcipher_done(void *data) error = qce_check_status(qce, &status); if (error < 0) - dev_dbg(qce->dev, "ablkcipher operation error (%x)\n", status); + dev_dbg(qce->dev, "skcipher operation error (%x)\n", status); qce->async_req_done(tmpl->qce, error); } static int -qce_ablkcipher_async_req_handle(struct crypto_async_request *async_req) +qce_skcipher_async_req_handle(struct crypto_async_request *async_req) { - struct ablkcipher_request *req = ablkcipher_request_cast(async_req); - struct qce_cipher_reqctx *rctx = ablkcipher_request_ctx(req); - struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); - struct qce_alg_template *tmpl = to_cipher_tmpl(async_req->tfm); + struct skcipher_request *req = skcipher_request_cast(async_req); + struct qce_cipher_reqctx *rctx = skcipher_request_ctx(req); + struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); + struct qce_alg_template *tmpl = to_cipher_tmpl(crypto_skcipher_reqtfm(req)); struct qce_device *qce = tmpl->qce; enum dma_data_direction dir_src, dir_dst; struct scatterlist *sg; @@ -62,17 +62,17 @@ qce_ablkcipher_async_req_handle(struct crypto_async_request *async_req) gfp_t gfp; int ret; - rctx->iv = req->info; - rctx->ivsize = crypto_ablkcipher_ivsize(ablkcipher); - rctx->cryptlen = req->nbytes; + rctx->iv = req->iv; + rctx->ivsize = crypto_skcipher_ivsize(skcipher); + rctx->cryptlen = req->cryptlen; diff_dst = (req->src != req->dst) ? true : false; dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL; dir_dst = diff_dst ? 
DMA_FROM_DEVICE : DMA_BIDIRECTIONAL; - rctx->src_nents = sg_nents_for_len(req->src, req->nbytes); + rctx->src_nents = sg_nents_for_len(req->src, req->cryptlen); if (diff_dst) - rctx->dst_nents = sg_nents_for_len(req->dst, req->nbytes); + rctx->dst_nents = sg_nents_for_len(req->dst, req->cryptlen); else rctx->dst_nents = rctx->src_nents; if (rctx->src_nents < 0) { @@ -125,13 +125,13 @@ qce_ablkcipher_async_req_handle(struct crypto_async_request *async_req) ret = qce_dma_prep_sgs(&qce->dma, rctx->src_sg, rctx->src_nents, rctx->dst_sg, rctx->dst_nents, - qce_ablkcipher_done, async_req); + qce_skcipher_done, async_req); if (ret) goto error_unmap_src; qce_dma_issue_pending(&qce->dma); - ret = qce_start(async_req, tmpl->crypto_alg_type, req->nbytes, 0); + ret = qce_start(async_req, tmpl->crypto_alg_type, req->cryptlen, 0); if (ret) goto error_terminate; @@ -149,10 +149,10 @@ qce_ablkcipher_async_req_handle(struct crypto_async_request *async_req) return ret; } -static int qce_ablkcipher_setkey(struct crypto_ablkcipher *ablk, const u8 *key, +static int qce_skcipher_setkey(struct crypto_skcipher *ablk, const u8 *key, unsigned int keylen) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk); + struct crypto_tfm *tfm = crypto_skcipher_tfm(ablk); struct qce_cipher_ctx *ctx = crypto_tfm_ctx(tfm); int ret; @@ -177,13 +177,13 @@ static int qce_ablkcipher_setkey(struct crypto_ablkcipher *ablk, const u8 *key, return ret; } -static int qce_des_setkey(struct crypto_ablkcipher *ablk, const u8 *key, +static int qce_des_setkey(struct crypto_skcipher *ablk, const u8 *key, unsigned int keylen) { - struct qce_cipher_ctx *ctx = crypto_ablkcipher_ctx(ablk); + struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(ablk); int err; - err = verify_ablkcipher_des_key(ablk, key); + err = verify_skcipher_des_key(ablk, key); if (err) return err; @@ -192,13 +192,13 @@ static int qce_des_setkey(struct crypto_ablkcipher *ablk, const u8 *key, return 0; } -static int qce_des3_setkey(struct crypto_ablkcipher *ablk, const u8 *key, +static int qce_des3_setkey(struct crypto_skcipher *ablk, const u8 *key, unsigned int keylen) { - struct qce_cipher_ctx *ctx = crypto_ablkcipher_ctx(ablk); + struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(ablk); int err; - err = verify_ablkcipher_des3_key(ablk, key); + err = verify_skcipher_des3_key(ablk, key); if (err) return err; @@ -207,12 +207,11 @@ static int qce_des3_setkey(struct crypto_ablkcipher *ablk, const u8 *key, return 0; } -static int qce_ablkcipher_crypt(struct ablkcipher_request *req, int encrypt) +static int qce_skcipher_crypt(struct skcipher_request *req, int encrypt) { - struct crypto_tfm *tfm = - crypto_ablkcipher_tfm(crypto_ablkcipher_reqtfm(req)); - struct qce_cipher_ctx *ctx = crypto_tfm_ctx(tfm); - struct qce_cipher_reqctx *rctx = ablkcipher_request_ctx(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct qce_cipher_reqctx *rctx = skcipher_request_ctx(req); struct qce_alg_template *tmpl = to_cipher_tmpl(tfm); int ret; @@ -227,7 +226,7 @@ static int qce_ablkcipher_crypt(struct ablkcipher_request *req, int encrypt) skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, - req->nbytes, req->info); + req->cryptlen, req->iv); ret = encrypt ? 
crypto_skcipher_encrypt(subreq) : crypto_skcipher_decrypt(subreq); skcipher_request_zero(subreq); @@ -237,36 +236,36 @@ static int qce_ablkcipher_crypt(struct ablkcipher_request *req, int encrypt) return tmpl->qce->async_req_enqueue(tmpl->qce, &req->base); } -static int qce_ablkcipher_encrypt(struct ablkcipher_request *req) +static int qce_skcipher_encrypt(struct skcipher_request *req) { - return qce_ablkcipher_crypt(req, 1); + return qce_skcipher_crypt(req, 1); } -static int qce_ablkcipher_decrypt(struct ablkcipher_request *req) +static int qce_skcipher_decrypt(struct skcipher_request *req) { - return qce_ablkcipher_crypt(req, 0); + return qce_skcipher_crypt(req, 0); } -static int qce_ablkcipher_init(struct crypto_tfm *tfm) +static int qce_skcipher_init(struct crypto_skcipher *tfm) { - struct qce_cipher_ctx *ctx = crypto_tfm_ctx(tfm); + struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); memset(ctx, 0, sizeof(*ctx)); - tfm->crt_ablkcipher.reqsize = sizeof(struct qce_cipher_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct qce_cipher_reqctx)); - ctx->fallback = crypto_alloc_sync_skcipher(crypto_tfm_alg_name(tfm), + ctx->fallback = crypto_alloc_sync_skcipher(crypto_tfm_alg_name(&tfm->base), 0, CRYPTO_ALG_NEED_FALLBACK); return PTR_ERR_OR_ZERO(ctx->fallback); } -static void qce_ablkcipher_exit(struct crypto_tfm *tfm) +static void qce_skcipher_exit(struct crypto_skcipher *tfm) { - struct qce_cipher_ctx *ctx = crypto_tfm_ctx(tfm); + struct qce_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); crypto_free_sync_skcipher(ctx->fallback); } -struct qce_ablkcipher_def { +struct qce_skcipher_def { unsigned long flags; const char *name; const char *drv_name; @@ -276,7 +275,7 @@ struct qce_ablkcipher_def { unsigned int max_keysize; }; -static const struct qce_ablkcipher_def ablkcipher_def[] = { +static const struct qce_skcipher_def skcipher_def[] = { { .flags = QCE_ALG_AES | QCE_MODE_ECB, .name = "ecb(aes)", @@ -351,90 +350,91 @@ static const struct qce_ablkcipher_def ablkcipher_def[] = { }, }; -static int qce_ablkcipher_register_one(const struct qce_ablkcipher_def *def, +static int qce_skcipher_register_one(const struct qce_skcipher_def *def, struct qce_device *qce) { struct qce_alg_template *tmpl; - struct crypto_alg *alg; + struct skcipher_alg *alg; int ret; tmpl = kzalloc(sizeof(*tmpl), GFP_KERNEL); if (!tmpl) return -ENOMEM; - alg = &tmpl->alg.crypto; + alg = &tmpl->alg.skcipher; - snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name); - snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", + snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name); + snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", def->drv_name); - alg->cra_blocksize = def->blocksize; - alg->cra_ablkcipher.ivsize = def->ivsize; - alg->cra_ablkcipher.min_keysize = def->min_keysize; - alg->cra_ablkcipher.max_keysize = def->max_keysize; - alg->cra_ablkcipher.setkey = IS_3DES(def->flags) ? qce_des3_setkey : - IS_DES(def->flags) ? 
qce_des_setkey : - qce_ablkcipher_setkey; - alg->cra_ablkcipher.encrypt = qce_ablkcipher_encrypt; - alg->cra_ablkcipher.decrypt = qce_ablkcipher_decrypt; - - alg->cra_priority = 300; - alg->cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC | - CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_KERN_DRIVER_ONLY; - alg->cra_ctxsize = sizeof(struct qce_cipher_ctx); - alg->cra_alignmask = 0; - alg->cra_type = &crypto_ablkcipher_type; - alg->cra_module = THIS_MODULE; - alg->cra_init = qce_ablkcipher_init; - alg->cra_exit = qce_ablkcipher_exit; + alg->base.cra_blocksize = def->blocksize; + alg->ivsize = def->ivsize; + alg->min_keysize = def->min_keysize; + alg->max_keysize = def->max_keysize; + alg->setkey = IS_3DES(def->flags) ? qce_des3_setkey : + IS_DES(def->flags) ? qce_des_setkey : + qce_skcipher_setkey; + alg->encrypt = qce_skcipher_encrypt; + alg->decrypt = qce_skcipher_decrypt; + + alg->base.cra_priority = 300; + alg->base.cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK | + CRYPTO_ALG_KERN_DRIVER_ONLY; + alg->base.cra_ctxsize = sizeof(struct qce_cipher_ctx); + alg->base.cra_alignmask = 0; + alg->base.cra_module = THIS_MODULE; + + alg->init = qce_skcipher_init; + alg->exit = qce_skcipher_exit; INIT_LIST_HEAD(&tmpl->entry); - tmpl->crypto_alg_type = CRYPTO_ALG_TYPE_ABLKCIPHER; + tmpl->crypto_alg_type = CRYPTO_ALG_TYPE_SKCIPHER; tmpl->alg_flags = def->flags; tmpl->qce = qce; - ret = crypto_register_alg(alg); + ret = crypto_register_skcipher(alg); if (ret) { kfree(tmpl); - dev_err(qce->dev, "%s registration failed\n", alg->cra_name); + dev_err(qce->dev, "%s registration failed\n", alg->base.cra_name); return ret; } - list_add_tail(&tmpl->entry, &ablkcipher_algs); - dev_dbg(qce->dev, "%s is registered\n", alg->cra_name); + list_add_tail(&tmpl->entry, &skcipher_algs); + dev_dbg(qce->dev, "%s is registered\n", alg->base.cra_name); return 0; } -static void qce_ablkcipher_unregister(struct qce_device *qce) +static void qce_skcipher_unregister(struct qce_device *qce) { struct qce_alg_template *tmpl, *n; - list_for_each_entry_safe(tmpl, n, &ablkcipher_algs, entry) { - crypto_unregister_alg(&tmpl->alg.crypto); + list_for_each_entry_safe(tmpl, n, &skcipher_algs, entry) { + crypto_unregister_skcipher(&tmpl->alg.skcipher); list_del(&tmpl->entry); kfree(tmpl); } } -static int qce_ablkcipher_register(struct qce_device *qce) +static int qce_skcipher_register(struct qce_device *qce) { int ret, i; - for (i = 0; i < ARRAY_SIZE(ablkcipher_def); i++) { - ret = qce_ablkcipher_register_one(&ablkcipher_def[i], qce); + for (i = 0; i < ARRAY_SIZE(skcipher_def); i++) { + ret = qce_skcipher_register_one(&skcipher_def[i], qce); if (ret) goto err; } return 0; err: - qce_ablkcipher_unregister(qce); + qce_skcipher_unregister(qce); return ret; } -const struct qce_algo_ops ablkcipher_ops = { - .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .register_algs = qce_ablkcipher_register, - .unregister_algs = qce_ablkcipher_unregister, - .async_req_handle = qce_ablkcipher_async_req_handle, +const struct qce_algo_ops skcipher_ops = { + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .register_algs = qce_skcipher_register, + .unregister_algs = qce_skcipher_unregister, + .async_req_handle = qce_skcipher_async_req_handle, };
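
Unlike picoxcell, qce instantiates its skcipher_algs dynamically from the skcipher_def table, as qce_skcipher_register_one() above shows: one heap-allocated template per table entry, registered and linked into a list for later teardown. A trimmed-down, hypothetical rendering of that approach follows; the my_* names are placeholders and the hook prototypes are assumed to be implemented elsewhere.

#include <crypto/internal/skcipher.h>
#include <linux/module.h>
#include <linux/slab.h>

struct my_def {
	const char *name;
	const char *drv_name;
	unsigned int blocksize;
	unsigned int ivsize;
	unsigned int min_keysize;
	unsigned int max_keysize;
};

struct my_tmpl {
	struct skcipher_alg alg;
	struct list_head entry;
};

/* hypothetical per-driver hooks, assumed implemented elsewhere */
int my_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int len);
int my_encrypt(struct skcipher_request *req);
int my_decrypt(struct skcipher_request *req);

static int my_register_one(const struct my_def *def, struct list_head *algs)
{
	struct my_tmpl *tmpl;
	struct skcipher_alg *alg;
	int ret;

	tmpl = kzalloc(sizeof(*tmpl), GFP_KERNEL);
	if (!tmpl)
		return -ENOMEM;

	alg = &tmpl->alg;
	snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name);
	snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
		 def->drv_name);
	alg->base.cra_priority	= 300;
	alg->base.cra_flags	= CRYPTO_ALG_ASYNC;
	alg->base.cra_blocksize	= def->blocksize;
	alg->base.cra_module	= THIS_MODULE;
	alg->ivsize		= def->ivsize;
	alg->min_keysize	= def->min_keysize;
	alg->max_keysize	= def->max_keysize;
	alg->setkey		= my_setkey;
	alg->encrypt		= my_encrypt;
	alg->decrypt		= my_decrypt;

	ret = crypto_register_skcipher(alg);
	if (ret) {
		kfree(tmpl);		/* nothing else to unwind on failure */
		return ret;
	}

	/* keep the template so unregistration can find and free it */
	list_add_tail(&tmpl->entry, algs);
	return 0;
}

Teardown then mirrors this with list_for_each_entry_safe(), unregistering each alg and freeing its template, exactly as qce_skcipher_unregister() does above.
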
From patchwork Thu Oct 24 13:23:39 2019
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 21/27] crypto: stm32 - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:39 +0200
Message-Id: <20191024132345.5236-22-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: Herbert Xu, Eric Biggers, Ard Biesheuvel, Maxime Coquelin, "David S. Miller", linux-arm-kernel@lists.infradead.org, Alexandre Torgue

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface"), dated 20 August 2015, introduced the new skcipher API, which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
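
The stm32 driver differs from the previous two in that it dispatches through the crypto engine framework, so the conversion also swaps crypto_transfer_ablkcipher_request_to_engine() and crypto_finalize_ablkcipher_request() for their skcipher counterparts, as the diff below shows. A hypothetical reduction of that flow, with my_find_dev() and the my_* types standing in for the driver's real plumbing:

#include <crypto/engine.h>
#include <crypto/internal/skcipher.h>

struct my_dev {
	struct crypto_engine *engine;
	struct skcipher_request *req;	/* request currently on the hardware */
};

struct my_reqctx {
	unsigned long mode;
};

/* hypothetical device lookup, assumed implemented elsewhere */
struct my_dev *my_find_dev(void);

static int my_crypt(struct skcipher_request *req, unsigned long mode)
{
	struct my_reqctx *rctx = skcipher_request_ctx(req);
	struct my_dev *dev = my_find_dev();

	if (!dev)
		return -ENODEV;

	rctx->mode = mode;
	/* queue the request; the engine invokes do_one_request() later */
	return crypto_transfer_skcipher_request_to_engine(dev->engine, req);
}

static void my_finish_req(struct my_dev *dev, int err)
{
	/* signal completion back through the engine */
	crypto_finalize_skcipher_request(dev->engine, dev->req, err);
}
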
Cc: Maxime Coquelin Cc: Alexandre Torgue Signed-off-by: Ard Biesheuvel --- drivers/crypto/stm32/stm32-cryp.c | 338 +++++++++----------- 1 file changed, 159 insertions(+), 179 deletions(-) diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c index ba5ea6434f9c..d347a1d6e351 100644 --- a/drivers/crypto/stm32/stm32-cryp.c +++ b/drivers/crypto/stm32/stm32-cryp.c @@ -19,6 +19,7 @@ #include #include #include +#include #define DRIVER_NAME "stm32-cryp" @@ -137,7 +138,7 @@ struct stm32_cryp { struct crypto_engine *engine; - struct ablkcipher_request *req; + struct skcipher_request *req; struct aead_request *areq; size_t authsize; @@ -395,8 +396,8 @@ static void stm32_cryp_hw_write_iv(struct stm32_cryp *cryp, u32 *iv) static void stm32_cryp_get_iv(struct stm32_cryp *cryp) { - struct ablkcipher_request *req = cryp->req; - u32 *tmp = req->info; + struct skcipher_request *req = cryp->req; + u32 *tmp = (void *)req->iv; if (!tmp) return; @@ -616,7 +617,7 @@ static int stm32_cryp_hw_init(struct stm32_cryp *cryp) case CR_TDES_CBC: case CR_AES_CBC: case CR_AES_CTR: - stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->req->info); + stm32_cryp_hw_write_iv(cryp, (u32 *)cryp->req->iv); break; default: @@ -667,7 +668,7 @@ static void stm32_cryp_finish_req(struct stm32_cryp *cryp, int err) if (is_gcm(cryp) || is_ccm(cryp)) crypto_finalize_aead_request(cryp->engine, cryp->areq, err); else - crypto_finalize_ablkcipher_request(cryp->engine, cryp->req, + crypto_finalize_skcipher_request(cryp->engine, cryp->req, err); memset(cryp->ctx->key, 0, cryp->ctx->keylen); @@ -685,11 +686,11 @@ static int stm32_cryp_cipher_one_req(struct crypto_engine *engine, void *areq); static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine, void *areq); -static int stm32_cryp_cra_init(struct crypto_tfm *tfm) +static int stm32_cryp_init_tfm(struct crypto_skcipher *tfm) { - struct stm32_cryp_ctx *ctx = crypto_tfm_ctx(tfm); + struct stm32_cryp_ctx *ctx = crypto_skcipher_ctx(tfm); - tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx); + crypto_skcipher_set_reqsize(tfm, sizeof(struct stm32_cryp_reqctx)); ctx->enginectx.op.do_one_request = stm32_cryp_cipher_one_req; ctx->enginectx.op.prepare_request = stm32_cryp_prepare_cipher_req; @@ -714,11 +715,11 @@ static int stm32_cryp_aes_aead_init(struct crypto_aead *tfm) return 0; } -static int stm32_cryp_crypt(struct ablkcipher_request *req, unsigned long mode) +static int stm32_cryp_crypt(struct skcipher_request *req, unsigned long mode) { - struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); - struct stm32_cryp_reqctx *rctx = ablkcipher_request_ctx(req); + struct stm32_cryp_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); + struct stm32_cryp_reqctx *rctx = skcipher_request_ctx(req); struct stm32_cryp *cryp = stm32_cryp_find_dev(ctx); if (!cryp) @@ -726,7 +727,7 @@ static int stm32_cryp_crypt(struct ablkcipher_request *req, unsigned long mode) rctx->mode = mode; - return crypto_transfer_ablkcipher_request_to_engine(cryp->engine, req); + return crypto_transfer_skcipher_request_to_engine(cryp->engine, req); } static int stm32_cryp_aead_crypt(struct aead_request *req, unsigned long mode) @@ -743,10 +744,10 @@ static int stm32_cryp_aead_crypt(struct aead_request *req, unsigned long mode) return crypto_transfer_aead_request_to_engine(cryp->engine, req); } -static int stm32_cryp_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int stm32_cryp_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned 
int keylen) { - struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct stm32_cryp_ctx *ctx = crypto_skcipher_ctx(tfm); memcpy(ctx->key, key, keylen); ctx->keylen = keylen; @@ -754,7 +755,7 @@ static int stm32_cryp_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return 0; } -static int stm32_cryp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int stm32_cryp_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && @@ -764,17 +765,17 @@ static int stm32_cryp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, return stm32_cryp_setkey(tfm, key, keylen); } -static int stm32_cryp_des_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int stm32_cryp_des_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - return verify_ablkcipher_des_key(tfm, key) ?: + return verify_skcipher_des_key(tfm, key) ?: stm32_cryp_setkey(tfm, key, keylen); } -static int stm32_cryp_tdes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, +static int stm32_cryp_tdes_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - return verify_ablkcipher_des3_key(tfm, key) ?: + return verify_skcipher_des3_key(tfm, key) ?: stm32_cryp_setkey(tfm, key, keylen); } @@ -818,32 +819,32 @@ static int stm32_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm, return 0; } -static int stm32_cryp_aes_ecb_encrypt(struct ablkcipher_request *req) +static int stm32_cryp_aes_ecb_encrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_AES | FLG_ECB | FLG_ENCRYPT); } -static int stm32_cryp_aes_ecb_decrypt(struct ablkcipher_request *req) +static int stm32_cryp_aes_ecb_decrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_AES | FLG_ECB); } -static int stm32_cryp_aes_cbc_encrypt(struct ablkcipher_request *req) +static int stm32_cryp_aes_cbc_encrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_AES | FLG_CBC | FLG_ENCRYPT); } -static int stm32_cryp_aes_cbc_decrypt(struct ablkcipher_request *req) +static int stm32_cryp_aes_cbc_decrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_AES | FLG_CBC); } -static int stm32_cryp_aes_ctr_encrypt(struct ablkcipher_request *req) +static int stm32_cryp_aes_ctr_encrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_AES | FLG_CTR | FLG_ENCRYPT); } -static int stm32_cryp_aes_ctr_decrypt(struct ablkcipher_request *req) +static int stm32_cryp_aes_ctr_decrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_AES | FLG_CTR); } @@ -868,47 +869,47 @@ static int stm32_cryp_aes_ccm_decrypt(struct aead_request *req) return stm32_cryp_aead_crypt(req, FLG_AES | FLG_CCM); } -static int stm32_cryp_des_ecb_encrypt(struct ablkcipher_request *req) +static int stm32_cryp_des_ecb_encrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_DES | FLG_ECB | FLG_ENCRYPT); } -static int stm32_cryp_des_ecb_decrypt(struct ablkcipher_request *req) +static int stm32_cryp_des_ecb_decrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_DES | FLG_ECB); } -static int stm32_cryp_des_cbc_encrypt(struct ablkcipher_request *req) +static int stm32_cryp_des_cbc_encrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_DES | FLG_CBC | FLG_ENCRYPT); } -static int stm32_cryp_des_cbc_decrypt(struct ablkcipher_request *req) +static int stm32_cryp_des_cbc_decrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_DES | FLG_CBC); } 
-static int stm32_cryp_tdes_ecb_encrypt(struct ablkcipher_request *req) +static int stm32_cryp_tdes_ecb_encrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB | FLG_ENCRYPT); } -static int stm32_cryp_tdes_ecb_decrypt(struct ablkcipher_request *req) +static int stm32_cryp_tdes_ecb_decrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_TDES | FLG_ECB); } -static int stm32_cryp_tdes_cbc_encrypt(struct ablkcipher_request *req) +static int stm32_cryp_tdes_cbc_encrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC | FLG_ENCRYPT); } -static int stm32_cryp_tdes_cbc_decrypt(struct ablkcipher_request *req) +static int stm32_cryp_tdes_cbc_decrypt(struct skcipher_request *req) { return stm32_cryp_crypt(req, FLG_TDES | FLG_CBC); } -static int stm32_cryp_prepare_req(struct ablkcipher_request *req, +static int stm32_cryp_prepare_req(struct skcipher_request *req, struct aead_request *areq) { struct stm32_cryp_ctx *ctx; @@ -919,7 +920,7 @@ static int stm32_cryp_prepare_req(struct ablkcipher_request *req, if (!req && !areq) return -EINVAL; - ctx = req ? crypto_ablkcipher_ctx(crypto_ablkcipher_reqtfm(req)) : + ctx = req ? crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)) : crypto_aead_ctx(crypto_aead_reqtfm(areq)); cryp = ctx->cryp; @@ -927,7 +928,7 @@ static int stm32_cryp_prepare_req(struct ablkcipher_request *req, if (!cryp) return -ENODEV; - rctx = req ? ablkcipher_request_ctx(req) : aead_request_ctx(areq); + rctx = req ? skcipher_request_ctx(req) : aead_request_ctx(areq); rctx->mode &= FLG_MODE_MASK; ctx->cryp = cryp; @@ -939,7 +940,7 @@ static int stm32_cryp_prepare_req(struct ablkcipher_request *req, if (req) { cryp->req = req; cryp->areq = NULL; - cryp->total_in = req->nbytes; + cryp->total_in = req->cryptlen; cryp->total_out = cryp->total_in; } else { /* @@ -1016,8 +1017,8 @@ static int stm32_cryp_prepare_req(struct ablkcipher_request *req, static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine, void *areq) { - struct ablkcipher_request *req = container_of(areq, - struct ablkcipher_request, + struct skcipher_request *req = container_of(areq, + struct skcipher_request, base); return stm32_cryp_prepare_req(req, NULL); @@ -1025,11 +1026,11 @@ static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine, static int stm32_cryp_cipher_one_req(struct crypto_engine *engine, void *areq) { - struct ablkcipher_request *req = container_of(areq, - struct ablkcipher_request, + struct skcipher_request *req = container_of(areq, + struct skcipher_request, base); - struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx( - crypto_ablkcipher_reqtfm(req)); + struct stm32_cryp_ctx *ctx = crypto_skcipher_ctx( + crypto_skcipher_reqtfm(req)); struct stm32_cryp *cryp = ctx->cryp; if (!cryp) @@ -1724,150 +1725,129 @@ static irqreturn_t stm32_cryp_irq(int irq, void *arg) return IRQ_WAKE_THREAD; } -static struct crypto_alg crypto_algs[] = { -{ - .cra_name = "ecb(aes)", - .cra_driver_name = "stm32-ecb-aes", - .cra_priority = 200, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct stm32_cryp_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = stm32_cryp_cra_init, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = stm32_cryp_aes_setkey, - .encrypt = stm32_cryp_aes_ecb_encrypt, - .decrypt = stm32_cryp_aes_ecb_decrypt, - } +static struct 
skcipher_alg crypto_algs[] = { +{ + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "stm32-ecb-aes", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = stm32_cryp_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = stm32_cryp_aes_setkey, + .encrypt = stm32_cryp_aes_ecb_encrypt, + .decrypt = stm32_cryp_aes_ecb_decrypt, }, { - .cra_name = "cbc(aes)", - .cra_driver_name = "stm32-cbc-aes", - .cra_priority = 200, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct stm32_cryp_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = stm32_cryp_cra_init, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = stm32_cryp_aes_setkey, - .encrypt = stm32_cryp_aes_cbc_encrypt, - .decrypt = stm32_cryp_aes_cbc_decrypt, - } + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "stm32-cbc-aes", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = stm32_cryp_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = stm32_cryp_aes_setkey, + .encrypt = stm32_cryp_aes_cbc_encrypt, + .decrypt = stm32_cryp_aes_cbc_decrypt, }, { - .cra_name = "ctr(aes)", - .cra_driver_name = "stm32-ctr-aes", - .cra_priority = 200, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct stm32_cryp_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = stm32_cryp_cra_init, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = stm32_cryp_aes_setkey, - .encrypt = stm32_cryp_aes_ctr_encrypt, - .decrypt = stm32_cryp_aes_ctr_decrypt, - } + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "stm32-ctr-aes", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = stm32_cryp_init_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = stm32_cryp_aes_setkey, + .encrypt = stm32_cryp_aes_ctr_encrypt, + .decrypt = stm32_cryp_aes_ctr_decrypt, }, { - .cra_name = "ecb(des)", - .cra_driver_name = "stm32-ecb-des", - .cra_priority = 200, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct stm32_cryp_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = stm32_cryp_cra_init, - .cra_ablkcipher = { - .min_keysize = DES_BLOCK_SIZE, - .max_keysize = DES_BLOCK_SIZE, - .setkey = stm32_cryp_des_setkey, - .encrypt = stm32_cryp_des_ecb_encrypt, - .decrypt = stm32_cryp_des_ecb_decrypt, - } + .base.cra_name = "ecb(des)", + .base.cra_driver_name = "stm32-ecb-des", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + 
.base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = stm32_cryp_init_tfm, + .min_keysize = DES_BLOCK_SIZE, + .max_keysize = DES_BLOCK_SIZE, + .setkey = stm32_cryp_des_setkey, + .encrypt = stm32_cryp_des_ecb_encrypt, + .decrypt = stm32_cryp_des_ecb_decrypt, }, { - .cra_name = "cbc(des)", - .cra_driver_name = "stm32-cbc-des", - .cra_priority = 200, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct stm32_cryp_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = stm32_cryp_cra_init, - .cra_ablkcipher = { - .min_keysize = DES_BLOCK_SIZE, - .max_keysize = DES_BLOCK_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = stm32_cryp_des_setkey, - .encrypt = stm32_cryp_des_cbc_encrypt, - .decrypt = stm32_cryp_des_cbc_decrypt, - } + .base.cra_name = "cbc(des)", + .base.cra_driver_name = "stm32-cbc-des", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = stm32_cryp_init_tfm, + .min_keysize = DES_BLOCK_SIZE, + .max_keysize = DES_BLOCK_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = stm32_cryp_des_setkey, + .encrypt = stm32_cryp_des_cbc_encrypt, + .decrypt = stm32_cryp_des_cbc_decrypt, }, { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "stm32-ecb-des3", - .cra_priority = 200, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct stm32_cryp_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = stm32_cryp_cra_init, - .cra_ablkcipher = { - .min_keysize = 3 * DES_BLOCK_SIZE, - .max_keysize = 3 * DES_BLOCK_SIZE, - .setkey = stm32_cryp_tdes_setkey, - .encrypt = stm32_cryp_tdes_ecb_encrypt, - .decrypt = stm32_cryp_tdes_ecb_decrypt, - } + .base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "stm32-ecb-des3", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + + .init = stm32_cryp_init_tfm, + .min_keysize = 3 * DES_BLOCK_SIZE, + .max_keysize = 3 * DES_BLOCK_SIZE, + .setkey = stm32_cryp_tdes_setkey, + .encrypt = stm32_cryp_tdes_ecb_encrypt, + .decrypt = stm32_cryp_tdes_ecb_decrypt, }, { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "stm32-cbc-des3", - .cra_priority = 200, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct stm32_cryp_ctx), - .cra_alignmask = 0xf, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = stm32_cryp_cra_init, - .cra_ablkcipher = { - .min_keysize = 3 * DES_BLOCK_SIZE, - .max_keysize = 3 * DES_BLOCK_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = stm32_cryp_tdes_setkey, - .encrypt = stm32_cryp_tdes_cbc_encrypt, - .decrypt = stm32_cryp_tdes_cbc_decrypt, - } + .base.cra_name = "cbc(des3_ede)", + .base.cra_driver_name = "stm32-cbc-des3", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct stm32_cryp_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = 
THIS_MODULE, + + .init = stm32_cryp_init_tfm, + .min_keysize = 3 * DES_BLOCK_SIZE, + .max_keysize = 3 * DES_BLOCK_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = stm32_cryp_tdes_setkey, + .encrypt = stm32_cryp_tdes_cbc_encrypt, + .decrypt = stm32_cryp_tdes_cbc_decrypt, }, }; @@ -2010,7 +1990,7 @@ static int stm32_cryp_probe(struct platform_device *pdev) goto err_engine2; } - ret = crypto_register_algs(crypto_algs, ARRAY_SIZE(crypto_algs)); + ret = crypto_register_skciphers(crypto_algs, ARRAY_SIZE(crypto_algs)); if (ret) { dev_err(dev, "Could not register algs\n"); goto err_algs; @@ -2027,7 +2007,7 @@ static int stm32_cryp_probe(struct platform_device *pdev) return 0; err_aead_algs: - crypto_unregister_algs(crypto_algs, ARRAY_SIZE(crypto_algs)); + crypto_unregister_skciphers(crypto_algs, ARRAY_SIZE(crypto_algs)); err_algs: err_engine2: crypto_engine_exit(cryp->engine); @@ -2059,7 +2039,7 @@ static int stm32_cryp_remove(struct platform_device *pdev) return ret; crypto_unregister_aeads(aead_algs, ARRAY_SIZE(aead_algs)); - crypto_unregister_algs(crypto_algs, ARRAY_SIZE(crypto_algs)); + crypto_unregister_skciphers(crypto_algs, ARRAY_SIZE(crypto_algs)); crypto_engine_exit(cryp->engine);
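Every conversion in this series follows the shape visible in the stm32 hunks above: the crypto_alg with its cra_ablkcipher union member becomes a struct skcipher_alg whose generic fields move under .base, CRYPTO_ALG_TYPE_ABLKCIPHER and cra_type disappear, cra_init is replaced by an init hook on the skcipher_alg itself, and registration switches to crypto_register_skciphers()/crypto_unregister_skciphers(). The condensed sketch below shows that target shape; the foo_* names are hypothetical and are not taken from any driver in this series.

/*
 * Minimal sketch of a converted driver's algorithm definition.
 * The foo_* identifiers are placeholders, not code from this series;
 * the hardware programming is stubbed out.
 */
#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>
#include <linux/module.h>

struct foo_ctx {
        u8 key[AES_MAX_KEY_SIZE];
        unsigned int keylen;
};

struct foo_reqctx {
        unsigned long mode;     /* per-request state, sized via init below */
};

static int foo_init_tfm(struct crypto_skcipher *tfm)
{
        /* replaces cra_init: reserve per-request context space */
        crypto_skcipher_set_reqsize(tfm, sizeof(struct foo_reqctx));
        return 0;
}

static int foo_setkey(struct crypto_skcipher *tfm, const u8 *key,
                      unsigned int keylen)
{
        struct foo_ctx *ctx = crypto_skcipher_ctx(tfm);

        if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
            keylen != AES_KEYSIZE_256)
                return -EINVAL;
        memcpy(ctx->key, key, keylen);
        ctx->keylen = keylen;
        return 0;
}

/* skcipher_request replaces ablkcipher_request in the entry points */
static int foo_encrypt(struct skcipher_request *req)
{
        return -EOPNOTSUPP;     /* hardware path elided in this sketch */
}

static int foo_decrypt(struct skcipher_request *req)
{
        return -EOPNOTSUPP;     /* hardware path elided in this sketch */
}

static struct skcipher_alg foo_alg = {
        .base.cra_name          = "cbc(aes)",
        .base.cra_driver_name   = "foo-cbc-aes",
        .base.cra_priority      = 200,
        .base.cra_flags         = CRYPTO_ALG_ASYNC,
        .base.cra_blocksize     = AES_BLOCK_SIZE,
        .base.cra_ctxsize       = sizeof(struct foo_ctx),
        .base.cra_module        = THIS_MODULE,

        .init                   = foo_init_tfm,
        .min_keysize            = AES_MIN_KEY_SIZE,
        .max_keysize            = AES_MAX_KEY_SIZE,
        .ivsize                 = AES_BLOCK_SIZE,
        .setkey                 = foo_setkey,
        .encrypt                = foo_encrypt,
        .decrypt                = foo_decrypt,
};

Registration then mirrors the probe/remove hunks above: crypto_register_skcipher(&foo_alg) on probe and crypto_unregister_skcipher(&foo_alg) on remove, or the plural *_skciphers() helpers for a whole array, as stm32 does.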
From patchwork Thu Oct 24 13:23:40 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209709
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: "David S. Miller", Eric Biggers, Herbert Xu, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel
Subject: [PATCH v2 22/27] crypto: niagara2 - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:40 +0200
Message-Id: <20191024132345.5236-23-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 August 2015 introduced the new skcipher API, which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API were converted long ago, some producers of the ablkcipher API remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.

Acked-by: David S. Miller
Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/n2_core.c | 194 ++++++++++----------
 1 file changed, 96 insertions(+), 98 deletions(-)

diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c index dc15b06e96ab..e040912f790e 100644 --- a/drivers/crypto/n2_core.c +++ b/drivers/crypto/n2_core.c @@ -23,6 +23,7 @@ #include #include +#include <crypto/internal/skcipher.h> #include #include @@ -657,7 +658,7 @@ static int n2_hmac_async_digest(struct ahash_request *req) ctx->hash_key_len); } -struct n2_cipher_context { +struct n2_skcipher_context { int key_len; int enc_type; union { @@ -683,7 +684,7 @@ struct n2_crypto_chunk { }; struct n2_request_context { - struct ablkcipher_walk walk; + struct skcipher_walk walk; struct list_head chunk_list; struct n2_crypto_chunk chunk; u8 temp_iv[16]; @@ -708,29 +709,29 @@ struct n2_request_context { * is not a valid sequence.
*/ -struct n2_cipher_alg { +struct n2_skcipher_alg { struct list_head entry; u8 enc_type; - struct crypto_alg alg; + struct skcipher_alg skcipher; }; -static inline struct n2_cipher_alg *n2_cipher_alg(struct crypto_tfm *tfm) +static inline struct n2_skcipher_alg *n2_skcipher_alg(struct crypto_skcipher *tfm) { - struct crypto_alg *alg = tfm->__crt_alg; + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); - return container_of(alg, struct n2_cipher_alg, alg); + return container_of(alg, struct n2_skcipher_alg, skcipher); } -struct n2_cipher_request_context { - struct ablkcipher_walk walk; +struct n2_skcipher_request_context { + struct skcipher_walk walk; }; -static int n2_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int n2_aes_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct n2_cipher_context *ctx = crypto_tfm_ctx(tfm); - struct n2_cipher_alg *n2alg = n2_cipher_alg(tfm); + struct crypto_tfm *tfm = crypto_skcipher_tfm(skcipher); + struct n2_skcipher_context *ctx = crypto_tfm_ctx(tfm); + struct n2_skcipher_alg *n2alg = n2_skcipher_alg(skcipher); ctx->enc_type = (n2alg->enc_type & ENC_TYPE_CHAINING_MASK); @@ -745,7 +746,7 @@ static int n2_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, ctx->enc_type |= ENC_TYPE_ALG_AES256; break; default: - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } @@ -754,15 +755,15 @@ static int n2_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int n2_des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int n2_des_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct n2_cipher_context *ctx = crypto_tfm_ctx(tfm); - struct n2_cipher_alg *n2alg = n2_cipher_alg(tfm); + struct crypto_tfm *tfm = crypto_skcipher_tfm(skcipher); + struct n2_skcipher_context *ctx = crypto_tfm_ctx(tfm); + struct n2_skcipher_alg *n2alg = n2_skcipher_alg(skcipher); int err; - err = verify_ablkcipher_des_key(cipher, key); + err = verify_skcipher_des_key(skcipher, key); if (err) return err; @@ -773,15 +774,15 @@ static int n2_des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int n2_3des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int n2_3des_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct n2_cipher_context *ctx = crypto_tfm_ctx(tfm); - struct n2_cipher_alg *n2alg = n2_cipher_alg(tfm); + struct crypto_tfm *tfm = crypto_skcipher_tfm(skcipher); + struct n2_skcipher_context *ctx = crypto_tfm_ctx(tfm); + struct n2_skcipher_alg *n2alg = n2_skcipher_alg(skcipher); int err; - err = verify_ablkcipher_des3_key(cipher, key); + err = verify_skcipher_des3_key(skcipher, key); if (err) return err; @@ -792,12 +793,12 @@ static int n2_3des_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static int n2_arc4_setkey(struct crypto_ablkcipher *cipher, const u8 *key, +static int n2_arc4_setkey(struct crypto_skcipher *skcipher, const u8 *key, unsigned int keylen) { - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct n2_cipher_context *ctx = crypto_tfm_ctx(tfm); - struct n2_cipher_alg *n2alg = n2_cipher_alg(tfm); + struct crypto_tfm *tfm = crypto_skcipher_tfm(skcipher); + struct 
n2_skcipher_context *ctx = crypto_tfm_ctx(tfm); + struct n2_skcipher_alg *n2alg = n2_skcipher_alg(skcipher); u8 *s = ctx->key.arc4; u8 *x = s + 256; u8 *y = x + 1; @@ -822,7 +823,7 @@ static int n2_arc4_setkey(struct crypto_ablkcipher *cipher, const u8 *key, return 0; } -static inline int cipher_descriptor_len(int nbytes, unsigned int block_size) +static inline int skcipher_descriptor_len(int nbytes, unsigned int block_size) { int this_len = nbytes; @@ -830,10 +831,11 @@ static inline int cipher_descriptor_len(int nbytes, unsigned int block_size) return this_len > (1 << 16) ? (1 << 16) : this_len; } -static int __n2_crypt_chunk(struct crypto_tfm *tfm, struct n2_crypto_chunk *cp, +static int __n2_crypt_chunk(struct crypto_skcipher *skcipher, + struct n2_crypto_chunk *cp, struct spu_queue *qp, bool encrypt) { - struct n2_cipher_context *ctx = crypto_tfm_ctx(tfm); + struct n2_skcipher_context *ctx = crypto_skcipher_ctx(skcipher); struct cwq_initial_entry *ent; bool in_place; int i; @@ -877,18 +879,17 @@ static int __n2_crypt_chunk(struct crypto_tfm *tfm, struct n2_crypto_chunk *cp, return (spu_queue_submit(qp, ent) != HV_EOK) ? -EINVAL : 0; } -static int n2_compute_chunks(struct ablkcipher_request *req) +static int n2_compute_chunks(struct skcipher_request *req) { - struct n2_request_context *rctx = ablkcipher_request_ctx(req); - struct ablkcipher_walk *walk = &rctx->walk; + struct n2_request_context *rctx = skcipher_request_ctx(req); + struct skcipher_walk *walk = &rctx->walk; struct n2_crypto_chunk *chunk; unsigned long dest_prev; unsigned int tot_len; bool prev_in_place; int err, nbytes; - ablkcipher_walk_init(walk, req->dst, req->src, req->nbytes); - err = ablkcipher_walk_phys(req, walk); + err = skcipher_walk_async(walk, req); if (err) return err; @@ -910,12 +911,12 @@ static int n2_compute_chunks(struct ablkcipher_request *req) bool in_place; int this_len; - src_paddr = (page_to_phys(walk->src.page) + - walk->src.offset); - dest_paddr = (page_to_phys(walk->dst.page) + - walk->dst.offset); + src_paddr = (page_to_phys(walk->src.phys.page) + + walk->src.phys.offset); + dest_paddr = (page_to_phys(walk->dst.phys.page) + + walk->dst.phys.offset); in_place = (src_paddr == dest_paddr); - this_len = cipher_descriptor_len(nbytes, walk->blocksize); + this_len = skcipher_descriptor_len(nbytes, walk->blocksize); if (chunk->arr_len != 0) { if (in_place != prev_in_place || @@ -946,7 +947,7 @@ static int n2_compute_chunks(struct ablkcipher_request *req) prev_in_place = in_place; tot_len += this_len; - err = ablkcipher_walk_done(req, walk, nbytes - this_len); + err = skcipher_walk_done(walk, nbytes - this_len); if (err) break; } @@ -958,15 +959,14 @@ static int n2_compute_chunks(struct ablkcipher_request *req) return err; } -static void n2_chunk_complete(struct ablkcipher_request *req, void *final_iv) +static void n2_chunk_complete(struct skcipher_request *req, void *final_iv) { - struct n2_request_context *rctx = ablkcipher_request_ctx(req); + struct n2_request_context *rctx = skcipher_request_ctx(req); struct n2_crypto_chunk *c, *tmp; if (final_iv) memcpy(rctx->walk.iv, final_iv, rctx->walk.blocksize); - ablkcipher_walk_complete(&rctx->walk); list_for_each_entry_safe(c, tmp, &rctx->chunk_list, entry) { list_del(&c->entry); if (unlikely(c != &rctx->chunk)) @@ -975,10 +975,10 @@ static void n2_chunk_complete(struct ablkcipher_request *req, void *final_iv) } -static int n2_do_ecb(struct ablkcipher_request *req, bool encrypt) +static int n2_do_ecb(struct skcipher_request *req, bool encrypt) { - struct 
n2_request_context *rctx = ablkcipher_request_ctx(req); - struct crypto_tfm *tfm = req->base.tfm; + struct n2_request_context *rctx = skcipher_request_ctx(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); int err = n2_compute_chunks(req); struct n2_crypto_chunk *c, *tmp; unsigned long flags, hv_ret; @@ -1017,20 +1017,20 @@ static int n2_do_ecb(struct ablkcipher_request *req, bool encrypt) return err; } -static int n2_encrypt_ecb(struct ablkcipher_request *req) +static int n2_encrypt_ecb(struct skcipher_request *req) { return n2_do_ecb(req, true); } -static int n2_decrypt_ecb(struct ablkcipher_request *req) +static int n2_decrypt_ecb(struct skcipher_request *req) { return n2_do_ecb(req, false); } -static int n2_do_chaining(struct ablkcipher_request *req, bool encrypt) +static int n2_do_chaining(struct skcipher_request *req, bool encrypt) { - struct n2_request_context *rctx = ablkcipher_request_ctx(req); - struct crypto_tfm *tfm = req->base.tfm; + struct n2_request_context *rctx = skcipher_request_ctx(req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); unsigned long flags, hv_ret, iv_paddr; int err = n2_compute_chunks(req); struct n2_crypto_chunk *c, *tmp; @@ -1107,32 +1107,32 @@ static int n2_do_chaining(struct ablkcipher_request *req, bool encrypt) return err; } -static int n2_encrypt_chaining(struct ablkcipher_request *req) +static int n2_encrypt_chaining(struct skcipher_request *req) { return n2_do_chaining(req, true); } -static int n2_decrypt_chaining(struct ablkcipher_request *req) +static int n2_decrypt_chaining(struct skcipher_request *req) { return n2_do_chaining(req, false); } -struct n2_cipher_tmpl { +struct n2_skcipher_tmpl { const char *name; const char *drv_name; u8 block_size; u8 enc_type; - struct ablkcipher_alg ablkcipher; + struct skcipher_alg skcipher; }; -static const struct n2_cipher_tmpl cipher_tmpls[] = { +static const struct n2_skcipher_tmpl skcipher_tmpls[] = { /* ARC4: only ECB is supported (chaining bits ignored) */ { .name = "ecb(arc4)", .drv_name = "ecb-arc4", .block_size = 1, .enc_type = (ENC_TYPE_ALG_RC4_STREAM | ENC_TYPE_CHAINING_ECB), - .ablkcipher = { + .skcipher = { .min_keysize = 1, .max_keysize = 256, .setkey = n2_arc4_setkey, @@ -1147,7 +1147,7 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { .block_size = DES_BLOCK_SIZE, .enc_type = (ENC_TYPE_ALG_DES | ENC_TYPE_CHAINING_ECB), - .ablkcipher = { + .skcipher = { .min_keysize = DES_KEY_SIZE, .max_keysize = DES_KEY_SIZE, .setkey = n2_des_setkey, @@ -1160,7 +1160,7 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { .block_size = DES_BLOCK_SIZE, .enc_type = (ENC_TYPE_ALG_DES | ENC_TYPE_CHAINING_CBC), - .ablkcipher = { + .skcipher = { .ivsize = DES_BLOCK_SIZE, .min_keysize = DES_KEY_SIZE, .max_keysize = DES_KEY_SIZE, @@ -1174,7 +1174,7 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { .block_size = DES_BLOCK_SIZE, .enc_type = (ENC_TYPE_ALG_DES | ENC_TYPE_CHAINING_CFB), - .ablkcipher = { + .skcipher = { .min_keysize = DES_KEY_SIZE, .max_keysize = DES_KEY_SIZE, .setkey = n2_des_setkey, @@ -1189,7 +1189,7 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { .block_size = DES_BLOCK_SIZE, .enc_type = (ENC_TYPE_ALG_3DES | ENC_TYPE_CHAINING_ECB), - .ablkcipher = { + .skcipher = { .min_keysize = 3 * DES_KEY_SIZE, .max_keysize = 3 * DES_KEY_SIZE, .setkey = n2_3des_setkey, @@ -1202,7 +1202,7 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { .block_size = DES_BLOCK_SIZE, .enc_type = (ENC_TYPE_ALG_3DES | ENC_TYPE_CHAINING_CBC), - .ablkcipher = { + .skcipher = 
{ .ivsize = DES_BLOCK_SIZE, .min_keysize = 3 * DES_KEY_SIZE, .max_keysize = 3 * DES_KEY_SIZE, @@ -1216,7 +1216,7 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { .block_size = DES_BLOCK_SIZE, .enc_type = (ENC_TYPE_ALG_3DES | ENC_TYPE_CHAINING_CFB), - .ablkcipher = { + .skcipher = { .min_keysize = 3 * DES_KEY_SIZE, .max_keysize = 3 * DES_KEY_SIZE, .setkey = n2_3des_setkey, @@ -1230,7 +1230,7 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { .block_size = AES_BLOCK_SIZE, .enc_type = (ENC_TYPE_ALG_AES128 | ENC_TYPE_CHAINING_ECB), - .ablkcipher = { + .skcipher = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .setkey = n2_aes_setkey, @@ -1243,7 +1243,7 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { .block_size = AES_BLOCK_SIZE, .enc_type = (ENC_TYPE_ALG_AES128 | ENC_TYPE_CHAINING_CBC), - .ablkcipher = { + .skcipher = { .ivsize = AES_BLOCK_SIZE, .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, @@ -1257,7 +1257,7 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { .block_size = AES_BLOCK_SIZE, .enc_type = (ENC_TYPE_ALG_AES128 | ENC_TYPE_CHAINING_COUNTER), - .ablkcipher = { + .skcipher = { .ivsize = AES_BLOCK_SIZE, .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, @@ -1268,9 +1268,9 @@ static const struct n2_cipher_tmpl cipher_tmpls[] = { }, }; -#define NUM_CIPHER_TMPLS ARRAY_SIZE(cipher_tmpls) +#define NUM_CIPHER_TMPLS ARRAY_SIZE(skcipher_tmpls) -static LIST_HEAD(cipher_algs); +static LIST_HEAD(skcipher_algs); struct n2_hash_tmpl { const char *name; @@ -1344,14 +1344,14 @@ static int algs_registered; static void __n2_unregister_algs(void) { - struct n2_cipher_alg *cipher, *cipher_tmp; + struct n2_skcipher_alg *skcipher, *skcipher_tmp; struct n2_ahash_alg *alg, *alg_tmp; struct n2_hmac_alg *hmac, *hmac_tmp; - list_for_each_entry_safe(cipher, cipher_tmp, &cipher_algs, entry) { - crypto_unregister_alg(&cipher->alg); - list_del(&cipher->entry); - kfree(cipher); + list_for_each_entry_safe(skcipher, skcipher_tmp, &skcipher_algs, entry) { + crypto_unregister_skcipher(&skcipher->skcipher); + list_del(&skcipher->entry); + kfree(skcipher); } list_for_each_entry_safe(hmac, hmac_tmp, &hmac_algs, derived.entry) { crypto_unregister_ahash(&hmac->derived.alg); @@ -1365,44 +1365,42 @@ static void __n2_unregister_algs(void) } } -static int n2_cipher_cra_init(struct crypto_tfm *tfm) +static int n2_skcipher_init_tfm(struct crypto_skcipher *tfm) { - tfm->crt_ablkcipher.reqsize = sizeof(struct n2_request_context); + crypto_skcipher_set_reqsize(tfm, sizeof(struct n2_request_context)); return 0; } -static int __n2_register_one_cipher(const struct n2_cipher_tmpl *tmpl) +static int __n2_register_one_skcipher(const struct n2_skcipher_tmpl *tmpl) { - struct n2_cipher_alg *p = kzalloc(sizeof(*p), GFP_KERNEL); - struct crypto_alg *alg; + struct n2_skcipher_alg *p = kzalloc(sizeof(*p), GFP_KERNEL); + struct skcipher_alg *alg; int err; if (!p) return -ENOMEM; - alg = &p->alg; + alg = &p->skcipher; + *alg = tmpl->skcipher; - snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", tmpl->name); - snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s-n2", tmpl->drv_name); - alg->cra_priority = N2_CRA_PRIORITY; - alg->cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC; - alg->cra_blocksize = tmpl->block_size; + snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", tmpl->name); + snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s-n2", tmpl->drv_name); + alg->base.cra_priority = N2_CRA_PRIORITY; + 
alg->base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC; + alg->base.cra_blocksize = tmpl->block_size; p->enc_type = tmpl->enc_type; - alg->cra_ctxsize = sizeof(struct n2_cipher_context); - alg->cra_type = &crypto_ablkcipher_type; - alg->cra_u.ablkcipher = tmpl->ablkcipher; - alg->cra_init = n2_cipher_cra_init; - alg->cra_module = THIS_MODULE; - - list_add(&p->entry, &cipher_algs); - err = crypto_register_alg(alg); + alg->base.cra_ctxsize = sizeof(struct n2_skcipher_context); + alg->base.cra_module = THIS_MODULE; + alg->init = n2_skcipher_init_tfm; + + list_add(&p->entry, &skcipher_algs); + err = crypto_register_skcipher(alg); if (err) { - pr_err("%s alg registration failed\n", alg->cra_name); + pr_err("%s alg registration failed\n", alg->base.cra_name); list_del(&p->entry); kfree(p); } else { - pr_info("%s alg registered\n", alg->cra_name); + pr_info("%s alg registered\n", alg->base.cra_name); } return err; } @@ -1517,7 +1515,7 @@ static int n2_register_algs(void) } } for (i = 0; i < NUM_CIPHER_TMPLS; i++) { - err = __n2_register_one_cipher(&cipher_tmpls[i]); + err = __n2_register_one_skcipher(&skcipher_tmpls[i]); if (err) { __n2_unregister_algs(); goto out;
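The request-side changes are equally mechanical, as the niagara2 conversion above and the rockchip one below both show: ablkcipher_request becomes skcipher_request, the transform is obtained with crypto_skcipher_reqtfm(), per-request state comes from skcipher_request_ctx(), req->nbytes is renamed req->cryptlen, and the IV moves from req->info to req->iv (niagara2 additionally swaps its ablkcipher_walk for skcipher_walk_async()/skcipher_walk_done()). A sketch of a converted entry point, again with hypothetical foo_* names and a stubbed hardware path:

/*
 * Request-side sketch only; foo_ctx, foo_reqctx and foo_hw_crypt are
 * placeholders for a driver's real state and hardware path, not code
 * from this series.
 */
#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>
#include <linux/kernel.h>

struct foo_ctx {
        u8 key[AES_MAX_KEY_SIZE];
        unsigned int keylen;
};

struct foo_reqctx {
        unsigned long mode;
};

/* stand-in for the driver's real hardware programming */
static int foo_hw_crypt(struct foo_ctx *ctx, struct foo_reqctx *rctx,
                        struct scatterlist *src, struct scatterlist *dst,
                        unsigned int len, u8 *iv, bool enc)
{
        return -EOPNOTSUPP;
}

static int foo_do_crypt(struct skcipher_request *req, bool enc)
{
        struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
        struct foo_ctx *ctx = crypto_skcipher_ctx(tfm);
        struct foo_reqctx *rctx = skcipher_request_ctx(req);

        /* req->cryptlen replaces the old req->nbytes */
        if (!IS_ALIGNED(req->cryptlen, AES_BLOCK_SIZE))
                return -EINVAL;

        rctx->mode = enc;
        /* req->iv replaces the old req->info */
        return foo_hw_crypt(ctx, rctx, req->src, req->dst,
                            req->cryptlen, req->iv, enc);
}

static int foo_encrypt(struct skcipher_request *req)
{
        return foo_do_crypt(req, true);
}

static int foo_decrypt(struct skcipher_request *req)
{
        return foo_do_crypt(req, false);
}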
From patchwork Thu Oct 24 13:23:41 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209721
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Heiko Stuebner, Eric Biggers, Ard Biesheuvel, "David S. Miller", linux-arm-kernel@lists.infradead.org, Herbert Xu
Subject: [PATCH v2 23/27] crypto: rockchip - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:41 +0200
Message-Id: <20191024132345.5236-24-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 August 2015 introduced the new skcipher API, which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API were converted long ago, some producers of the ablkcipher API remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
Cc: Heiko Stuebner Signed-off-by: Ard Biesheuvel --- drivers/crypto/rockchip/Makefile | 2 +- drivers/crypto/rockchip/rk3288_crypto.c | 8 +- drivers/crypto/rockchip/rk3288_crypto.h | 3 +- drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c | 556 -------------------- drivers/crypto/rockchip/rk3288_crypto_skcipher.c | 538 +++++++++++++++++++ 5 files changed, 545 insertions(+), 562 deletions(-) diff --git a/drivers/crypto/rockchip/Makefile b/drivers/crypto/rockchip/Makefile index 6e23764e6c8a..785277aca71e 100644 --- a/drivers/crypto/rockchip/Makefile +++ b/drivers/crypto/rockchip/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0-only obj-$(CONFIG_CRYPTO_DEV_ROCKCHIP) += rk_crypto.o rk_crypto-objs := rk3288_crypto.o \ - rk3288_crypto_ablkcipher.o \ + rk3288_crypto_skcipher.o \ rk3288_crypto_ahash.o diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c index e5714ef24bf2..f385587f99af 100644 --- a/drivers/crypto/rockchip/rk3288_crypto.c +++ b/drivers/crypto/rockchip/rk3288_crypto.c @@ -264,8 +264,8 @@ static int rk_crypto_register(struct rk_crypto_info *crypto_info) for (i = 0; i < ARRAY_SIZE(rk_cipher_algs); i++) { rk_cipher_algs[i]->dev = crypto_info; if (rk_cipher_algs[i]->type == ALG_TYPE_CIPHER) - err = crypto_register_alg( - &rk_cipher_algs[i]->alg.crypto); + err = crypto_register_skcipher( + &rk_cipher_algs[i]->alg.skcipher); else err = crypto_register_ahash( &rk_cipher_algs[i]->alg.hash); @@ -277,7 +277,7 @@ static int rk_crypto_register(struct rk_crypto_info *crypto_info) err_cipher_algs: for (k = 0; k < i; k++) { if (rk_cipher_algs[i]->type == ALG_TYPE_CIPHER) - crypto_unregister_alg(&rk_cipher_algs[k]->alg.crypto); + crypto_unregister_skcipher(&rk_cipher_algs[k]->alg.skcipher); else crypto_unregister_ahash(&rk_cipher_algs[i]->alg.hash); } @@ -290,7 +290,7 @@ static void rk_crypto_unregister(void) for (i = 0; i < ARRAY_SIZE(rk_cipher_algs); i++) { if (rk_cipher_algs[i]->type == ALG_TYPE_CIPHER) - crypto_unregister_alg(&rk_cipher_algs[i]->alg.crypto); + crypto_unregister_skcipher(&rk_cipher_algs[i]->alg.skcipher); else crypto_unregister_ahash(&rk_cipher_algs[i]->alg.hash); } diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h index 18e2b3f29336..2b49c677afdb 100644 --- a/drivers/crypto/rockchip/rk3288_crypto.h +++ b/drivers/crypto/rockchip/rk3288_crypto.h @@ -8,6 +8,7 @@ #include #include #include +#include #include #include @@ -256,7 +257,7 @@ enum alg_type { struct rk_crypto_tmp { struct rk_crypto_info *dev; union { - struct crypto_alg crypto; + struct skcipher_alg skcipher; struct ahash_alg hash; } alg; enum alg_type type; diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c deleted file mode 100644 index d0f4b2d18059..000000000000 --- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c +++ /dev/null @@ -1,556 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * Crypto acceleration support for Rockchip RK3288 - * - * Copyright (c) 2015, Fuzhou Rockchip Electronics Co., Ltd - * - * Author: Zain Wang - * - * Some ideas are from marvell-cesa.c and s5p-sss.c driver. 
- */ -#include "rk3288_crypto.h" - -#define RK_CRYPTO_DEC BIT(0) - -static void rk_crypto_complete(struct crypto_async_request *base, int err) -{ - if (base->complete) - base->complete(base, err); -} - -static int rk_handle_req(struct rk_crypto_info *dev, - struct ablkcipher_request *req) -{ - if (!IS_ALIGNED(req->nbytes, dev->align_size)) - return -EINVAL; - else - return dev->enqueue(dev, &req->base); -} - -static int rk_aes_setkey(struct crypto_ablkcipher *cipher, - const u8 *key, unsigned int keylen) -{ - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct rk_cipher_ctx *ctx = crypto_tfm_ctx(tfm); - - if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && - keylen != AES_KEYSIZE_256) { - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); - return -EINVAL; - } - ctx->keylen = keylen; - memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_KEY_0, key, keylen); - return 0; -} - -static int rk_des_setkey(struct crypto_ablkcipher *cipher, - const u8 *key, unsigned int keylen) -{ - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(cipher); - int err; - - err = verify_ablkcipher_des_key(cipher, key); - if (err) - return err; - - ctx->keylen = keylen; - memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen); - return 0; -} - -static int rk_tdes_setkey(struct crypto_ablkcipher *cipher, - const u8 *key, unsigned int keylen) -{ - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(cipher); - int err; - - err = verify_ablkcipher_des3_key(cipher, key); - if (err) - return err; - - ctx->keylen = keylen; - memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen); - return 0; -} - -static int rk_aes_ecb_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_AES_ECB_MODE; - return rk_handle_req(dev, req); -} - -static int rk_aes_ecb_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_aes_cbc_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_AES_CBC_MODE; - return rk_handle_req(dev, req); -} - -static int rk_aes_cbc_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_des_ecb_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = 0; - return rk_handle_req(dev, req); -} - -static int rk_des_ecb_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_des_cbc_encrypt(struct ablkcipher_request *req) -{ 
- struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC; - return rk_handle_req(dev, req); -} - -static int rk_des_cbc_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_des3_ede_ecb_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_SELECT; - return rk_handle_req(dev, req); -} - -static int rk_des3_ede_ecb_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static int rk_des3_ede_cbc_encrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC; - return rk_handle_req(dev, req); -} - -static int rk_des3_ede_cbc_decrypt(struct ablkcipher_request *req) -{ - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - struct rk_crypto_info *dev = ctx->dev; - - ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC | - RK_CRYPTO_DEC; - return rk_handle_req(dev, req); -} - -static void rk_ablk_hw_init(struct rk_crypto_info *dev) -{ - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(req); - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(cipher); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(cipher); - u32 ivsize, block, conf_reg = 0; - - block = crypto_tfm_alg_blocksize(tfm); - ivsize = crypto_ablkcipher_ivsize(cipher); - - if (block == DES_BLOCK_SIZE) { - ctx->mode |= RK_CRYPTO_TDES_FIFO_MODE | - RK_CRYPTO_TDES_BYTESWAP_KEY | - RK_CRYPTO_TDES_BYTESWAP_IV; - CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, ctx->mode); - memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, req->info, ivsize); - conf_reg = RK_CRYPTO_DESSEL; - } else { - ctx->mode |= RK_CRYPTO_AES_FIFO_MODE | - RK_CRYPTO_AES_KEY_CHANGE | - RK_CRYPTO_AES_BYTESWAP_KEY | - RK_CRYPTO_AES_BYTESWAP_IV; - if (ctx->keylen == AES_KEYSIZE_192) - ctx->mode |= RK_CRYPTO_AES_192BIT_key; - else if (ctx->keylen == AES_KEYSIZE_256) - ctx->mode |= RK_CRYPTO_AES_256BIT_key; - CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, ctx->mode); - memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, req->info, ivsize); - } - conf_reg |= RK_CRYPTO_BYTESWAP_BTFIFO | - RK_CRYPTO_BYTESWAP_BRFIFO; - CRYPTO_WRITE(dev, RK_CRYPTO_CONF, conf_reg); - CRYPTO_WRITE(dev, RK_CRYPTO_INTENA, - RK_CRYPTO_BCDMA_ERR_ENA | RK_CRYPTO_BCDMA_DONE_ENA); -} - -static void crypto_dma_start(struct rk_crypto_info *dev) -{ - CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAS, dev->addr_in); - CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAL, dev->count / 4); - CRYPTO_WRITE(dev, RK_CRYPTO_BTDMAS, dev->addr_out); - CRYPTO_WRITE(dev, 
RK_CRYPTO_CTRL, RK_CRYPTO_BLOCK_START | - _SBF(RK_CRYPTO_BLOCK_START, 16)); -} - -static int rk_set_data_start(struct rk_crypto_info *dev) -{ - int err; - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - u32 ivsize = crypto_ablkcipher_ivsize(tfm); - u8 *src_last_blk = page_address(sg_page(dev->sg_src)) + - dev->sg_src->offset + dev->sg_src->length - ivsize; - - /* Store the iv that need to be updated in chain mode. - * And update the IV buffer to contain the next IV for decryption mode. - */ - if (ctx->mode & RK_CRYPTO_DEC) { - memcpy(ctx->iv, src_last_blk, ivsize); - sg_pcopy_to_buffer(dev->first, dev->src_nents, req->info, - ivsize, dev->total - ivsize); - } - - err = dev->load_data(dev, dev->sg_src, dev->sg_dst); - if (!err) - crypto_dma_start(dev); - return err; -} - -static int rk_ablk_start(struct rk_crypto_info *dev) -{ - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - unsigned long flags; - int err = 0; - - dev->left_bytes = req->nbytes; - dev->total = req->nbytes; - dev->sg_src = req->src; - dev->first = req->src; - dev->src_nents = sg_nents(req->src); - dev->sg_dst = req->dst; - dev->dst_nents = sg_nents(req->dst); - dev->aligned = 1; - - spin_lock_irqsave(&dev->lock, flags); - rk_ablk_hw_init(dev); - err = rk_set_data_start(dev); - spin_unlock_irqrestore(&dev->lock, flags); - return err; -} - -static void rk_iv_copyback(struct rk_crypto_info *dev) -{ - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - u32 ivsize = crypto_ablkcipher_ivsize(tfm); - - /* Update the IV buffer to contain the next IV for encryption mode. 
*/ - if (!(ctx->mode & RK_CRYPTO_DEC)) { - if (dev->aligned) { - memcpy(req->info, sg_virt(dev->sg_dst) + - dev->sg_dst->length - ivsize, ivsize); - } else { - memcpy(req->info, dev->addr_vir + - dev->count - ivsize, ivsize); - } - } -} - -static void rk_update_iv(struct rk_crypto_info *dev) -{ - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); - struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); - u32 ivsize = crypto_ablkcipher_ivsize(tfm); - u8 *new_iv = NULL; - - if (ctx->mode & RK_CRYPTO_DEC) { - new_iv = ctx->iv; - } else { - new_iv = page_address(sg_page(dev->sg_dst)) + - dev->sg_dst->offset + dev->sg_dst->length - ivsize; - } - - if (ivsize == DES_BLOCK_SIZE) - memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize); - else if (ivsize == AES_BLOCK_SIZE) - memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize); -} - -/* return: - * true some err was occurred - * fault no err, continue - */ -static int rk_ablk_rx(struct rk_crypto_info *dev) -{ - int err = 0; - struct ablkcipher_request *req = - ablkcipher_request_cast(dev->async_req); - - dev->unload_data(dev); - if (!dev->aligned) { - if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents, - dev->addr_vir, dev->count, - dev->total - dev->left_bytes - - dev->count)) { - err = -EINVAL; - goto out_rx; - } - } - if (dev->left_bytes) { - rk_update_iv(dev); - if (dev->aligned) { - if (sg_is_last(dev->sg_src)) { - dev_err(dev->dev, "[%s:%d] Lack of data\n", - __func__, __LINE__); - err = -ENOMEM; - goto out_rx; - } - dev->sg_src = sg_next(dev->sg_src); - dev->sg_dst = sg_next(dev->sg_dst); - } - err = rk_set_data_start(dev); - } else { - rk_iv_copyback(dev); - /* here show the calculation is over without any err */ - dev->complete(dev->async_req, 0); - tasklet_schedule(&dev->queue_task); - } -out_rx: - return err; -} - -static int rk_ablk_cra_init(struct crypto_tfm *tfm) -{ - struct rk_cipher_ctx *ctx = crypto_tfm_ctx(tfm); - struct crypto_alg *alg = tfm->__crt_alg; - struct rk_crypto_tmp *algt; - - algt = container_of(alg, struct rk_crypto_tmp, alg.crypto); - - ctx->dev = algt->dev; - ctx->dev->align_size = crypto_tfm_alg_alignmask(tfm) + 1; - ctx->dev->start = rk_ablk_start; - ctx->dev->update = rk_ablk_rx; - ctx->dev->complete = rk_crypto_complete; - ctx->dev->addr_vir = (char *)__get_free_page(GFP_KERNEL); - - return ctx->dev->addr_vir ? 
ctx->dev->enable_clk(ctx->dev) : -ENOMEM; -} - -static void rk_ablk_cra_exit(struct crypto_tfm *tfm) -{ - struct rk_cipher_ctx *ctx = crypto_tfm_ctx(tfm); - - free_page((unsigned long)ctx->dev->addr_vir); - ctx->dev->disable_clk(ctx->dev); -} - -struct rk_crypto_tmp rk_ecb_aes_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x0f, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = rk_aes_setkey, - .encrypt = rk_aes_ecb_encrypt, - .decrypt = rk_aes_ecb_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_cbc_aes_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x0f, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = rk_aes_setkey, - .encrypt = rk_aes_cbc_encrypt, - .decrypt = rk_aes_cbc_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_ecb_des_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "ecb(des)", - .cra_driver_name = "ecb-des-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x07, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .setkey = rk_des_setkey, - .encrypt = rk_des_ecb_encrypt, - .decrypt = rk_des_ecb_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_cbc_des_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "cbc(des)", - .cra_driver_name = "cbc-des-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x07, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = rk_des_setkey, - .encrypt = rk_des_cbc_encrypt, - .decrypt = rk_des_cbc_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_ecb_des3_ede_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "ecb-des3-ede-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x07, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = 
DES3_EDE_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = rk_tdes_setkey, - .encrypt = rk_des3_ede_ecb_encrypt, - .decrypt = rk_des3_ede_ecb_decrypt, - } - } -}; - -struct rk_crypto_tmp rk_cbc_des3_ede_alg = { - .type = ALG_TYPE_CIPHER, - .alg.crypto = { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "cbc-des3-ede-rk", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_blocksize = DES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct rk_cipher_ctx), - .cra_alignmask = 0x07, - .cra_type = &crypto_ablkcipher_type, - .cra_module = THIS_MODULE, - .cra_init = rk_ablk_cra_init, - .cra_exit = rk_ablk_cra_exit, - .cra_u.ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = rk_tdes_setkey, - .encrypt = rk_des3_ede_cbc_encrypt, - .decrypt = rk_des3_ede_cbc_decrypt, - } - } -}; diff --git a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c new file mode 100644 index 000000000000..ca4de4ddfe1f --- /dev/null +++ b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c @@ -0,0 +1,538 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Crypto acceleration support for Rockchip RK3288 + * + * Copyright (c) 2015, Fuzhou Rockchip Electronics Co., Ltd + * + * Author: Zain Wang + * + * Some ideas are from marvell-cesa.c and s5p-sss.c driver. + */ +#include "rk3288_crypto.h" + +#define RK_CRYPTO_DEC BIT(0) + +static void rk_crypto_complete(struct crypto_async_request *base, int err) +{ + if (base->complete) + base->complete(base, err); +} + +static int rk_handle_req(struct rk_crypto_info *dev, + struct skcipher_request *req) +{ + if (!IS_ALIGNED(req->cryptlen, dev->align_size)) + return -EINVAL; + else + return dev->enqueue(dev, &req->base); +} + +static int rk_aes_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); + struct rk_cipher_ctx *ctx = crypto_tfm_ctx(tfm); + + if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && + keylen != AES_KEYSIZE_256) { + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + ctx->keylen = keylen; + memcpy_toio(ctx->dev->reg + RK_CRYPTO_AES_KEY_0, key, keylen); + return 0; +} + +static int rk_des_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(cipher); + int err; + + err = verify_skcipher_des_key(cipher, key); + if (err) + return err; + + ctx->keylen = keylen; + memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen); + return 0; +} + +static int rk_tdes_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(cipher); + int err; + + err = verify_skcipher_des3_key(cipher, key); + if (err) + return err; + + ctx->keylen = keylen; + memcpy_toio(ctx->dev->reg + RK_CRYPTO_TDES_KEY1_0, key, keylen); + return 0; +} + +static int rk_aes_ecb_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_AES_ECB_MODE; + return rk_handle_req(dev, req); +} + +static int rk_aes_ecb_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = 
RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_aes_cbc_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_AES_CBC_MODE; + return rk_handle_req(dev, req); +} + +static int rk_aes_cbc_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_des_ecb_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = 0; + return rk_handle_req(dev, req); +} + +static int rk_des_ecb_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_des_cbc_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC; + return rk_handle_req(dev, req); +} + +static int rk_des_cbc_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_des3_ede_ecb_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_SELECT; + return rk_handle_req(dev, req); +} + +static int rk_des3_ede_ecb_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static int rk_des3_ede_cbc_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC; + return rk_handle_req(dev, req); +} + +static int rk_des3_ede_cbc_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct rk_crypto_info *dev = ctx->dev; + + ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC | + RK_CRYPTO_DEC; + return rk_handle_req(dev, req); +} + +static void rk_ablk_hw_init(struct rk_crypto_info *dev) +{ + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req); + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(cipher); + u32 ivsize, block, 
conf_reg = 0; + + block = crypto_tfm_alg_blocksize(tfm); + ivsize = crypto_skcipher_ivsize(cipher); + + if (block == DES_BLOCK_SIZE) { + ctx->mode |= RK_CRYPTO_TDES_FIFO_MODE | + RK_CRYPTO_TDES_BYTESWAP_KEY | + RK_CRYPTO_TDES_BYTESWAP_IV; + CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, ctx->mode); + memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, req->iv, ivsize); + conf_reg = RK_CRYPTO_DESSEL; + } else { + ctx->mode |= RK_CRYPTO_AES_FIFO_MODE | + RK_CRYPTO_AES_KEY_CHANGE | + RK_CRYPTO_AES_BYTESWAP_KEY | + RK_CRYPTO_AES_BYTESWAP_IV; + if (ctx->keylen == AES_KEYSIZE_192) + ctx->mode |= RK_CRYPTO_AES_192BIT_key; + else if (ctx->keylen == AES_KEYSIZE_256) + ctx->mode |= RK_CRYPTO_AES_256BIT_key; + CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, ctx->mode); + memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, req->iv, ivsize); + } + conf_reg |= RK_CRYPTO_BYTESWAP_BTFIFO | + RK_CRYPTO_BYTESWAP_BRFIFO; + CRYPTO_WRITE(dev, RK_CRYPTO_CONF, conf_reg); + CRYPTO_WRITE(dev, RK_CRYPTO_INTENA, + RK_CRYPTO_BCDMA_ERR_ENA | RK_CRYPTO_BCDMA_DONE_ENA); +} + +static void crypto_dma_start(struct rk_crypto_info *dev) +{ + CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAS, dev->addr_in); + CRYPTO_WRITE(dev, RK_CRYPTO_BRDMAL, dev->count / 4); + CRYPTO_WRITE(dev, RK_CRYPTO_BTDMAS, dev->addr_out); + CRYPTO_WRITE(dev, RK_CRYPTO_CTRL, RK_CRYPTO_BLOCK_START | + _SBF(RK_CRYPTO_BLOCK_START, 16)); +} + +static int rk_set_data_start(struct rk_crypto_info *dev) +{ + int err; + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + u32 ivsize = crypto_skcipher_ivsize(tfm); + u8 *src_last_blk = page_address(sg_page(dev->sg_src)) + + dev->sg_src->offset + dev->sg_src->length - ivsize; + + /* Store the iv that need to be updated in chain mode. + * And update the IV buffer to contain the next IV for decryption mode. + */ + if (ctx->mode & RK_CRYPTO_DEC) { + memcpy(ctx->iv, src_last_blk, ivsize); + sg_pcopy_to_buffer(dev->first, dev->src_nents, req->iv, + ivsize, dev->total - ivsize); + } + + err = dev->load_data(dev, dev->sg_src, dev->sg_dst); + if (!err) + crypto_dma_start(dev); + return err; +} + +static int rk_ablk_start(struct rk_crypto_info *dev) +{ + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + unsigned long flags; + int err = 0; + + dev->left_bytes = req->cryptlen; + dev->total = req->cryptlen; + dev->sg_src = req->src; + dev->first = req->src; + dev->src_nents = sg_nents(req->src); + dev->sg_dst = req->dst; + dev->dst_nents = sg_nents(req->dst); + dev->aligned = 1; + + spin_lock_irqsave(&dev->lock, flags); + rk_ablk_hw_init(dev); + err = rk_set_data_start(dev); + spin_unlock_irqrestore(&dev->lock, flags); + return err; +} + +static void rk_iv_copyback(struct rk_crypto_info *dev) +{ + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + u32 ivsize = crypto_skcipher_ivsize(tfm); + + /* Update the IV buffer to contain the next IV for encryption mode. 
*/ + if (!(ctx->mode & RK_CRYPTO_DEC)) { + if (dev->aligned) { + memcpy(req->iv, sg_virt(dev->sg_dst) + + dev->sg_dst->length - ivsize, ivsize); + } else { + memcpy(req->iv, dev->addr_vir + + dev->count - ivsize, ivsize); + } + } +} + +static void rk_update_iv(struct rk_crypto_info *dev) +{ + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + u32 ivsize = crypto_skcipher_ivsize(tfm); + u8 *new_iv = NULL; + + if (ctx->mode & RK_CRYPTO_DEC) { + new_iv = ctx->iv; + } else { + new_iv = page_address(sg_page(dev->sg_dst)) + + dev->sg_dst->offset + dev->sg_dst->length - ivsize; + } + + if (ivsize == DES_BLOCK_SIZE) + memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize); + else if (ivsize == AES_BLOCK_SIZE) + memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize); +} + +/* return: + * true: an error occurred + * false: no error, continue + */ +static int rk_ablk_rx(struct rk_crypto_info *dev) +{ + int err = 0; + struct skcipher_request *req = + skcipher_request_cast(dev->async_req); + + dev->unload_data(dev); + if (!dev->aligned) { + if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents, + dev->addr_vir, dev->count, + dev->total - dev->left_bytes - + dev->count)) { + err = -EINVAL; + goto out_rx; + } + } + if (dev->left_bytes) { + rk_update_iv(dev); + if (dev->aligned) { + if (sg_is_last(dev->sg_src)) { + dev_err(dev->dev, "[%s:%d] Lack of data\n", + __func__, __LINE__); + err = -ENOMEM; + goto out_rx; + } + dev->sg_src = sg_next(dev->sg_src); + dev->sg_dst = sg_next(dev->sg_dst); + } + err = rk_set_data_start(dev); + } else { + rk_iv_copyback(dev); + /* the whole calculation finished without any error */ + dev->complete(dev->async_req, 0); + tasklet_schedule(&dev->queue_task); + } +out_rx: + return err; +} + +static int rk_ablk_init_tfm(struct crypto_skcipher *tfm) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct rk_crypto_tmp *algt; + + algt = container_of(alg, struct rk_crypto_tmp, alg.skcipher); + + ctx->dev = algt->dev; + ctx->dev->align_size = crypto_tfm_alg_alignmask(crypto_skcipher_tfm(tfm)) + 1; + ctx->dev->start = rk_ablk_start; + ctx->dev->update = rk_ablk_rx; + ctx->dev->complete = rk_crypto_complete; + ctx->dev->addr_vir = (char *)__get_free_page(GFP_KERNEL); + + return ctx->dev->addr_vir ?
ctx->dev->enable_clk(ctx->dev) : -ENOMEM; +} + +static void rk_ablk_exit_tfm(struct crypto_skcipher *tfm) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + + free_page((unsigned long)ctx->dev->addr_vir); + ctx->dev->disable_clk(ctx->dev); +} + +struct rk_crypto_tmp rk_ecb_aes_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x0f, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = rk_aes_setkey, + .encrypt = rk_aes_ecb_encrypt, + .decrypt = rk_aes_ecb_decrypt, + } +}; + +struct rk_crypto_tmp rk_cbc_aes_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x0f, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = rk_aes_setkey, + .encrypt = rk_aes_cbc_encrypt, + .decrypt = rk_aes_cbc_decrypt, + } +}; + +struct rk_crypto_tmp rk_ecb_des_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(des)", + .base.cra_driver_name = "ecb-des-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x07, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .setkey = rk_des_setkey, + .encrypt = rk_des_ecb_encrypt, + .decrypt = rk_des_ecb_decrypt, + } +}; + +struct rk_crypto_tmp rk_cbc_des_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(des)", + .base.cra_driver_name = "cbc-des-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x07, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = rk_des_setkey, + .encrypt = rk_des_cbc_encrypt, + .decrypt = rk_des_cbc_decrypt, + } +}; + +struct rk_crypto_tmp rk_ecb_des3_ede_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "ecb-des3-ede-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x07, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = rk_tdes_setkey, + .encrypt = rk_des3_ede_ecb_encrypt, + .decrypt = rk_des3_ede_ecb_decrypt, + } +}; + +struct rk_crypto_tmp rk_cbc_des3_ede_alg = { + .type = ALG_TYPE_CIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(des3_ede)", + 
.base.cra_driver_name = "cbc-des3-ede-rk", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x07, + .base.cra_module = THIS_MODULE, + + .init = rk_ablk_init_tfm, + .exit = rk_ablk_exit_tfm, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = rk_tdes_setkey, + .encrypt = rk_des3_ede_cbc_encrypt, + .decrypt = rk_des3_ede_cbc_decrypt, + } +}; From patchwork Thu Oct 24 13:23:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 11209713 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 203B213BD for ; Thu, 24 Oct 2019 13:32:52 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id EE0DD20663 for ; Thu, 24 Oct 2019 13:32:51 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="jRgvutzs"; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="ejiliErF" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EE0DD20663 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=+B/GVqAhYD4Tg6kXffvS+R2yFgSNbqG04baTozMq2Qo=; b=jRgvutzsQ0TpfI cvjs+LtD/NOwyv6v1JCucdnOPdglV00aIXzZA2j23LqkMS1UCDxpViNAoc9bLFmyZ1d0rP6uHMeUE 39MNvSUShYZD15oqRSwg/jvmUhdO2RE2kC+//fM+T6V4Yf8JCEU0EHyLtf5+tEY9qYseGwirGSTro 30NnEtF48Vl2nE/PqE/NiUuXJTK/eo5Fb5ZjH/Fjn9PEuA/mMEDv0WFygaFivvhkAdYLAc6rAuwZ5 W33zIRky3+OzNaohLS5UUYYYCoqGd/KFFTIFRNYGqfNnLk6r4jmIK6YD3PnwyXKrnrr2Jd6XEo1Vs GvPvgwptXWSApw71DTrA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1iNdEP-0001B4-Bh; Thu, 24 Oct 2019 13:32:49 +0000 Received: from mail-wm1-x343.google.com ([2a00:1450:4864:20::343]) by bombadil.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1iNd6L-0000pi-Fn for linux-arm-kernel@lists.infradead.org; Thu, 24 Oct 2019 13:24:44 +0000 Received: by mail-wm1-x343.google.com with SMTP id 3so2601546wmi.3 for ; Thu, 24 Oct 2019 06:24:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1OUClwmukVUZ9t4zs2HvbaIYKb0NUnxaWL0jQGurdic=; b=ejiliErFhUlsv6ReQHjc3HlvA4QrL6r/2bFrcG8ZIim2YCekpkvfn+GDGx0v3NwQya +WWywETjCOA1iFkkqx84KzGT8HTNp1YUX3N8YKrn4zPddZxv+5SRW+4pppyTNvngCFrA 
From patchwork Thu Oct 24 13:23:42 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209713
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 24/27] crypto: talitos - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:42 +0200
Message-Id: <20191024132345.5236-25-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: "David S. Miller" , Eric Biggers , Herbert Xu , linux-arm-kernel@lists.infradead.org, Ard Biesheuvel

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 August 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher.
While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future. Signed-off-by: Ard Biesheuvel --- drivers/crypto/talitos.c | 306 +++++++++----------- 1 file changed, 142 insertions(+), 164 deletions(-) diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c index bcd533671ccc..c29f8c02ea05 100644 --- a/drivers/crypto/talitos.c +++ b/drivers/crypto/talitos.c @@ -35,7 +35,7 @@ #include #include #include -#include +#include #include #include #include @@ -1490,10 +1490,10 @@ static int aead_decrypt(struct aead_request *req) return ipsec_esp(edesc, req, false, ipsec_esp_decrypt_swauth_done); } -static int ablkcipher_setkey(struct crypto_ablkcipher *cipher, +static int skcipher_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct talitos_ctx *ctx = crypto_skcipher_ctx(cipher); struct device *dev = ctx->dev; if (ctx->keylen) @@ -1507,39 +1507,39 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *cipher, return 0; } -static int ablkcipher_des_setkey(struct crypto_ablkcipher *cipher, +static int skcipher_des_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - return verify_ablkcipher_des_key(cipher, key) ?: - ablkcipher_setkey(cipher, key, keylen); + return verify_skcipher_des_key(cipher, key) ?: + skcipher_setkey(cipher, key, keylen); } -static int ablkcipher_des3_setkey(struct crypto_ablkcipher *cipher, +static int skcipher_des3_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { - return verify_ablkcipher_des3_key(cipher, key) ?: - ablkcipher_setkey(cipher, key, keylen); + return verify_skcipher_des3_key(cipher, key) ?: + skcipher_setkey(cipher, key, keylen); } -static int ablkcipher_aes_setkey(struct crypto_ablkcipher *cipher, +static int skcipher_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, unsigned int keylen) { if (keylen == AES_KEYSIZE_128 || keylen == AES_KEYSIZE_192 || keylen == AES_KEYSIZE_256) - return ablkcipher_setkey(cipher, key, keylen); + return skcipher_setkey(cipher, key, keylen); - crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } static void common_nonsnoop_unmap(struct device *dev, struct talitos_edesc *edesc, - struct ablkcipher_request *areq) + struct skcipher_request *areq) { unmap_single_talitos_ptr(dev, &edesc->desc.ptr[5], DMA_FROM_DEVICE); - talitos_sg_unmap(dev, edesc, areq->src, areq->dst, areq->nbytes, 0); + talitos_sg_unmap(dev, edesc, areq->src, areq->dst, areq->cryptlen, 0); unmap_single_talitos_ptr(dev, &edesc->desc.ptr[1], DMA_TO_DEVICE); if (edesc->dma_len) @@ -1547,20 +1547,20 @@ static void common_nonsnoop_unmap(struct device *dev, DMA_BIDIRECTIONAL); } -static void ablkcipher_done(struct device *dev, +static void skcipher_done(struct device *dev, struct talitos_desc *desc, void *context, int err) { - struct ablkcipher_request *areq = context; - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); - struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher); - unsigned int ivsize = crypto_ablkcipher_ivsize(cipher); + struct skcipher_request *areq = context; + struct crypto_skcipher *cipher = 
crypto_skcipher_reqtfm(areq); + struct talitos_ctx *ctx = crypto_skcipher_ctx(cipher); + unsigned int ivsize = crypto_skcipher_ivsize(cipher); struct talitos_edesc *edesc; edesc = container_of(desc, struct talitos_edesc, desc); common_nonsnoop_unmap(dev, edesc, areq); - memcpy(areq->info, ctx->iv, ivsize); + memcpy(areq->iv, ctx->iv, ivsize); kfree(edesc); @@ -1568,17 +1568,17 @@ static void ablkcipher_done(struct device *dev, } static int common_nonsnoop(struct talitos_edesc *edesc, - struct ablkcipher_request *areq, + struct skcipher_request *areq, void (*callback) (struct device *dev, struct talitos_desc *desc, void *context, int error)) { - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); - struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(areq); + struct talitos_ctx *ctx = crypto_skcipher_ctx(cipher); struct device *dev = ctx->dev; struct talitos_desc *desc = &edesc->desc; - unsigned int cryptlen = areq->nbytes; - unsigned int ivsize = crypto_ablkcipher_ivsize(cipher); + unsigned int cryptlen = areq->cryptlen; + unsigned int ivsize = crypto_skcipher_ivsize(cipher); int sg_count, ret; bool sync_needed = false; struct talitos_private *priv = dev_get_drvdata(dev); @@ -1638,65 +1638,65 @@ static int common_nonsnoop(struct talitos_edesc *edesc, return ret; } -static struct talitos_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request * +static struct talitos_edesc *skcipher_edesc_alloc(struct skcipher_request * areq, bool encrypt) { - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); - struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher); - unsigned int ivsize = crypto_ablkcipher_ivsize(cipher); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(areq); + struct talitos_ctx *ctx = crypto_skcipher_ctx(cipher); + unsigned int ivsize = crypto_skcipher_ivsize(cipher); return talitos_edesc_alloc(ctx->dev, areq->src, areq->dst, - areq->info, 0, areq->nbytes, 0, ivsize, 0, + areq->iv, 0, areq->cryptlen, 0, ivsize, 0, areq->base.flags, encrypt); } -static int ablkcipher_encrypt(struct ablkcipher_request *areq) +static int skcipher_encrypt(struct skcipher_request *areq) { - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); - struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(areq); + struct talitos_ctx *ctx = crypto_skcipher_ctx(cipher); struct talitos_edesc *edesc; unsigned int blocksize = - crypto_tfm_alg_blocksize(crypto_ablkcipher_tfm(cipher)); + crypto_tfm_alg_blocksize(crypto_skcipher_tfm(cipher)); - if (!areq->nbytes) + if (!areq->cryptlen) return 0; - if (areq->nbytes % blocksize) + if (areq->cryptlen % blocksize) return -EINVAL; /* allocate extended descriptor */ - edesc = ablkcipher_edesc_alloc(areq, true); + edesc = skcipher_edesc_alloc(areq, true); if (IS_ERR(edesc)) return PTR_ERR(edesc); /* set encrypt */ edesc->desc.hdr = ctx->desc_hdr_template | DESC_HDR_MODE0_ENCRYPT; - return common_nonsnoop(edesc, areq, ablkcipher_done); + return common_nonsnoop(edesc, areq, skcipher_done); } -static int ablkcipher_decrypt(struct ablkcipher_request *areq) +static int skcipher_decrypt(struct skcipher_request *areq) { - struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); - struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher); + struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(areq); + struct talitos_ctx *ctx = crypto_skcipher_ctx(cipher); struct talitos_edesc *edesc; unsigned int 
blocksize = - crypto_tfm_alg_blocksize(crypto_ablkcipher_tfm(cipher)); + crypto_tfm_alg_blocksize(crypto_skcipher_tfm(cipher)); - if (!areq->nbytes) + if (!areq->cryptlen) return 0; - if (areq->nbytes % blocksize) + if (areq->cryptlen % blocksize) return -EINVAL; /* allocate extended descriptor */ - edesc = ablkcipher_edesc_alloc(areq, false); + edesc = skcipher_edesc_alloc(areq, false); if (IS_ERR(edesc)) return PTR_ERR(edesc); edesc->desc.hdr = ctx->desc_hdr_template | DESC_HDR_DIR_INBOUND; - return common_nonsnoop(edesc, areq, ablkcipher_done); + return common_nonsnoop(edesc, areq, skcipher_done); } static void common_nonsnoop_hash_unmap(struct device *dev, @@ -2257,7 +2257,7 @@ struct talitos_alg_template { u32 type; u32 priority; union { - struct crypto_alg crypto; + struct skcipher_alg skcipher; struct ahash_alg hash; struct aead_alg aead; } alg; @@ -2702,123 +2702,102 @@ static struct talitos_alg_template driver_algs[] = { DESC_HDR_MODE1_MDEU_PAD | DESC_HDR_MODE1_MDEU_MD5_HMAC, }, - /* ABLKCIPHER algorithms. */ - { .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ecb(aes)", - .cra_driver_name = "ecb-aes-talitos", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .setkey = ablkcipher_aes_setkey, - } + /* SKCIPHER algorithms. */ + { .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-talitos", + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = skcipher_aes_setkey, }, .desc_hdr_template = DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU | DESC_HDR_SEL0_AESU, }, - { .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "cbc(aes)", - .cra_driver_name = "cbc-aes-talitos", - .cra_blocksize = AES_BLOCK_SIZE, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = ablkcipher_aes_setkey, - } + { .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-talitos", + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = skcipher_aes_setkey, }, .desc_hdr_template = DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU | DESC_HDR_SEL0_AESU | DESC_HDR_MODE0_AESU_CBC, }, - { .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ctr(aes)", - .cra_driver_name = "ctr-aes-talitos", - .cra_blocksize = 1, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_ablkcipher = { - .min_keysize = AES_MIN_KEY_SIZE, - .max_keysize = AES_MAX_KEY_SIZE, - .ivsize = AES_BLOCK_SIZE, - .setkey = ablkcipher_aes_setkey, - } + { .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "ctr-aes-talitos", + .base.cra_blocksize = 1, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = skcipher_aes_setkey, }, .desc_hdr_template = DESC_HDR_TYPE_AESU_CTR_NONSNOOP | DESC_HDR_SEL0_AESU | DESC_HDR_MODE0_AESU_CTR, }, - { .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ecb(des)", - 
.cra_driver_name = "ecb-des-talitos", - .cra_blocksize = DES_BLOCK_SIZE, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .setkey = ablkcipher_des_setkey, - } + { .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(des)", + .base.cra_driver_name = "ecb-des-talitos", + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .setkey = skcipher_des_setkey, }, .desc_hdr_template = DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU | DESC_HDR_SEL0_DEU, }, - { .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "cbc(des)", - .cra_driver_name = "cbc-des-talitos", - .cra_blocksize = DES_BLOCK_SIZE, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_ablkcipher = { - .min_keysize = DES_KEY_SIZE, - .max_keysize = DES_KEY_SIZE, - .ivsize = DES_BLOCK_SIZE, - .setkey = ablkcipher_des_setkey, - } + { .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(des)", + .base.cra_driver_name = "cbc-des-talitos", + .base.cra_blocksize = DES_BLOCK_SIZE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + .setkey = skcipher_des_setkey, }, .desc_hdr_template = DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU | DESC_HDR_SEL0_DEU | DESC_HDR_MODE0_DEU_CBC, }, - { .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "ecb(des3_ede)", - .cra_driver_name = "ecb-3des-talitos", - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .setkey = ablkcipher_des3_setkey, - } + { .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "ecb-3des-talitos", + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .setkey = skcipher_des3_setkey, }, .desc_hdr_template = DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU | DESC_HDR_SEL0_DEU | DESC_HDR_MODE0_DEU_3DES, }, - { .type = CRYPTO_ALG_TYPE_ABLKCIPHER, - .alg.crypto = { - .cra_name = "cbc(des3_ede)", - .cra_driver_name = "cbc-3des-talitos", - .cra_blocksize = DES3_EDE_BLOCK_SIZE, - .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | - CRYPTO_ALG_ASYNC, - .cra_ablkcipher = { - .min_keysize = DES3_EDE_KEY_SIZE, - .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES3_EDE_BLOCK_SIZE, - .setkey = ablkcipher_des3_setkey, - } + { .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(des3_ede)", + .base.cra_driver_name = "cbc-3des-talitos", + .base.cra_blocksize = DES3_EDE_BLOCK_SIZE, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + .setkey = skcipher_des3_setkey, }, .desc_hdr_template = DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU | DESC_HDR_SEL0_DEU | @@ -3036,40 +3015,39 @@ static int talitos_init_common(struct talitos_ctx *ctx, return 0; } -static int talitos_cra_init(struct crypto_tfm *tfm) +static int talitos_cra_init_aead(struct crypto_aead *tfm) { - struct crypto_alg *alg = tfm->__crt_alg; + struct aead_alg *alg = crypto_aead_alg(tfm); struct talitos_crypto_alg *talitos_alg; - struct talitos_ctx *ctx = crypto_tfm_ctx(tfm); + struct talitos_ctx *ctx = 
crypto_aead_ctx(tfm); - if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_AHASH) - talitos_alg = container_of(__crypto_ahash_alg(alg), - struct talitos_crypto_alg, - algt.alg.hash); - else - talitos_alg = container_of(alg, struct talitos_crypto_alg, - algt.alg.crypto); + talitos_alg = container_of(alg, struct talitos_crypto_alg, + algt.alg.aead); return talitos_init_common(ctx, talitos_alg); } -static int talitos_cra_init_aead(struct crypto_aead *tfm) +static int talitos_cra_init_skcipher(struct crypto_skcipher *tfm) { - struct aead_alg *alg = crypto_aead_alg(tfm); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct talitos_crypto_alg *talitos_alg; - struct talitos_ctx *ctx = crypto_aead_ctx(tfm); + struct talitos_ctx *ctx = crypto_skcipher_ctx(tfm); talitos_alg = container_of(alg, struct talitos_crypto_alg, - algt.alg.aead); + algt.alg.skcipher); return talitos_init_common(ctx, talitos_alg); } static int talitos_cra_init_ahash(struct crypto_tfm *tfm) { + struct crypto_alg *alg = tfm->__crt_alg; + struct talitos_crypto_alg *talitos_alg; struct talitos_ctx *ctx = crypto_tfm_ctx(tfm); - talitos_cra_init(tfm); + talitos_alg = container_of(__crypto_ahash_alg(alg), + struct talitos_crypto_alg, + algt.alg.hash); ctx->keylen = 0; crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), @@ -3116,7 +3094,8 @@ static int talitos_remove(struct platform_device *ofdev) list_for_each_entry_safe(t_alg, n, &priv->alg_list, entry) { switch (t_alg->algt.type) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: + case CRYPTO_ALG_TYPE_SKCIPHER: + crypto_unregister_skcipher(&t_alg->algt.alg.skcipher); break; case CRYPTO_ALG_TYPE_AEAD: crypto_unregister_aead(&t_alg->algt.alg.aead); @@ -3160,15 +3139,14 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev, t_alg->algt = *template; switch (t_alg->algt.type) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: - alg = &t_alg->algt.alg.crypto; - alg->cra_init = talitos_cra_init; + case CRYPTO_ALG_TYPE_SKCIPHER: + alg = &t_alg->algt.alg.skcipher.base; alg->cra_exit = talitos_cra_exit; - alg->cra_type = &crypto_ablkcipher_type; - alg->cra_ablkcipher.setkey = alg->cra_ablkcipher.setkey ?: - ablkcipher_setkey; - alg->cra_ablkcipher.encrypt = ablkcipher_encrypt; - alg->cra_ablkcipher.decrypt = ablkcipher_decrypt; + t_alg->algt.alg.skcipher.init = talitos_cra_init_skcipher; + t_alg->algt.alg.skcipher.setkey = + t_alg->algt.alg.skcipher.setkey ?: skcipher_setkey; + t_alg->algt.alg.skcipher.encrypt = skcipher_encrypt; + t_alg->algt.alg.skcipher.decrypt = skcipher_decrypt; break; case CRYPTO_ALG_TYPE_AEAD: alg = &t_alg->algt.alg.aead.base; @@ -3465,10 +3443,10 @@ static int talitos_probe(struct platform_device *ofdev) } switch (t_alg->algt.type) { - case CRYPTO_ALG_TYPE_ABLKCIPHER: - err = crypto_register_alg( - &t_alg->algt.alg.crypto); - alg = &t_alg->algt.alg.crypto; + case CRYPTO_ALG_TYPE_SKCIPHER: + err = crypto_register_skcipher( + &t_alg->algt.alg.skcipher); + alg = &t_alg->algt.alg.skcipher.base; break; case CRYPTO_ALG_TYPE_AEAD:
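The talitos diff above is the same mechanical transformation applied throughout this series: the nested cra_u.ablkcipher fields become top-level members of struct skcipher_alg, the common properties move under .base, and registration switches from crypto_register_alg() to crypto_register_skcipher(). A condensed sketch of the target shape follows; the my_* names are hypothetical placeholders, not taken from any patch in this series.

#include <crypto/internal/skcipher.h>
#include <crypto/aes.h>
#include <linux/module.h>

/* All my_* names are hypothetical stand-ins for a driver's callbacks. */
struct my_ctx {
	u8 key[AES_MAX_KEY_SIZE];
	unsigned int keylen;
};

static int my_setkey(struct crypto_skcipher *tfm, const u8 *key,
		     unsigned int keylen)
{
	struct my_ctx *ctx = crypto_skcipher_ctx(tfm);

	if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
	    keylen != AES_KEYSIZE_256)
		return -EINVAL;
	memcpy(ctx->key, key, keylen);
	ctx->keylen = keylen;
	return 0;
}

static int my_encrypt(struct skcipher_request *req)
{
	/* A real driver would queue the request to its hardware here. */
	return -EINPROGRESS;
}

static int my_decrypt(struct skcipher_request *req)
{
	return -EINPROGRESS;
}

static struct skcipher_alg my_cbc_aes_alg = {
	.base.cra_name		= "cbc(aes)",
	.base.cra_driver_name	= "cbc-aes-mydrv",
	.base.cra_priority	= 300,
	.base.cra_flags		= CRYPTO_ALG_ASYNC,
	.base.cra_blocksize	= AES_BLOCK_SIZE,
	.base.cra_ctxsize	= sizeof(struct my_ctx),
	.base.cra_module	= THIS_MODULE,

	.min_keysize		= AES_MIN_KEY_SIZE,
	.max_keysize		= AES_MAX_KEY_SIZE,
	.ivsize			= AES_BLOCK_SIZE,
	.setkey			= my_setkey,
	.encrypt		= my_encrypt,
	.decrypt		= my_decrypt,
};

static int __init my_mod_init(void)
{
	/* was: crypto_register_alg(&...alg.crypto) */
	return crypto_register_skcipher(&my_cbc_aes_alg);
}

static void __exit my_mod_exit(void)
{
	crypto_unregister_skcipher(&my_cbc_aes_alg);
}

module_init(my_mod_init);
module_exit(my_mod_exit);
MODULE_LICENSE("GPL");

Note that CRYPTO_ALG_TYPE_ABLKCIPHER no longer needs to be set in cra_flags: the algorithm type is implied by registering through the skcipher helpers, which is also why the qat patch below can drop its flag-fixup loop.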
From patchwork Thu Oct 24 13:23:43 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209723
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 25/27] crypto: qat - switch to skcipher API
Date: Thu, 24 Oct 2019 15:23:43 +0200
Message-Id: <20191024132345.5236-26-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: Giovanni Cabiddu , Herbert Xu , Eric Biggers , Ard Biesheuvel , "David S. Miller" , linux-arm-kernel@lists.infradead.org

Commit 7a7ffe65c8c5 ("crypto: skcipher - Add top-level skcipher interface") dated 20 August 2015 introduced the new skcipher API which is supposed to replace both blkcipher and ablkcipher. While all consumers of the API have been converted long ago, some producers of the ablkcipher remain, forcing us to keep the ablkcipher support routines alive, along with the matching code to expose [a]blkciphers via the skcipher API. So switch this driver to the skcipher API, allowing us to finally drop the blkcipher code in the near future.
Cc: Giovanni Cabiddu Signed-off-by: Ard Biesheuvel --- drivers/crypto/qat/qat_common/qat_algs.c | 255 +++++++++----------- drivers/crypto/qat/qat_common/qat_crypto.h | 4 +- 2 files changed, 121 insertions(+), 138 deletions(-) diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c index b50eb55f8f57..e460a40bf67c 100644 --- a/drivers/crypto/qat/qat_common/qat_algs.c +++ b/drivers/crypto/qat/qat_common/qat_algs.c @@ -48,6 +48,7 @@ #include #include #include +#include #include #include #include @@ -122,7 +123,7 @@ struct qat_alg_aead_ctx { char opad[SHA512_BLOCK_SIZE]; }; -struct qat_alg_ablkcipher_ctx { +struct qat_alg_skcipher_ctx { struct icp_qat_hw_cipher_algo_blk *enc_cd; struct icp_qat_hw_cipher_algo_blk *dec_cd; dma_addr_t enc_cd_paddr; @@ -130,7 +131,7 @@ struct qat_alg_ablkcipher_ctx { struct icp_qat_fw_la_bulk_req enc_fw_req; struct icp_qat_fw_la_bulk_req dec_fw_req; struct qat_crypto_instance *inst; - struct crypto_tfm *tfm; + struct crypto_skcipher *tfm; }; static int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg) @@ -463,7 +464,7 @@ static int qat_alg_aead_init_dec_session(struct crypto_aead *aead_tfm, return 0; } -static void qat_alg_ablkcipher_init_com(struct qat_alg_ablkcipher_ctx *ctx, +static void qat_alg_skcipher_init_com(struct qat_alg_skcipher_ctx *ctx, struct icp_qat_fw_la_bulk_req *req, struct icp_qat_hw_cipher_algo_blk *cd, const uint8_t *key, unsigned int keylen) @@ -485,7 +486,7 @@ static void qat_alg_ablkcipher_init_com(struct qat_alg_ablkcipher_ctx *ctx, ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR); } -static void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_ctx *ctx, +static void qat_alg_skcipher_init_enc(struct qat_alg_skcipher_ctx *ctx, int alg, const uint8_t *key, unsigned int keylen, int mode) { @@ -493,12 +494,12 @@ static void qat_alg_ablkcipher_init_enc(struct qat_alg_ablkcipher_ctx *ctx, struct icp_qat_fw_la_bulk_req *req = &ctx->enc_fw_req; struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars; - qat_alg_ablkcipher_init_com(ctx, req, enc_cd, key, keylen); + qat_alg_skcipher_init_com(ctx, req, enc_cd, key, keylen); cd_pars->u.s.content_desc_addr = ctx->enc_cd_paddr; enc_cd->aes.cipher_config.val = QAT_AES_HW_CONFIG_ENC(alg, mode); } -static void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_ctx *ctx, +static void qat_alg_skcipher_init_dec(struct qat_alg_skcipher_ctx *ctx, int alg, const uint8_t *key, unsigned int keylen, int mode) { @@ -506,7 +507,7 @@ static void qat_alg_ablkcipher_init_dec(struct qat_alg_ablkcipher_ctx *ctx, struct icp_qat_fw_la_bulk_req *req = &ctx->dec_fw_req; struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req->cd_pars; - qat_alg_ablkcipher_init_com(ctx, req, dec_cd, key, keylen); + qat_alg_skcipher_init_com(ctx, req, dec_cd, key, keylen); cd_pars->u.s.content_desc_addr = ctx->dec_cd_paddr; if (mode != ICP_QAT_HW_CIPHER_CTR_MODE) @@ -577,7 +578,7 @@ static int qat_alg_aead_init_sessions(struct crypto_aead *tfm, const u8 *key, return -EFAULT; } -static int qat_alg_ablkcipher_init_sessions(struct qat_alg_ablkcipher_ctx *ctx, +static int qat_alg_skcipher_init_sessions(struct qat_alg_skcipher_ctx *ctx, const uint8_t *key, unsigned int keylen, int mode) @@ -587,11 +588,11 @@ static int qat_alg_ablkcipher_init_sessions(struct qat_alg_ablkcipher_ctx *ctx, if (qat_alg_validate_key(keylen, &alg, mode)) goto bad_key; - qat_alg_ablkcipher_init_enc(ctx, alg, key, keylen, mode); - qat_alg_ablkcipher_init_dec(ctx, alg, key, keylen, mode); + 
qat_alg_skcipher_init_enc(ctx, alg, key, keylen, mode); + qat_alg_skcipher_init_dec(ctx, alg, key, keylen, mode); return 0; bad_key: - crypto_tfm_set_flags(ctx->tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + crypto_skcipher_set_flags(ctx->tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); return -EINVAL; } @@ -832,12 +833,12 @@ static void qat_aead_alg_callback(struct icp_qat_fw_la_resp *qat_resp, areq->base.complete(&areq->base, res); } -static void qat_ablkcipher_alg_callback(struct icp_qat_fw_la_resp *qat_resp, +static void qat_skcipher_alg_callback(struct icp_qat_fw_la_resp *qat_resp, struct qat_crypto_request *qat_req) { - struct qat_alg_ablkcipher_ctx *ctx = qat_req->ablkcipher_ctx; + struct qat_alg_skcipher_ctx *ctx = qat_req->skcipher_ctx; struct qat_crypto_instance *inst = ctx->inst; - struct ablkcipher_request *areq = qat_req->ablkcipher_req; + struct skcipher_request *areq = qat_req->skcipher_req; uint8_t stat_filed = qat_resp->comn_resp.comn_status; struct device *dev = &GET_DEV(ctx->inst->accel_dev); int res = 0, qat_res = ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(stat_filed); @@ -846,7 +847,7 @@ static void qat_ablkcipher_alg_callback(struct icp_qat_fw_la_resp *qat_resp, if (unlikely(qat_res != ICP_QAT_FW_COMN_STATUS_FLAG_OK)) res = -EINVAL; - memcpy(areq->info, qat_req->iv, AES_BLOCK_SIZE); + memcpy(areq->iv, qat_req->iv, AES_BLOCK_SIZE); dma_free_coherent(dev, AES_BLOCK_SIZE, qat_req->iv, qat_req->iv_paddr); @@ -949,7 +950,7 @@ static int qat_alg_aead_enc(struct aead_request *areq) return -EINPROGRESS; } -static int qat_alg_ablkcipher_rekey(struct qat_alg_ablkcipher_ctx *ctx, +static int qat_alg_skcipher_rekey(struct qat_alg_skcipher_ctx *ctx, const u8 *key, unsigned int keylen, int mode) { @@ -958,10 +959,10 @@ static int qat_alg_ablkcipher_rekey(struct qat_alg_ablkcipher_ctx *ctx, memset(&ctx->enc_fw_req, 0, sizeof(ctx->enc_fw_req)); memset(&ctx->dec_fw_req, 0, sizeof(ctx->dec_fw_req)); - return qat_alg_ablkcipher_init_sessions(ctx, key, keylen, mode); + return qat_alg_skcipher_init_sessions(ctx, key, keylen, mode); } -static int qat_alg_ablkcipher_newkey(struct qat_alg_ablkcipher_ctx *ctx, +static int qat_alg_skcipher_newkey(struct qat_alg_skcipher_ctx *ctx, const u8 *key, unsigned int keylen, int mode) { @@ -990,7 +991,7 @@ static int qat_alg_ablkcipher_newkey(struct qat_alg_ablkcipher_ctx *ctx, goto out_free_enc; } - ret = qat_alg_ablkcipher_init_sessions(ctx, key, keylen, mode); + ret = qat_alg_skcipher_init_sessions(ctx, key, keylen, mode); if (ret) goto out_free_all; @@ -1012,51 +1013,51 @@ static int qat_alg_ablkcipher_newkey(struct qat_alg_ablkcipher_ctx *ctx, return ret; } -static int qat_alg_ablkcipher_setkey(struct crypto_ablkcipher *tfm, +static int qat_alg_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen, int mode) { - struct qat_alg_ablkcipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); + struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); if (ctx->enc_cd) - return qat_alg_ablkcipher_rekey(ctx, key, keylen, mode); + return qat_alg_skcipher_rekey(ctx, key, keylen, mode); else - return qat_alg_ablkcipher_newkey(ctx, key, keylen, mode); + return qat_alg_skcipher_newkey(ctx, key, keylen, mode); } -static int qat_alg_ablkcipher_cbc_setkey(struct crypto_ablkcipher *tfm, +static int qat_alg_skcipher_cbc_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - return qat_alg_ablkcipher_setkey(tfm, key, keylen, + return qat_alg_skcipher_setkey(tfm, key, keylen, ICP_QAT_HW_CIPHER_CBC_MODE); } -static int qat_alg_ablkcipher_ctr_setkey(struct 
crypto_ablkcipher *tfm, +static int qat_alg_skcipher_ctr_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - return qat_alg_ablkcipher_setkey(tfm, key, keylen, + return qat_alg_skcipher_setkey(tfm, key, keylen, ICP_QAT_HW_CIPHER_CTR_MODE); } -static int qat_alg_ablkcipher_xts_setkey(struct crypto_ablkcipher *tfm, +static int qat_alg_skcipher_xts_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - return qat_alg_ablkcipher_setkey(tfm, key, keylen, + return qat_alg_skcipher_setkey(tfm, key, keylen, ICP_QAT_HW_CIPHER_XTS_MODE); } -static int qat_alg_ablkcipher_encrypt(struct ablkcipher_request *req) +static int qat_alg_skcipher_encrypt(struct skcipher_request *req) { - struct crypto_ablkcipher *atfm = crypto_ablkcipher_reqtfm(req); - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(atfm); - struct qat_alg_ablkcipher_ctx *ctx = crypto_tfm_ctx(tfm); - struct qat_crypto_request *qat_req = ablkcipher_request_ctx(req); + struct crypto_skcipher *atfm = crypto_skcipher_reqtfm(req); + struct crypto_tfm *tfm = crypto_skcipher_tfm(atfm); + struct qat_alg_skcipher_ctx *ctx = crypto_tfm_ctx(tfm); + struct qat_crypto_request *qat_req = skcipher_request_ctx(req); struct icp_qat_fw_la_cipher_req_params *cipher_param; struct icp_qat_fw_la_bulk_req *msg; struct device *dev = &GET_DEV(ctx->inst->accel_dev); int ret, ctr = 0; - if (req->nbytes == 0) + if (req->cryptlen == 0) return 0; qat_req->iv = dma_alloc_coherent(dev, AES_BLOCK_SIZE, @@ -1073,17 +1074,17 @@ static int qat_alg_ablkcipher_encrypt(struct ablkcipher_request *req) msg = &qat_req->req; *msg = ctx->enc_fw_req; - qat_req->ablkcipher_ctx = ctx; - qat_req->ablkcipher_req = req; - qat_req->cb = qat_ablkcipher_alg_callback; + qat_req->skcipher_ctx = ctx; + qat_req->skcipher_req = req; + qat_req->cb = qat_skcipher_alg_callback; qat_req->req.comn_mid.opaque_data = (uint64_t)(__force long)qat_req; qat_req->req.comn_mid.src_data_addr = qat_req->buf.blp; qat_req->req.comn_mid.dest_data_addr = qat_req->buf.bloutp; cipher_param = (void *)&qat_req->req.serv_specif_rqpars; - cipher_param->cipher_length = req->nbytes; + cipher_param->cipher_length = req->cryptlen; cipher_param->cipher_offset = 0; cipher_param->u.s.cipher_IV_ptr = qat_req->iv_paddr; - memcpy(qat_req->iv, req->info, AES_BLOCK_SIZE); + memcpy(qat_req->iv, req->iv, AES_BLOCK_SIZE); do { ret = adf_send_message(ctx->inst->sym_tx, (uint32_t *)msg); } while (ret == -EAGAIN && ctr++ < 10); @@ -1097,26 +1098,26 @@ static int qat_alg_ablkcipher_encrypt(struct ablkcipher_request *req) return -EINPROGRESS; } -static int qat_alg_ablkcipher_blk_encrypt(struct ablkcipher_request *req) +static int qat_alg_skcipher_blk_encrypt(struct skcipher_request *req) { - if (req->nbytes % AES_BLOCK_SIZE != 0) + if (req->cryptlen % AES_BLOCK_SIZE != 0) return -EINVAL; - return qat_alg_ablkcipher_encrypt(req); + return qat_alg_skcipher_encrypt(req); } -static int qat_alg_ablkcipher_decrypt(struct ablkcipher_request *req) +static int qat_alg_skcipher_decrypt(struct skcipher_request *req) { - struct crypto_ablkcipher *atfm = crypto_ablkcipher_reqtfm(req); - struct crypto_tfm *tfm = crypto_ablkcipher_tfm(atfm); - struct qat_alg_ablkcipher_ctx *ctx = crypto_tfm_ctx(tfm); - struct qat_crypto_request *qat_req = ablkcipher_request_ctx(req); + struct crypto_skcipher *atfm = crypto_skcipher_reqtfm(req); + struct crypto_tfm *tfm = crypto_skcipher_tfm(atfm); + struct qat_alg_skcipher_ctx *ctx = crypto_tfm_ctx(tfm); + struct qat_crypto_request *qat_req = skcipher_request_ctx(req); struct 
 	struct icp_qat_fw_la_bulk_req *msg;
 	struct device *dev = &GET_DEV(ctx->inst->accel_dev);
 	int ret, ctr = 0;

-	if (req->nbytes == 0)
+	if (req->cryptlen == 0)
 		return 0;

 	qat_req->iv = dma_alloc_coherent(dev, AES_BLOCK_SIZE,
@@ -1133,17 +1134,17 @@ static int qat_alg_ablkcipher_decrypt(struct ablkcipher_request *req)
 	msg = &qat_req->req;
 	*msg = ctx->dec_fw_req;
-	qat_req->ablkcipher_ctx = ctx;
-	qat_req->ablkcipher_req = req;
-	qat_req->cb = qat_ablkcipher_alg_callback;
+	qat_req->skcipher_ctx = ctx;
+	qat_req->skcipher_req = req;
+	qat_req->cb = qat_skcipher_alg_callback;
 	qat_req->req.comn_mid.opaque_data = (uint64_t)(__force long)qat_req;
 	qat_req->req.comn_mid.src_data_addr = qat_req->buf.blp;
 	qat_req->req.comn_mid.dest_data_addr = qat_req->buf.bloutp;
 	cipher_param = (void *)&qat_req->req.serv_specif_rqpars;
-	cipher_param->cipher_length = req->nbytes;
+	cipher_param->cipher_length = req->cryptlen;
 	cipher_param->cipher_offset = 0;
 	cipher_param->u.s.cipher_IV_ptr = qat_req->iv_paddr;
-	memcpy(qat_req->iv, req->info, AES_BLOCK_SIZE);
+	memcpy(qat_req->iv, req->iv, AES_BLOCK_SIZE);
 	do {
 		ret = adf_send_message(ctx->inst->sym_tx, (uint32_t *)msg);
 	} while (ret == -EAGAIN && ctr++ < 10);

@@ -1157,12 +1158,12 @@ static int qat_alg_ablkcipher_decrypt(struct ablkcipher_request *req)
 	return -EINPROGRESS;
 }

-static int qat_alg_ablkcipher_blk_decrypt(struct ablkcipher_request *req)
+static int qat_alg_skcipher_blk_decrypt(struct skcipher_request *req)
 {
-	if (req->nbytes % AES_BLOCK_SIZE != 0)
+	if (req->cryptlen % AES_BLOCK_SIZE != 0)
 		return -EINVAL;

-	return qat_alg_ablkcipher_decrypt(req);
+	return qat_alg_skcipher_decrypt(req);
 }

 static int qat_alg_aead_init(struct crypto_aead *tfm,
 			     enum icp_qat_hw_auth_algo hash,
@@ -1218,18 +1219,18 @@ static void qat_alg_aead_exit(struct crypto_aead *tfm)
 	qat_crypto_put_instance(inst);
 }

-static int qat_alg_ablkcipher_init(struct crypto_tfm *tfm)
+static int qat_alg_skcipher_init_tfm(struct crypto_skcipher *tfm)
 {
-	struct qat_alg_ablkcipher_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);

-	tfm->crt_ablkcipher.reqsize = sizeof(struct qat_crypto_request);
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct qat_crypto_request));
 	ctx->tfm = tfm;
 	return 0;
 }

-static void qat_alg_ablkcipher_exit(struct crypto_tfm *tfm)
+static void qat_alg_skcipher_exit_tfm(struct crypto_skcipher *tfm)
 {
-	struct qat_alg_ablkcipher_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 	struct qat_crypto_instance *inst = ctx->inst;
 	struct device *dev;

@@ -1308,92 +1309,74 @@ static struct aead_alg qat_aeads[] = { {
 	.maxauthsize = SHA512_DIGEST_SIZE,
 } };

-static struct crypto_alg qat_algs[] = { {
-	.cra_name = "cbc(aes)",
-	.cra_driver_name = "qat_aes_cbc",
-	.cra_priority = 4001,
-	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC,
-	.cra_blocksize = AES_BLOCK_SIZE,
-	.cra_ctxsize = sizeof(struct qat_alg_ablkcipher_ctx),
-	.cra_alignmask = 0,
-	.cra_type = &crypto_ablkcipher_type,
-	.cra_module = THIS_MODULE,
-	.cra_init = qat_alg_ablkcipher_init,
-	.cra_exit = qat_alg_ablkcipher_exit,
-	.cra_u = {
-		.ablkcipher = {
-			.setkey = qat_alg_ablkcipher_cbc_setkey,
-			.decrypt = qat_alg_ablkcipher_blk_decrypt,
-			.encrypt = qat_alg_ablkcipher_blk_encrypt,
-			.min_keysize = AES_MIN_KEY_SIZE,
-			.max_keysize = AES_MAX_KEY_SIZE,
-			.ivsize = AES_BLOCK_SIZE,
-		},
-	},
+static struct skcipher_alg qat_skciphers[] = { {
+	.base.cra_name = "cbc(aes)",
+	.base.cra_driver_name = "qat_aes_cbc",
+	.base.cra_priority = 4001,
+	.base.cra_flags = CRYPTO_ALG_ASYNC,
+	.base.cra_blocksize = AES_BLOCK_SIZE,
+	.base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx),
+	.base.cra_alignmask = 0,
+	.base.cra_module = THIS_MODULE,
+
+	.init = qat_alg_skcipher_init_tfm,
+	.exit = qat_alg_skcipher_exit_tfm,
+	.setkey = qat_alg_skcipher_cbc_setkey,
+	.decrypt = qat_alg_skcipher_blk_decrypt,
+	.encrypt = qat_alg_skcipher_blk_encrypt,
+	.min_keysize = AES_MIN_KEY_SIZE,
+	.max_keysize = AES_MAX_KEY_SIZE,
+	.ivsize = AES_BLOCK_SIZE,
 }, {
-	.cra_name = "ctr(aes)",
-	.cra_driver_name = "qat_aes_ctr",
-	.cra_priority = 4001,
-	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC,
-	.cra_blocksize = 1,
-	.cra_ctxsize = sizeof(struct qat_alg_ablkcipher_ctx),
-	.cra_alignmask = 0,
-	.cra_type = &crypto_ablkcipher_type,
-	.cra_module = THIS_MODULE,
-	.cra_init = qat_alg_ablkcipher_init,
-	.cra_exit = qat_alg_ablkcipher_exit,
-	.cra_u = {
-		.ablkcipher = {
-			.setkey = qat_alg_ablkcipher_ctr_setkey,
-			.decrypt = qat_alg_ablkcipher_decrypt,
-			.encrypt = qat_alg_ablkcipher_encrypt,
-			.min_keysize = AES_MIN_KEY_SIZE,
-			.max_keysize = AES_MAX_KEY_SIZE,
-			.ivsize = AES_BLOCK_SIZE,
-		},
-	},
+	.base.cra_name = "ctr(aes)",
+	.base.cra_driver_name = "qat_aes_ctr",
+	.base.cra_priority = 4001,
+	.base.cra_flags = CRYPTO_ALG_ASYNC,
+	.base.cra_blocksize = 1,
+	.base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx),
+	.base.cra_alignmask = 0,
+	.base.cra_module = THIS_MODULE,
+
+	.init = qat_alg_skcipher_init_tfm,
+	.exit = qat_alg_skcipher_exit_tfm,
+	.setkey = qat_alg_skcipher_ctr_setkey,
+	.decrypt = qat_alg_skcipher_decrypt,
+	.encrypt = qat_alg_skcipher_encrypt,
+	.min_keysize = AES_MIN_KEY_SIZE,
+	.max_keysize = AES_MAX_KEY_SIZE,
+	.ivsize = AES_BLOCK_SIZE,
 }, {
-	.cra_name = "xts(aes)",
-	.cra_driver_name = "qat_aes_xts",
-	.cra_priority = 4001,
-	.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC,
-	.cra_blocksize = AES_BLOCK_SIZE,
-	.cra_ctxsize = sizeof(struct qat_alg_ablkcipher_ctx),
-	.cra_alignmask = 0,
-	.cra_type = &crypto_ablkcipher_type,
-	.cra_module = THIS_MODULE,
-	.cra_init = qat_alg_ablkcipher_init,
-	.cra_exit = qat_alg_ablkcipher_exit,
-	.cra_u = {
-		.ablkcipher = {
-			.setkey = qat_alg_ablkcipher_xts_setkey,
-			.decrypt = qat_alg_ablkcipher_blk_decrypt,
-			.encrypt = qat_alg_ablkcipher_blk_encrypt,
-			.min_keysize = 2 * AES_MIN_KEY_SIZE,
-			.max_keysize = 2 * AES_MAX_KEY_SIZE,
-			.ivsize = AES_BLOCK_SIZE,
-		},
-	},
+	.base.cra_name = "xts(aes)",
+	.base.cra_driver_name = "qat_aes_xts",
+	.base.cra_priority = 4001,
+	.base.cra_flags = CRYPTO_ALG_ASYNC,
+	.base.cra_blocksize = AES_BLOCK_SIZE,
+	.base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx),
+	.base.cra_alignmask = 0,
+	.base.cra_module = THIS_MODULE,
+
+	.init = qat_alg_skcipher_init_tfm,
+	.exit = qat_alg_skcipher_exit_tfm,
+	.setkey = qat_alg_skcipher_xts_setkey,
+	.decrypt = qat_alg_skcipher_blk_decrypt,
+	.encrypt = qat_alg_skcipher_blk_encrypt,
+	.min_keysize = 2 * AES_MIN_KEY_SIZE,
+	.max_keysize = 2 * AES_MAX_KEY_SIZE,
+	.ivsize = AES_BLOCK_SIZE,
 } };

 int qat_algs_register(void)
 {
-	int ret = 0, i;
+	int ret = 0;

 	mutex_lock(&algs_lock);
 	if (++active_devs != 1)
 		goto unlock;

-	for (i = 0; i < ARRAY_SIZE(qat_algs); i++)
-		qat_algs[i].cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC;
-
-	ret = crypto_register_algs(qat_algs, ARRAY_SIZE(qat_algs));
+	ret = crypto_register_skciphers(qat_skciphers, ARRAY_SIZE(qat_skciphers));
 	if (ret)
 		goto unlock;

-	for (i = 0; i < ARRAY_SIZE(qat_aeads); i++)
-		qat_aeads[i].base.cra_flags = CRYPTO_ALG_ASYNC;
-
 	ret = crypto_register_aeads(qat_aeads, ARRAY_SIZE(qat_aeads));
 	if (ret)
 		goto unreg_algs;

@@ -1403,7 +1386,7 @@ int qat_algs_register(void)
 	return ret;

 unreg_algs:
-	crypto_unregister_algs(qat_algs, ARRAY_SIZE(qat_algs));
+	crypto_unregister_skciphers(qat_skciphers, ARRAY_SIZE(qat_skciphers));
 	goto unlock;
 }

@@ -1414,7 +1397,7 @@ void qat_algs_unregister(void)
 		goto unlock;

 	crypto_unregister_aeads(qat_aeads, ARRAY_SIZE(qat_aeads));
-	crypto_unregister_algs(qat_algs, ARRAY_SIZE(qat_algs));
+	crypto_unregister_skciphers(qat_skciphers, ARRAY_SIZE(qat_skciphers));

 unlock:
 	mutex_unlock(&algs_lock);
diff --git a/drivers/crypto/qat/qat_common/qat_crypto.h b/drivers/crypto/qat/qat_common/qat_crypto.h
index c77a80020cde..300bb919a33a 100644
--- a/drivers/crypto/qat/qat_common/qat_crypto.h
+++ b/drivers/crypto/qat/qat_common/qat_crypto.h
@@ -79,11 +79,11 @@ struct qat_crypto_request {
 	struct icp_qat_fw_la_bulk_req req;
 	union {
 		struct qat_alg_aead_ctx *aead_ctx;
-		struct qat_alg_ablkcipher_ctx *ablkcipher_ctx;
+		struct qat_alg_skcipher_ctx *skcipher_ctx;
 	};
 	union {
 		struct aead_request *aead_req;
-		struct ablkcipher_request *ablkcipher_req;
+		struct skcipher_request *skcipher_req;
 	};
 	struct qat_crypto_request_buffs buf;
 	void (*cb)(struct icp_qat_fw_la_resp *resp,
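For readers following the conversion: once the driver exposes skcipher_alg
entry points, in-kernel users reach them through the generic skcipher API.
The sketch below is illustrative only -- the function name, the 128-bit key
size, and the single linear buffer are assumptions, not part of the patch --
but every call in it is the standard interface the new entry points plug
into.

#include <crypto/aes.h>
#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

/*
 * Minimal sketch of a kernel-side caller of "cbc(aes)". Synchronous
 * wait via crypto_wait_req(); error handling trimmed for brevity.
 */
static int example_cbc_encrypt(u8 *buf, unsigned int len,
			       const u8 *key, u8 *iv)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_skcipher_setkey(tfm, key, AES_KEYSIZE_128);
	if (ret)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, buf, len);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	/* req->iv doubles as the output IV, which CTS-style chaining needs */
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);

	ret = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return ret;
}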
From patchwork Thu Oct 24 13:23:44 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209717
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 26/27] crypto: marvell/cesa - rename blkcipher to skcipher
Date: Thu, 24 Oct 2019 15:23:44 +0200
Message-Id: <20191024132345.5236-27-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: "David S. Miller" , Eric Biggers , Herbert Xu ,
 linux-arm-kernel@lists.infradead.org, Ard Biesheuvel

The driver-specific types still contain some rudimentary references to
the blkcipher API, which is deprecated and will be removed. To avoid
confusion, rename these to skcipher. This is a cosmetic change only, as
the code does not actually use the blkcipher API.

Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/marvell/cesa.h   |  6 +++---
 drivers/crypto/marvell/cipher.c | 14 +++++++-------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
index d63a6ee905c9..f1ed3b85c0d2 100644
--- a/drivers/crypto/marvell/cesa.h
+++ b/drivers/crypto/marvell/cesa.h
@@ -232,13 +232,13 @@ struct mv_cesa_sec_accel_desc {
 };

 /**
- * struct mv_cesa_blkcipher_op_ctx - cipher operation context
+ * struct mv_cesa_skcipher_op_ctx - cipher operation context
  * @key: cipher key
  * @iv: cipher IV
  *
  * Context associated to a cipher operation.
  */
-struct mv_cesa_blkcipher_op_ctx {
+struct mv_cesa_skcipher_op_ctx {
 	u32 key[8];
 	u32 iv[4];
 };
@@ -265,7 +265,7 @@ struct mv_cesa_hash_op_ctx {
 struct mv_cesa_op_ctx {
 	struct mv_cesa_sec_accel_desc desc;
 	union {
-		struct mv_cesa_blkcipher_op_ctx blkcipher;
+		struct mv_cesa_skcipher_op_ctx skcipher;
 		struct mv_cesa_hash_op_ctx hash;
 	} ctx;
 };
diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c
index 84ceddfee76b..d8e8c857770c 100644
--- a/drivers/crypto/marvell/cipher.c
+++ b/drivers/crypto/marvell/cipher.c
@@ -209,7 +209,7 @@ mv_cesa_skcipher_complete(struct crypto_async_request *req)
 		struct mv_cesa_req *basereq;

 		basereq = &creq->base;
-		memcpy(skreq->iv, basereq->chain.last->op->ctx.blkcipher.iv,
+		memcpy(skreq->iv, basereq->chain.last->op->ctx.skcipher.iv,
 		       ivsize);
 	} else {
 		memcpy_fromio(skreq->iv,
@@ -470,7 +470,7 @@ static int mv_cesa_des_op(struct skcipher_request *req,
 	mv_cesa_update_op_cfg(tmpl, CESA_SA_DESC_CFG_CRYPTM_DES,
 			      CESA_SA_DESC_CFG_CRYPTM_MSK);

-	memcpy(tmpl->ctx.blkcipher.key, ctx->key, DES_KEY_SIZE);
+	memcpy(tmpl->ctx.skcipher.key, ctx->key, DES_KEY_SIZE);

 	return mv_cesa_skcipher_queue_req(req, tmpl);
 }
@@ -523,7 +523,7 @@ static int mv_cesa_cbc_des_op(struct skcipher_request *req,
 	mv_cesa_update_op_cfg(tmpl, CESA_SA_DESC_CFG_CRYPTCM_CBC,
 			      CESA_SA_DESC_CFG_CRYPTCM_MSK);

-	memcpy(tmpl->ctx.blkcipher.iv, req->iv, DES_BLOCK_SIZE);
+	memcpy(tmpl->ctx.skcipher.iv, req->iv, DES_BLOCK_SIZE);

 	return mv_cesa_des_op(req, tmpl);
 }
@@ -575,7 +575,7 @@ static int mv_cesa_des3_op(struct skcipher_request *req,
 	mv_cesa_update_op_cfg(tmpl, CESA_SA_DESC_CFG_CRYPTM_3DES,
 			      CESA_SA_DESC_CFG_CRYPTM_MSK);

-	memcpy(tmpl->ctx.blkcipher.key, ctx->key, DES3_EDE_KEY_SIZE);
+	memcpy(tmpl->ctx.skcipher.key, ctx->key, DES3_EDE_KEY_SIZE);

 	return mv_cesa_skcipher_queue_req(req, tmpl);
 }
@@ -628,7 +628,7 @@ struct skcipher_alg mv_cesa_ecb_des3_ede_alg = {
 static int mv_cesa_cbc_des3_op(struct skcipher_request *req,
 			       struct mv_cesa_op_ctx *tmpl)
 {
-	memcpy(tmpl->ctx.blkcipher.iv, req->iv, DES3_EDE_BLOCK_SIZE);
+	memcpy(tmpl->ctx.skcipher.iv, req->iv, DES3_EDE_BLOCK_SIZE);

 	return mv_cesa_des3_op(req, tmpl);
 }
@@ -694,7 +694,7 @@ static int mv_cesa_aes_op(struct skcipher_request *req,
 		key = ctx->aes.key_enc;

 	for (i = 0; i < ctx->aes.key_length / sizeof(u32); i++)
-		tmpl->ctx.blkcipher.key[i] = cpu_to_le32(key[i]);
+		tmpl->ctx.skcipher.key[i] = cpu_to_le32(key[i]);

 	if (ctx->aes.key_length == 24)
 		cfg |= CESA_SA_DESC_CFG_AES_LEN_192;
@@ -755,7 +755,7 @@ static int mv_cesa_cbc_aes_op(struct skcipher_request *req,
 {
 	mv_cesa_update_op_cfg(tmpl, CESA_SA_DESC_CFG_CRYPTCM_CBC,
 			      CESA_SA_DESC_CFG_CRYPTCM_MSK);
-	memcpy(tmpl->ctx.blkcipher.iv, req->iv, AES_BLOCK_SIZE);
+	memcpy(tmpl->ctx.skcipher.iv, req->iv, AES_BLOCK_SIZE);

 	return mv_cesa_aes_op(req, tmpl);
 }
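The memcpy into skreq->iv in mv_cesa_skcipher_complete() above is the
output-IV convention this series enforces across drivers: on completion,
req->iv holds the chaining value for the next chunk. A minimal sketch of
why callers rely on this -- all names hypothetical, error paths trimmed,
and len assumed to be a multiple of the block size -- is splitting one
CBC stream across two requests on the same tfm:

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

/*
 * Encrypt two consecutive buffers as one CBC stream. Because the
 * driver writes the output IV back into req->iv, the second request
 * transparently continues the chain started by the first.
 */
static int example_chained_cbc(struct crypto_skcipher *tfm,
			       u8 *part1, u8 *part2, unsigned int len,
			       u8 *iv)
{
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, &wait);

	sg_init_one(&sg, part1, len);
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);
	ret = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	if (ret)
		goto out;

	/* iv now holds the output IV of the first call */
	sg_init_one(&sg, part2, len);
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);
	ret = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
	skcipher_request_free(req);
	return ret;
}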
From patchwork Thu Oct 24 13:23:45 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 11209719
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Subject: [PATCH v2 27/27] crypto: nx - remove stale comment referring to the blkcipher walk API
Date: Thu, 24 Oct 2019 15:23:45 +0200
Message-Id: <20191024132345.5236-28-ard.biesheuvel@linaro.org>
In-Reply-To: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
References: <20191024132345.5236-1-ard.biesheuvel@linaro.org>
Cc: "David S. Miller" , Eric Biggers , Herbert Xu ,
 linux-arm-kernel@lists.infradead.org, Ard Biesheuvel

These drivers use neither the deprecated blkcipher walk API nor the
current skcipher one, so the comment must describe an earlier state of
the driver that no longer exists. Drop the comments.

Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/nx/nx-aes-ccm.c | 5 -----
 drivers/crypto/nx/nx-aes-gcm.c | 5 -----
 2 files changed, 10 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-ccm.c b/drivers/crypto/nx/nx-aes-ccm.c
index 84fed736ed2e..4c9362eebefd 100644
--- a/drivers/crypto/nx/nx-aes-ccm.c
+++ b/drivers/crypto/nx/nx-aes-ccm.c
@@ -525,11 +525,6 @@ static int ccm_aes_nx_decrypt(struct aead_request *req)
 	return ccm_nx_decrypt(req, req->iv, req->assoclen);
 }

-/* tell the block cipher walk routines that this is a stream cipher by
- * setting cra_blocksize to 1. Even using blkcipher_walk_virt_block
- * during encrypt/decrypt doesn't solve this problem, because it calls
- * blkcipher_walk_done under the covers, which doesn't use walk->blocksize,
- * but instead uses this tfm->blocksize. */
 struct aead_alg nx_ccm_aes_alg = {
 	.base = {
 		.cra_name = "ccm(aes)",
diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c
index 898220e159d3..19c6ed5baea4 100644
--- a/drivers/crypto/nx/nx-aes-gcm.c
+++ b/drivers/crypto/nx/nx-aes-gcm.c
@@ -467,11 +467,6 @@ static int gcm4106_aes_nx_decrypt(struct aead_request *req)
 	return gcm_aes_nx_crypt(req, 0, req->assoclen - 8);
 }

-/* tell the block cipher walk routines that this is a stream cipher by
- * setting cra_blocksize to 1. Even using blkcipher_walk_virt_block
- * during encrypt/decrypt doesn't solve this problem, because it calls
- * blkcipher_walk_done under the covers, which doesn't use walk->blocksize,
- * but instead uses this tfm->blocksize. */
 struct aead_alg nx_gcm_aes_alg = {
 	.base = {
 		.cra_name = "gcm(aes)",
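For reference, the walk API that the stale comments alluded to looks
roughly like the sketch below in its current skcipher form. This is a
generic illustration of the pattern, not code from the nx driver, and
the per-block cipher work is reduced to a memcpy stand-in:

#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>
#include <linux/string.h>

/*
 * Canonical shape of an skcipher walk loop: map the next span of the
 * scatterlist, process the whole blocks in it, and report the leftover
 * byte count back to skcipher_walk_done().
 */
static int example_walk(struct skcipher_request *req)
{
	struct skcipher_walk walk;
	unsigned int nbytes;
	int err;

	err = skcipher_walk_virt(&walk, req, false);

	while ((nbytes = walk.nbytes) != 0) {
		unsigned int n = nbytes - (nbytes % AES_BLOCK_SIZE);

		/* real drivers run the cipher here instead of copying */
		memcpy(walk.dst.virt.addr, walk.src.virt.addr, n);

		err = skcipher_walk_done(&walk, nbytes - n);
	}
	return err;
}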