From patchwork Tue Sep 27 08:00:44 2022
X-Patchwork-Id: 12990098
From: Corentin Labbe
To: heiko@sntech.de, davem@davemloft.net, herbert@gondor.apana.org.au,
	krzysztof.kozlowski+dt@linaro.org, robh+dt@kernel.org
Cc: devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rockchip@lists.infradead.org, Corentin Labbe
Subject: [PATCH RFT 1/5] crypto: rockchip: move kconfig to its dedicated directory
Date: Tue, 27 Sep 2022 08:00:44 +0000
Message-Id: <20220927080048.3151911-2-clabbe@baylibre.com>
In-Reply-To: <20220927080048.3151911-1-clabbe@baylibre.com>
References: <20220927080048.3151911-1-clabbe@baylibre.com>

Move all Rockchip Kconfig options into their own subdirectory.

Signed-off-by: Corentin Labbe
---
 drivers/crypto/Kconfig          | 32 ++------------------------------
 drivers/crypto/Makefile         |  2 +-
 drivers/crypto/rockchip/Kconfig | 28 ++++++++++++++++++++++++++++
 3 files changed, 31 insertions(+), 31 deletions(-)
 create mode 100644 drivers/crypto/rockchip/Kconfig

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 2947888d3b82..96c5b8060db0 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -646,6 +646,8 @@ config CRYPTO_DEV_QCOM_RNG
	  To compile this driver as a module, choose M here. The
	  module will be called qcom-rng. If unsure, say N.
 
+source "drivers/crypto/rockchip/Kconfig"
+
 config CRYPTO_DEV_VMX
	bool "Support for VMX cryptographic acceleration instructions"
	depends on PPC64 && VSX
@@ -666,36 +668,6 @@ config CRYPTO_DEV_IMGTEC_HASH
	  hardware hash accelerator. Supporting MD5/SHA1/SHA224/SHA256
	  hashing algorithms.
 
-config CRYPTO_DEV_ROCKCHIP
-	tristate "Rockchip's Cryptographic Engine driver"
-	depends on OF && ARCH_ROCKCHIP
-	depends on PM
-	select CRYPTO_ECB
-	select CRYPTO_CBC
-	select CRYPTO_DES
-	select CRYPTO_AES
-	select CRYPTO_ENGINE
-	select CRYPTO_LIB_DES
-	select CRYPTO_MD5
-	select CRYPTO_SHA1
-	select CRYPTO_SHA256
-	select CRYPTO_HASH
-	select CRYPTO_SKCIPHER
-
-	help
-	  This driver interfaces with the hardware crypto accelerator.
-	  Supporting cbc/ecb chainmode, and aes/des/des3_ede cipher mode.
-
-config CRYPTO_DEV_ROCKCHIP_DEBUG
-	bool "Enable Rockchip crypto stats"
-	depends on CRYPTO_DEV_ROCKCHIP
-	depends on DEBUG_FS
-	help
-	  Say y to enable Rockchip crypto debug stats.
-	  This will create /sys/kernel/debug/rk3288_crypto/stats for displaying
-	  the number of requests per algorithm and other internal stats.
-
-
 config CRYPTO_DEV_ZYNQMP_AES
	tristate "Support for Xilinx ZynqMP AES hw accelerator"
	depends on ZYNQMP_FIRMWARE || COMPILE_TEST
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 116de173a66c..7a9c627a09c6 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -36,7 +36,7 @@ obj-$(CONFIG_CRYPTO_DEV_PPC4XX) += amcc/
 obj-$(CONFIG_CRYPTO_DEV_QAT) += qat/
 obj-$(CONFIG_CRYPTO_DEV_QCE) += qce/
 obj-$(CONFIG_CRYPTO_DEV_QCOM_RNG) += qcom-rng.o
-obj-$(CONFIG_CRYPTO_DEV_ROCKCHIP) += rockchip/
+obj-y += rockchip/
 obj-$(CONFIG_CRYPTO_DEV_S5P) += s5p-sss.o
 obj-$(CONFIG_CRYPTO_DEV_SA2UL) += sa2ul.o
 obj-$(CONFIG_CRYPTO_DEV_SAHARA) += sahara.o
diff --git a/drivers/crypto/rockchip/Kconfig b/drivers/crypto/rockchip/Kconfig
new file mode 100644
index 000000000000..1010d897d9ef
--- /dev/null
+++ b/drivers/crypto/rockchip/Kconfig
@@ -0,0 +1,28 @@
+config CRYPTO_DEV_ROCKCHIP
+	tristate "Rockchip's Cryptographic Engine driver"
+	depends on OF && ARCH_ROCKCHIP
+	depends on PM
+	select CRYPTO_ECB
+	select CRYPTO_CBC
+	select CRYPTO_DES
+	select CRYPTO_AES
+	select CRYPTO_ENGINE
+	select CRYPTO_LIB_DES
+	select CRYPTO_MD5
+	select CRYPTO_SHA1
+	select CRYPTO_SHA256
+	select CRYPTO_HASH
+	select CRYPTO_SKCIPHER
+
+	help
+	  This driver interfaces with the hardware crypto accelerator.
+	  Supporting cbc/ecb chainmode, and aes/des/des3_ede cipher mode.
+
+config CRYPTO_DEV_ROCKCHIP_DEBUG
+	bool "Enable Rockchip crypto stats"
+	depends on CRYPTO_DEV_ROCKCHIP
+	depends on DEBUG_FS
+	help
+	  Say y to enable Rockchip crypto debug stats.
+	  This will create /sys/kernel/debug/rk3288_crypto/stats for displaying
+	  the number of requests per algorithm and other internal stats.

From patchwork Tue Sep 27 08:00:45 2022
X-Patchwork-Id: 12990097
From: Corentin Labbe
To: heiko@sntech.de, davem@davemloft.net, herbert@gondor.apana.org.au,
	krzysztof.kozlowski+dt@linaro.org, robh+dt@kernel.org
Cc: devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rockchip@lists.infradead.org, Corentin Labbe
Subject: [PATCH RFT 2/5] dt-bindings: crypto: add support for rockchip,crypto-rk3588
Date: Tue, 27 Sep 2022 08:00:45 +0000
Message-Id: <20220927080048.3151911-3-clabbe@baylibre.com>
In-Reply-To: <20220927080048.3151911-1-clabbe@baylibre.com>
References: <20220927080048.3151911-1-clabbe@baylibre.com>

Add device tree binding documentation for the Rockchip cryptographic
offloader V2.

Signed-off-by: Corentin Labbe
Reviewed-by: Rob Herring
---
 .../crypto/rockchip,rk3588-crypto.yaml        | 71 +++++++++++++++++++
 1 file changed, 71 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml

diff --git a/Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml b/Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml
new file mode 100644
index 000000000000..aa01c3c681f4
--- /dev/null
+++ b/Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml
@@ -0,0 +1,71 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/rockchip,rk3588-crypto.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Rockchip cryptographic offloader V2
+
+maintainers:
+  - Corentin Labbe
+
+properties:
+  compatible:
+    enum:
+      - rockchip,rk3568-crypto
+      - rockchip,rk3588-crypto
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    minItems: 4
+
+  clock-names:
+    items:
+      - const: aclk
+      - const: hclk
+      - const: sclk
+      - const: pka
+
+  resets:
+    minItems: 5
+
+  reset-names:
+    items:
+      - const: core
+      - const: a
+      - const: h
+      - const: rng
+      - const: pka
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+
+additionalProperties: false
+
+examples:
+  - |
+    #include
+    #include
+    crypto@fe380000 {
+      compatible = "rockchip,rk3588-crypto";
+      reg = <0xfe380000 0x4000>;
+      interrupts = ;
+      clocks = <&cru ACLK_CRYPTO_NS>, <&cru HCLK_CRYPTO_NS>,
+               <&cru CLK_CRYPTO_NS_CORE>, <&cru CLK_CRYPTO_NS_PKA>;
+      clock-names = "aclk", "hclk", "sclk", "pka";
+      resets = <&cru SRST_CRYPTO_NS_CORE>, <&cru SRST_A_CRYPTO_NS>,
+               <&cru SRST_H_CRYPTO_NS>, <&cru SRST_CRYPTO_NS_RNG>,
+               <&cru SRST_CRYPTO_NS_PKA>;
+      reset-names = "core", "a", "h", "rng", "pka";
+    };
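
The binding above only describes the resources of the block. As a reading aid,
the sketch below shows roughly how a probe routine can claim the clocks, resets,
register window and interrupt that this schema lists; it mirrors what patch 4/5
of this series does, but the function name and the exact error handling here are
illustrative assumptions, not code taken from the series.

/* Illustrative sketch only, not part of this series. */
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/reset.h>

static int rk3588_crypto_probe_sketch(struct platform_device *pdev)
{
	struct clk_bulk_data *clks;
	struct reset_control *rst;
	void __iomem *regs;
	int num_clks, irq, ret;

	/* "clocks"/"clock-names": aclk, hclk, sclk, pka */
	num_clks = devm_clk_bulk_get_all(&pdev->dev, &clks);
	if (num_clks < 4)
		return num_clks < 0 ? num_clks : -EINVAL;

	/* "resets"/"reset-names": core, a, h, rng, pka, taken as one array */
	rst = devm_reset_control_array_get_exclusive(&pdev->dev);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	/* "reg" and "interrupts" */
	regs = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(regs))
		return PTR_ERR(regs);
	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	ret = clk_bulk_prepare_enable(num_clks, clks);
	if (ret)
		return ret;
	return reset_control_deassert(rst);
}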
From patchwork Tue Sep 27 08:00:46 2022
X-Patchwork-Id: 12990096
From: Corentin Labbe
To: heiko@sntech.de, davem@davemloft.net, herbert@gondor.apana.org.au,
	krzysztof.kozlowski+dt@linaro.org, robh+dt@kernel.org
Cc: devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rockchip@lists.infradead.org, Corentin Labbe
Subject: [PATCH RFT 3/5] MAINTAINERS: add new dt-binding doc to the right entry
Date: Tue, 27 Sep 2022 08:00:46 +0000
Message-Id: <20220927080048.3151911-4-clabbe@baylibre.com>
In-Reply-To: <20220927080048.3151911-1-clabbe@baylibre.com>
References: <20220927080048.3151911-1-clabbe@baylibre.com>

The Rockchip crypto driver has a new file to be added.

Signed-off-by: Corentin Labbe
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index fffc36ee3c99..2d9c95ae44e7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17712,6 +17712,7 @@ M:	Corentin Labbe
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/crypto/rockchip,rk3288-crypto.yaml
+F:	Documentation/devicetree/bindings/crypto/rockchip,rk3588-crypto.yaml
 F:	drivers/crypto/rockchip/
 
 ROCKCHIP I2S TDM DRIVER
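
The next patch registers "ecb(aes)", "cbc(aes)" and several hashes through the
crypto engine, under driver names such as "cbc-aes-rk2". For context, the sketch
below shows how a kernel consumer typically exercises such an offloaded
algorithm through the generic skcipher API; the helper name, the 128-bit key
length and the single-buffer handling are illustrative assumptions, not part of
this series.

/* Illustrative sketch only, not part of this series. */
#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int cbc_aes_demo(const u8 key[16], u8 iv[16], u8 *buf, unsigned int len)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	/* The core picks the highest-priority "cbc(aes)" implementation. */
	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_skcipher_setkey(tfm, key, 16);
	if (ret)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, buf, len);	/* len must be a multiple of 16 */
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);

	/* Asynchronous completion is handled by crypto_wait_req(). */
	ret = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return ret;
}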
From patchwork Tue Sep 27 08:00:47 2022
X-Patchwork-Id: 12990112
From: Corentin Labbe
To: heiko@sntech.de, davem@davemloft.net, herbert@gondor.apana.org.au,
	krzysztof.kozlowski+dt@linaro.org, robh+dt@kernel.org
Cc: devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rockchip@lists.infradead.org, Corentin Labbe
Subject: [PATCH RFT 4/5] crypto: rockchip: support the new crypto IP for rk3568/rk3588
Date: Tue, 27 Sep 2022 08:00:47 +0000
Message-Id: <20220927080048.3151911-5-clabbe@baylibre.com>
In-Reply-To: <20220927080048.3151911-1-clabbe@baylibre.com>
References: <20220927080048.3151911-1-clabbe@baylibre.com>

Rockchip rk3568 and rk3588 have a common crypto offloader IP.
This driver adds support for it.

Signed-off-by: Corentin Labbe
---
 drivers/crypto/rockchip/Kconfig               |  28 +
 drivers/crypto/rockchip/Makefile              |   5 +
 drivers/crypto/rockchip/rk3588_crypto.c       | 646 ++++++++++++++++++
 drivers/crypto/rockchip/rk3588_crypto.h       | 221 ++++++
 drivers/crypto/rockchip/rk3588_crypto_ahash.c | 346 ++++++++++
 .../crypto/rockchip/rk3588_crypto_skcipher.c  | 340 +++++++++
 6 files changed, 1586 insertions(+)
 create mode 100644 drivers/crypto/rockchip/rk3588_crypto.c
 create mode 100644 drivers/crypto/rockchip/rk3588_crypto.h
 create mode 100644 drivers/crypto/rockchip/rk3588_crypto_ahash.c
 create mode 100644 drivers/crypto/rockchip/rk3588_crypto_skcipher.c

diff --git a/drivers/crypto/rockchip/Kconfig b/drivers/crypto/rockchip/Kconfig
index 1010d897d9ef..84ca1081fd0c 100644
--- a/drivers/crypto/rockchip/Kconfig
+++ b/drivers/crypto/rockchip/Kconfig
@@ -26,3 +26,31 @@ config CRYPTO_DEV_ROCKCHIP_DEBUG
	  Say y to enable Rockchip crypto debug stats.
This will create /sys/kernel/debug/rk3288_crypto/stats for displaying the number of requests per algorithm and other internal stats. + +config CRYPTO_DEV_ROCKCHIP2 + tristate "Rockchip's cryptographic offloader V2" + depends on OF && ARCH_ROCKCHIP + depends on PM + select CRYPTO_ECB + select CRYPTO_CBC + select CRYPTO_AES + select CRYPTO_MD5 + select CRYPTO_SHA1 + select CRYPTO_SHA256 + select CRYPTO_SM3 + select CRYPTO_HASH + select CRYPTO_SKCIPHER + select CRYPTO_ENGINE + + help + This driver interfaces with the hardware crypto offloader present + on RK3568 and RK3588. + +config CRYPTO_DEV_ROCKCHIP2_DEBUG + bool "Enable Rockchip V2 crypto stats" + depends on CRYPTO_DEV_ROCKCHIP2 + depends on DEBUG_FS + help + Say y to enable Rockchip crypto debug stats. + This will create /sys/kernel/debug/rk3588_crypto/stats for displaying + the number of requests per algorithm and other internal stats. diff --git a/drivers/crypto/rockchip/Makefile b/drivers/crypto/rockchip/Makefile index 785277aca71e..2132d8326223 100644 --- a/drivers/crypto/rockchip/Makefile +++ b/drivers/crypto/rockchip/Makefile @@ -3,3 +3,8 @@ obj-$(CONFIG_CRYPTO_DEV_ROCKCHIP) += rk_crypto.o rk_crypto-objs := rk3288_crypto.o \ rk3288_crypto_skcipher.o \ rk3288_crypto_ahash.o + +obj-$(CONFIG_CRYPTO_DEV_ROCKCHIP2) += rk_crypto2.o +rk_crypto2-objs := rk3588_crypto.o \ + rk3588_crypto_skcipher.o \ + rk3588_crypto_ahash.o diff --git a/drivers/crypto/rockchip/rk3588_crypto.c b/drivers/crypto/rockchip/rk3588_crypto.c new file mode 100644 index 000000000000..121cde1fd18c --- /dev/null +++ b/drivers/crypto/rockchip/rk3588_crypto.c @@ -0,0 +1,646 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * hardware cryptographic offloader for rk3568/rk3588 SoC + * + * Copyright (c) 2022, Corentin Labbe + */ + +#include "rk3588_crypto.h" +#include +#include +#include +#include +#include +#include +#include +#include + +static struct rockchip_ip rocklist = { + .dev_list = LIST_HEAD_INIT(rocklist.dev_list), + .lock = __SPIN_LOCK_UNLOCKED(rocklist.lock), +}; + +struct rk_crypto_dev *get_rk_crypto(void) +{ + struct rk_crypto_dev *first; + + spin_lock(&rocklist.lock); + first = list_first_entry_or_null(&rocklist.dev_list, + struct rk_crypto_dev, list); + list_rotate_left(&rocklist.dev_list); + spin_unlock(&rocklist.lock); + return first; +} + +static const struct rk_variant rk3568_variant = { + .num_clks = 4, +}; + +static const struct rk_variant rk3588_variant = { + .num_clks = 4, +}; + +static int rk_crypto_get_clks(struct rk_crypto_dev *dev) +{ + int i, j, err; + unsigned long cr; + + dev->num_clks = devm_clk_bulk_get_all(dev->dev, &dev->clks); + if (dev->num_clks < dev->variant->num_clks) { + dev_err(dev->dev, "Missing clocks, got %d instead of %d\n", + dev->num_clks, dev->variant->num_clks); + return -EINVAL; + } + + for (i = 0; i < dev->num_clks; i++) { + cr = clk_get_rate(dev->clks[i].clk); + for (j = 0; j < ARRAY_SIZE(dev->variant->rkclks); j++) { + if (dev->variant->rkclks[j].max == 0) + continue; + if (strcmp(dev->variant->rkclks[j].name, dev->clks[i].id)) + continue; + if (cr > dev->variant->rkclks[j].max) { + err = clk_set_rate(dev->clks[i].clk, + dev->variant->rkclks[j].max); + if (err) + dev_err(dev->dev, "Fail downclocking %s from %lu to %lu\n", + dev->variant->rkclks[j].name, cr, + dev->variant->rkclks[j].max); + else + dev_info(dev->dev, "Downclocking %s from %lu to %lu\n", + dev->variant->rkclks[j].name, cr, + dev->variant->rkclks[j].max); + } + } + } + return 0; +} + +static int rk_crypto_enable_clk(struct rk_crypto_dev *dev) +{ + int err; + 
+ err = clk_bulk_prepare_enable(dev->num_clks, dev->clks); + if (err) + dev_err(dev->dev, "Could not enable clock clks\n"); + + return err; +} + +static void rk_crypto_disable_clk(struct rk_crypto_dev *dev) +{ + clk_bulk_disable_unprepare(dev->num_clks, dev->clks); +} + +/* + * Power management strategy: The device is suspended until a request + * is handled. For avoiding suspend/resume yoyo, the autosuspend is set to 2s. + */ +static int rk_crypto_pm_suspend(struct device *dev) +{ + struct rk_crypto_dev *rkdev = dev_get_drvdata(dev); + + rk_crypto_disable_clk(rkdev); + reset_control_assert(rkdev->rst); + + return 0; +} + +static int rk_crypto_pm_resume(struct device *dev) +{ + struct rk_crypto_dev *rkdev = dev_get_drvdata(dev); + int ret; + + ret = rk_crypto_enable_clk(rkdev); + if (ret) + return ret; + + reset_control_deassert(rkdev->rst); + return 0; +} + +static const struct dev_pm_ops rk_crypto_pm_ops = { + SET_RUNTIME_PM_OPS(rk_crypto_pm_suspend, rk_crypto_pm_resume, NULL) +}; + +static int rk_crypto_pm_init(struct rk_crypto_dev *rkdev) +{ + int err; + + pm_runtime_use_autosuspend(rkdev->dev); + pm_runtime_set_autosuspend_delay(rkdev->dev, 2000); + + err = pm_runtime_set_suspended(rkdev->dev); + if (err) + return err; + pm_runtime_enable(rkdev->dev); + return err; +} + +static void rk_crypto_pm_exit(struct rk_crypto_dev *rkdev) +{ + pm_runtime_disable(rkdev->dev); +} + +static irqreturn_t rk_crypto_irq_handle(int irq, void *dev_id) +{ + struct rk_crypto_dev *rkc = platform_get_drvdata(dev_id); + u32 v; + + v = readl(rkc->reg + RK_CRYPTO_DMA_INT_ST); + writel(v, rkc->reg + RK_CRYPTO_DMA_INT_ST); + + rkc->status = 1; + if (v & 0xF8) { + dev_warn(rkc->dev, "DMA Error\n"); + rkc->status = 0; + } + complete(&rkc->complete); + + return IRQ_HANDLED; +} + +static struct rk_crypto_template rk_cipher_algs[] = { + { + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "ecb-aes-rk2", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x0f, + .base.cra_module = THIS_MODULE, + + .init = rk_cipher_tfm_init, + .exit = rk_cipher_tfm_exit, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = rk_aes_setkey, + .encrypt = rk_aes_ecb_encrypt, + .decrypt = rk_aes_ecb_decrypt, + } + }, + { + .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cbc-aes-rk2", + .base.cra_priority = 300, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct rk_cipher_ctx), + .base.cra_alignmask = 0x0f, + .base.cra_module = THIS_MODULE, + + .init = rk_cipher_tfm_init, + .exit = rk_cipher_tfm_exit, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = rk_aes_setkey, + .encrypt = rk_aes_cbc_encrypt, + .decrypt = rk_aes_cbc_decrypt, + } + }, + { + .type = CRYPTO_ALG_TYPE_AHASH, + .rk_mode = RK_CRYPTO_MD5, + .alg.hash = { + .init = rk_ahash_init, + .update = rk_ahash_update, + .final = rk_ahash_final, + .finup = rk_ahash_finup, + .export = rk_ahash_export, + .import = rk_ahash_import, + .digest = rk_ahash_digest, + .halg = { + .digestsize = MD5_DIGEST_SIZE, + .statesize = sizeof(struct md5_state), + .base = { + .cra_name = "md5", + .cra_driver_name = "rk2-md5", + .cra_priority = 300, + 
.cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA1_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct rk_ahash_ctx), + .cra_alignmask = 3, + .cra_init = rk_cra_hash_init, + .cra_exit = rk_cra_hash_exit, + .cra_module = THIS_MODULE, + } + } + } + }, + { + .type = CRYPTO_ALG_TYPE_AHASH, + .rk_mode = RK_CRYPTO_SHA1, + .alg.hash = { + .init = rk_ahash_init, + .update = rk_ahash_update, + .final = rk_ahash_final, + .finup = rk_ahash_finup, + .export = rk_ahash_export, + .import = rk_ahash_import, + .digest = rk_ahash_digest, + .halg = { + .digestsize = SHA1_DIGEST_SIZE, + .statesize = sizeof(struct sha1_state), + .base = { + .cra_name = "sha1", + .cra_driver_name = "rk2-sha1", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA1_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct rk_ahash_ctx), + .cra_alignmask = 3, + .cra_init = rk_cra_hash_init, + .cra_exit = rk_cra_hash_exit, + .cra_module = THIS_MODULE, + } + } + } + }, + { + .type = CRYPTO_ALG_TYPE_AHASH, + .rk_mode = RK_CRYPTO_SHA256, + .alg.hash = { + .init = rk_ahash_init, + .update = rk_ahash_update, + .final = rk_ahash_final, + .finup = rk_ahash_finup, + .export = rk_ahash_export, + .import = rk_ahash_import, + .digest = rk_ahash_digest, + .halg = { + .digestsize = SHA256_DIGEST_SIZE, + .statesize = sizeof(struct sha256_state), + .base = { + .cra_name = "sha256", + .cra_driver_name = "rk2-sha256", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA256_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct rk_ahash_ctx), + .cra_alignmask = 3, + .cra_init = rk_cra_hash_init, + .cra_exit = rk_cra_hash_exit, + .cra_module = THIS_MODULE, + } + } + } + }, + { + .type = CRYPTO_ALG_TYPE_AHASH, + .rk_mode = RK_CRYPTO_SHA384, + .alg.hash = { + .init = rk_ahash_init, + .update = rk_ahash_update, + .final = rk_ahash_final, + .finup = rk_ahash_finup, + .export = rk_ahash_export, + .import = rk_ahash_import, + .digest = rk_ahash_digest, + .halg = { + .digestsize = SHA384_DIGEST_SIZE, + .statesize = sizeof(struct sha512_state), + .base = { + .cra_name = "sha384", + .cra_driver_name = "rk2-sha384", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA384_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct rk_ahash_ctx), + .cra_alignmask = 3, + .cra_init = rk_cra_hash_init, + .cra_exit = rk_cra_hash_exit, + .cra_module = THIS_MODULE, + } + } + } + }, + { + .type = CRYPTO_ALG_TYPE_AHASH, + .rk_mode = RK_CRYPTO_SHA512, + .alg.hash = { + .init = rk_ahash_init, + .update = rk_ahash_update, + .final = rk_ahash_final, + .finup = rk_ahash_finup, + .export = rk_ahash_export, + .import = rk_ahash_import, + .digest = rk_ahash_digest, + .halg = { + .digestsize = SHA512_DIGEST_SIZE, + .statesize = sizeof(struct sha512_state), + .base = { + .cra_name = "sha512", + .cra_driver_name = "rk2-sha512", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA512_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct rk_ahash_ctx), + .cra_alignmask = 3, + .cra_init = rk_cra_hash_init, + .cra_exit = rk_cra_hash_exit, + .cra_module = THIS_MODULE, + } + } + } + }, + { + .type = CRYPTO_ALG_TYPE_AHASH, + .rk_mode = RK_CRYPTO_SM3, + .alg.hash = { + .init = rk_ahash_init, + .update = rk_ahash_update, + .final = rk_ahash_final, + .finup = rk_ahash_finup, + .export = rk_ahash_export, + .import = rk_ahash_import, + .digest = rk_ahash_digest, + .halg = { + .digestsize = SM3_DIGEST_SIZE, 
+ .statesize = sizeof(struct sm3_state), + .base = { + .cra_name = "sm3", + .cra_driver_name = "rk2-sm3", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SM3_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct rk_ahash_ctx), + .cra_alignmask = 3, + .cra_init = rk_cra_hash_init, + .cra_exit = rk_cra_hash_exit, + .cra_module = THIS_MODULE, + } + } + } + }, +}; + +#ifdef CONFIG_CRYPTO_DEV_ROCKCHIP2_DEBUG +static int rk_crypto_debugfs_show(struct seq_file *seq, void *v) +{ + struct rk_crypto_dev *dd; + unsigned int i; + + spin_lock(&rocklist.lock); + list_for_each_entry(dd, &rocklist.dev_list, list) { + seq_printf(seq, "%s %s requests: %lu\n", + dev_driver_string(dd->dev), dev_name(dd->dev), + dd->nreq); + } + spin_unlock(&rocklist.lock); + + for (i = 0; i < ARRAY_SIZE(rk_cipher_algs); i++) { + if (!rk_cipher_algs[i].dev) + continue; + switch (rk_cipher_algs[i].type) { + case CRYPTO_ALG_TYPE_SKCIPHER: + seq_printf(seq, "%s %s reqs=%lu fallback=%lu\n", + rk_cipher_algs[i].alg.skcipher.base.cra_driver_name, + rk_cipher_algs[i].alg.skcipher.base.cra_name, + rk_cipher_algs[i].stat_req, rk_cipher_algs[i].stat_fb); + seq_printf(seq, "\tfallback due to length: %lu\n", + rk_cipher_algs[i].stat_fb_len); + seq_printf(seq, "\tfallback due to alignment: %lu\n", + rk_cipher_algs[i].stat_fb_align); + seq_printf(seq, "\tfallback due to SGs: %lu\n", + rk_cipher_algs[i].stat_fb_sgdiff); + break; + case CRYPTO_ALG_TYPE_AHASH: + seq_printf(seq, "%s %s reqs=%lu fallback=%lu\n", + rk_cipher_algs[i].alg.hash.halg.base.cra_driver_name, + rk_cipher_algs[i].alg.hash.halg.base.cra_name, + rk_cipher_algs[i].stat_req, rk_cipher_algs[i].stat_fb); + break; + } + } + return 0; +} + +DEFINE_SHOW_ATTRIBUTE(rk_crypto_debugfs); +#endif + +static void register_debugfs(struct rk_crypto_dev *crypto_dev) +{ +#ifdef CONFIG_CRYPTO_DEV_ROCKCHIP2_DEBUG + /* Ignore error of debugfs */ + rocklist.dbgfs_dir = debugfs_create_dir("rk3588_crypto", NULL); + rocklist.dbgfs_stats = debugfs_create_file("stats", 0444, + rocklist.dbgfs_dir, + &rocklist, + &rk_crypto_debugfs_fops); +#endif +} + +static int rk_crypto_register(struct rk_crypto_dev *rkc) +{ + unsigned int i, k; + int err = 0; + + for (i = 0; i < ARRAY_SIZE(rk_cipher_algs); i++) { + rk_cipher_algs[i].dev = rkc; + switch (rk_cipher_algs[i].type) { + case CRYPTO_ALG_TYPE_SKCIPHER: + dev_info(rkc->dev, "Register %s as %s\n", + rk_cipher_algs[i].alg.skcipher.base.cra_name, + rk_cipher_algs[i].alg.skcipher.base.cra_driver_name); + err = crypto_register_skcipher(&rk_cipher_algs[i].alg.skcipher); + break; + case CRYPTO_ALG_TYPE_AHASH: + dev_info(rkc->dev, "Register %s as %s\n", + rk_cipher_algs[i].alg.hash.halg.base.cra_name, + rk_cipher_algs[i].alg.hash.halg.base.cra_driver_name); + err = crypto_register_ahash(&rk_cipher_algs[i].alg.hash); + break; + default: + dev_err(rkc->dev, "unknown algorithm\n"); + } + if (err) + goto err_cipher_algs; + } + return 0; + +err_cipher_algs: + for (k = 0; k < i; k++) { + if (rk_cipher_algs[i].type == CRYPTO_ALG_TYPE_SKCIPHER) + crypto_unregister_skcipher(&rk_cipher_algs[k].alg.skcipher); + else + crypto_unregister_ahash(&rk_cipher_algs[i].alg.hash); + } + return err; +} + +static void rk_crypto_unregister(void) +{ + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(rk_cipher_algs); i++) { + if (rk_cipher_algs[i].type == CRYPTO_ALG_TYPE_SKCIPHER) + crypto_unregister_skcipher(&rk_cipher_algs[i].alg.skcipher); + else + crypto_unregister_ahash(&rk_cipher_algs[i].alg.hash); + } +} + +static const struct of_device_id 
crypto_of_id_table[] = { + { .compatible = "rockchip,rk3568-crypto", + .data = &rk3568_variant, + }, + { .compatible = "rockchip,rk3588-crypto", + .data = &rk3588_variant, + }, + {} +}; +MODULE_DEVICE_TABLE(of, crypto_of_id_table); + +static int rk_crypto_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct rk_crypto_dev *rkc, *first; + int err = 0; + + rkc = devm_kzalloc(&pdev->dev, sizeof(*rkc), GFP_KERNEL); + if (!rkc) { + err = -ENOMEM; + goto err_crypto; + } + + rkc->dev = &pdev->dev; + platform_set_drvdata(pdev, rkc); + + rkc->variant = of_device_get_match_data(&pdev->dev); + if (!rkc->variant) { + dev_err(&pdev->dev, "Missing variant\n"); + return -EINVAL; + } + + rkc->rst = devm_reset_control_array_get_exclusive(dev); + if (IS_ERR(rkc->rst)) { + err = PTR_ERR(rkc->rst); + goto err_crypto; + } + + rkc->tl = dma_alloc_coherent(rkc->dev, + sizeof(struct rk_crypto_lli) * MAX_LLI, + &rkc->t_phy, GFP_KERNEL); + if (!rkc->tl) { + dev_err(rkc->dev, "Cannot get DMA memory for task\n"); + err = -ENOMEM; + goto err_crypto; + } + + reset_control_assert(rkc->rst); + usleep_range(10, 20); + reset_control_deassert(rkc->rst); + + rkc->reg = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(rkc->reg)) { + err = PTR_ERR(rkc->reg); + goto err_crypto; + } + + err = rk_crypto_get_clks(rkc); + if (err) + goto err_crypto; + + rkc->irq = platform_get_irq(pdev, 0); + if (rkc->irq < 0) { + dev_err(&pdev->dev, "control Interrupt is not available.\n"); + err = rkc->irq; + goto err_crypto; + } + + err = devm_request_irq(&pdev->dev, rkc->irq, + rk_crypto_irq_handle, IRQF_SHARED, + "rk-crypto", pdev); + + if (err) { + dev_err(&pdev->dev, "irq request failed.\n"); + goto err_crypto; + } + + rkc->engine = crypto_engine_alloc_init(&pdev->dev, true); + crypto_engine_start(rkc->engine); + init_completion(&rkc->complete); + + err = rk_crypto_pm_init(rkc); + if (err) + goto err_pm; + + err = pm_runtime_resume_and_get(&pdev->dev); + + spin_lock(&rocklist.lock); + first = list_first_entry_or_null(&rocklist.dev_list, + struct rk_crypto_dev, list); + list_add_tail(&rkc->list, &rocklist.dev_list); + spin_unlock(&rocklist.lock); + + if (!first) { + dev_info(dev, "Registers crypto algos\n"); + err = rk_crypto_register(rkc); + if (err) { + dev_err(dev, "Fail to register crypto algorithms"); + goto err_register_alg; + } + + register_debugfs(rkc); + } + + return 0; + +err_register_alg: + rk_crypto_pm_exit(rkc); +err_pm: + crypto_engine_exit(rkc->engine); +err_crypto: + dev_err(dev, "Crypto Accelerator not successfully registered\n"); + return err; +} + +static int rk_crypto_remove(struct platform_device *pdev) +{ + struct rk_crypto_dev *crypto_tmp = platform_get_drvdata(pdev); + struct rk_crypto_dev *first; + + spin_lock_bh(&rocklist.lock); + list_del(&crypto_tmp->list); + first = list_first_entry_or_null(&rocklist.dev_list, + struct rk_crypto_dev, list); + spin_unlock_bh(&rocklist.lock); + + if (!first) { +#ifdef CONFIG_CRYPTO_DEV_ROCKCHIP2_DEBUG + debugfs_remove_recursive(rocklist.dbgfs_dir); +#endif + rk_crypto_unregister(); + } + rk_crypto_pm_exit(crypto_tmp); + crypto_engine_exit(crypto_tmp->engine); + return 0; +} + +static struct platform_driver crypto_driver = { + .probe = rk_crypto_probe, + .remove = rk_crypto_remove, + .driver = { + .name = "rk3588-crypto", + .pm = &rk_crypto_pm_ops, + .of_match_table = crypto_of_id_table, + }, +}; + +module_platform_driver(crypto_driver); + +MODULE_DESCRIPTION("Rockchip Crypto Engine cryptographic offloader"); +MODULE_LICENSE("GPL"); 
+MODULE_AUTHOR("Corentin Labbe "); diff --git a/drivers/crypto/rockchip/rk3588_crypto.h b/drivers/crypto/rockchip/rk3588_crypto.h new file mode 100644 index 000000000000..91139a4dade8 --- /dev/null +++ b/drivers/crypto/rockchip/rk3588_crypto.h @@ -0,0 +1,221 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define RK_CRYPTO_CLK_CTL 0x0000 +#define RK_CRYPTO_RST_CTL 0x0004 + +#define RK_CRYPTO_DMA_INT_EN 0x0008 +/* values for RK_CRYPTO_DMA_INT_EN */ +#define RK_CRYPTO_DMA_INT_LISTDONE BIT(0) + +#define RK_CRYPTO_DMA_INT_ST 0x000C +/* values in RK_CRYPTO_DMA_INT_ST are the same than in RK_CRYPTO_DMA_INT_EN */ + +#define RK_CRYPTO_DMA_CTL 0x0010 +#define RK_CRYPTO_DMA_CTL_START BIT(0) + +#define RK_CRYPTO_DMA_LLI_ADDR 0x0014 + +#define RK_CRYPTO_FIFO_CTL 0x0040 + +#define RK_CRYPTO_BC_CTL 0x0044 +#define RK_CRYPTO_AES (0 << 8) +#define RK_CRYPTO_MODE_ECB (0 << 4) +#define RK_CRYPTO_MODE_CBC (1 << 4) + +#define RK_CRYPTO_HASH_CTL 0x0048 +#define RK_CRYPTO_HW_PAD BIT(2) +#define RK_CRYPTO_SHA1 (0 << 4) +#define RK_CRYPTO_MD5 (1 << 4) +#define RK_CRYPTO_SHA224 (3 << 4) +#define RK_CRYPTO_SHA256 (2 << 4) +#define RK_CRYPTO_SHA384 (9 << 4) +#define RK_CRYPTO_SHA512 (8 << 4) +#define RK_CRYPTO_SM3 (4 << 4) + +#define RK_CRYPTO_AES_ECB_MODE (RK_CRYPTO_AES | RK_CRYPTO_MODE_ECB) +#define RK_CRYPTO_AES_CBC_MODE (RK_CRYPTO_AES | RK_CRYPTO_MODE_CBC) +#define RK_CRYPTO_AES_CTR_MODE 3 +#define RK_CRYPTO_AES_128BIT_key (0 << 2) +#define RK_CRYPTO_AES_192BIT_key (1 << 2) +#define RK_CRYPTO_AES_256BIT_key (2 << 2) + +#define RK_CRYPTO_DEC BIT(1) +#define RK_CRYPTO_ENABLE BIT(0) + +#define RK_CRYPTO_CH0_IV_0 0x0100 + +#define RK_CRYPTO_KEY0 0x0180 +#define RK_CRYPTO_KEY1 0x0184 +#define RK_CRYPTO_KEY2 0x0188 +#define RK_CRYPTO_KEY3 0x018C +#define RK_CRYPTO_KEY4 0x0190 +#define RK_CRYPTO_KEY5 0x0194 +#define RK_CRYPTO_KEY6 0x0198 +#define RK_CRYPTO_KEY7 0x019C + +#define RK_CRYPTO_CH0_PC_LEN_0 0x0280 + +#define RK_CRYPTO_CH0_IV_LEN 0x0300 + +#define RK_CRYPTO_HASH_DOUT_0 0x03A0 +#define RK_CRYPTO_HASH_VALID 0x03E4 + +#define CRYPTO_AES_VERSION 0x0680 +#define CRYPTO_DES_VERSION 0x0684 +#define CRYPTO_SM4_VERSION 0x0688 +#define CRYPTO_HASH_VERSION 0x068C +#define CRYPTO_HMAC_VERSION 0x0690 +#define CRYPTO_RNG_VERSION 0x0694 +#define CRYPTO_PKA_VERSION 0x0698 +#define CRYPTO_CRYPTO_VERSION 0x06F0 + +#define RK_LLI_DMA_CTRL_SRC_INT BIT(10) +#define RK_LLI_DMA_CTRL_DST_INT BIT(9) +#define RK_LLI_DMA_CTRL_LIST_INT BIT(8) +#define RK_LLI_DMA_CTRL_LAST BIT(0) + +#define RK_LLI_STRING_LAST BIT(2) +#define RK_LLI_STRING_FIRST BIT(1) +#define RK_LLI_CIPHER_START BIT(0) + +#define RK_MAX_CLKS 4 + +/* there are no hw limit, but we need to choose a maximum of descriptor to allocate */ +#define MAX_LLI 20 + +struct rk_crypto_lli { + __le32 src_addr; + __le32 src_len; + __le32 dst_addr; + __le32 dst_len; + __le32 user; + __le32 iv; + __le32 dma_ctrl; + __le32 next; +}; + +/* + * struct rockchip_ip - struct for managing a list of RK crypto instance + * @dev_list: Used for doing a list of rk_crypto_dev + * @lock: Control access to dev_list + * @dbgfs_dir: Debugfs dentry for statistic directory + * @dbgfs_stats: Debugfs dentry for statistic counters + */ +struct rockchip_ip { + struct list_head dev_list; + spinlock_t lock; /* Control access to dev_list */ + struct dentry *dbgfs_dir; + struct dentry *dbgfs_stats; +}; + +struct rk_clks { + const char *name; + unsigned long 
max; +}; + +struct rk_variant { + int num_clks; + struct rk_clks rkclks[RK_MAX_CLKS]; +}; + +struct rk_crypto_dev { + struct list_head list; + struct device *dev; + struct clk_bulk_data *clks; + int num_clks; + struct reset_control *rst; + void __iomem *reg; + int irq; + const struct rk_variant *variant; + unsigned long nreq; + struct crypto_engine *engine; + struct completion complete; + int status; + struct rk_crypto_lli *tl; + dma_addr_t t_phy; +}; + +/* the private variable of hash */ +struct rk_ahash_ctx { + struct crypto_engine_ctx enginectx; + /* for fallback */ + struct crypto_ahash *fallback_tfm; +}; + +/* the private variable of hash for fallback */ +struct rk_ahash_rctx { + struct rk_crypto_dev *dev; + struct ahash_request fallback_req; + u32 mode; + int nrsgs; +}; + +/* the private variable of cipher */ +struct rk_cipher_ctx { + struct crypto_engine_ctx enginectx; + unsigned int keylen; + u8 key[AES_MAX_KEY_SIZE]; + u8 iv[AES_BLOCK_SIZE]; + struct crypto_skcipher *fallback_tfm; +}; + +struct rk_cipher_rctx { + struct rk_crypto_dev *dev; + u8 backup_iv[AES_BLOCK_SIZE]; + u32 mode; + struct skcipher_request fallback_req; // keep at the end +}; + +struct rk_crypto_template { + u32 type; + u32 rk_mode; + struct rk_crypto_dev *dev; + union { + struct skcipher_alg skcipher; + struct ahash_alg hash; + } alg; + unsigned long stat_req; + unsigned long stat_fb; + unsigned long stat_fb_len; + unsigned long stat_fb_sglen; + unsigned long stat_fb_align; + unsigned long stat_fb_sgdiff; +}; + +struct rk_crypto_dev *get_rk_crypto(void); + +int rk_cipher_tfm_init(struct crypto_skcipher *tfm); +void rk_cipher_tfm_exit(struct crypto_skcipher *tfm); +int rk_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, + unsigned int keylen); +int rk_aes_ecb_encrypt(struct skcipher_request *req); +int rk_aes_ecb_decrypt(struct skcipher_request *req); +int rk_aes_cbc_encrypt(struct skcipher_request *req); +int rk_aes_cbc_decrypt(struct skcipher_request *req); + +int rk_ahash_init(struct ahash_request *req); +int rk_ahash_update(struct ahash_request *req); +int rk_ahash_final(struct ahash_request *req); +int rk_ahash_finup(struct ahash_request *req); +int rk_ahash_import(struct ahash_request *req, const void *in); +int rk_ahash_export(struct ahash_request *req, void *out); +int rk_ahash_digest(struct ahash_request *req); +int rk_cra_hash_init(struct crypto_tfm *tfm); +void rk_cra_hash_exit(struct crypto_tfm *tfm); diff --git a/drivers/crypto/rockchip/rk3588_crypto_ahash.c b/drivers/crypto/rockchip/rk3588_crypto_ahash.c new file mode 100644 index 000000000000..9776c777bd7c --- /dev/null +++ b/drivers/crypto/rockchip/rk3588_crypto_ahash.c @@ -0,0 +1,346 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Crypto acceleration support for Rockchip RK3588 + * + * Copyright (c) 2022 Corentin Labbe + */ +#include +#include +#include "rk3588_crypto.h" + +static bool rk_ahash_need_fallback(struct ahash_request *areq) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); + struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg); + struct rk_crypto_template *algt = container_of(alg, struct rk_crypto_template, alg.hash); + struct scatterlist *sg; + + sg = areq->src; + while (sg) { + if (!IS_ALIGNED(sg->offset, sizeof(u32))) { + algt->stat_fb_align++; + return true; + } + if (sg->length % 4) { + algt->stat_fb_sglen++; + return true; + } + sg = sg_next(sg); + } + return false; +} + +static int rk_ahash_digest_fb(struct ahash_request *areq) +{ + struct rk_ahash_rctx *rctx = ahash_request_ctx(areq); + 
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); + struct rk_ahash_ctx *tfmctx = crypto_ahash_ctx(tfm); + struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg); + struct rk_crypto_template *algt = container_of(alg, struct rk_crypto_template, alg.hash); + + algt->stat_fb++; + + ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm); + rctx->fallback_req.base.flags = areq->base.flags & + CRYPTO_TFM_REQ_MAY_SLEEP; + + rctx->fallback_req.nbytes = areq->nbytes; + rctx->fallback_req.src = areq->src; + rctx->fallback_req.result = areq->result; + + return crypto_ahash_digest(&rctx->fallback_req); +} + +static int zero_message_process(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg); + struct rk_crypto_template *algt = container_of(alg, struct rk_crypto_template, alg.hash); + int digestsize = crypto_ahash_digestsize(tfm); + + switch (algt->rk_mode) { + case RK_CRYPTO_SHA1: + memcpy(req->result, sha1_zero_message_hash, digestsize); + break; + case RK_CRYPTO_SHA256: + memcpy(req->result, sha256_zero_message_hash, digestsize); + break; + case RK_CRYPTO_SHA384: + memcpy(req->result, sha384_zero_message_hash, digestsize); + break; + case RK_CRYPTO_SHA512: + memcpy(req->result, sha512_zero_message_hash, digestsize); + break; + case RK_CRYPTO_MD5: + memcpy(req->result, md5_zero_message_hash, digestsize); + break; + case RK_CRYPTO_SM3: + memcpy(req->result, sm3_zero_message_hash, digestsize); + break; + default: + return -EINVAL; + } + + return 0; +} + +int rk_ahash_init(struct ahash_request *req) +{ + struct rk_ahash_rctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct rk_ahash_ctx *ctx = crypto_ahash_ctx(tfm); + + ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm); + rctx->fallback_req.base.flags = req->base.flags & + CRYPTO_TFM_REQ_MAY_SLEEP; + + return crypto_ahash_init(&rctx->fallback_req); +} + +int rk_ahash_update(struct ahash_request *req) +{ + struct rk_ahash_rctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct rk_ahash_ctx *ctx = crypto_ahash_ctx(tfm); + + ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm); + rctx->fallback_req.base.flags = req->base.flags & + CRYPTO_TFM_REQ_MAY_SLEEP; + rctx->fallback_req.nbytes = req->nbytes; + rctx->fallback_req.src = req->src; + + return crypto_ahash_update(&rctx->fallback_req); +} + +int rk_ahash_final(struct ahash_request *req) +{ + struct rk_ahash_rctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct rk_ahash_ctx *ctx = crypto_ahash_ctx(tfm); + + ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm); + rctx->fallback_req.base.flags = req->base.flags & + CRYPTO_TFM_REQ_MAY_SLEEP; + rctx->fallback_req.result = req->result; + + return crypto_ahash_final(&rctx->fallback_req); +} + +int rk_ahash_finup(struct ahash_request *req) +{ + struct rk_ahash_rctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct rk_ahash_ctx *ctx = crypto_ahash_ctx(tfm); + + ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm); + rctx->fallback_req.base.flags = req->base.flags & + CRYPTO_TFM_REQ_MAY_SLEEP; + + rctx->fallback_req.nbytes = req->nbytes; + rctx->fallback_req.src = req->src; + rctx->fallback_req.result = req->result; + + return crypto_ahash_finup(&rctx->fallback_req); +} + +int rk_ahash_import(struct ahash_request *req, const 
void *in) +{ + struct rk_ahash_rctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct rk_ahash_ctx *ctx = crypto_ahash_ctx(tfm); + + ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm); + rctx->fallback_req.base.flags = req->base.flags & + CRYPTO_TFM_REQ_MAY_SLEEP; + + return crypto_ahash_import(&rctx->fallback_req, in); +} + +int rk_ahash_export(struct ahash_request *req, void *out) +{ + struct rk_ahash_rctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct rk_ahash_ctx *ctx = crypto_ahash_ctx(tfm); + + ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm); + rctx->fallback_req.base.flags = req->base.flags & + CRYPTO_TFM_REQ_MAY_SLEEP; + + return crypto_ahash_export(&rctx->fallback_req, out); +} + +int rk_ahash_digest(struct ahash_request *req) +{ + struct rk_ahash_rctx *rctx = ahash_request_ctx(req); + struct rk_crypto_dev *dev; + struct crypto_engine *engine; + + if (rk_ahash_need_fallback(req)) + return rk_ahash_digest_fb(req); + + if (!req->nbytes) + return zero_message_process(req); + + dev = get_rk_crypto(); + + rctx->dev = dev; + engine = dev->engine; + + return crypto_transfer_hash_request_to_engine(engine, req); +} + +static int rk_hash_prepare(struct crypto_engine *engine, void *breq) +{ + struct ahash_request *areq = container_of(breq, struct ahash_request, base); + struct rk_ahash_rctx *rctx = ahash_request_ctx(areq); + struct rk_crypto_dev *rkc = rctx->dev; + int ret; + + ret = dma_map_sg(rkc->dev, areq->src, sg_nents(areq->src), DMA_TO_DEVICE); + if (ret <= 0) + return -EINVAL; + + rctx->nrsgs = ret; + + return 0; +} + +static int rk_hash_unprepare(struct crypto_engine *engine, void *breq) +{ + struct ahash_request *areq = container_of(breq, struct ahash_request, base); + struct rk_ahash_rctx *rctx = ahash_request_ctx(areq); + struct rk_crypto_dev *rkc = rctx->dev; + + dma_unmap_sg(rkc->dev, areq->src, rctx->nrsgs, DMA_TO_DEVICE); + return 0; +} + +static int rk_hash_run(struct crypto_engine *engine, void *breq) +{ + struct ahash_request *areq = container_of(breq, struct ahash_request, base); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); + struct rk_ahash_rctx *rctx = ahash_request_ctx(areq); + struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg); + struct rk_crypto_template *algt = container_of(alg, struct rk_crypto_template, alg.hash); + struct scatterlist *sgs = areq->src; + struct rk_crypto_dev *rkc = rctx->dev; + struct rk_crypto_lli *dd = &rkc->tl[0]; + int ddi = 0; + int err = 0; + unsigned int len = areq->nbytes; + unsigned int todo; + u32 v; + int i; + + err = pm_runtime_resume_and_get(rkc->dev); + if (err) + return err; + + dev_dbg(rkc->dev, "%s %s len=%d\n", __func__, + crypto_tfm_alg_name(areq->base.tfm), areq->nbytes); + + algt->stat_req++; + rkc->nreq++; + + rctx->mode = algt->rk_mode; + rctx->mode |= 0xffff0000; + rctx->mode |= RK_CRYPTO_ENABLE | RK_CRYPTO_HW_PAD; + writel(rctx->mode, rkc->reg + RK_CRYPTO_HASH_CTL); + + while (sgs && len > 0) { + dd = &rkc->tl[ddi]; + + todo = min(sg_dma_len(sgs), len); + dd->src_addr = sg_dma_address(sgs); + dd->src_len = todo; + dd->dst_addr = 0; + dd->dst_len = 0; + dd->dma_ctrl = ddi << 24; + dd->iv = 0; + dd->next = rkc->t_phy + sizeof(struct rk_crypto_lli) * (ddi + 1); + + if (ddi == 0) + dd->user = RK_LLI_CIPHER_START | RK_LLI_STRING_FIRST; + else + dd->user = 0; + + len -= todo; + dd->dma_ctrl |= RK_LLI_DMA_CTRL_SRC_INT; + if (len == 0) { + dd->user |= RK_LLI_STRING_LAST; + dd->dma_ctrl |= 
RK_LLI_DMA_CTRL_LAST; + } + dev_dbg(rkc->dev, "HASH SG %d sglen=%d user=%x dma=%x mode=%x len=%d todo=%d phy=%llx\n", + ddi, sgs->length, dd->user, dd->dma_ctrl, rctx->mode, len, todo, rkc->t_phy); + + sgs = sg_next(sgs); + ddi++; + } + dd->next = 1; + writel(RK_CRYPTO_DMA_INT_LISTDONE | 0x7F, rkc->reg + RK_CRYPTO_DMA_INT_EN); + + writel(rkc->t_phy, rkc->reg + RK_CRYPTO_DMA_LLI_ADDR); + + reinit_completion(&rkc->complete); + rkc->status = 0; + + writel(RK_CRYPTO_DMA_CTL_START | 1 << 16, rkc->reg + RK_CRYPTO_DMA_CTL); + + wait_for_completion_interruptible_timeout(&rkc->complete, + msecs_to_jiffies(2000)); + if (!rkc->status) { + dev_err(rkc->dev, "DMA timeout\n"); + err = -EFAULT; + goto theend; + } + + readl_poll_timeout_atomic(rkc->reg + RK_CRYPTO_HASH_VALID, v, v == 1, + 10, 1000); + + for (i = 0; i < crypto_ahash_digestsize(tfm) / 4; i++) { + v = readl(rkc->reg + RK_CRYPTO_HASH_DOUT_0 + i * 4); + put_unaligned_le32(be32_to_cpu(v), areq->result + i * 4); + } + +theend: + pm_runtime_put_autosuspend(rkc->dev); + + local_bh_disable(); + crypto_finalize_hash_request(engine, breq, err); + local_bh_enable(); + + return 0; +} + +int rk_cra_hash_init(struct crypto_tfm *tfm) +{ + struct rk_ahash_ctx *tctx = crypto_tfm_ctx(tfm); + const char *alg_name = crypto_tfm_alg_name(tfm); + struct ahash_alg *alg = __crypto_ahash_alg(tfm->__crt_alg); + struct rk_crypto_template *algt = container_of(alg, struct rk_crypto_template, alg.hash); + + /* for fallback */ + tctx->fallback_tfm = crypto_alloc_ahash(alg_name, 0, + CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(tctx->fallback_tfm)) { + dev_err(algt->dev->dev, "Could not load fallback driver.\n"); + return PTR_ERR(tctx->fallback_tfm); + } + + crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), + sizeof(struct rk_ahash_rctx) + + crypto_ahash_reqsize(tctx->fallback_tfm)); + + tctx->enginectx.op.do_one_request = rk_hash_run; + tctx->enginectx.op.prepare_request = rk_hash_prepare; + tctx->enginectx.op.unprepare_request = rk_hash_unprepare; + + return 0; +} + +void rk_cra_hash_exit(struct crypto_tfm *tfm) +{ + struct rk_ahash_ctx *tctx = crypto_tfm_ctx(tfm); + + crypto_free_ahash(tctx->fallback_tfm); +} diff --git a/drivers/crypto/rockchip/rk3588_crypto_skcipher.c b/drivers/crypto/rockchip/rk3588_crypto_skcipher.c new file mode 100644 index 000000000000..e1d3be04e985 --- /dev/null +++ b/drivers/crypto/rockchip/rk3588_crypto_skcipher.c @@ -0,0 +1,340 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * hardware cryptographic offloader for rk3568/rk3588 SoC + * + * Copyright (c) 2022 Corentin Labbe + */ +#include +#include "rk3588_crypto.h" + +static int rk_cipher_need_fallback(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct rk_crypto_template *algt = container_of(alg, struct rk_crypto_template, alg.skcipher); + struct scatterlist *sgs, *sgd; + unsigned int stodo, dtodo, len; + unsigned int bs = crypto_skcipher_blocksize(tfm); + + if (!req->cryptlen) + return true; + + len = req->cryptlen; + sgs = req->src; + sgd = req->dst; + while (sgs && sgd) { + if (!IS_ALIGNED(sgs->offset, sizeof(u32))) { + algt->stat_fb_align++; + return true; + } + if (!IS_ALIGNED(sgd->offset, sizeof(u32))) { + algt->stat_fb_align++; + return true; + } + stodo = min(len, sgs->length); + if (stodo % bs) { + algt->stat_fb_len++; + return true; + } + dtodo = min(len, sgd->length); + if (dtodo % bs) { + algt->stat_fb_len++; + return true; + } + if (stodo != dtodo) { + algt->stat_fb_sgdiff++; + 
return true; + } + len -= stodo; + sgs = sg_next(sgs); + sgd = sg_next(sgd); + } + return false; +} + +static int rk_cipher_fallback(struct skcipher_request *areq) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq); + struct rk_cipher_ctx *op = crypto_skcipher_ctx(tfm); + struct rk_cipher_rctx *rctx = skcipher_request_ctx(areq); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct rk_crypto_template *algt = container_of(alg, struct rk_crypto_template, alg.skcipher); + int err; + + algt->stat_fb++; + + skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm); + skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags, + areq->base.complete, areq->base.data); + skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst, + areq->cryptlen, areq->iv); + if (rctx->mode & RK_CRYPTO_DEC) + err = crypto_skcipher_decrypt(&rctx->fallback_req); + else + err = crypto_skcipher_encrypt(&rctx->fallback_req); + return err; +} + +static int rk_cipher_handle_req(struct skcipher_request *req) +{ + struct rk_cipher_rctx *rctx = skcipher_request_ctx(req); + struct rk_crypto_dev *rkc; + struct crypto_engine *engine; + + if (rk_cipher_need_fallback(req)) + return rk_cipher_fallback(req); + + rkc = get_rk_crypto(); + + engine = rkc->engine; + rctx->dev = rkc; + + return crypto_transfer_skcipher_request_to_engine(engine, req); +} + +int rk_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, + unsigned int keylen) +{ + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); + struct rk_cipher_ctx *ctx = crypto_tfm_ctx(tfm); + + if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && + keylen != AES_KEYSIZE_256) + return -EINVAL; + ctx->keylen = keylen; + memcpy(ctx->key, key, keylen); + + return crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen); +} + +int rk_aes_ecb_encrypt(struct skcipher_request *req) +{ + struct rk_cipher_rctx *rctx = skcipher_request_ctx(req); + + rctx->mode = RK_CRYPTO_AES_ECB_MODE; + return rk_cipher_handle_req(req); +} + +int rk_aes_ecb_decrypt(struct skcipher_request *req) +{ + struct rk_cipher_rctx *rctx = skcipher_request_ctx(req); + + rctx->mode = RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC; + return rk_cipher_handle_req(req); +} + +int rk_aes_cbc_encrypt(struct skcipher_request *req) +{ + struct rk_cipher_rctx *rctx = skcipher_request_ctx(req); + + rctx->mode = RK_CRYPTO_AES_CBC_MODE; + return rk_cipher_handle_req(req); +} + +int rk_aes_cbc_decrypt(struct skcipher_request *req) +{ + struct rk_cipher_rctx *rctx = skcipher_request_ctx(req); + + rctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC; + return rk_cipher_handle_req(req); +} + +static int rk_cipher_run(struct crypto_engine *engine, void *async_req) +{ + struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq); + struct rk_cipher_rctx *rctx = skcipher_request_ctx(areq); + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + struct scatterlist *sgs, *sgd; + int err = 0; + int ivsize = crypto_skcipher_ivsize(tfm); + unsigned int len = areq->cryptlen; + unsigned int todo; + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct rk_crypto_template *algt = container_of(alg, struct rk_crypto_template, alg.skcipher); + struct rk_crypto_dev *rkc = rctx->dev; + struct rk_crypto_lli *dd = &rkc->tl[0]; + u32 m, v; + u32 *rkey = (u32 *)ctx->key; + u32 *riv = (u32 *)areq->iv; + int i; + unsigned int offset; + + err = pm_runtime_resume_and_get(rkc->dev); + if (err) + return err; + + 
algt->stat_req++; + rkc->nreq++; + + m = rctx->mode | RK_CRYPTO_ENABLE; + switch (ctx->keylen) { + case AES_KEYSIZE_128: + m |= RK_CRYPTO_AES_128BIT_key; + break; + case AES_KEYSIZE_192: + m |= RK_CRYPTO_AES_192BIT_key; + break; + case AES_KEYSIZE_256: + m |= RK_CRYPTO_AES_256BIT_key; + break; + } + /* the upper bits are a write-enable mask, so we need to write 1 to all + * upper 16 bits to allow writing to the 16 lower bits + */ + m |= 0xffff0000; + + dev_dbg(rkc->dev, "%s %s len=%u keylen=%u mode=%x\n", __func__, + crypto_tfm_alg_name(areq->base.tfm), + areq->cryptlen, ctx->keylen, m); + sgs = areq->src; + sgd = areq->dst; + + while (sgs && sgd && len) { + ivsize = crypto_skcipher_ivsize(tfm); + if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) { + if (rctx->mode & RK_CRYPTO_DEC) { + offset = sgs->length - ivsize; + scatterwalk_map_and_copy(rctx->backup_iv, sgs, + offset, ivsize, 0); + } + } + + if (sgs == sgd) { + err = dma_map_sg(rkc->dev, sgs, 1, DMA_BIDIRECTIONAL); + if (err != 1) { + dev_err(rkc->dev, "Invalid sg number %d\n", err); + err = -EINVAL; + goto theend; + } + } else { + err = dma_map_sg(rkc->dev, sgs, 1, DMA_TO_DEVICE); + if (err != 1) { + dev_err(rkc->dev, "Invalid sg number %d\n", err); + err = -EINVAL; + goto theend; + } + err = dma_map_sg(rkc->dev, sgd, 1, DMA_FROM_DEVICE); + if (err != 1) { + dev_err(rkc->dev, "Invalid sg number %d\n", err); + err = -EINVAL; + dma_unmap_sg(rkc->dev, sgs, 1, DMA_TO_DEVICE); + goto theend; + } + } + err = 0; + writel(m, rkc->reg + RK_CRYPTO_BC_CTL); + + for (i = 0; i < ctx->keylen / 4; i++) { + v = cpu_to_be32(rkey[i]); + writel(v, rkc->reg + RK_CRYPTO_KEY0 + i * 4); + } + + if (ivsize) { + for (i = 0; i < ivsize / 4; i++) + writel(cpu_to_be32(riv[i]), + rkc->reg + RK_CRYPTO_CH0_IV_0 + i * 4); + writel(ivsize, rkc->reg + RK_CRYPTO_CH0_IV_LEN); + } + if (!sgs->length) { + sgs = sg_next(sgs); + sgd = sg_next(sgd); + continue; + } + + /* The HW supports multiple descriptors, so why does this driver use + * only one descriptor? + * Using one descriptor per SG seems the way to go, and it works, + * but only when doing encryption. + * With decryption it always fails on the second descriptor. + * Probably the HW does not know how to use the IV.
+ */ + todo = min(sg_dma_len(sgs), len); + len -= todo; + dd->src_addr = sg_dma_address(sgs); + dd->src_len = todo; + dd->dst_addr = sg_dma_address(sgd); + dd->dst_len = todo; + dd->iv = 0; + dd->next = 1; + + dd->user = RK_LLI_CIPHER_START | RK_LLI_STRING_FIRST | RK_LLI_STRING_LAST; + dd->dma_ctrl |= RK_LLI_DMA_CTRL_DST_INT | RK_LLI_DMA_CTRL_LAST; + + writel(RK_CRYPTO_DMA_INT_LISTDONE | 0x7F, rkc->reg + RK_CRYPTO_DMA_INT_EN); + + writel(rkc->t_phy, rkc->reg + RK_CRYPTO_DMA_LLI_ADDR); + + reinit_completion(&rkc->complete); + rkc->status = 0; + + writel(RK_CRYPTO_DMA_CTL_START | 1 << 16, rkc->reg + RK_CRYPTO_DMA_CTL); + + wait_for_completion_interruptible_timeout(&rkc->complete, + msecs_to_jiffies(10000)); + if (sgs == sgd) { + dma_unmap_sg(rkc->dev, sgs, 1, DMA_BIDIRECTIONAL); + } else { + dma_unmap_sg(rkc->dev, sgs, 1, DMA_TO_DEVICE); + dma_unmap_sg(rkc->dev, sgd, 1, DMA_FROM_DEVICE); + } + + if (!rkc->status) { + dev_err(rkc->dev, "DMA timeout\n"); + err = -EFAULT; + goto theend; + } + if (areq->iv && ivsize > 0) { + offset = sgd->length - ivsize; + if (rctx->mode & RK_CRYPTO_DEC) { + memcpy(areq->iv, rctx->backup_iv, ivsize); + memzero_explicit(rctx->backup_iv, ivsize); + } else { + scatterwalk_map_and_copy(areq->iv, sgd, offset, + ivsize, 0); + } + } + sgs = sg_next(sgs); + sgd = sg_next(sgd); + } +theend: + writel(0xffff0000, rkc->reg + RK_CRYPTO_BC_CTL); + pm_runtime_put_autosuspend(rkc->dev); + + local_bh_disable(); + crypto_finalize_skcipher_request(engine, areq, err); + local_bh_enable(); + return 0; +} + +int rk_cipher_tfm_init(struct crypto_skcipher *tfm) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + const char *name = crypto_tfm_alg_name(&tfm->base); + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct rk_crypto_template *algt = container_of(alg, struct rk_crypto_template, alg.skcipher); + + ctx->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(ctx->fallback_tfm)) { + dev_err(algt->dev->dev, "ERROR: Cannot allocate fallback for %s %ld\n", + name, PTR_ERR(ctx->fallback_tfm)); + return PTR_ERR(ctx->fallback_tfm); + } + + tfm->reqsize = sizeof(struct rk_cipher_rctx) + + crypto_skcipher_reqsize(ctx->fallback_tfm); + + ctx->enginectx.op.do_one_request = rk_cipher_run; + ctx->enginectx.op.prepare_request = NULL; + ctx->enginectx.op.unprepare_request = NULL; + + return 0; +} + +void rk_cipher_tfm_exit(struct crypto_skcipher *tfm) +{ + struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm); + + memzero_explicit(ctx->key, ctx->keylen); + crypto_free_skcipher(ctx->fallback_tfm); +}
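For context, the hash half of this offloader is only ever reached through the generic crypto API; a minimal in-kernel consumer could look like the sketch below. It is illustrative only and not part of the series: the function example_sha256_digest() and its buffers are hypothetical, it assumes a "sha256" ahash is registered, and whichever implementation has the highest priority (this driver or a software one) actually serves the request.

#include <crypto/hash.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int example_sha256_digest(const void *data, unsigned int len,
                                 u8 *out /* 32 bytes for sha256 */)
{
        struct crypto_ahash *tfm;
        struct ahash_request *req;
        struct scatterlist sg;
        DECLARE_CRYPTO_WAIT(wait);
        void *buf;
        int err;

        /* Ask the crypto API for sha256; the hardware driver or a fallback
         * is selected by priority, not by the caller. */
        tfm = crypto_alloc_ahash("sha256", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        req = ahash_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                err = -ENOMEM;
                goto out_free_tfm;
        }

        /* Use a kmalloc'ed copy so the source buffer is DMA-able. */
        buf = kmemdup(data, len, GFP_KERNEL);
        if (!buf) {
                err = -ENOMEM;
                goto out_free_req;
        }

        sg_init_one(&sg, buf, len);
        ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
                                   crypto_req_done, &wait);
        ahash_request_set_crypt(req, &sg, out, len);

        /* Submit the request and wait for the (possibly asynchronous) result. */
        err = crypto_wait_req(crypto_ahash_digest(req), &wait);

        kfree(buf);
out_free_req:
        ahash_request_free(req);
out_free_tfm:
        crypto_free_ahash(tfm);
        return err;
}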
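Similarly, the skcipher path added in rk3588_crypto_skcipher.c is exercised through the generic skcipher API. A minimal sketch of an in-kernel caller follows; it is not part of the series, example_cbc_aes_encrypt() and its all-zero demo key are hypothetical, and a given request may be served either by this offloader or by the software fallback, depending on priority and on the conditions checked in rk_cipher_need_fallback().

#include <crypto/aes.h>
#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int example_cbc_aes_encrypt(void)
{
        u8 key[AES_KEYSIZE_128] = { 0 };        /* demo key only */
        u8 iv[AES_BLOCK_SIZE] = { 0 };
        struct crypto_skcipher *tfm;
        struct skcipher_request *req;
        struct scatterlist sg;
        DECLARE_CRYPTO_WAIT(wait);
        void *buf;
        int err;

        tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_skcipher_setkey(tfm, key, sizeof(key));
        if (err)
                goto out_free_tfm;

        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                err = -ENOMEM;
                goto out_free_tfm;
        }

        /* One AES block, encrypted in place; kmalloc keeps it DMA-able. */
        buf = kzalloc(AES_BLOCK_SIZE, GFP_KERNEL);
        if (!buf) {
                err = -ENOMEM;
                goto out_free_req;
        }

        sg_init_one(&sg, buf, AES_BLOCK_SIZE);
        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
                                      crypto_req_done, &wait);
        skcipher_request_set_crypt(req, &sg, &sg, AES_BLOCK_SIZE, iv);

        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

        kfree(buf);
out_free_req:
        skcipher_request_free(req);
out_free_tfm:
        crypto_free_skcipher(tfm);
        return err;
}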
From patchwork Tue Sep 27 08:00:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Corentin LABBE X-Patchwork-Id: 12990099 From: Corentin Labbe To: heiko@sntech.de, davem@davemloft.net, herbert@gondor.apana.org.au, krzysztof.kozlowski+dt@linaro.org, robh+dt@kernel.org Cc: devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rockchip@lists.infradead.org, Corentin Labbe Subject: [PATCH RFT 5/5] ARM64: dts: rk3568: add crypto node Date: Tue, 27 Sep 2022 08:00:48 +0000 Message-Id: <20220927080048.3151911-6-clabbe@baylibre.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220927080048.3151911-1-clabbe@baylibre.com> References: <20220927080048.3151911-1-clabbe@baylibre.com> MIME-Version: 1.0 List-Id: Upstream kernel work for Rockchip platforms The rk3568 has a crypto IP handled by the rk3588 crypto driver, so add a node for it. Signed-off-by: Corentin Labbe --- arch/arm64/boot/dts/rockchip/rk3568.dtsi | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/arch/arm64/boot/dts/rockchip/rk3568.dtsi b/arch/arm64/boot/dts/rockchip/rk3568.dtsi index ba67b58f05b7..62432297f234 100644 --- a/arch/arm64/boot/dts/rockchip/rk3568.dtsi +++ b/arch/arm64/boot/dts/rockchip/rk3568.dtsi @@ -211,6 +211,20 @@ gmac0_mtl_tx_setup: tx-queues-config { }; }; + crypto: crypto@fe380000 { + compatible = "rockchip,rk3568-crypto"; + reg = <0x0 0xfe380000 0x0 0x2000>; + interrupts = ; + clocks = <&cru ACLK_CRYPTO_NS>, <&cru HCLK_CRYPTO_NS>, + <&cru CLK_CRYPTO_NS_CORE>, <&cru CLK_CRYPTO_NS_PKA>; + clock-names = "aclk", "hclk", "sclk", "pka"; + resets = <&cru SRST_CRYPTO_NS_CORE>, <&cru SRST_A_CRYPTO_NS>, + <&cru SRST_H_CRYPTO_NS>, <&cru SRST_CRYPTO_NS_RNG>, + <&cru SRST_CRYPTO_NS_PKA>; + reset-names = "core", "a", "h", "rng", "pka"; + status = "okay"; + }; + combphy0: phy@fe820000 { compatible = "rockchip,rk3568-naneng-combphy"; reg = <0x0 0xfe820000 0x0 0x100>;