From patchwork Wed Nov 30 05:52:09 2022
X-Patchwork-Submitter: Jia Jie Ho
X-Patchwork-Id: 13059479
From: Jia Jie Ho
To: Herbert Xu, "David S. Miller", Rob Herring, Krzysztof Kozlowski
Subject: [PATCH 1/6] crypto: starfive - Add StarFive crypto engine support
Date: Wed, 30 Nov 2022 13:52:09 +0800
Message-ID: <20221130055214.2416888-2-jiajie.ho@starfivetech.com>
In-Reply-To: <20221130055214.2416888-1-jiajie.ho@starfivetech.com>
References: <20221130055214.2416888-1-jiajie.ho@starfivetech.com>
List-Id: linux-riscv@lists.infradead.org

Add device probe and DMA initialization for the StarFive hardware
crypto engine.
Signed-off-by: Jia Jie Ho
Signed-off-by: Huan Feng
---
 MAINTAINERS                             |   7 +
 drivers/crypto/Kconfig                  |   1 +
 drivers/crypto/Makefile                 |   1 +
 drivers/crypto/starfive/Kconfig         |  20 ++
 drivers/crypto/starfive/Makefile        |   4 +
 drivers/crypto/starfive/starfive-cryp.c | 268 ++++++++++++++++++++++++
 drivers/crypto/starfive/starfive-regs.h |  26 +++
 drivers/crypto/starfive/starfive-str.h  |  74 +++++++
 8 files changed, 401 insertions(+)
 create mode 100644 drivers/crypto/starfive/Kconfig
 create mode 100644 drivers/crypto/starfive/Makefile
 create mode 100644 drivers/crypto/starfive/starfive-cryp.c
 create mode 100644 drivers/crypto/starfive/starfive-regs.h
 create mode 100644 drivers/crypto/starfive/starfive-str.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 65140500d9f8..ca189a563a39 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19609,6 +19609,13 @@ F:	Documentation/devicetree/bindings/clock/starfive*
 F:	drivers/clk/starfive/
 F:	include/dt-bindings/clock/starfive*
 
+STARFIVE CRYPTO DRIVER
+M:	Jia Jie Ho
+M:	William Qiu
+S:	Maintained
+F:	Documentation/devicetree/bindings/crypto/starfive*
+F:	drivers/crypto/starfive/
+
 STARFIVE PINCTRL DRIVER
 M:	Emil Renner Berthing
 M:	Jianlong Huang
diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 55e75fbb658e..64b94376601c 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -817,5 +817,6 @@ config CRYPTO_DEV_SA2UL
 
 source "drivers/crypto/keembay/Kconfig"
 source "drivers/crypto/aspeed/Kconfig"
+source "drivers/crypto/starfive/Kconfig"
 
 endif # CRYPTO_HW
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 116de173a66c..212931c84412 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -53,3 +53,4 @@ obj-y += xilinx/
 obj-y += hisilicon/
 obj-$(CONFIG_CRYPTO_DEV_AMLOGIC_GXL) += amlogic/
 obj-y += keembay/
+obj-y += starfive/
diff --git a/drivers/crypto/starfive/Kconfig b/drivers/crypto/starfive/Kconfig
new file mode 100644
index 000000000000..f8a2b6ecbddc
--- /dev/null
+++ b/drivers/crypto/starfive/Kconfig
@@ -0,0 +1,20 @@
+#
+# StarFive crypto drivers configuration
+#
+
+config CRYPTO_DEV_STARFIVE
+	tristate "StarFive cryptographic engine driver"
+	depends on SOC_STARFIVE
+	select CRYPTO_ENGINE
+	select CRYPTO_RSA
+	select CRYPTO_AES
+	select CRYPTO_CCM
+	select ARM_AMBA
+	select DMADEVICES
+	select AMBA_PL08X
+	help
+	  Support for StarFive crypto hardware acceleration engine.
+	  This module provides acceleration for public key algo,
+	  skciphers, AEAD and hash functions.
+
+	  If you choose 'M' here, this module will be called starfive-crypto.
diff --git a/drivers/crypto/starfive/Makefile b/drivers/crypto/starfive/Makefile
new file mode 100644
index 000000000000..5a84f808a671
--- /dev/null
+++ b/drivers/crypto/starfive/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_CRYPTO_DEV_STARFIVE) += starfive-crypto.o
+starfive-crypto-objs := starfive-cryp.o
diff --git a/drivers/crypto/starfive/starfive-cryp.c b/drivers/crypto/starfive/starfive-cryp.c
new file mode 100644
index 000000000000..574f9e8f4cc1
--- /dev/null
+++ b/drivers/crypto/starfive/starfive-cryp.c
@@ -0,0 +1,268 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Cryptographic API.
+ *
+ * Support for StarFive hardware cryptographic engine.
+ * Copyright (c) 2022 StarFive Technology
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "starfive-str.h"
+
+#define DRIVER_NAME	"starfive-crypto"
+
+struct starfive_dev_list {
+	struct list_head dev_list;
+	spinlock_t lock; /* protect dev_list */
+};
+
+static struct starfive_dev_list dev_list = {
+	.dev_list = LIST_HEAD_INIT(dev_list.dev_list),
+	.lock = __SPIN_LOCK_UNLOCKED(dev_list.lock),
+};
+
+struct starfive_sec_dev *starfive_sec_find_dev(struct starfive_sec_ctx *ctx)
+{
+	struct starfive_sec_dev *sdev = NULL, *tmp;
+
+	spin_lock_bh(&dev_list.lock);
+	if (!ctx->sdev) {
+		list_for_each_entry(tmp, &dev_list.dev_list, list) {
+			sdev = tmp;
+			break;
+		}
+		ctx->sdev = sdev;
+	} else {
+		sdev = ctx->sdev;
+	}
+
+	spin_unlock_bh(&dev_list.lock);
+
+	return sdev;
+}
+
+static const struct of_device_id starfive_dt_ids[] = {
+	{ .compatible = "starfive,jh7110-crypto", .data = NULL},
+	{},
+};
+MODULE_DEVICE_TABLE(of, starfive_dt_ids);
+
+static int starfive_dma_init(struct starfive_sec_dev *sdev)
+{
+	dma_cap_mask_t mask;
+	int err;
+
+	sdev->sec_xm_m = NULL;
+	sdev->sec_xm_p = NULL;
+
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_SLAVE, mask);
+
+	sdev->sec_xm_m = dma_request_chan(sdev->dev, "sec_m");
+	if (IS_ERR(sdev->sec_xm_m)) {
+		dev_err(sdev->dev, "sec_m dma channel request failed.\n");
+		return PTR_ERR(sdev->sec_xm_m);
+	}
+
+	sdev->sec_xm_p = dma_request_chan(sdev->dev, "sec_p");
+	if (IS_ERR(sdev->sec_xm_p)) {
+		dev_err(sdev->dev, "sec_p dma channel request failed.\n");
+		err = PTR_ERR(sdev->sec_xm_p);
+		goto err_dma_out;
+	}
+
+	init_completion(&sdev->sec_comp_m);
+	init_completion(&sdev->sec_comp_p);
+
+	return 0;
+
+err_dma_out:
+	dma_release_channel(sdev->sec_xm_m);
+
+	return err;
+}
+
+static void starfive_dma_cleanup(struct starfive_sec_dev *sdev)
+{
+	dma_release_channel(sdev->sec_xm_p);
+	dma_release_channel(sdev->sec_xm_m);
+}
+
+static int starfive_cryp_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct starfive_sec_dev *sdev;
+	struct resource *res;
+	int pages = 0;
+	int ret;
+
+	sdev = devm_kzalloc(dev, sizeof(*sdev), GFP_KERNEL);
+	if (!sdev)
+		return -ENOMEM;
+
+	sdev->dev = dev;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "secreg");
+	sdev->io_base = devm_ioremap_resource(dev, res);
+
+	if (IS_ERR(sdev->io_base))
+		return PTR_ERR(sdev->io_base);
+
+	sdev->use_side_channel_mitigation =
+		device_property_read_bool(dev, "enable-side-channel-mitigation");
+	sdev->use_dma = device_property_read_bool(dev, "enable-dma");
+	sdev->dma_maxburst = 32;
+
+	sdev->sec_hclk = devm_clk_get(dev, "sec_hclk");
+	if (IS_ERR(sdev->sec_hclk)) {
+		dev_err(dev, "Failed to get sec_hclk.\n");
+		return PTR_ERR(sdev->sec_hclk);
+	}
+
+	sdev->sec_ahb = devm_clk_get(dev, "sec_ahb");
+	if (IS_ERR(sdev->sec_ahb)) {
+		dev_err(dev, "Failed to get sec_ahb.\n");
+		return PTR_ERR(sdev->sec_ahb);
+	}
+
+	sdev->rst_hresetn = devm_reset_control_get_shared(sdev->dev, "sec_hre");
+	if (IS_ERR(sdev->rst_hresetn)) {
+		dev_err(sdev->dev, "Failed to get sec_hre.\n");
+		return PTR_ERR(sdev->rst_hresetn);
+	}
+
+	clk_prepare_enable(sdev->sec_hclk);
+	clk_prepare_enable(sdev->sec_ahb);
+	reset_control_deassert(sdev->rst_hresetn);
+
+	platform_set_drvdata(pdev, sdev);
+
+	spin_lock(&dev_list.lock);
+	list_add(&sdev->list, &dev_list.dev_list);
+	spin_unlock(&dev_list.lock);
+
+	if (sdev->use_dma) {
+		ret = starfive_dma_init(sdev);
+		if (ret) {
+			dev_err(dev, "Failed to initialize DMA channel.\n");
+			goto err_dma_init;
+		}
+	}
+
+	pages = get_order(STARFIVE_MSG_BUFFER_SIZE);
+
+	sdev->pages_count = pages >> 1;
+	sdev->data_buf_len = STARFIVE_MSG_BUFFER_SIZE >> 1;
+
+	/* Initialize crypto engine */
+	sdev->engine = crypto_engine_alloc_init(dev, 1);
+	if (!sdev->engine) {
+		ret = -ENOMEM;
+		goto err_engine;
+	}
+
+	ret = crypto_engine_start(sdev->engine);
+	if (ret)
+		goto err_engine_start;
+
+	dev_info(dev, "Crypto engine started\n");
+
+	return 0;
+
+err_engine_start:
+	crypto_engine_exit(sdev->engine);
+err_engine:
+	starfive_dma_cleanup(sdev);
+err_dma_init:
+	spin_lock(&dev_list.lock);
+	list_del(&sdev->list);
+	spin_unlock(&dev_list.lock);
+
+	return ret;
+}
+
+static int starfive_cryp_remove(struct platform_device *pdev)
+{
+	struct starfive_sec_dev *sdev = platform_get_drvdata(pdev);
+
+	if (!sdev)
+		return -ENODEV;
+
+	crypto_engine_stop(sdev->engine);
+	crypto_engine_exit(sdev->engine);
+
+	starfive_dma_cleanup(sdev);
+
+	spin_lock(&dev_list.lock);
+	list_del(&sdev->list);
+	spin_unlock(&dev_list.lock);
+
+	clk_disable_unprepare(sdev->sec_hclk);
+	clk_disable_unprepare(sdev->sec_ahb);
+	reset_control_assert(sdev->rst_hresetn);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM
+static int starfive_cryp_runtime_suspend(struct device *dev)
+{
+	struct starfive_sec_dev *sdev = dev_get_drvdata(dev);
+
+	clk_disable_unprepare(sdev->sec_ahb);
+	clk_disable_unprepare(sdev->sec_hclk);
+
+	return 0;
+}
+
+static int starfive_cryp_runtime_resume(struct device *dev)
+{
+	struct starfive_sec_dev *sdev = dev_get_drvdata(dev);
+	int ret;
+
+	ret = clk_prepare_enable(sdev->sec_ahb);
+	if (ret) {
+		dev_err(sdev->dev, "Failed to prepare_enable sec_ahb clock\n");
+		return ret;
+	}
+
+	ret = clk_prepare_enable(sdev->sec_hclk);
+	if (ret) {
+		dev_err(sdev->dev, "Failed to prepare_enable sec_hclk clock\n");
+		return ret;
+	}
+
+	return 0;
+}
+#endif
+
+static const struct dev_pm_ops starfive_cryp_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+				pm_runtime_force_resume)
+	SET_RUNTIME_PM_OPS(starfive_cryp_runtime_suspend,
+			   starfive_cryp_runtime_resume, NULL)
+};
+
+static struct platform_driver starfive_cryp_driver = {
+	.probe  = starfive_cryp_probe,
+	.remove = starfive_cryp_remove,
+	.driver = {
+		.name = DRIVER_NAME,
+		.pm = &starfive_cryp_pm_ops,
+		.of_match_table = starfive_dt_ids,
+	},
+};
+
+module_platform_driver(starfive_cryp_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("StarFive hardware crypto acceleration");
diff --git a/drivers/crypto/starfive/starfive-regs.h b/drivers/crypto/starfive/starfive-regs.h
new file mode 100644
index 000000000000..0d680cb1f502
--- /dev/null
+++ b/drivers/crypto/starfive/starfive-regs.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __STARFIVE_REGS_H__
+#define __STARFIVE_REGS_H__
+
+#define STARFIVE_ALG_CR_OFFSET		0x0
+#define STARFIVE_ALG_FIFO_OFFSET	0x4
+#define STARFIVE_IE_MASK_OFFSET		0x8
+#define STARFIVE_IE_FLAG_OFFSET		0xc
+#define STARFIVE_DMA_IN_LEN_OFFSET	0x10
+#define STARFIVE_DMA_OUT_LEN_OFFSET	0x14
+
+union starfive_alg_cr {
+	u32 v;
+	struct {
+		u32 start	:1;
+		u32 aes_dma_en	:1;
+		u32 rsvd_0	:1;
+		u32 hash_dma_en	:1;
+		u32 alg_done	:1;
+		u32 rsvd_1	:3;
+		u32 clear	:1;
+		u32 rsvd_2	:23;
+	};
+};
+
+#endif
diff --git a/drivers/crypto/starfive/starfive-str.h b/drivers/crypto/starfive/starfive-str.h
new file mode 100644
index 000000000000..4ba3c56f0573
--- /dev/null
+++ b/drivers/crypto/starfive/starfive-str.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __STARFIVE_STR_H__
+#define __STARFIVE_STR_H__
+
+#include
+#include
+#include
+
+#include
+
+#include "starfive-regs.h"
+
+#define STARFIVE_MSG_BUFFER_SIZE	SZ_16K
+
+struct starfive_sec_ctx {
+	struct crypto_engine_ctx enginectx;
+	struct starfive_sec_dev *sdev;
+
+	u8 *buffer;
+};
+
+struct starfive_sec_dev {
+	struct list_head list;
+	struct device *dev;
+
+	struct clk *sec_hclk;
+	struct clk *sec_ahb;
+	struct reset_control *rst_hresetn;
+
+	void __iomem *io_base;
+	phys_addr_t io_phys_base;
+
+	size_t data_buf_len;
+	int pages_count;
+	u32 use_side_channel_mitigation;
+	u32 use_dma;
+	u32 dma_maxburst;
+	struct dma_chan *sec_xm_m;
+	struct dma_chan *sec_xm_p;
+	struct dma_slave_config cfg_in;
+	struct dma_slave_config cfg_out;
+	struct completion sec_comp_m;
+	struct completion sec_comp_p;
+
+	struct crypto_engine *engine;
+
+	union starfive_alg_cr alg_cr;
+};
+
+static inline u32 starfive_sec_read(struct starfive_sec_dev *sdev, u32 offset)
+{
+	return __raw_readl(sdev->io_base + offset);
+}
+
+static inline u8 starfive_sec_readb(struct starfive_sec_dev *sdev, u32 offset)
+{
+	return __raw_readb(sdev->io_base + offset);
+}
+
+static inline void starfive_sec_write(struct starfive_sec_dev *sdev,
+				      u32 offset, u32 value)
+{
+	__raw_writel(value, sdev->io_base + offset);
+}
+
+static inline void starfive_sec_writeb(struct starfive_sec_dev *sdev,
+				       u32 offset, u8 value)
+{
+	__raw_writeb(value, sdev->io_base + offset);
+}
+
+struct starfive_sec_dev *starfive_sec_find_dev(struct starfive_sec_ctx *ctx);
+
+#endif

From patchwork Wed Nov 30 05:52:10 2022
X-Patchwork-Submitter: Jia Jie Ho
X-Patchwork-Id: 13059475
From: Jia Jie Ho
To: Herbert Xu, "David S. Miller", Rob Herring, Krzysztof Kozlowski
Subject: [PATCH 2/6] crypto: starfive - Add hash and HMAC support
Date: Wed, 30 Nov 2022 13:52:10 +0800
Message-ID: <20221130055214.2416888-3-jiajie.ho@starfivetech.com>
In-Reply-To: <20221130055214.2416888-1-jiajie.ho@starfivetech.com>
References: <20221130055214.2416888-1-jiajie.ho@starfivetech.com>
List-Id: linux-riscv@lists.infradead.org

Add hash and HMAC support for the SHA-2 family and SM3 to the StarFive
crypto hardware driver.
Signed-off-by: Jia Jie Ho
Signed-off-by: Huan Feng
---
 drivers/crypto/starfive/Makefile        |    2 +-
 drivers/crypto/starfive/starfive-cryp.c |   22 +
 drivers/crypto/starfive/starfive-hash.c | 1152 +++++++++++++++++++++++
 drivers/crypto/starfive/starfive-regs.h |   45 +
 drivers/crypto/starfive/starfive-str.h  |   52 +-
 5 files changed, 1271 insertions(+), 2 deletions(-)
 create mode 100644 drivers/crypto/starfive/starfive-hash.c

diff --git a/drivers/crypto/starfive/Makefile b/drivers/crypto/starfive/Makefile
index 5a84f808a671..437b8f036038 100644
--- a/drivers/crypto/starfive/Makefile
+++ b/drivers/crypto/starfive/Makefile
@@ -1,4 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 
 obj-$(CONFIG_CRYPTO_DEV_STARFIVE) += starfive-crypto.o
-starfive-crypto-objs := starfive-cryp.o
+starfive-crypto-objs := starfive-cryp.o starfive-hash.o
diff --git a/drivers/crypto/starfive/starfive-cryp.c b/drivers/crypto/starfive/starfive-cryp.c
index 574f9e8f4cc1..9f77cae758ac 100644
--- a/drivers/crypto/starfive/starfive-cryp.c
+++ b/drivers/crypto/starfive/starfive-cryp.c
@@ -109,6 +109,8 @@ static int starfive_cryp_probe(struct platform_device *pdev)
 	if (!sdev)
 		return -ENOMEM;
 
+	mutex_init(&sdev->lock);
+
 	sdev->dev = dev;
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "secreg");
@@ -117,6 +119,7 @@ static int starfive_cryp_probe(struct platform_device *pdev)
 	if (IS_ERR(sdev->io_base))
 		return PTR_ERR(sdev->io_base);
 
+	sdev->io_phys_base = res->start;
 	sdev->use_side_channel_mitigation =
 		device_property_read_bool(dev, "enable-side-channel-mitigation");
 	sdev->use_dma = device_property_read_bool(dev, "enable-dma");
@@ -160,6 +163,12 @@ static int starfive_cryp_probe(struct platform_device *pdev)
 
 	pages = get_order(STARFIVE_MSG_BUFFER_SIZE);
 
+	sdev->hash_data = (void *)__get_free_pages(GFP_KERNEL | GFP_DMA32, pages);
+	if (!sdev->hash_data) {
+		dev_err(sdev->dev, "Can't allocate hash buffer pages when unaligned\n");
+		goto err_hash_data;
+	}
+
 	sdev->pages_count = pages >> 1;
 	sdev->data_buf_len = STARFIVE_MSG_BUFFER_SIZE >> 1;
@@ -174,13 +183,21 @@ static int starfive_cryp_probe(struct platform_device *pdev)
 	if (ret)
 		goto err_engine_start;
 
+	ret = starfive_hash_register_algs();
+	if (ret)
+		goto err_algs_hash;
+
 	dev_info(dev, "Crypto engine started\n");
 
 	return 0;
 
+err_algs_hash:
+	crypto_engine_stop(sdev->engine);
 err_engine_start:
 	crypto_engine_exit(sdev->engine);
 err_engine:
+	free_pages((unsigned long)sdev->hash_data, pages);
+err_hash_data:
 	starfive_dma_cleanup(sdev);
 err_dma_init:
 	spin_lock(&dev_list.lock);
@@ -197,11 +214,16 @@ static int starfive_cryp_remove(struct platform_device *pdev)
 	if (!sdev)
 		return -ENODEV;
 
+	starfive_hash_unregister_algs();
+
 	crypto_engine_stop(sdev->engine);
 	crypto_engine_exit(sdev->engine);
 
 	starfive_dma_cleanup(sdev);
 
+	free_pages((unsigned long)sdev->hash_data, sdev->pages_count);
+	sdev->hash_data = NULL;
+
 	spin_lock(&dev_list.lock);
 	list_del(&sdev->list);
 	spin_unlock(&dev_list.lock);
diff --git a/drivers/crypto/starfive/starfive-hash.c b/drivers/crypto/starfive/starfive-hash.c
new file mode 100644
index 000000000000..c85dec784df9
--- /dev/null
+++ b/drivers/crypto/starfive/starfive-hash.c
@@ -0,0 +1,1152 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hash function and HMAC support for StarFive driver
+ *
+ * Copyright (c) 2022 StarFive Technology
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include "starfive-str.h"
+
+#define HASH_OP_UPDATE			1
+#define HASH_OP_FINAL			2
+
+#define HASH_FLAGS_INIT			BIT(0)
+#define HASH_FLAGS_FINAL		BIT(1)
+#define HASH_FLAGS_FINUP		BIT(2)
+
+#define STARFIVE_MAX_ALIGN_SIZE		SHA512_BLOCK_SIZE
+
+#define STARFIVE_HASH_BUFLEN		8192
+#define STARFIVE_HASH_THRES		2048
+
+static inline int starfive_hash_wait_hmac_done(struct starfive_sec_ctx *ctx)
+{
+	struct starfive_sec_dev *sdev = ctx->sdev;
+	u32 status;
+
+	return readl_relaxed_poll_timeout(sdev->io_base + STARFIVE_HASH_SHACSR, status,
+					  (status & STARFIVE_HASH_HMAC_DONE), 10, 100000);
+}
+
+static inline int starfive_hash_wait_busy(struct starfive_sec_ctx *ctx)
+{
+	struct starfive_sec_dev *sdev = ctx->sdev;
+	u32 status;
+
+	return readl_relaxed_poll_timeout(sdev->io_base + STARFIVE_HASH_SHACSR, status,
+					  !(status & STARFIVE_HASH_BUSY), 10, 100000);
+}
+
+static inline int starfive_hash_wait_key_done(struct starfive_sec_ctx *ctx)
+{
+	struct starfive_sec_dev *sdev = ctx->sdev;
+	u32 status;
+
+	return readl_relaxed_poll_timeout(sdev->io_base + STARFIVE_HASH_SHACSR, status,
+					  (status & STARFIVE_HASH_KEY_DONE), 10, 100000);
+}
+
+static int starfive_get_hash_size(struct starfive_sec_ctx *ctx)
+{
+	unsigned int hashsize;
+
+	switch (ctx->hash_mode & STARFIVE_HASH_MODE_MASK) {
+	case STARFIVE_HASH_SHA224:
+		hashsize = SHA224_DIGEST_SIZE;
+		break;
+	case STARFIVE_HASH_SHA256:
+		hashsize = SHA256_DIGEST_SIZE;
+		break;
+	case STARFIVE_HASH_SHA384:
+		hashsize = SHA384_DIGEST_SIZE;
+		break;
+	case STARFIVE_HASH_SHA512:
+		hashsize = SHA512_DIGEST_SIZE;
+		break;
+	case STARFIVE_HASH_SM3:
+		hashsize = SM3_DIGEST_SIZE;
+		break;
+	default:
+		return 0;
+	}
+
+	return hashsize;
+}
+
+static void starfive_hash_start(struct starfive_sec_ctx *ctx, int flags)
+{
+	struct starfive_sec_request_ctx *rctx = ctx->rctx;
+	struct starfive_sec_dev *sdev = ctx->sdev;
+
+	rctx->csr.hash.v = starfive_sec_read(sdev, STARFIVE_HASH_SHACSR);
+	rctx->csr.hash.firstb = 0;
+
+	if (flags)
+		rctx->csr.hash.final = 1;
+
+	starfive_sec_write(sdev, STARFIVE_HASH_SHACSR, rctx->csr.hash.v);
+}
+
+static int starfive_hash_hmac_key(struct starfive_sec_ctx *ctx)
+{
+	struct starfive_sec_request_ctx *rctx = ctx->rctx;
+	struct starfive_sec_dev *sdev = ctx->sdev;
+	int klen = ctx->keylen, loop;
+	unsigned int *key = (unsigned int *)ctx->key;
+	unsigned char *cl;
+
+	starfive_sec_write(sdev, STARFIVE_HASH_SHAWKLEN, ctx->keylen);
+
+	rctx->csr.hash.hmac = !!(ctx->hash_mode & STARFIVE_HASH_HMAC_FLAGS);
+	rctx->csr.hash.key_flag = 1;
+
+	starfive_sec_write(sdev, STARFIVE_HASH_SHACSR, rctx->csr.hash.v);
+
+	for (loop = 0; loop < klen / sizeof(unsigned int); loop++, key++)
+		starfive_sec_write(sdev, STARFIVE_HASH_SHAWKR, *key);
+
+	if (klen & 0x3) {
+		cl = (unsigned char *)key;
+		for (loop = 0; loop < (klen & 0x3); loop++, cl++)
+			starfive_sec_writeb(sdev, STARFIVE_HASH_SHAWKR, *cl);
+	}
+
+	if (starfive_hash_wait_key_done(ctx)) {
+		dev_err(sdev->dev, "starfive_hash_wait_key_done error\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static void starfive_hash_dma_callback(void *param)
+{
+	struct starfive_sec_dev *sdev = param;
+
+	complete(&sdev->sec_comp_m);
+}
+
+static int starfive_hash_xmit_dma(struct starfive_sec_ctx *ctx, int flags)
+{
+	struct starfive_sec_request_ctx *rctx = ctx->rctx;
+	struct starfive_sec_dev *sdev = ctx->sdev;
+	struct dma_async_tx_descriptor *in_desc;
+	dma_cookie_t cookie;
+	union starfive_alg_cr alg_cr;
+	int total_len;
+	int ret;
+
+	if (!rctx->bufcnt)
+		return 0;
+
+	ctx->hash_len_total += rctx->bufcnt;
+
+	total_len = rctx->bufcnt;
+
+	starfive_sec_write(sdev, STARFIVE_DMA_IN_LEN_OFFSET, rctx->bufcnt);
+
+	total_len = (total_len & 0x3) ? (((total_len >> 2) + 1) << 2) : total_len;
+
+	memset(sdev->hash_data + rctx->bufcnt, 0, total_len - rctx->bufcnt);
+
+	alg_cr.v = 0;
+	alg_cr.start = 1;
+	alg_cr.hash_dma_en = 1;
+	starfive_sec_write(sdev, STARFIVE_ALG_CR_OFFSET, alg_cr.v);
+
+	sg_init_table(&ctx->sg[0], 1);
+	sg_set_buf(&ctx->sg[0], sdev->hash_data, total_len);
+	sg_dma_address(&ctx->sg[0]) = phys_to_dma(sdev->dev, (unsigned long long)(sdev->hash_data));
+	sg_dma_len(&ctx->sg[0]) = total_len;
+
+	ret = dma_map_sg(sdev->dev, &ctx->sg[0], 1, DMA_TO_DEVICE);
+	if (!ret) {
+		dev_err(sdev->dev, "dma_map_sg() error\n");
+		return -EINVAL;
+	}
+
+	sdev->cfg_in.direction = DMA_MEM_TO_DEV;
+	sdev->cfg_in.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	sdev->cfg_in.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	sdev->cfg_in.src_maxburst = sdev->dma_maxburst;
+	sdev->cfg_in.dst_maxburst = sdev->dma_maxburst;
+	sdev->cfg_in.dst_addr = sdev->io_phys_base + STARFIVE_ALG_FIFO_OFFSET;
+
+	dmaengine_slave_config(sdev->sec_xm_m, &sdev->cfg_in);
+
+	in_desc = dmaengine_prep_slave_sg(sdev->sec_xm_m, &ctx->sg[0],
+					  1, DMA_MEM_TO_DEV,
+					  DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!in_desc)
+		return -EINVAL;
+
+	reinit_completion(&sdev->sec_comp_m);
+
+	in_desc->callback = starfive_hash_dma_callback;
+	in_desc->callback_param = sdev;
+
+	cookie = dmaengine_submit(in_desc);
+	dma_async_issue_pending(sdev->sec_xm_m);
+
+	if (!wait_for_completion_timeout(&sdev->sec_comp_m,
+					 msecs_to_jiffies(10000))) {
+		dev_err(sdev->dev, "wait_for_completion_timeout error, cookie = %x\n",
+			dma_async_is_tx_complete(sdev->sec_xm_p, cookie,
+						 NULL, NULL));
+	}
+
+	dma_unmap_sg(sdev->dev, &ctx->sg[0], 1, DMA_TO_DEVICE);
+
+	alg_cr.v = 0;
+	alg_cr.clear = 1;
+	starfive_sec_write(sdev, STARFIVE_ALG_CR_OFFSET, alg_cr.v);
+
+	return 0;
+}
+
+static int starfive_hash_xmit_cpu(struct starfive_sec_ctx *ctx, int flags)
+{
+	struct starfive_sec_request_ctx *rctx = ctx->rctx;
+	struct starfive_sec_dev *sdev = ctx->sdev;
+	int total_len, mlen, loop;
+	unsigned int *buffer;
+	unsigned char *cl;
+
+	if (!rctx->bufcnt)
+		return 0;
+
+	ctx->hash_len_total += rctx->bufcnt;
+
+	total_len = rctx->bufcnt;
+	mlen = total_len / sizeof(u32);
+	buffer = (unsigned int *)ctx->buffer;
+
+	for (loop = 0; loop < mlen; loop++, buffer++) {
+		starfive_sec_write(sdev, STARFIVE_HASH_SHAWDR, *buffer);
+		udelay(2);
+	}
+
+	if (total_len & 0x3) {
+		cl = (unsigned char *)buffer;
+		for (loop = 0; loop < (total_len & 0x3); loop++, cl++) {
+			starfive_sec_writeb(sdev, STARFIVE_HASH_SHAWDR, *cl);
+			udelay(2);
+		}
+	}
+
+	return 0;
+}
+
+static void starfive_hash_append_sg(struct starfive_sec_request_ctx *rctx)
+{
+	struct starfive_sec_ctx *ctx = rctx->ctx;
+	size_t count;
+
+	while ((rctx->bufcnt < rctx->buflen) && rctx->total) {
+		count = min(rctx->in_sg->length - rctx->offset, rctx->total);
+		count = min(count, rctx->buflen - rctx->bufcnt);
+
+		if (count <= 0) {
+			if (rctx->in_sg->length == 0 && !sg_is_last(rctx->in_sg)) {
+				rctx->in_sg = sg_next(rctx->in_sg);
+				continue;
+			} else {
+				break;
+			}
+		}
+
+		scatterwalk_map_and_copy(ctx->buffer + rctx->bufcnt, rctx->in_sg,
+					 rctx->offset, count, 0);
+
+		rctx->bufcnt += count;
+		rctx->offset += count;
+		rctx->total -= count;
+
+		if (rctx->offset == rctx->in_sg->length) {
+			rctx->in_sg = sg_next(rctx->in_sg);
+			if (rctx->in_sg)
+				rctx->offset = 0;
+			else
+				rctx->total = 0;
+		}
+	}
+}
+
+static int starfive_hash_xmit(struct starfive_sec_ctx *ctx, int flags)
+{
+	struct starfive_sec_request_ctx *rctx = ctx->rctx;
+	struct starfive_sec_dev *sdev = ctx->sdev;
+	int ret;
+
+	rctx->csr.hash.v = 0;
+	rctx->csr.hash.reset = 1;
+	starfive_sec_write(sdev, STARFIVE_HASH_SHACSR, rctx->csr.hash.v);
+
+	if (starfive_hash_wait_busy(ctx)) {
+		dev_err(sdev->dev, "Error resetting engine.\n");
+		return -ETIMEDOUT;
+	}
+
+	rctx->csr.hash.v = 0;
+	rctx->csr.hash.mode = ctx->hash_mode & STARFIVE_HASH_MODE_MASK;
+
+	if (ctx->hash_mode & STARFIVE_HASH_HMAC_FLAGS) {
+		ret = starfive_hash_hmac_key(ctx);
+		if (ret)
+			return ret;
+	} else {
+		rctx->csr.hash.start = 1;
+		rctx->csr.hash.firstb = 1;
+		starfive_sec_write(sdev, STARFIVE_HASH_SHACSR, rctx->csr.hash.v);
+	}
+
+	if (ctx->sdev->use_dma)
+		ret = starfive_hash_xmit_dma(ctx, flags);
+	else
+		ret = starfive_hash_xmit_cpu(ctx, flags);
+
+	if (ret)
+		return ret;
+
+	rctx->flags |= HASH_FLAGS_FINAL;
+	starfive_hash_start(ctx, flags);
+
+	if (starfive_hash_wait_busy(ctx)) {
+		dev_err(sdev->dev, "Timeout waiting for hash completion\n");
+		return -ETIMEDOUT;
+	}
+
+	if (ctx->hash_mode & STARFIVE_HASH_HMAC_FLAGS)
+		if (starfive_hash_wait_hmac_done(ctx)) {
+			dev_err(sdev->dev, "Timeout waiting for hmac completion\n");
+			return -ETIMEDOUT;
+		}
+
+	return 0;
+}
+
+static int starfive_hash_update_req(struct starfive_sec_ctx *ctx)
+{
+	struct starfive_sec_request_ctx *rctx = ctx->rctx;
+	int ret, final;
+
+	final = (rctx->flags & HASH_FLAGS_FINUP);
+
+	while ((rctx->total >= rctx->buflen) ||
+	       (rctx->bufcnt + rctx->total >= rctx->buflen)) {
+		starfive_hash_append_sg(rctx);
+		ret = starfive_hash_xmit(ctx, 0);
+		rctx->bufcnt = 0;
+	}
+
+	starfive_hash_append_sg(rctx);
+
+	if (final) {
+		ret = starfive_hash_xmit(ctx, (rctx->flags & HASH_FLAGS_FINUP));
+		rctx->bufcnt = 0;
+	}
+
+	return ret;
+}
+
+static int starfive_hash_final_req(struct starfive_sec_ctx *ctx)
+{
+	struct ahash_request *req = ctx->rctx->req.hreq;
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+	int ret;
+
+	ret = starfive_hash_xmit(ctx, 1);
+	rctx->bufcnt = 0;
+
+	return ret;
+}
+
+static int starfive_hash_out_cpu(struct ahash_request *req)
+{
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+	struct starfive_sec_ctx *ctx = rctx->ctx;
+	int count, *data;
+	int mlen;
+
+	if (!req->result)
+		return 0;
+
+	mlen = starfive_get_hash_size(ctx) / sizeof(u32);
+	data = (u32 *)req->result;
+
+	for (count = 0; count < mlen; count++)
+		data[count] = starfive_sec_read(ctx->sdev, STARFIVE_HASH_SHARDR);
+
+	return 0;
+}
+
+static int starfive_hash_copy_hash(struct ahash_request *req)
+{
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+	struct starfive_sec_ctx *ctx = rctx->ctx;
+	int hashsize;
+	int ret;
+
+	hashsize = starfive_get_hash_size(ctx);
+
+	ret = starfive_hash_out_cpu(req);
+
+	if (ret)
+		return ret;
+
+	memcpy(rctx->hash_digest_mid, req->result, hashsize);
+	rctx->hash_digest_len = hashsize;
+
+	return ret;
+}
+
+static void starfive_hash_finish_req(struct ahash_request *req, int err)
+{
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+	struct starfive_sec_dev *sdev = rctx->sdev;
+
+	if (!err && (HASH_FLAGS_FINAL & rctx->flags)) {
+		err = starfive_hash_copy_hash(req);
+		rctx->flags &= ~(HASH_FLAGS_FINAL |
+				 HASH_FLAGS_INIT);
+	}
+
+	crypto_finalize_hash_request(sdev->engine, req, err);
+}
+
+static int starfive_hash_prepare_req(struct crypto_engine *engine, void *areq)
+{
+	struct ahash_request *req = container_of(areq, struct ahash_request,
+						 base);
+	struct starfive_sec_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+	struct starfive_sec_dev *sdev = ctx->sdev;
+	struct starfive_sec_request_ctx *rctx;
+
+	if (!sdev)
+		return -ENODEV;
+
+	rctx = ahash_request_ctx(req);
+
+	rctx->req.hreq = req;
+
+	return 0;
+}
+
+static int starfive_hash_one_request(struct crypto_engine *engine, void *areq)
+{
+	struct ahash_request *req = container_of(areq, struct ahash_request,
+						 base);
+	struct starfive_sec_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+	struct starfive_sec_dev *sdev = ctx->sdev;
+	struct starfive_sec_request_ctx *rctx;
+	int err = 0;
+
+	if (!sdev)
+		return -ENODEV;
+
+	rctx = ahash_request_ctx(req);
+
+	mutex_lock(&ctx->sdev->lock);
+
+	if (rctx->op == HASH_OP_UPDATE)
+		err = starfive_hash_update_req(ctx);
+	else if (rctx->op == HASH_OP_FINAL)
+		err = starfive_hash_final_req(ctx);
+
+	if (err != -EINPROGRESS)
+		/* done task will not finish it, so do it here */
+		starfive_hash_finish_req(req, err);
+
+	mutex_unlock(&ctx->sdev->lock);
+
+	return 0;
+}
+
+static int starfive_hash_handle_queue(struct starfive_sec_dev *sdev,
+				      struct ahash_request *req)
+{
+	return crypto_transfer_hash_request_to_engine(sdev->engine, req);
+}
+
+static int starfive_hash_enqueue(struct ahash_request *req, unsigned int op)
+{
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+	struct starfive_sec_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+	struct starfive_sec_dev *sdev = ctx->sdev;
+
+	rctx->op = op;
+
+	return starfive_hash_handle_queue(sdev, req);
+}
+
+static int starfive_hash_init(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct starfive_sec_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+	struct starfive_sec_dev *sdev = ctx->sdev;
+
+	memset(rctx, 0, sizeof(struct starfive_sec_request_ctx));
+
+	rctx->sdev = sdev;
+	rctx->ctx = ctx;
+	rctx->req.hreq = req;
+	rctx->bufcnt = 0;
+
+	rctx->total = 0;
+	rctx->offset = 0;
+	rctx->bufcnt = 0;
+	rctx->buflen = STARFIVE_HASH_BUFLEN;
+
+	memset(ctx->buffer, 0, STARFIVE_HASH_BUFLEN);
+
+	ctx->rctx = rctx;
+
+	dev_dbg(sdev->dev, "%s Flags %lx\n", __func__, rctx->flags);
+
+	return 0;
+}
+
+static int starfive_hash_update(struct ahash_request *req)
+{
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+
+	if (!req->nbytes)
+		return 0;
+
+	rctx->total = req->nbytes;
+	rctx->in_sg = req->src;
+	rctx->offset = 0;
+
+	if ((rctx->bufcnt + rctx->total < rctx->buflen)) {
+		starfive_hash_append_sg(rctx);
+		return 0;
+	}
+
+	return starfive_hash_enqueue(req, HASH_OP_UPDATE);
+}
+
+static int starfive_hash_final(struct ahash_request *req)
+{
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+	struct starfive_sec_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+
+	rctx->flags |= HASH_FLAGS_FINUP;
+
+	if (ctx->fallback_available && rctx->bufcnt < STARFIVE_HASH_THRES) {
+		if (ctx->hash_mode & STARFIVE_HASH_HMAC_FLAGS)
crypto_shash_setkey(ctx->fallback.shash, ctx->key,
+					    ctx->keylen);
+
+		return crypto_shash_tfm_digest(ctx->fallback.shash, ctx->buffer,
+					       rctx->bufcnt, req->result);
+	}
+
+	return starfive_hash_enqueue(req, HASH_OP_FINAL);
+}
+
+static int starfive_hash_finup(struct ahash_request *req)
+{
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+	int err1, err2;
+
+	rctx->flags |= HASH_FLAGS_FINUP;
+
+	err1 = starfive_hash_update(req);
+
+	if (err1 == -EINPROGRESS || err1 == -EBUSY)
+		return err1;
+
+	/*
+	 * final() has to be always called to cleanup resources
+	 * even if update() failed, except EINPROGRESS
+	 */
+	err2 = starfive_hash_final(req);
+
+	return err1 ?: err2;
+}
+
+static int starfive_hash_digest(struct ahash_request *req)
+{
+	return starfive_hash_init(req) ?: starfive_hash_finup(req);
+}
+
+static int starfive_hash_export(struct ahash_request *req, void *out)
+{
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+
+	memcpy(out, rctx, sizeof(*rctx));
+
+	return 0;
+}
+
+static int starfive_hash_import(struct ahash_request *req, const void *in)
+{
+	struct starfive_sec_request_ctx *rctx = ahash_request_ctx(req);
+
+	memcpy(rctx, in, sizeof(*rctx));
+
+	return 0;
+}
+
+static int starfive_hash_cra_init_algs(struct crypto_tfm *tfm,
+				       const char *algs_hmac_name,
+				       unsigned int mode)
+{
+	struct starfive_sec_ctx *ctx = crypto_tfm_ctx(tfm);
+	const char *alg_name = crypto_tfm_alg_name(tfm);
+
+	ctx->sdev = starfive_sec_find_dev(ctx);
+
+	if (!ctx->sdev)
+		return -ENODEV;
+
+	ctx->fallback_available = true;
+	ctx->fallback.shash = crypto_alloc_shash(alg_name, 0,
+						 CRYPTO_ALG_NEED_FALLBACK);
+
+	if (IS_ERR(ctx->fallback.shash))
+		ctx->fallback_available = false;
+
+	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+				 sizeof(struct starfive_sec_request_ctx));
+
+	ctx->keylen = 0;
+	ctx->hash_mode = mode;
+	ctx->hash_len_total = 0;
+	ctx->buffer = ctx->sdev->hash_data;
+
+ if (algs_hmac_name) + ctx->hash_mode |= STARFIVE_HASH_HMAC_FLAGS; + + ctx->enginectx.op.do_one_request = starfive_hash_one_request; + ctx->enginectx.op.prepare_request = starfive_hash_prepare_req; + ctx->enginectx.op.unprepare_request = NULL; + + return 0; +} + +static void starfive_hash_cra_exit(struct crypto_tfm *tfm) +{ + struct starfive_sec_ctx *ctx = crypto_tfm_ctx(tfm); + + crypto_free_shash(ctx->fallback.shash); + + ctx->fallback.shash = NULL; + ctx->enginectx.op.do_one_request = NULL; + ctx->enginectx.op.prepare_request = NULL; + ctx->enginectx.op.unprepare_request = NULL; +} + +static int starfive_hash_long_setkey(struct starfive_sec_ctx *ctx, + const u8 *key, unsigned int keylen, + const char *alg_name) +{ + struct crypto_wait wait; + struct ahash_request *req; + struct scatterlist sg; + struct crypto_ahash *ahash_tfm; + u8 *buf; + int ret; + + ahash_tfm = crypto_alloc_ahash(alg_name, 0, 0); + if (IS_ERR(ahash_tfm)) + return PTR_ERR(ahash_tfm); + + req = ahash_request_alloc(ahash_tfm, GFP_KERNEL); + if (!req) { + ret = -ENOMEM; + goto err_free_ahash; + } + + crypto_init_wait(&wait); + ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, + crypto_req_done, &wait); + crypto_ahash_clear_flags(ahash_tfm, ~0); + + buf = kzalloc(keylen + STARFIVE_MAX_ALIGN_SIZE, GFP_KERNEL); + if (!buf) { + ret = -ENOMEM; + goto err_free_req; + } + + memcpy(buf, key, keylen); + sg_init_one(&sg, buf, keylen); + ahash_request_set_crypt(req, &sg, ctx->key, keylen); + + ret = crypto_wait_req(crypto_ahash_digest(req), &wait); + +err_free_req: + ahash_request_free(req); +err_free_ahash: + crypto_free_ahash(ahash_tfm); + return ret; +} + +static int starfive_hash224_setkey(struct crypto_ahash *tfm, + const u8 *key, unsigned int keylen) +{ + struct starfive_sec_ctx *ctx = crypto_ahash_ctx(tfm); + unsigned int digestsize = crypto_ahash_digestsize(tfm); + unsigned int blocksize; + int ret = 0; + + blocksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); + + if (keylen <= 
blocksize) { + memcpy(ctx->key, key, keylen); + ctx->keylen = keylen; + } else { + ctx->keylen = digestsize; + ret = starfive_hash_long_setkey(ctx, key, keylen, "starfive-sha224"); + } + + return ret; +} + +static int starfive_hash256_setkey(struct crypto_ahash *tfm, + const u8 *key, unsigned int keylen) +{ + struct starfive_sec_ctx *ctx = crypto_ahash_ctx(tfm); + unsigned int digestsize = crypto_ahash_digestsize(tfm); + unsigned int blocksize; + int ret = 0; + + blocksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); + + if (keylen <= blocksize) { + memcpy(ctx->key, key, keylen); + ctx->keylen = keylen; + } else { + ctx->keylen = digestsize; + ret = starfive_hash_long_setkey(ctx, key, keylen, "starfive-sha256"); + } + + return ret; +} + +static int starfive_hash384_setkey(struct crypto_ahash *tfm, + const u8 *key, unsigned int keylen) +{ + struct starfive_sec_ctx *ctx = crypto_ahash_ctx(tfm); + unsigned int digestsize = crypto_ahash_digestsize(tfm); + unsigned int blocksize; + int ret = 0; + + blocksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); + + if (keylen <= blocksize) { + memcpy(ctx->key, key, keylen); + ctx->keylen = keylen; + } else { + ctx->keylen = digestsize; + ret = starfive_hash_long_setkey(ctx, key, keylen, "starfive-sha384"); + } + + return ret; +} + +static int starfive_hash512_setkey(struct crypto_ahash *tfm, + const u8 *key, unsigned int keylen) +{ + struct starfive_sec_ctx *ctx = crypto_ahash_ctx(tfm); + unsigned int digestsize = crypto_ahash_digestsize(tfm); + unsigned int blocksize; + int ret = 0; + + blocksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); + + if (keylen <= blocksize) { + memcpy(ctx->key, key, keylen); + ctx->keylen = keylen; + } else { + ctx->keylen = digestsize; + ret = starfive_hash_long_setkey(ctx, key, keylen, "starfive-sha512"); + } + + return ret; +} + +static int starfive_sm3_setkey(struct crypto_ahash *tfm, + const u8 *key, unsigned int keylen) +{ + struct starfive_sec_ctx *ctx = 
crypto_ahash_ctx(tfm); + unsigned int digestsize = crypto_ahash_digestsize(tfm); + unsigned int blocksize; + int ret = 0; + + blocksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); + + if (keylen <= blocksize) { + memcpy(ctx->key, key, keylen); + ctx->keylen = keylen; + } else { + ctx->keylen = digestsize; + ret = starfive_hash_long_setkey(ctx, key, keylen, "starfive-sm3"); + } + + return ret; +} + +static int starfive_hash_cra_sha224_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, NULL, STARFIVE_HASH_SHA224); +} + +static int starfive_hash_cra_sha256_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, NULL, STARFIVE_HASH_SHA256); +} + +static int starfive_hash_cra_sha384_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, NULL, STARFIVE_HASH_SHA384); +} + +static int starfive_hash_cra_sha512_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, NULL, STARFIVE_HASH_SHA512); +} + +static int starfive_hash_cra_sm3_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, NULL, STARFIVE_HASH_SM3); +} + +static int starfive_hash_cra_hmac_sha224_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, "sha224", STARFIVE_HASH_SHA224); +} + +static int starfive_hash_cra_hmac_sha256_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, "sha256", STARFIVE_HASH_SHA256); +} + +static int starfive_hash_cra_hmac_sha384_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, "sha384", STARFIVE_HASH_SHA384); +} + +static int starfive_hash_cra_hmac_sha512_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, "sha512", STARFIVE_HASH_SHA512); +} + +static int starfive_hash_cra_hmac_sm3_init(struct crypto_tfm *tfm) +{ + return starfive_hash_cra_init_algs(tfm, "sm3", STARFIVE_HASH_SM3); +} + +static struct ahash_alg algs_sha2_sm3[] = { +{ + .init = starfive_hash_init, + .update = starfive_hash_update, 
+ .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + .export = starfive_hash_export, + .import = starfive_hash_import, + .halg = { + .digestsize = SHA224_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "sha224", + .cra_driver_name = "starfive-sha224", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA224_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_sha224_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +{ + .init = starfive_hash_init, + .update = starfive_hash_update, + .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + .export = starfive_hash_export, + .import = starfive_hash_import, + .setkey = starfive_hash224_setkey, + .halg = { + .digestsize = SHA224_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "hmac(sha224)", + .cra_driver_name = "starfive-hmac-sha224", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA224_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_hmac_sha224_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +{ + .init = starfive_hash_init, + .update = starfive_hash_update, + .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + .export = starfive_hash_export, + .import = starfive_hash_import, + .halg = { + .digestsize = SHA256_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "sha256", + .cra_driver_name = "starfive-sha256", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + 
CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA256_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_sha256_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +{ + .init = starfive_hash_init, + .update = starfive_hash_update, + .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + .export = starfive_hash_export, + .import = starfive_hash_import, + .setkey = starfive_hash256_setkey, + .halg = { + .digestsize = SHA256_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "hmac(sha256)", + .cra_driver_name = "starfive-hmac-sha256", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA256_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_hmac_sha256_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +{ + .init = starfive_hash_init, + .update = starfive_hash_update, + .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + .export = starfive_hash_export, + .import = starfive_hash_import, + .halg = { + .digestsize = SHA384_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "sha384", + .cra_driver_name = "starfive-sha384", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA384_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_sha384_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +{ + .init = starfive_hash_init, + .update = starfive_hash_update, + .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + 
.setkey = starfive_hash384_setkey, + .export = starfive_hash_export, + .import = starfive_hash_import, + .halg = { + .digestsize = SHA384_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "hmac(sha384)", + .cra_driver_name = "starfive-hmac-sha384", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA384_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_hmac_sha384_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +{ + .init = starfive_hash_init, + .update = starfive_hash_update, + .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + .export = starfive_hash_export, + .import = starfive_hash_import, + .halg = { + .digestsize = SHA512_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "sha512", + .cra_driver_name = "starfive-sha512", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH, + .cra_blocksize = SHA512_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_sha512_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +{ + .init = starfive_hash_init, + .update = starfive_hash_update, + .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + .setkey = starfive_hash512_setkey, + .export = starfive_hash_export, + .import = starfive_hash_import, + .halg = { + .digestsize = SHA512_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "hmac(sha512)", + .cra_driver_name = "starfive-hmac-sha512", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA512_BLOCK_SIZE, + .cra_ctxsize = 
sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_hmac_sha512_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +{ + .init = starfive_hash_init, + .update = starfive_hash_update, + .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + .export = starfive_hash_export, + .import = starfive_hash_import, + .halg = { + .digestsize = SM3_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "sm3", + .cra_driver_name = "starfive-sm3", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SM3_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_sm3_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +{ + .init = starfive_hash_init, + .update = starfive_hash_update, + .final = starfive_hash_final, + .finup = starfive_hash_finup, + .digest = starfive_hash_digest, + .setkey = starfive_sm3_setkey, + .export = starfive_hash_export, + .import = starfive_hash_import, + .halg = { + .digestsize = SM3_DIGEST_SIZE, + .statesize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "hmac(sm3)", + .cra_driver_name = "starfive-hmac-sm3", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SM3_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 3, + .cra_init = starfive_hash_cra_hmac_sm3_init, + .cra_exit = starfive_hash_cra_exit, + .cra_module = THIS_MODULE, + } + } +}, +}; + +int starfive_hash_register_algs(void) +{ + int ret = 0; + + ret = crypto_register_ahashes(algs_sha2_sm3, ARRAY_SIZE(algs_sha2_sm3)); + + return ret; +} + +void starfive_hash_unregister_algs(void) +{ + crypto_unregister_ahashes(algs_sha2_sm3, ARRAY_SIZE(algs_sha2_sm3)); +} 
diff --git a/drivers/crypto/starfive/starfive-regs.h b/drivers/crypto/starfive/starfive-regs.h index 0d680cb1f502..3f5e8881b3c0 100644 --- a/drivers/crypto/starfive/starfive-regs.h +++ b/drivers/crypto/starfive/starfive-regs.h @@ -9,6 +9,8 @@ #define STARFIVE_DMA_IN_LEN_OFFSET 0x10 #define STARFIVE_DMA_OUT_LEN_OFFSET 0x14 +#define STARFIVE_HASH_REGS_OFFSET 0x300 + union starfive_alg_cr { u32 v; struct { @@ -23,4 +25,47 @@ union starfive_alg_cr { }; }; +#define STARFIVE_HASH_SHACSR (STARFIVE_HASH_REGS_OFFSET + 0x0) +#define STARFIVE_HASH_SHAWDR (STARFIVE_HASH_REGS_OFFSET + 0x4) +#define STARFIVE_HASH_SHARDR (STARFIVE_HASH_REGS_OFFSET + 0x8) +#define STARFIVE_HASH_SHAWSR (STARFIVE_HASH_REGS_OFFSET + 0xC) +#define STARFIVE_HASH_SHAWLEN3 (STARFIVE_HASH_REGS_OFFSET + 0x10) +#define STARFIVE_HASH_SHAWLEN2 (STARFIVE_HASH_REGS_OFFSET + 0x14) +#define STARFIVE_HASH_SHAWLEN1 (STARFIVE_HASH_REGS_OFFSET + 0x18) +#define STARFIVE_HASH_SHAWLEN0 (STARFIVE_HASH_REGS_OFFSET + 0x1C) +#define STARFIVE_HASH_SHAWKR (STARFIVE_HASH_REGS_OFFSET + 0x20) +#define STARFIVE_HASH_SHAWKLEN (STARFIVE_HASH_REGS_OFFSET + 0x24) + +union starfive_hash_csr { + u32 v; + struct { + u32 start :1; + u32 reset :1; + u32 rsvd_0 :1; + u32 firstb :1; +#define STARFIVE_HASH_SM3 0x0 +#define STARFIVE_HASH_SHA224 0x3 +#define STARFIVE_HASH_SHA256 0x4 +#define STARFIVE_HASH_SHA384 0x5 +#define STARFIVE_HASH_SHA512 0x6 +#define STARFIVE_HASH_MODE_MASK 0x7 + u32 mode :3; + u32 rsvd_1 :1; + u32 final :1; + u32 rsvd_2 :2; +#define STARFIVE_HASH_HMAC_FLAGS 0x800 + u32 hmac :1; + u32 rsvd_3 :1; +#define STARFIVE_HASH_KEY_DONE BIT(13) + u32 key_done :1; + u32 key_flag :1; +#define STARFIVE_HASH_HMAC_DONE BIT(15) + u32 hmac_done :1; +#define STARFIVE_HASH_BUSY BIT(16) + u32 busy :1; + u32 hashdone :1; + u32 rsvd_4 :14; + }; +}; + #endif diff --git a/drivers/crypto/starfive/starfive-str.h b/drivers/crypto/starfive/starfive-str.h index 4ba3c56f0573..a6fed48a0b19 100644 --- a/drivers/crypto/starfive/starfive-str.h +++ 
b/drivers/crypto/starfive/starfive-str.h @@ -7,16 +7,29 @@ #include #include +#include +#include #include "starfive-regs.h" #define STARFIVE_MSG_BUFFER_SIZE SZ_16K +#define MAX_KEY_SIZE SHA512_BLOCK_SIZE struct starfive_sec_ctx { struct crypto_engine_ctx enginectx; struct starfive_sec_dev *sdev; - + struct starfive_sec_request_ctx *rctx; + unsigned int hash_mode; + u8 key[MAX_KEY_SIZE]; + int keylen; + struct scatterlist sg[2]; + size_t hash_len_total; u8 *buffer; + + union { + struct crypto_shash *shash; + } fallback; + bool fallback_available; }; struct starfive_sec_dev { @@ -29,6 +42,7 @@ struct starfive_sec_dev { void __iomem *io_base; phys_addr_t io_phys_base; + void *hash_data; size_t data_buf_len; int pages_count; @@ -41,12 +55,45 @@ struct starfive_sec_dev { struct dma_slave_config cfg_out; struct completion sec_comp_m; struct completion sec_comp_p; + /* To synchronize concurrent request from different + * crypto module accessing the hardware engine. + */ + struct mutex lock; struct crypto_engine *engine; union starfive_alg_cr alg_cr; }; +struct starfive_sec_request_ctx { + struct starfive_sec_ctx *ctx; + struct starfive_sec_dev *sdev; + + union { + struct ahash_request *hreq; + } req; +#define STARFIVE_AHASH_REQ 0 + unsigned int req_type; + + union { + union starfive_hash_csr hash; + } csr; + + struct scatterlist *in_sg; + + unsigned long flags; + unsigned long op; + + size_t bufcnt; + size_t buflen; + size_t total; + size_t offset; + size_t data_offset; + + unsigned int hash_digest_len; + u8 hash_digest_mid[SHA512_DIGEST_SIZE]__aligned(sizeof(u32)); +}; + static inline u32 starfive_sec_read(struct starfive_sec_dev *sdev, u32 offset) { return __raw_readl(sdev->io_base + offset); @@ -71,4 +118,7 @@ static inline void starfive_sec_writeb(struct starfive_sec_dev *sdev, struct starfive_sec_dev *starfive_sec_find_dev(struct starfive_sec_ctx *ctx); +int starfive_hash_register_algs(void); +void starfive_hash_unregister_algs(void); + #endif From patchwork Wed Nov 
30 05:52:11 2022
From: Jia Jie Ho
To: Herbert Xu, "David S . Miller", Rob Herring, Krzysztof Kozlowski
CC: Jia Jie Ho
Subject: [PATCH 3/6] crypto: starfive - Add AES skcipher and aead support
Date: Wed, 30 Nov 2022 13:52:11 +0800
Message-ID: <20221130055214.2416888-4-jiajie.ho@starfivetech.com>
In-Reply-To: <20221130055214.2416888-1-jiajie.ho@starfivetech.com>
References: <20221130055214.2416888-1-jiajie.ho@starfivetech.com>

Adding AES skcipher and AEAD support to the StarFive crypto driver.
Signed-off-by: Jia Jie Ho Signed-off-by: Huan Feng --- drivers/crypto/starfive/Makefile | 2 +- drivers/crypto/starfive/starfive-aes.c | 1723 +++++++++++++++++++++++ drivers/crypto/starfive/starfive-cryp.c | 17 + drivers/crypto/starfive/starfive-regs.h | 64 + drivers/crypto/starfive/starfive-str.h | 39 +- 5 files changed, 1842 insertions(+), 3 deletions(-) create mode 100644 drivers/crypto/starfive/starfive-aes.c diff --git a/drivers/crypto/starfive/Makefile b/drivers/crypto/starfive/Makefile index 437b8f036038..4958b1f6812c 100644 --- a/drivers/crypto/starfive/Makefile +++ b/drivers/crypto/starfive/Makefile @@ -1,4 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_CRYPTO_DEV_STARFIVE) += starfive-crypto.o -starfive-crypto-objs := starfive-cryp.o starfive-hash.o +starfive-crypto-objs := starfive-cryp.o starfive-hash.o starfive-aes.o diff --git a/drivers/crypto/starfive/starfive-aes.c b/drivers/crypto/starfive/starfive-aes.c new file mode 100644 index 000000000000..0f79f72cafcd --- /dev/null +++ b/drivers/crypto/starfive/starfive-aes.c @@ -0,0 +1,1723 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * StarFive AES acceleration driver + * + * Copyright (c) 2022 StarFive Technology + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include "starfive-str.h" + +/* Mode mask = bits [3..0] */ +#define FLG_MODE_MASK GENMASK(2, 0) + +/* Bit [4] encrypt / decrypt */ +#define FLG_ENCRYPT BIT(4) + +/* Misc */ +#define AES_BLOCK_32 (AES_BLOCK_SIZE / sizeof(u32)) +#define CCM_B0_ADATA 0x40 + +static inline int starfive_aes_wait_busy(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + u32 status; + + return readl_relaxed_poll_timeout(sdev->io_base + STARFIVE_AES_CSR, status, + !(status & STARFIVE_AES_BUSY), 10, 100000); +} + +static inline int starfive_aes_wait_keydone(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = 
ctx->sdev; + u32 status; + + return readl_relaxed_poll_timeout(sdev->io_base + STARFIVE_AES_CSR, status, + (status & STARFIVE_AES_KEY_DONE), 10, 100000); +} + +static inline int starfive_aes_wait_gcmdone(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + u32 status; + + return readl_relaxed_poll_timeout(sdev->io_base + STARFIVE_AES_CSR, status, + (status & STARFIVE_AES_GCM_DONE), 10, 100000); +} + +static int starfive_cryp_check_aligned(struct scatterlist *sg, size_t total, + size_t align) +{ + int len = 0; + + if (!total) + return 0; + + if (!IS_ALIGNED(total, align)) + return -EINVAL; + + while (sg) { + if (!IS_ALIGNED(sg->offset, sizeof(u32))) + return -EINVAL; + + if (!IS_ALIGNED(sg->length, align)) + return -EINVAL; + + len += sg->length; + sg = sg_next(sg); + } + + if (len != total) + return -EINVAL; + + return 0; +} + +static int starfive_cryp_check_io_aligned(struct starfive_sec_request_ctx *rctx) +{ + int ret; + + ret = starfive_cryp_check_aligned(rctx->in_sg, rctx->total_in, + rctx->hw_blocksize); + + if (ret) + return ret; + + return starfive_cryp_check_aligned(rctx->out_sg, rctx->total_out, + rctx->hw_blocksize); +} + +static void sg_copy_buf(void *buf, struct scatterlist *sg, + unsigned int start, unsigned int nbytes, int out) +{ + struct scatter_walk walk; + + if (!nbytes) + return; + + scatterwalk_start(&walk, sg); + scatterwalk_advance(&walk, start); + scatterwalk_copychunks(buf, &walk, nbytes, out); + scatterwalk_done(&walk, out, 0); +} + +static int starfive_cryp_copy_sgs(struct starfive_sec_request_ctx *rctx) +{ + void *buf_in, *buf_out; + int pages, total_in, total_out; + + if (!starfive_cryp_check_io_aligned(rctx)) { + rctx->sgs_copied = 0; + return 0; + } + + total_in = ALIGN(rctx->total_in, rctx->hw_blocksize); + pages = total_in ? get_order(total_in) : 1; + buf_in = (void *)__get_free_pages(GFP_ATOMIC, pages); + + total_out = ALIGN(rctx->total_out, rctx->hw_blocksize); + pages = total_out ? 
get_order(total_out) : 1;
+	buf_out = (void *)__get_free_pages(GFP_ATOMIC, pages);
+
+	if (!buf_in || !buf_out) {
+		dev_err(rctx->sdev->dev, "Can't allocate pages when unaligned\n");
+		if (buf_in)
+			free_pages((unsigned long)buf_in,
+				   total_in ? get_order(total_in) : 1);
+		if (buf_out)
+			free_pages((unsigned long)buf_out,
+				   total_out ? get_order(total_out) : 1);
+		rctx->sgs_copied = 0;
+		return -ENOMEM;
+	}
+
+	sg_copy_buf(buf_in, rctx->in_sg, 0, rctx->total_in, 0);
+
+	sg_init_one(&rctx->in_sgl, buf_in, total_in);
+	rctx->in_sg = &rctx->in_sgl;
+	rctx->in_sg_len = 1;
+
+	sg_init_one(&rctx->out_sgl, buf_out, total_out);
+	rctx->out_sg_save = rctx->out_sg;
+	rctx->out_sg = &rctx->out_sgl;
+	rctx->out_sg_len = 1;
+
+	rctx->sgs_copied = 1;
+
+	return 0;
+}
+
+static inline int is_ecb(struct starfive_sec_request_ctx *rctx)
+{
+	return (rctx->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_ECB;
+}
+
+static inline int is_cbc(struct starfive_sec_request_ctx *rctx)
+{
+	return (rctx->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_CBC;
+}
+
+static inline int is_ofb(struct starfive_sec_request_ctx *rctx)
+{
+	return (rctx->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_OFB;
+}
+
+static inline int is_cfb(struct starfive_sec_request_ctx *rctx)
+{
+	return (rctx->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_CFB;
+}
+
+static inline int is_ctr(struct starfive_sec_request_ctx *rctx)
+{
+	return (rctx->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_CTR;
+}
+
+static inline int is_gcm(struct starfive_sec_request_ctx *rctx)
+{
+	return (rctx->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_GCM;
+}
+
+static inline int is_ccm(struct starfive_sec_request_ctx *rctx)
+{
+	return (rctx->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_CCM;
+}
+
+static inline int get_aes_mode(struct starfive_sec_request_ctx *rctx)
+{
+	return rctx->flags & FLG_MODE_MASK;
+}
+
+static inline int is_encrypt(struct starfive_sec_request_ctx *rctx)
+{
+	return !!(rctx->flags & FLG_ENCRYPT);
+}
+
+static inline int is_decrypt(struct starfive_sec_request_ctx *rctx)
+{
+	return !is_encrypt(rctx);
+}
+
+static int starfive_cryp_read_auth_tag(struct starfive_sec_ctx *ctx)
+{
+	struct starfive_sec_dev *sdev =
ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + int loop, total_len, start_addr; + + total_len = AES_BLOCK_SIZE / sizeof(u32); + start_addr = STARFIVE_AES_NONCE0; + + if (starfive_aes_wait_busy(ctx)) + return -EBUSY; + + if (is_gcm(rctx)) + for (loop = 0; loop < total_len; loop++, start_addr += 4) + rctx->tag_out[loop] = starfive_sec_read(sdev, start_addr); + else + for (loop = 0; loop < total_len; loop++) + rctx->tag_out[loop] = starfive_sec_read(sdev, STARFIVE_AES_AESDIO0R); + + if (is_encrypt(rctx)) { + sg_copy_buffer(rctx->out_sg, sg_nents(rctx->out_sg), rctx->tag_out, + rctx->authsize, rctx->offset, 0); + } else { + scatterwalk_map_and_copy(rctx->tag_in, rctx->in_sg, + rctx->total_in_save - rctx->authsize, + rctx->authsize, 0); + + if (crypto_memneq(rctx->tag_in, rctx->tag_out, rctx->authsize)) + return -EBADMSG; + } + + return 0; +} + +static inline void starfive_aes_reset(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_request_ctx *rctx = ctx->rctx; + + rctx->csr.aes.v = 0; + rctx->csr.aes.aesrst = 1; + starfive_sec_write(ctx->sdev, STARFIVE_AES_CSR, rctx->csr.aes.v); +} + +static inline void starfive_aes_xcm_start(struct starfive_sec_ctx *ctx, u32 hw_mode) +{ + unsigned int value; + + switch (hw_mode) { + case STARFIVE_AES_MODE_GCM: + value = starfive_sec_read(ctx->sdev, STARFIVE_AES_CSR); + value |= STARFIVE_AES_GCM_START; + starfive_sec_write(ctx->sdev, STARFIVE_AES_CSR, value); + break; + case STARFIVE_AES_MODE_CCM: + value = starfive_sec_read(ctx->sdev, STARFIVE_AES_CSR); + value |= STARFIVE_AES_CCM_START; + starfive_sec_write(ctx->sdev, STARFIVE_AES_CSR, value); + break; + } +} + +static inline void starfive_aes_setup(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_request_ctx *rctx = ctx->rctx; + + rctx->csr.aes.v = 0; + switch (ctx->keylen) { + case AES_KEYSIZE_128: + rctx->csr.aes.keymode = STARFIVE_AES_KEYMODE_128; + break; + case AES_KEYSIZE_192: + rctx->csr.aes.keymode = STARFIVE_AES_KEYMODE_192; + break; + case 
AES_KEYSIZE_256: + rctx->csr.aes.keymode = STARFIVE_AES_KEYMODE_256; + break; + default: + return; + } + + rctx->csr.aes.mode = rctx->flags & FLG_MODE_MASK; + rctx->csr.aes.cmode = is_decrypt(rctx); + rctx->csr.aes.stream_mode = rctx->stmode; + + if (ctx->sdev->use_side_channel_mitigation) { + rctx->csr.aes.delay_aes = 1; + rctx->csr.aes.vaes_start = 1; + } + + if (starfive_aes_wait_busy(ctx)) { + dev_err(ctx->sdev->dev, "reset error\n"); + return; + } + + starfive_sec_write(ctx->sdev, STARFIVE_AES_CSR, rctx->csr.aes.v); +} + +static inline void starfive_aes_set_ivlen(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + + if (is_gcm(rctx)) + starfive_sec_write(sdev, STARFIVE_AES_IVLEN, GCM_AES_IV_SIZE); + else + starfive_sec_write(sdev, STARFIVE_AES_IVLEN, AES_BLOCK_SIZE); +} + +static inline void starfive_aes_set_alen(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + + starfive_sec_write(sdev, STARFIVE_AES_ALEN0, upper_32_bits(ctx->rctx->assoclen)); + starfive_sec_write(sdev, STARFIVE_AES_ALEN1, lower_32_bits(ctx->rctx->assoclen)); +} + +static unsigned int starfive_cryp_get_input_text_len(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_request_ctx *rctx = ctx->rctx; + + return is_encrypt(rctx) ? rctx->req.areq->cryptlen : + rctx->req.areq->cryptlen - rctx->authsize; +} + +static inline void starfive_aes_set_mlen(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + size_t data_len; + + data_len = starfive_cryp_get_input_text_len(ctx); + + starfive_sec_write(sdev, STARFIVE_AES_MLEN0, upper_32_bits(data_len)); + starfive_sec_write(sdev, STARFIVE_AES_MLEN1, lower_32_bits(data_len)); +} + +static inline int crypto_ccm_check_iv(const u8 *iv) +{ + /* 2 <= L <= 8, so 1 <= L' <= 7. 
*/ + if (iv[0] < 1 || iv[0] > 7) + return -EINVAL; + + return 0; +} + +static int starfive_cryp_hw_write_iv(struct starfive_sec_ctx *ctx, u32 *iv) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + + if (!iv) + return -EINVAL; + + starfive_sec_write(sdev, STARFIVE_AES_IV0, iv[0]); + starfive_sec_write(sdev, STARFIVE_AES_IV1, iv[1]); + starfive_sec_write(sdev, STARFIVE_AES_IV2, iv[2]); + + if (!is_gcm(rctx)) + starfive_sec_write(sdev, STARFIVE_AES_IV3, iv[3]); + else + if (starfive_aes_wait_gcmdone(ctx)) + return -ETIMEDOUT; + + return 0; +} + +static void starfive_cryp_hw_get_iv(struct starfive_sec_ctx *ctx, u32 *iv) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + + if (!iv) + return; + + iv[0] = starfive_sec_read(sdev, STARFIVE_AES_IV0); + iv[1] = starfive_sec_read(sdev, STARFIVE_AES_IV1); + iv[2] = starfive_sec_read(sdev, STARFIVE_AES_IV2); + iv[3] = starfive_sec_read(sdev, STARFIVE_AES_IV3); +} + +static void starfive_cryp_hw_write_ctr(struct starfive_sec_ctx *ctx, u32 *ctr) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + + starfive_sec_write(sdev, STARFIVE_AES_NONCE0, ctr[0]); + starfive_sec_write(sdev, STARFIVE_AES_NONCE1, ctr[1]); + starfive_sec_write(sdev, STARFIVE_AES_NONCE2, ctr[2]); + starfive_sec_write(sdev, STARFIVE_AES_NONCE3, ctr[3]); +} + +static int starfive_cryp_hw_write_key(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + u32 *key = (u32 *)ctx->key; + + switch (ctx->keylen) { + case AES_KEYSIZE_256: + case AES_KEYSIZE_192: + case AES_KEYSIZE_128: + break; + default: + return -EINVAL; + } + + if (ctx->keylen >= AES_KEYSIZE_128) { + starfive_sec_write(sdev, STARFIVE_AES_KEY0, key[0]); + starfive_sec_write(sdev, STARFIVE_AES_KEY1, key[1]); + starfive_sec_write(sdev, STARFIVE_AES_KEY2, key[2]); + starfive_sec_write(sdev, STARFIVE_AES_KEY3, key[3]); + } + + if (ctx->keylen >= AES_KEYSIZE_192) { + starfive_sec_write(sdev, STARFIVE_AES_KEY4, key[4]); + 
starfive_sec_write(sdev, STARFIVE_AES_KEY5, key[5]); + } + + if (ctx->keylen >= AES_KEYSIZE_256) { + starfive_sec_write(sdev, STARFIVE_AES_KEY6, key[6]); + starfive_sec_write(sdev, STARFIVE_AES_KEY7, key[7]); + } + + if (starfive_aes_wait_keydone(ctx)) + return -ETIMEDOUT; + + return 0; +} + +static int starfive_cryp_gcm_init(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_request_ctx *rctx = ctx->rctx; + + memcpy(rctx->ctr, rctx->req.areq->iv, 12); + + return starfive_cryp_hw_write_iv(ctx, (u32 *)rctx->ctr); +} + +static void starfive_cryp_ccm_init(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + u8 iv[AES_BLOCK_SIZE], *b0; + unsigned int textlen; + + memcpy(iv, rctx->req.areq->iv, AES_BLOCK_SIZE); + memset(iv + AES_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1); + + /* Build B0 */ + b0 = (u8 *)sdev->aes_data; + memcpy(b0, iv, AES_BLOCK_SIZE); + + b0[0] |= (8 * ((rctx->authsize - 2) / 2)); + + if (rctx->req.areq->assoclen) + b0[0] |= CCM_B0_ADATA; + + textlen = starfive_cryp_get_input_text_len(ctx); + + b0[AES_BLOCK_SIZE - 2] = textlen >> 8; + b0[AES_BLOCK_SIZE - 1] = textlen & 0xFF; + + memcpy((void *)rctx->ctr, sdev->aes_data, AES_BLOCK_SIZE); + starfive_cryp_hw_write_ctr(ctx, (u32 *)b0); +} + +static int starfive_cryp_hw_init(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_request_ctx *rctx = ctx->rctx; + int ret; + u32 hw_mode; + + starfive_aes_reset(ctx); + + hw_mode = get_aes_mode(ctx->rctx); + if (hw_mode == STARFIVE_AES_MODE_CFB || + hw_mode == STARFIVE_AES_MODE_OFB) + rctx->stmode = STARFIVE_AES_MODE_XFB_128; + else + rctx->stmode = STARFIVE_AES_MODE_XFB_1; + + starfive_aes_setup(ctx); + + ret = starfive_cryp_hw_write_key(ctx); + if (ret) + return ret; + + switch (hw_mode) { + case STARFIVE_AES_MODE_GCM: + memset(ctx->sdev->aes_data, 0, STARFIVE_MSG_BUFFER_SIZE); + starfive_aes_set_alen(ctx); + starfive_aes_set_mlen(ctx); + starfive_aes_set_ivlen(ctx); + 
starfive_aes_xcm_start(ctx, hw_mode); + + if (starfive_aes_wait_gcmdone(ctx)) + return -ETIMEDOUT; + + memset((void *)rctx->ctr, 0, sizeof(rctx->ctr)); + ret = starfive_cryp_gcm_init(ctx); + break; + case STARFIVE_AES_MODE_CCM: + memset(ctx->sdev->aes_data, 0, STARFIVE_MSG_BUFFER_SIZE); + memset((void *)rctx->ctr, 0, sizeof(rctx->ctr)); + + starfive_aes_set_alen(ctx); + starfive_aes_set_mlen(ctx); + starfive_cryp_ccm_init(ctx); + starfive_aes_xcm_start(ctx, hw_mode); + break; + case STARFIVE_AES_MODE_OFB: + case STARFIVE_AES_MODE_CFB: + case STARFIVE_AES_MODE_CBC: + case STARFIVE_AES_MODE_CTR: + ret = starfive_cryp_hw_write_iv(ctx, (void *)rctx->req.sreq->iv); + break; + default: + break; + } + + return ret; +} + +static int starfive_cryp_get_from_sg(struct starfive_sec_request_ctx *rctx, + size_t offset, size_t count, + size_t data_offset) +{ + size_t of, ct, index; + struct scatterlist *sg = rctx->in_sg; + + of = offset; + ct = count; + while (sg->length <= of) { + of -= sg->length; + + if (!sg_is_last(sg)) { + sg = sg_next(sg); + continue; + } else { + return -EBADE; + } + } + + index = data_offset; + while (ct > 0) { + if (sg->length - of >= ct) { + scatterwalk_map_and_copy(rctx->sdev->aes_data + index, sg, + of, ct, 0); + index = index + ct; + return index - data_offset; + } + scatterwalk_map_and_copy(rctx->sdev->aes_data + index, sg, + of, sg->length - of, 0); + index += sg->length - of; + ct = ct - (sg->length - of); + + of = 0; + + if (!sg_is_last(sg)) + sg = sg_next(sg); + else + return -EBADE; + } + + return index - data_offset; +} + +static void starfive_cryp_finish_req(struct starfive_sec_ctx *ctx, int err) +{ + struct starfive_sec_request_ctx *rctx = ctx->rctx; + + if (!err && (is_gcm(rctx) || is_ccm(rctx))) { + /* Phase 4 : output tag */ + err = starfive_cryp_read_auth_tag(ctx); + } + + if (!err && (is_cbc(rctx) || is_ctr(rctx))) + starfive_cryp_hw_get_iv(ctx, (void *)rctx->req.sreq->iv); + + if (rctx->sgs_copied) { + void *buf_in, *buf_out; + int 
pages, len; + + buf_in = sg_virt(&rctx->in_sgl); + buf_out = sg_virt(&rctx->out_sgl); + + sg_copy_buf(buf_out, rctx->out_sg_save, 0, + rctx->total_out_save, 1); + + len = ALIGN(rctx->total_in_save, rctx->hw_blocksize); + pages = len ? get_order(len) : 1; + free_pages((unsigned long)buf_in, pages); + + len = ALIGN(rctx->total_out_save, rctx->hw_blocksize); + pages = len ? get_order(len) : 1; + free_pages((unsigned long)buf_out, pages); + } + + if (is_gcm(rctx) || is_ccm(rctx)) + crypto_finalize_aead_request(ctx->sdev->engine, rctx->req.areq, err); + else + crypto_finalize_skcipher_request(ctx->sdev->engine, rctx->req.sreq, + err); + + memset(ctx->key, 0, ctx->keylen); +} + +static bool starfive_check_counter_overflow(struct starfive_sec_ctx *ctx, size_t count) +{ + struct starfive_sec_request_ctx *rctx = ctx->rctx; + bool ret = false; + u32 start, end, ctr, blocks; + + if (count) { + blocks = DIV_ROUND_UP(count, AES_BLOCK_SIZE); + rctx->ctr[3] = cpu_to_be32(be32_to_cpu(rctx->ctr[3]) + blocks); + + if (rctx->ctr[3] == 0) { + rctx->ctr[2] = cpu_to_be32(be32_to_cpu(rctx->ctr[2]) + 1); + if (rctx->ctr[2] == 0) { + rctx->ctr[1] = cpu_to_be32(be32_to_cpu(rctx->ctr[1]) + 1); + if (rctx->ctr[1] == 0) { + rctx->ctr[0] = cpu_to_be32(be32_to_cpu(rctx->ctr[0]) + 1); + if (rctx->ctr[0] == 0) + starfive_cryp_hw_write_ctr(ctx, (u32 *)rctx->ctr); + } + } + } + } + + /* ctr counter overflow. 
*/ + ctr = rctx->total_in - rctx->assoclen - rctx->authsize; + blocks = DIV_ROUND_UP(ctr, AES_BLOCK_SIZE); + start = be32_to_cpu(rctx->ctr[3]); + + end = start + blocks - 1; + if (end < start) { + rctx->ctr_over_count = AES_BLOCK_SIZE * -start; + ret = true; + } + + return ret; +} + +static void starfive_aes_dma_callback(void *param) +{ + struct starfive_sec_dev *sdev = param; + + complete(&sdev->sec_comp_p); +} + +static int starfive_cryp_write_out_dma(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_request_ctx *rctx = ctx->rctx; + struct starfive_sec_dev *sdev = ctx->sdev; + struct dma_async_tx_descriptor *in_desc, *out_desc; + union starfive_alg_cr alg_cr; + dma_cookie_t cookie; + unsigned int *out; + int total_len; + int err; + int loop; + + total_len = rctx->bufcnt; + + alg_cr.v = 0; + alg_cr.start = 1; + alg_cr.aes_dma_en = 1; + starfive_sec_write(sdev, STARFIVE_ALG_CR_OFFSET, alg_cr.v); + + total_len = (total_len & 0xf) ? (((total_len >> 4) + 1) << 4) : total_len; + + starfive_sec_write(sdev, STARFIVE_DMA_IN_LEN_OFFSET, total_len); + starfive_sec_write(sdev, STARFIVE_DMA_OUT_LEN_OFFSET, total_len); + + sg_init_table(&ctx->sg[0], 1); + sg_set_buf(&ctx->sg[0], sdev->aes_data, total_len); + sg_dma_address(&ctx->sg[0]) = phys_to_dma(sdev->dev, (unsigned long long)(sdev->aes_data)); + sg_dma_len(&ctx->sg[0]) = total_len; + + sg_init_table(&ctx->sg[1], 1); + sg_set_buf(&ctx->sg[1], sdev->aes_data + (STARFIVE_MSG_BUFFER_SIZE >> 1), total_len); + sg_dma_address(&ctx->sg[1]) = phys_to_dma(sdev->dev, + (unsigned long long)(sdev->aes_data + + (STARFIVE_MSG_BUFFER_SIZE >> 1))); + sg_dma_len(&ctx->sg[1]) = total_len; + + err = dma_map_sg(sdev->dev, &ctx->sg[0], 1, DMA_TO_DEVICE); + if (!err) { + dev_err(sdev->dev, "Error: dma_map_sg() DMA_TO_DEVICE\n"); + return -EINVAL; + } + + err = dma_map_sg(sdev->dev, &ctx->sg[1], 1, DMA_FROM_DEVICE); + if (!err) { + dev_err(sdev->dev, "Error: dma_map_sg() DMA_FROM_DEVICE\n"); + return -EINVAL; + } + + sdev->cfg_in.direction 
= DMA_MEM_TO_DEV; + sdev->cfg_in.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; + sdev->cfg_in.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; + sdev->cfg_in.src_maxburst = sdev->dma_maxburst; + sdev->cfg_in.dst_maxburst = sdev->dma_maxburst; + sdev->cfg_in.dst_addr = sdev->io_phys_base + STARFIVE_ALG_FIFO_OFFSET; + + dmaengine_slave_config(sdev->sec_xm_m, &sdev->cfg_in); + + sdev->cfg_out.direction = DMA_DEV_TO_MEM; + sdev->cfg_out.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; + sdev->cfg_out.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; + sdev->cfg_out.src_maxburst = 4; + sdev->cfg_out.dst_maxburst = 4; + sdev->cfg_out.src_addr = sdev->io_phys_base + STARFIVE_ALG_FIFO_OFFSET; + + dmaengine_slave_config(sdev->sec_xm_p, &sdev->cfg_out); + + in_desc = dmaengine_prep_slave_sg(sdev->sec_xm_m, &ctx->sg[0], + 1, DMA_MEM_TO_DEV, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!in_desc) + return -EINVAL; + + cookie = dmaengine_submit(in_desc); + dma_async_issue_pending(sdev->sec_xm_m); + + out_desc = dmaengine_prep_slave_sg(sdev->sec_xm_p, &ctx->sg[1], + 1, DMA_DEV_TO_MEM, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!out_desc) + return -EINVAL; + + reinit_completion(&sdev->sec_comp_p); + + out_desc->callback = starfive_aes_dma_callback; + out_desc->callback_param = sdev; + + dmaengine_submit(out_desc); + dma_async_issue_pending(sdev->sec_xm_p); + + if (!wait_for_completion_timeout(&sdev->sec_comp_p, msecs_to_jiffies(10000))) { + alg_cr.v = 0; + alg_cr.clear = 1; + + starfive_sec_write(sdev, STARFIVE_ALG_CR_OFFSET, alg_cr.v); + + dev_err(sdev->dev, "wait_for_completion_timeout error, cookie = %x\n", + dma_async_is_tx_complete(sdev->sec_xm_p, cookie, + NULL, NULL)); + } + + out = (unsigned int *)(sdev->aes_data + (STARFIVE_MSG_BUFFER_SIZE >> 1)); + + for (loop = 0; loop < total_len / 4; loop++) + dev_dbg(sdev->dev, "this is debug [%d] = %x\n", loop, out[loop]); + + sg_copy_buffer(rctx->out_sg, sg_nents(rctx->out_sg), out, + total_len, rctx->offset, 0); + + dma_unmap_sg(sdev->dev, 
&ctx->sg[0], 1, DMA_TO_DEVICE); + dma_unmap_sg(sdev->dev, &ctx->sg[1], 1, DMA_FROM_DEVICE); + + alg_cr.v = 0; + alg_cr.clear = 1; + + starfive_sec_write(sdev, STARFIVE_ALG_CR_OFFSET, alg_cr.v); + + return 0; +} + +static int starfive_cryp_write_out_cpu(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + unsigned int *buffer, *out; + int total_len, loop, count; + + total_len = rctx->bufcnt; + buffer = (unsigned int *)sdev->aes_data; + out = (unsigned int *)(sdev->aes_data + (STARFIVE_MSG_BUFFER_SIZE >> 1)); + + while (total_len > 0) { + for (loop = 0; loop < 4; loop++, buffer++) + starfive_sec_write(sdev, STARFIVE_AES_AESDIO0R, *buffer); + + if (starfive_aes_wait_busy(ctx)) { + dev_err(sdev->dev, "Error: timeout processing input text\n"); + return -ETIMEDOUT; + } + + for (loop = 0; loop < 4; loop++, out++) + *out = starfive_sec_read(sdev, STARFIVE_AES_AESDIO0R); + + total_len -= 16; + } + + if (rctx->bufcnt >= rctx->total_out) + count = rctx->total_out; + else + count = rctx->bufcnt; + + out = (unsigned int *)(sdev->aes_data + (STARFIVE_MSG_BUFFER_SIZE >> 1)); + + sg_copy_buffer(rctx->out_sg, sg_nents(rctx->out_sg), out, + count, rctx->offset, 0); + + return 0; +} + +static int starfive_cryp_write_data(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + size_t total, count, data_buf_len, data_offset; + int data_len, ret = 0; + bool fragmented = false; + + /* ctr counter overflow. */ + fragmented = starfive_check_counter_overflow(ctx, 0); + + total = 0; + rctx->offset = 0; + rctx->data_offset = 0; + + data_offset = rctx->data_offset; + while (total < rctx->total_in) { + data_buf_len = sdev->data_buf_len - + (sdev->data_buf_len % ctx->keylen) - data_offset; + count = min(rctx->total_in - rctx->offset, data_buf_len); + + /* ctr counter overflow. 
*/ + if (fragmented && rctx->ctr_over_count != 0) { + if (count >= rctx->ctr_over_count) + count = rctx->ctr_over_count; + } + + data_len = starfive_cryp_get_from_sg(rctx, rctx->offset, count, data_offset); + + if (data_len < 0) + return data_len; + if (data_len != count) + return -EINVAL; + + rctx->bufcnt = data_len + data_offset; + total += data_len; + + if (!is_encrypt(rctx) && (total + rctx->assoclen >= rctx->total_in)) + rctx->bufcnt = rctx->bufcnt - rctx->authsize; + + if (rctx->bufcnt) { + if (sdev->use_dma) + ret = starfive_cryp_write_out_dma(ctx); + else + ret = starfive_cryp_write_out_cpu(ctx); + + if (ret) + return ret; + } + + data_offset = 0; + rctx->offset += data_len; + fragmented = starfive_check_counter_overflow(ctx, data_len); + } + + return ret; +} + +static int starfive_cryp_gcm_write_aad(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + unsigned int *buffer; + int total_len = 0, loop; + + if (rctx->assoclen) { + total_len = rctx->assoclen; + total_len = (total_len & 0xf) ? 
(((total_len >> 4) + 1) << 2) : (total_len >> 2); + } + + buffer = (unsigned int *)sdev->aes_data; + + for (loop = 0; loop < total_len; loop += 4) { + starfive_sec_write(sdev, STARFIVE_AES_NONCE0, *buffer); + buffer++; + starfive_sec_write(sdev, STARFIVE_AES_NONCE1, *buffer); + buffer++; + starfive_sec_write(sdev, STARFIVE_AES_NONCE2, *buffer); + buffer++; + starfive_sec_write(sdev, STARFIVE_AES_NONCE3, *buffer); + buffer++; + udelay(2); + } + + if (starfive_aes_wait_gcmdone(ctx)) + return -ETIMEDOUT; + + return 0; +} + +static int starfive_cryp_ccm_write_aad(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + unsigned int *buffer, *out; + unsigned char *ci; + int total_len, mlen, loop; + + total_len = rctx->bufcnt; + buffer = (unsigned int *)sdev->aes_data; + out = (unsigned int *)(sdev->aes_data + (STARFIVE_MSG_BUFFER_SIZE >> 1)); + + ci = (unsigned char *)buffer; + starfive_sec_writeb(sdev, STARFIVE_AES_AESDIO0R, *ci); + ci++; + starfive_sec_writeb(sdev, STARFIVE_AES_AESDIO0R, *ci); + ci++; + total_len -= 2; + buffer = (unsigned int *)ci; + + for (loop = 0; loop < 3; loop++, buffer++) + starfive_sec_write(sdev, STARFIVE_AES_AESDIO0R, *buffer); + + if (starfive_aes_wait_busy(ctx)) { + dev_err(sdev->dev, "Error: timeout processing first AAD block\n"); + return -ETIMEDOUT; + } + + total_len -= 12; + + while (total_len >= 16) { + for (loop = 0; loop < 4; loop++, buffer++) + starfive_sec_write(sdev, STARFIVE_AES_AESDIO0R, *buffer); + + if (starfive_aes_wait_busy(ctx)) { + dev_err(sdev->dev, "Error: timeout processing consecutive AAD block\n"); + return -ETIMEDOUT; + } + + total_len -= 16; + } + + if (total_len > 0) { + mlen = total_len; + for (; total_len >= 4; total_len -= 4, buffer++) + starfive_sec_write(sdev, STARFIVE_AES_AESDIO0R, *buffer); + + ci = (unsigned char *)buffer; + for (; total_len > 0; total_len--, ci++) + starfive_sec_writeb(sdev, STARFIVE_AES_AESDIO0R, *ci); + + if 
(starfive_aes_wait_busy(ctx)) { + dev_err(sdev->dev, "Error: timeout processing final AAD block\n"); + return -ETIMEDOUT; + } + } + + return 0; +} + +static int starfive_cryp_xcm_write_data(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + size_t total, count, data_buf_len, offset, auths; + int data_len, ret = 0; + bool fragmented = false; + + /* ctr counter overflow. */ + fragmented = starfive_check_counter_overflow(ctx, 0); + + total = 0; + rctx->offset = 0; + rctx->data_offset = 0; + offset = 0; + + while (total < rctx->assoclen) { + data_buf_len = sdev->data_buf_len - (sdev->data_buf_len % ctx->keylen); + count = min(rctx->assoclen - offset, data_buf_len); + count = min(count, rctx->assoclen - total); + data_len = starfive_cryp_get_from_sg(rctx, offset, count, 0); + if (data_len < 0) + return data_len; + if (data_len != count) + return -EINVAL; + + offset += data_len; + rctx->offset += data_len; + if ((data_len + 2) & 0xF) { + memset(sdev->aes_data + data_len, 0, 16 - ((data_len + 2) & 0xf)); + data_len += 16 - ((data_len + 2) & 0xf); + } + + rctx->bufcnt = data_len; + total += data_len; + + if (is_ccm(rctx)) + ret = starfive_cryp_ccm_write_aad(ctx); + else + ret = starfive_cryp_gcm_write_aad(ctx); + + if (ret) + return ret; + } + + total = 0; + auths = 0; + + while (total + auths < rctx->total_in - rctx->assoclen) { + data_buf_len = sdev->data_buf_len - (sdev->data_buf_len % ctx->keylen); + count = min(rctx->total_in - rctx->offset, data_buf_len); + + if (is_encrypt(rctx)) { + count = min(count, rctx->total_in - rctx->assoclen - total); + } else { + count = min(count, + rctx->total_in - rctx->assoclen - total - rctx->authsize); + auths = rctx->authsize; + } + + /* ctr counter overflow. 
*/ + if (fragmented && rctx->ctr_over_count != 0) { + if (count >= rctx->ctr_over_count) + count = rctx->ctr_over_count; + } + + data_len = starfive_cryp_get_from_sg(rctx, rctx->offset, count, 0); + + if (data_len < 0) + return data_len; + if (data_len != count) + return -EINVAL; + + if (data_len % STARFIVE_AES_IV_LEN) { + memset(sdev->aes_data + data_len, 0, + STARFIVE_AES_IV_LEN - (data_len % STARFIVE_AES_IV_LEN)); + data_len = data_len + + (STARFIVE_AES_IV_LEN - (data_len % STARFIVE_AES_IV_LEN)); + } + + rctx->bufcnt = data_len; + total += data_len; + + if (rctx->bufcnt) { + if (sdev->use_dma) + ret = starfive_cryp_write_out_dma(ctx); + else + ret = starfive_cryp_write_out_cpu(ctx); + } + + rctx->offset += count; + offset += count; + + fragmented = starfive_check_counter_overflow(ctx, data_len); + } + + return ret; +} + +static int starfive_gcm_zero_message_data(struct starfive_sec_ctx *ctx) +{ + int ret; + + ctx->rctx->bufcnt = 0; + ctx->rctx->offset = 0; + if (ctx->sdev->use_dma) + ret = starfive_cryp_write_out_dma(ctx); + else + ret = starfive_cryp_write_out_cpu(ctx); + + return ret; +} + +static int starfive_cryp_cpu_start(struct starfive_sec_ctx *ctx, + struct starfive_sec_request_ctx *rctx) +{ + int ret; + + ret = starfive_cryp_write_data(ctx); + if (ret) + return ret; + + starfive_cryp_finish_req(ctx, ret); + + return 0; +} + +static int starfive_cryp_xcm_start(struct starfive_sec_ctx *ctx, + struct starfive_sec_request_ctx *rctx) +{ + int ret; + + mutex_lock(&ctx->sdev->lock); + + ret = starfive_cryp_xcm_write_data(ctx); + if (ret) + return ret; + + starfive_cryp_finish_req(ctx, ret); + + mutex_unlock(&ctx->sdev->lock); + + return 0; +} + +static int starfive_cryp_cipher_one_req(struct crypto_engine *engine, void *areq) +{ + struct skcipher_request *req = + container_of(areq, struct skcipher_request, base); + struct starfive_sec_request_ctx *rctx = skcipher_request_ctx(req); + struct starfive_sec_ctx *ctx = + 
crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); + struct starfive_sec_dev *sdev = ctx->sdev; + int ret; + + if (!sdev) + return -ENODEV; + + mutex_lock(&sdev->lock); + ret = starfive_cryp_cpu_start(ctx, rctx); + mutex_unlock(&sdev->lock); + + return ret; +} + +static int starfive_cryp_prepare_req(struct skcipher_request *req, + struct aead_request *areq) +{ + struct starfive_sec_ctx *ctx; + struct starfive_sec_dev *sdev; + struct starfive_sec_request_ctx *rctx; + int ret; + + if (!req && !areq) + return -EINVAL; + + ctx = req ? crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)) : + crypto_aead_ctx(crypto_aead_reqtfm(areq)); + + sdev = ctx->sdev; + + if (!sdev) + return -ENODEV; + + rctx = req ? skcipher_request_ctx(req) : aead_request_ctx(areq); + + rctx->sdev = sdev; + ctx->rctx = rctx; + rctx->hw_blocksize = AES_BLOCK_SIZE; + + if (req) { + rctx->req.sreq = req; + rctx->req_type = STARFIVE_ABLK_REQ; + rctx->total_in = req->cryptlen; + rctx->total_out = rctx->total_in; + rctx->authsize = 0; + rctx->assoclen = 0; + } else { + /* + * Length of input and output data: + * Encryption case: + * INPUT = AssocData || PlainText + * <- assoclen -> <- cryptlen -> + * <------- total_in -----------> + * + * OUTPUT = AssocData || CipherText || AuthTag + * <- assoclen -> <- cryptlen -> <- authsize -> + * <---------------- total_out -----------------> + * + * Decryption case: + * INPUT = AssocData || CipherText || AuthTag + * <- assoclen -> <--------- cryptlen ---------> + * <- authsize -> + * <---------------- total_in ------------------> + * + * OUTPUT = AssocData || PlainText + * <- assoclen -> <- crypten - authsize -> + * <---------- total_out -----------------> + */ + rctx->req.areq = areq; + rctx->req_type = STARFIVE_AEAD_REQ; + rctx->authsize = crypto_aead_authsize(crypto_aead_reqtfm(areq)); + rctx->total_in = areq->assoclen + areq->cryptlen; + rctx->assoclen = areq->assoclen; + if (is_encrypt(rctx)) + /* Append auth tag to output */ + rctx->total_out = rctx->total_in + 
rctx->authsize; + else + /* No auth tag in output */ + rctx->total_out = rctx->total_in - rctx->authsize; + } + + rctx->total_in_save = rctx->total_in; + rctx->total_out_save = rctx->total_out; + + rctx->in_sg = req ? req->src : areq->src; + rctx->out_sg = req ? req->dst : areq->dst; + rctx->out_sg_save = rctx->out_sg; + + rctx->in_sg_len = sg_nents_for_len(rctx->in_sg, rctx->total_in); + if (rctx->in_sg_len < 0) { + dev_err(sdev->dev, "Cannot get in_sg_len\n"); + ret = rctx->in_sg_len; + return ret; + } + + rctx->out_sg_len = sg_nents_for_len(rctx->out_sg, rctx->total_out); + if (rctx->out_sg_len < 0) { + dev_err(sdev->dev, "Cannot get out_sg_len\n"); + ret = rctx->out_sg_len; + return ret; + } + + ret = starfive_cryp_copy_sgs(rctx); + if (ret) + return ret; + + return starfive_cryp_hw_init(ctx); +} + +static int starfive_cryp_prepare_cipher_req(struct crypto_engine *engine, + void *areq) +{ + struct skcipher_request *req = + container_of(areq, struct skcipher_request, base); + + return starfive_cryp_prepare_req(req, NULL); +} + +static int starfive_cryp_cra_init(struct crypto_skcipher *tfm) +{ + struct starfive_sec_ctx *ctx = crypto_skcipher_ctx(tfm); + + ctx->sdev = starfive_sec_find_dev(ctx); + if (!ctx->sdev) + return -ENODEV; + + crypto_skcipher_set_reqsize(tfm, sizeof(struct starfive_sec_request_ctx)); + + ctx->enginectx.op.do_one_request = starfive_cryp_cipher_one_req; + ctx->enginectx.op.prepare_request = starfive_cryp_prepare_cipher_req; + ctx->enginectx.op.unprepare_request = NULL; + + return 0; +} + +static void starfive_cryp_cra_exit(struct crypto_skcipher *tfm) +{ + struct starfive_sec_ctx *ctx = crypto_skcipher_ctx(tfm); + + ctx->enginectx.op.do_one_request = NULL; + ctx->enginectx.op.prepare_request = NULL; + ctx->enginectx.op.unprepare_request = NULL; +} + +static int starfive_cryp_aead_one_req(struct crypto_engine *engine, void *areq) +{ + struct aead_request *req = + container_of(areq, struct aead_request, base); + struct starfive_sec_request_ctx 
*rctx = aead_request_ctx(req); + struct starfive_sec_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req)); + struct starfive_sec_dev *sdev = ctx->sdev; + + if (!sdev) + return -ENODEV; + + if (unlikely(!rctx->req.areq->assoclen && + !starfive_cryp_get_input_text_len(ctx))) { + /* No input data to process: get tag and finish */ + mutex_lock(&ctx->sdev->lock); + + starfive_gcm_zero_message_data(ctx); + starfive_cryp_finish_req(ctx, 0); + + mutex_unlock(&ctx->sdev->lock); + + return 0; + } + + return starfive_cryp_xcm_start(ctx, rctx); +} + +static int starfive_cryp_prepare_aead_req(struct crypto_engine *engine, void *areq) +{ + struct aead_request *req = container_of(areq, struct aead_request, + base); + + return starfive_cryp_prepare_req(NULL, req); +} + +static int starfive_cryp_aes_aead_init(struct crypto_aead *tfm) +{ + struct starfive_sec_ctx *ctx = crypto_aead_ctx(tfm); + struct crypto_tfm *aead = crypto_aead_tfm(tfm); + struct crypto_alg *alg = aead->__crt_alg; + struct starfive_sec_dev *sdev = ctx->sdev; + + ctx->sdev = starfive_sec_find_dev(ctx); + + if (!ctx->sdev) + return -ENODEV; + + crypto_aead_set_reqsize(tfm, sizeof(struct starfive_sec_request_ctx)); + + if (alg->cra_flags & CRYPTO_ALG_NEED_FALLBACK) { + ctx->fallback.aead = + crypto_alloc_aead(alg->cra_name, 0, + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(ctx->fallback.aead)) { + dev_err(sdev->dev, "%s() failed to allocate fallback for %s\n", + __func__, alg->cra_name); + return PTR_ERR(ctx->fallback.aead); + } + } + + ctx->enginectx.op.do_one_request = starfive_cryp_aead_one_req; + ctx->enginectx.op.prepare_request = starfive_cryp_prepare_aead_req; + ctx->enginectx.op.unprepare_request = NULL; + + return 0; +} + +static void starfive_cryp_aes_aead_exit(struct crypto_aead *tfm) +{ + struct starfive_sec_ctx *ctx = crypto_aead_ctx(tfm); + + if (ctx->fallback.aead) { + crypto_free_aead(ctx->fallback.aead); + ctx->fallback.aead = NULL; + } + + ctx->enginectx.op.do_one_request = NULL; + 
ctx->enginectx.op.prepare_request = NULL; + ctx->enginectx.op.unprepare_request = NULL; +} + +static int starfive_cryp_crypt(struct skcipher_request *req, unsigned long flags) +{ + struct starfive_sec_ctx *ctx = + crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct starfive_sec_request_ctx *rctx = skcipher_request_ctx(req); + struct starfive_sec_dev *sdev = ctx->sdev; + unsigned int blocksize_align = crypto_skcipher_blocksize(tfm) - 1; + + if (!sdev) + return -ENODEV; + + rctx->flags = flags; + rctx->req_type = STARFIVE_ABLK_REQ; + + if (is_ecb(rctx) || is_cbc(rctx)) + if (req->cryptlen & (blocksize_align)) + return -EINVAL; + + return crypto_transfer_skcipher_request_to_engine(sdev->engine, req); +} + +static int aead_do_fallback(struct aead_request *req) +{ + struct aead_request *subreq = aead_request_ctx(req); + struct crypto_aead *aead = crypto_aead_reqtfm(req); + struct starfive_sec_ctx *ctx = crypto_aead_ctx(aead); + + aead_request_set_tfm(subreq, ctx->fallback.aead); + aead_request_set_callback(subreq, req->base.flags, + req->base.complete, req->base.data); + aead_request_set_crypt(subreq, req->src, + req->dst, req->cryptlen, req->iv); + aead_request_set_ad(subreq, req->assoclen); + + return crypto_aead_decrypt(subreq); +} + +static int starfive_cryp_aead_crypt(struct aead_request *req, unsigned long flags) +{ + struct starfive_sec_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req)); + struct starfive_sec_request_ctx *rctx = aead_request_ctx(req); + struct starfive_sec_dev *sdev = ctx->sdev; + + if (!sdev) + return -ENODEV; + + rctx->flags = flags; + rctx->req_type = STARFIVE_AEAD_REQ; + + /* HW engine could not perform tag verification on + * non-blocksize aligned ciphertext, use fallback algo instead + */ + if (ctx->fallback.aead && is_decrypt(rctx)) + return aead_do_fallback(req); + + return crypto_transfer_aead_request_to_engine(sdev->engine, req); +} + +static int 
starfive_cryp_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keylen) +{ + struct starfive_sec_ctx *ctx = crypto_skcipher_ctx(tfm); + + memcpy(ctx->key, key, keylen); + ctx->keylen = keylen; + + return 0; +} + +static int starfive_cryp_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keylen) +{ + if (!key || !keylen) + return -EINVAL; + + if (keylen != AES_KEYSIZE_128 && + keylen != AES_KEYSIZE_192 && + keylen != AES_KEYSIZE_256) + return -EINVAL; + + return starfive_cryp_setkey(tfm, key, keylen); +} + +static int starfive_cryp_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key, + unsigned int keylen) +{ + struct starfive_sec_ctx *ctx = crypto_aead_ctx(tfm); + int ret = 0; + + if (!key || !keylen) + return -EINVAL; + + if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && + keylen != AES_KEYSIZE_256) { + return -EINVAL; + } + + memcpy(ctx->key, key, keylen); + ctx->keylen = keylen; + + if (ctx->fallback.aead) + ret = crypto_aead_setkey(ctx->fallback.aead, key, keylen); + + return ret; +} + +static int starfive_cryp_aes_gcm_setauthsize(struct crypto_aead *tfm, + unsigned int authsize) +{ + struct starfive_sec_ctx *ctx = crypto_aead_ctx(tfm); + int ret; + + ret = crypto_gcm_check_authsize(authsize); + if (ret) + return ret; + + tfm->authsize = authsize; + + if (ctx->fallback.aead) + ctx->fallback.aead->authsize = authsize; + + return 0; +} + +static int starfive_cryp_aes_ccm_setauthsize(struct crypto_aead *tfm, + unsigned int authsize) +{ + struct starfive_sec_ctx *ctx = crypto_aead_ctx(tfm); + + switch (authsize) { + case 4: + case 6: + case 8: + case 10: + case 12: + case 14: + case 16: + break; + default: + return -EINVAL; + } + + tfm->authsize = authsize; + + if (ctx->fallback.aead) + ctx->fallback.aead->authsize = authsize; + + return 0; +} + +static int starfive_cryp_aes_ecb_encrypt(struct skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_ECB | FLG_ENCRYPT); +} + +static int starfive_cryp_aes_ecb_decrypt(struct 
skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_ECB); +} + +static int starfive_cryp_aes_cbc_encrypt(struct skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_CBC | FLG_ENCRYPT); +} + +static int starfive_cryp_aes_cbc_decrypt(struct skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_CBC); +} + +static int starfive_cryp_aes_cfb_encrypt(struct skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_CFB | FLG_ENCRYPT); +} + +static int starfive_cryp_aes_cfb_decrypt(struct skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_CFB); +} + +static int starfive_cryp_aes_ofb_encrypt(struct skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_OFB | FLG_ENCRYPT); +} + +static int starfive_cryp_aes_ofb_decrypt(struct skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_OFB); +} + +static int starfive_cryp_aes_ctr_encrypt(struct skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_CTR | FLG_ENCRYPT); +} + +static int starfive_cryp_aes_ctr_decrypt(struct skcipher_request *req) +{ + return starfive_cryp_crypt(req, STARFIVE_AES_MODE_CTR); +} + +static int starfive_cryp_aes_gcm_encrypt(struct aead_request *req) +{ + return starfive_cryp_aead_crypt(req, STARFIVE_AES_MODE_GCM | FLG_ENCRYPT); +} + +static int starfive_cryp_aes_gcm_decrypt(struct aead_request *req) +{ + return starfive_cryp_aead_crypt(req, STARFIVE_AES_MODE_GCM); +} + +static int starfive_cryp_aes_ccm_encrypt(struct aead_request *req) +{ + int ret; + + ret = crypto_ccm_check_iv(req->iv); + if (ret) + return ret; + + return starfive_cryp_aead_crypt(req, STARFIVE_AES_MODE_CCM | FLG_ENCRYPT); +} + +static int starfive_cryp_aes_ccm_decrypt(struct aead_request *req) +{ + int ret; + + ret = crypto_ccm_check_iv(req->iv); + if (ret) + return ret; + + return starfive_cryp_aead_crypt(req, STARFIVE_AES_MODE_CCM); +} + +static 
struct skcipher_alg crypto_algs[] = { +{ + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "starfive-ecb-aes", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct starfive_sec_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + .init = starfive_cryp_cra_init, + .exit = starfive_cryp_cra_exit, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .setkey = starfive_cryp_aes_setkey, + .encrypt = starfive_cryp_aes_ecb_encrypt, + .decrypt = starfive_cryp_aes_ecb_decrypt, +}, { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "starfive-cbc-aes", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct starfive_sec_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + .init = starfive_cryp_cra_init, + .exit = starfive_cryp_cra_exit, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = starfive_cryp_aes_setkey, + .encrypt = starfive_cryp_aes_cbc_encrypt, + .decrypt = starfive_cryp_aes_cbc_decrypt, +}, { + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "starfive-ctr-aes", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct starfive_sec_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + .init = starfive_cryp_cra_init, + .exit = starfive_cryp_cra_exit, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = starfive_cryp_aes_setkey, + .encrypt = starfive_cryp_aes_ctr_encrypt, + .decrypt = starfive_cryp_aes_ctr_decrypt, +}, { + .base.cra_name = "cfb(aes)", + .base.cra_driver_name = "starfive-cfb-aes", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = 1, + .base.cra_ctxsize = 
sizeof(struct starfive_sec_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + .init = starfive_cryp_cra_init, + .exit = starfive_cryp_cra_exit, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = starfive_cryp_aes_setkey, + .encrypt = starfive_cryp_aes_cfb_encrypt, + .decrypt = starfive_cryp_aes_cfb_decrypt, +}, { + .base.cra_name = "ofb(aes)", + .base.cra_driver_name = "starfive-ofb-aes", + .base.cra_priority = 200, + .base.cra_flags = CRYPTO_ALG_ASYNC, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct starfive_sec_ctx), + .base.cra_alignmask = 0xf, + .base.cra_module = THIS_MODULE, + .init = starfive_cryp_cra_init, + .exit = starfive_cryp_cra_exit, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = starfive_cryp_aes_setkey, + .encrypt = starfive_cryp_aes_ofb_encrypt, + .decrypt = starfive_cryp_aes_ofb_decrypt, +}, +}; + +static struct aead_alg aead_algs[] = { +{ + .setkey = starfive_cryp_aes_aead_setkey, + .setauthsize = starfive_cryp_aes_gcm_setauthsize, + .encrypt = starfive_cryp_aes_gcm_encrypt, + .decrypt = starfive_cryp_aes_gcm_decrypt, + .init = starfive_cryp_aes_aead_init, + .exit = starfive_cryp_aes_aead_exit, + .ivsize = GCM_AES_IV_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + + .base = { + .cra_name = "gcm(aes)", + .cra_driver_name = "starfive-gcm-aes", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 0xf, + .cra_module = THIS_MODULE, + }, +}, { + .setkey = starfive_cryp_aes_aead_setkey, + .setauthsize = starfive_cryp_aes_ccm_setauthsize, + .encrypt = starfive_cryp_aes_ccm_encrypt, + .decrypt = starfive_cryp_aes_ccm_decrypt, + .init = starfive_cryp_aes_aead_init, + .exit = starfive_cryp_aes_aead_exit, + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + + .base = { + .cra_name = "ccm(aes)", + 
.cra_driver_name = "starfive-ccm-aes", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + .cra_alignmask = 0xf, + .cra_module = THIS_MODULE, + }, +}, +}; + +int starfive_aes_register_algs(void) +{ + int ret; + + ret = crypto_register_skciphers(crypto_algs, ARRAY_SIZE(crypto_algs)); + if (ret) + return ret; + + ret = crypto_register_aeads(aead_algs, ARRAY_SIZE(aead_algs)); + if (ret) { + crypto_unregister_skciphers(crypto_algs, ARRAY_SIZE(crypto_algs)); + return ret; + } + + return ret; +} + +void starfive_aes_unregister_algs(void) +{ + crypto_unregister_aeads(aead_algs, ARRAY_SIZE(aead_algs)); + crypto_unregister_skciphers(crypto_algs, ARRAY_SIZE(crypto_algs)); +} diff --git a/drivers/crypto/starfive/starfive-cryp.c b/drivers/crypto/starfive/starfive-cryp.c index 9f77cae758ac..452bd1ab4f04 100644 --- a/drivers/crypto/starfive/starfive-cryp.c +++ b/drivers/crypto/starfive/starfive-cryp.c @@ -169,6 +169,12 @@ static int starfive_cryp_probe(struct platform_device *pdev) goto err_hash_data; } + sdev->aes_data = (void *)__get_free_pages(GFP_KERNEL | GFP_DMA32, pages); + if (!sdev->aes_data) { + dev_err(sdev->dev, "Can't allocate aes buffer pages when unaligned\n"); + goto err_aes_data; + } + sdev->pages_count = pages >> 1; sdev->data_buf_len = STARFIVE_MSG_BUFFER_SIZE >> 1; @@ -187,15 +193,23 @@ static int starfive_cryp_probe(struct platform_device *pdev) if (ret) goto err_algs_hash; + ret = starfive_aes_register_algs(); + if (ret) + goto err_algs_aes; + dev_info(dev, "Crypto engine started\n"); return 0; +err_algs_aes: + starfive_hash_unregister_algs(); err_algs_hash: crypto_engine_stop(sdev->engine); err_engine_start: crypto_engine_exit(sdev->engine); err_engine: + free_pages((unsigned long)sdev->aes_data, pages); +err_aes_data: free_pages((unsigned long)sdev->hash_data, pages); err_hash_data: starfive_dma_cleanup(sdev); @@ -215,6 +229,7 @@ static int 
starfive_cryp_remove(struct platform_device *pdev) return -ENODEV; starfive_hash_unregister_algs(); + starfive_aes_unregister_algs(); crypto_engine_stop(sdev->engine); crypto_engine_exit(sdev->engine); @@ -222,7 +237,9 @@ static int starfive_cryp_remove(struct platform_device *pdev) starfive_dma_cleanup(sdev); free_pages((unsigned long)sdev->hash_data, sdev->pages_count); + free_pages((unsigned long)sdev->aes_data, sdev->pages_count); sdev->hash_data = NULL; + sdev->aes_data = NULL; spin_lock(&dev_list.lock); list_del(&sdev->list); diff --git a/drivers/crypto/starfive/starfive-regs.h b/drivers/crypto/starfive/starfive-regs.h index 3f5e8881b3c0..c53b0303fb66 100644 --- a/drivers/crypto/starfive/starfive-regs.h +++ b/drivers/crypto/starfive/starfive-regs.h @@ -9,6 +9,7 @@ #define STARFIVE_DMA_IN_LEN_OFFSET 0x10 #define STARFIVE_DMA_OUT_LEN_OFFSET 0x14 +#define STARFIVE_AES_REGS_OFFSET 0x100 #define STARFIVE_HASH_REGS_OFFSET 0x300 union starfive_alg_cr { @@ -25,6 +26,69 @@ union starfive_alg_cr { }; }; +#define STARFIVE_AES_AESDIO0R (STARFIVE_AES_REGS_OFFSET + 0x0) +#define STARFIVE_AES_KEY0 (STARFIVE_AES_REGS_OFFSET + 0x4) +#define STARFIVE_AES_KEY1 (STARFIVE_AES_REGS_OFFSET + 0x8) +#define STARFIVE_AES_KEY2 (STARFIVE_AES_REGS_OFFSET + 0xC) +#define STARFIVE_AES_KEY3 (STARFIVE_AES_REGS_OFFSET + 0x10) +#define STARFIVE_AES_KEY4 (STARFIVE_AES_REGS_OFFSET + 0x14) +#define STARFIVE_AES_KEY5 (STARFIVE_AES_REGS_OFFSET + 0x18) +#define STARFIVE_AES_KEY6 (STARFIVE_AES_REGS_OFFSET + 0x1C) +#define STARFIVE_AES_KEY7 (STARFIVE_AES_REGS_OFFSET + 0x20) +#define STARFIVE_AES_CSR (STARFIVE_AES_REGS_OFFSET + 0x24) +#define STARFIVE_AES_IV0 (STARFIVE_AES_REGS_OFFSET + 0x28) +#define STARFIVE_AES_IV1 (STARFIVE_AES_REGS_OFFSET + 0x2C) +#define STARFIVE_AES_IV2 (STARFIVE_AES_REGS_OFFSET + 0x30) +#define STARFIVE_AES_IV3 (STARFIVE_AES_REGS_OFFSET + 0x34) +#define STARFIVE_AES_NONCE0 (STARFIVE_AES_REGS_OFFSET + 0x3C) +#define STARFIVE_AES_NONCE1 (STARFIVE_AES_REGS_OFFSET + 0x40) +#define 
STARFIVE_AES_NONCE2 (STARFIVE_AES_REGS_OFFSET + 0x44) +#define STARFIVE_AES_NONCE3 (STARFIVE_AES_REGS_OFFSET + 0x48) +#define STARFIVE_AES_ALEN0 (STARFIVE_AES_REGS_OFFSET + 0x4C) +#define STARFIVE_AES_ALEN1 (STARFIVE_AES_REGS_OFFSET + 0x50) +#define STARFIVE_AES_MLEN0 (STARFIVE_AES_REGS_OFFSET + 0x54) +#define STARFIVE_AES_MLEN1 (STARFIVE_AES_REGS_OFFSET + 0x58) +#define STARFIVE_AES_IVLEN (STARFIVE_AES_REGS_OFFSET + 0x5C) + +union starfive_aes_csr { + u32 v; + struct { + u32 cmode :1; +#define STARFIVE_AES_KEYMODE_128 0x0 +#define STARFIVE_AES_KEYMODE_192 0x1 +#define STARFIVE_AES_KEYMODE_256 0x2 + u32 keymode :2; +#define STARFIVE_AES_BUSY BIT(3) + u32 busy :1; + u32 done :1; +#define STARFIVE_AES_KEY_DONE BIT(5) + u32 krdy :1; + u32 aesrst :1; + u32 rsvd_0 :1; +#define STARFIVE_AES_CCM_START BIT(8) + u32 ccm_start :1; +#define STARFIVE_AES_MODE_ECB 0x0 +#define STARFIVE_AES_MODE_CBC 0x1 +#define STARFIVE_AES_MODE_CFB 0x2 +#define STARFIVE_AES_MODE_OFB 0x3 +#define STARFIVE_AES_MODE_CTR 0x4 +#define STARFIVE_AES_MODE_CCM 0x5 +#define STARFIVE_AES_MODE_GCM 0x6 + u32 mode :3; +#define STARFIVE_AES_GCM_START BIT(12) + u32 gcm_start :1; +#define STARFIVE_AES_GCM_DONE BIT(13) + u32 gcm_done :1; + u32 delay_aes :1; + u32 vaes_start :1; + u32 rsvd_1 :8; +#define STARFIVE_AES_MODE_XFB_1 0x0 +#define STARFIVE_AES_MODE_XFB_128 0x5 + u32 stream_mode :3; + u32 rsvd_2 :5; + }; +}; + #define STARFIVE_HASH_SHACSR (STARFIVE_HASH_REGS_OFFSET + 0x0) #define STARFIVE_HASH_SHAWDR (STARFIVE_HASH_REGS_OFFSET + 0x4) #define STARFIVE_HASH_SHARDR (STARFIVE_HASH_REGS_OFFSET + 0x8) diff --git a/drivers/crypto/starfive/starfive-str.h b/drivers/crypto/starfive/starfive-str.h index a6fed48a0b19..396529a9a8f1 100644 --- a/drivers/crypto/starfive/starfive-str.h +++ b/drivers/crypto/starfive/starfive-str.h @@ -6,6 +6,7 @@ #include #include +#include #include #include #include @@ -15,6 +16,9 @@ #define STARFIVE_MSG_BUFFER_SIZE SZ_16K #define MAX_KEY_SIZE SHA512_BLOCK_SIZE +#define 
STARFIVE_AES_IV_LEN AES_BLOCK_SIZE +#define STARFIVE_AES_CTR_LEN AES_BLOCK_SIZE + struct starfive_sec_ctx { struct crypto_engine_ctx enginectx; struct starfive_sec_dev *sdev; @@ -27,6 +31,7 @@ struct starfive_sec_ctx { u8 *buffer; union { + struct crypto_aead *aead; struct crypto_shash *shash; } fallback; bool fallback_available; @@ -42,6 +47,7 @@ struct starfive_sec_dev { void __iomem *io_base; phys_addr_t io_phys_base; + void *aes_data; void *hash_data; size_t data_buf_len; @@ -70,26 +76,52 @@ struct starfive_sec_request_ctx { struct starfive_sec_dev *sdev; union { + struct aead_request *areq; struct ahash_request *hreq; + struct skcipher_request *sreq; } req; #define STARFIVE_AHASH_REQ 0 +#define STARFIVE_ABLK_REQ 1 +#define STARFIVE_AEAD_REQ 2 unsigned int req_type; union { + union starfive_aes_csr aes; union starfive_hash_csr hash; } csr; struct scatterlist *in_sg; - + struct scatterlist *out_sg; + struct scatterlist *out_sg_save; + struct scatterlist in_sgl; + struct scatterlist out_sgl; + bool sgs_copied; + + unsigned long sg_len; + unsigned long in_sg_len; + unsigned long out_sg_len; unsigned long flags; unsigned long op; + unsigned long stmode; size_t bufcnt; size_t buflen; size_t total; size_t offset; size_t data_offset; - + size_t authsize; + size_t hw_blocksize; + size_t total_in; + size_t total_in_save; + size_t total_out; + size_t total_out_save; + size_t assoclen; + size_t ctr_over_count; + + u32 ctr[4]; + u32 aes_iv[4]; + u32 tag_out[4]; + u32 tag_in[4]; unsigned int hash_digest_len; u8 hash_digest_mid[SHA512_DIGEST_SIZE]__aligned(sizeof(u32)); }; @@ -121,4 +153,7 @@ struct starfive_sec_dev *starfive_sec_find_dev(struct starfive_sec_ctx *ctx); int starfive_hash_register_algs(void); void starfive_hash_unregister_algs(void); +int starfive_aes_register_algs(void); +void starfive_aes_unregister_algs(void); + #endif From patchwork Wed Nov 30 05:52:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: JiaJie Ho X-Patchwork-Id: 13059474 From: Jia Jie Ho To: Herbert Xu , "David S . Miller" , Rob Herring , Krzysztof Kozlowski CC: Jia Jie Ho Subject: [PATCH 4/6] crypto: starfive - Add Public Key algo support Date: Wed, 30 Nov 2022 13:52:12 +0800 Message-ID: <20221130055214.2416888-5-jiajie.ho@starfivetech.com> In-Reply-To: <20221130055214.2416888-1-jiajie.ho@starfivetech.com> References: <20221130055214.2416888-1-jiajie.ho@starfivetech.com> MIME-Version: 1.0 Adding RSA enc/dec and sign/verify feature for Starfive crypto driver. 
Signed-off-by: Jia Jie Ho Signed-off-by: Huan Feng --- drivers/crypto/starfive/Makefile | 2 +- drivers/crypto/starfive/starfive-cryp.c | 19 +- drivers/crypto/starfive/starfive-pka.c | 683 ++++++++++++++++++++++++ drivers/crypto/starfive/starfive-regs.h | 65 +++ drivers/crypto/starfive/starfive-str.h | 35 ++ 5 files changed, 802 insertions(+), 2 deletions(-) create mode 100644 drivers/crypto/starfive/starfive-pka.c diff --git a/drivers/crypto/starfive/Makefile b/drivers/crypto/starfive/Makefile index 4958b1f6812c..d44e28063965 100644 --- a/drivers/crypto/starfive/Makefile +++ b/drivers/crypto/starfive/Makefile @@ -1,4 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_CRYPTO_DEV_STARFIVE) += starfive-crypto.o -starfive-crypto-objs := starfive-cryp.o starfive-hash.o starfive-aes.o +starfive-crypto-objs := starfive-cryp.o starfive-hash.o starfive-aes.o starfive-pka.o diff --git a/drivers/crypto/starfive/starfive-cryp.c b/drivers/crypto/starfive/starfive-cryp.c index 452bd1ab4f04..a9c7f39b5547 100644 --- a/drivers/crypto/starfive/starfive-cryp.c +++ b/drivers/crypto/starfive/starfive-cryp.c @@ -175,6 +175,12 @@ static int starfive_cryp_probe(struct platform_device *pdev) goto err_aes_data; } + sdev->pka_data = (void *)__get_free_pages(GFP_KERNEL | GFP_DMA32, pages); + if (!sdev->pka_data) { + dev_err(sdev->dev, "Can't allocate pka buffer pages when unaligned\n"); + goto err_pka_data; + } + sdev->pages_count = pages >> 1; sdev->data_buf_len = STARFIVE_MSG_BUFFER_SIZE >> 1; @@ -197,10 +203,16 @@ static int starfive_cryp_probe(struct platform_device *pdev) if (ret) goto err_algs_aes; + ret = starfive_pka_register_algs(); + if (ret) + goto err_algs_pka; + dev_info(dev, "Crypto engine started\n"); return 0; +err_algs_pka: + starfive_aes_unregister_algs(); err_algs_aes: starfive_hash_unregister_algs(); err_algs_hash: @@ -208,6 +220,8 @@ static int starfive_cryp_probe(struct platform_device *pdev) err_engine_start: crypto_engine_exit(sdev->engine); err_engine: + 
free_pages((unsigned long)sdev->pka_data, pages); +err_pka_data: free_pages((unsigned long)sdev->aes_data, pages); err_aes_data: free_pages((unsigned long)sdev->hash_data, pages); @@ -230,6 +244,7 @@ static int starfive_cryp_remove(struct platform_device *pdev) starfive_hash_unregister_algs(); starfive_aes_unregister_algs(); + starfive_pka_unregister_algs(); crypto_engine_stop(sdev->engine); crypto_engine_exit(sdev->engine); @@ -238,8 +253,10 @@ static int starfive_cryp_remove(struct platform_device *pdev) free_pages((unsigned long)sdev->hash_data, sdev->pages_count); free_pages((unsigned long)sdev->aes_data, sdev->pages_count); - sdev->hash_data = NULL; + free_pages((unsigned long)sdev->pka_data, sdev->pages_count); sdev->aes_data = NULL; + sdev->hash_data = NULL; + sdev->pka_data = NULL; spin_lock(&dev_list.lock); list_del(&sdev->list); diff --git a/drivers/crypto/starfive/starfive-pka.c b/drivers/crypto/starfive/starfive-pka.c new file mode 100644 index 000000000000..e845f2545a9a --- /dev/null +++ b/drivers/crypto/starfive/starfive-pka.c @@ -0,0 +1,683 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * StarFive Public Key Algo acceleration driver + * + * Copyright (c) 2022 StarFive Technology + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "starfive-str.h" + +#define STARFIVE_RSA_KEYSZ_LEN (2048 >> 2) +#define STARFIVE_RSA_KEY_SIZE (STARFIVE_RSA_KEYSZ_LEN * 3) +#define STARFIVE_RSA_MAX_KEYSZ 256 + +static inline int starfive_pka_wait_done(struct starfive_sec_ctx *ctx) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + u32 status; + + return readl_relaxed_poll_timeout(sdev->io_base + STARFIVE_PKA_CASR_OFFSET, status, + (status & STARFIVE_PKA_DONE_FLAGS), 10, 100000); +} + +static void starfive_rsa_free_key(struct starfive_rsa_key *key) +{ + if (key->d) + kfree_sensitive(key->d); + if (key->e) + kfree_sensitive(key->e); + if (key->n) + kfree_sensitive(key->n); + memset(key, 0, sizeof(*key)); +} + +static 
unsigned int starfive_rsa_get_nbit(u8 *pa, u32 snum, int key_sz) +{ + u32 i; + u8 value; + + i = snum >> 3; + + value = pa[key_sz - i - 1]; + value >>= snum & 0x7; + value &= 0x1; + + return value; +} + +static int starfive_rsa_domain_transfer(struct starfive_sec_ctx *ctx, + u32 *result, u32 *opa, u8 domain, + u32 *mod, int bit_len) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + unsigned int *info; + int loop; + u8 opsize; + u32 temp; + + opsize = (bit_len - 1) >> 5; + rctx->csr.pka.v = 0; + starfive_sec_write(sdev, STARFIVE_PKA_CACR_OFFSET, rctx->csr.pka.v); + + info = (unsigned int *)mod; + for (loop = 0; loop <= opsize; loop++) + starfive_sec_write(sdev, STARFIVE_PKA_CANR_OFFSET + loop * 4, info[opsize - loop]); + + if (domain != 0) { + rctx->csr.pka.v = 0; + rctx->csr.pka.cln_done = 1; + rctx->csr.pka.opsize = opsize; + rctx->csr.pka.exposize = opsize; + rctx->csr.pka.cmd = CRYPTO_CMD_PRE; + rctx->csr.pka.start = 1; + rctx->csr.pka.not_r2 = 1; + starfive_sec_write(sdev, STARFIVE_PKA_CACR_OFFSET, rctx->csr.pka.v); + + starfive_pka_wait_done(ctx); + + info = (unsigned int *)opa; + for (loop = 0; loop <= opsize; loop++) + starfive_sec_write(sdev, STARFIVE_PKA_CAAR_OFFSET + loop * 4, + info[opsize - loop]); + + starfive_sec_write(sdev, STARFIVE_PKA_CAER_OFFSET, 0x1000000); + + for (loop = 1; loop <= opsize; loop++) + starfive_sec_write(sdev, STARFIVE_PKA_CAER_OFFSET + loop * 4, 0); + + rctx->csr.pka.v = 0; + rctx->csr.pka.cln_done = 1; + rctx->csr.pka.opsize = opsize; + rctx->csr.pka.exposize = opsize; + rctx->csr.pka.cmd = CRYPTO_CMD_AERN; + rctx->csr.pka.start = 1; + starfive_sec_write(sdev, STARFIVE_PKA_CACR_OFFSET, rctx->csr.pka.v); + + starfive_pka_wait_done(ctx); + } else { + rctx->csr.pka.v = 0; + rctx->csr.pka.cln_done = 1; + rctx->csr.pka.opsize = opsize; + rctx->csr.pka.exposize = opsize; + rctx->csr.pka.cmd = CRYPTO_CMD_PRE; + rctx->csr.pka.start = 1; + rctx->csr.pka.pre_expf = 1; + 
starfive_sec_write(sdev, STARFIVE_PKA_CACR_OFFSET, rctx->csr.pka.v); + + starfive_pka_wait_done(ctx); + + info = (unsigned int *)opa; + for (loop = 0; loop <= opsize; loop++) + starfive_sec_write(sdev, STARFIVE_PKA_CAER_OFFSET + loop * 4, + info[opsize - loop]); + + rctx->csr.pka.v = 0; + rctx->csr.pka.cln_done = 1; + rctx->csr.pka.opsize = opsize; + rctx->csr.pka.exposize = opsize; + rctx->csr.pka.cmd = CRYPTO_CMD_ARN; + rctx->csr.pka.start = 1; + starfive_sec_write(sdev, STARFIVE_PKA_CACR_OFFSET, rctx->csr.pka.v); + + starfive_pka_wait_done(ctx); + } + + for (loop = 0; loop <= opsize; loop++) { + temp = starfive_sec_read(sdev, STARFIVE_PKA_CAAR_OFFSET + 0x4 * loop); + result[opsize - loop] = temp; + } + + return 0; +} + +static int starfive_rsa_cpu_powm(struct starfive_sec_ctx *ctx, u32 *result, + u8 *de, u32 *n, int key_sz) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + struct starfive_rsa_key *key = &ctx->rsa_key; + u32 initial; + int opsize, mlen, bs, loop; + unsigned int *mta; + + opsize = (key_sz - 1) >> 2; + initial = 1; + + mta = kmalloc(key_sz, GFP_KERNEL); + if (!mta) + return -ENOMEM; + + starfive_rsa_domain_transfer(ctx, mta, sdev->pka_data, 0, n, key_sz << 3); + + for (loop = 0; loop <= opsize; loop++) + starfive_sec_write(sdev, STARFIVE_PKA_CANR_OFFSET + loop * 4, + n[opsize - loop]); + + rctx->csr.pka.v = 0; + rctx->csr.pka.cln_done = 1; + rctx->csr.pka.opsize = opsize; + rctx->csr.pka.exposize = opsize; + rctx->csr.pka.cmd = CRYPTO_CMD_PRE; + rctx->csr.pka.not_r2 = 1; + rctx->csr.pka.start = 1; + + starfive_sec_write(sdev, STARFIVE_PKA_CACR_OFFSET, rctx->csr.pka.v); + + starfive_pka_wait_done(ctx); + + for (loop = 0; loop <= opsize; loop++) + starfive_sec_write(sdev, STARFIVE_PKA_CAER_OFFSET + loop * 4, + mta[opsize - loop]); + + for (loop = key->bitlen; loop > 0; loop--) { + if (initial) { + for (bs = 0; bs <= opsize; bs++) + result[bs] = mta[bs]; + + initial = 0; + } else { + mlen = 
starfive_rsa_get_nbit(de, loop - 1, key_sz); + + rctx->csr.pka.v = 0; + rctx->csr.pka.cln_done = 1; + rctx->csr.pka.opsize = opsize; + rctx->csr.pka.exposize = opsize; + rctx->csr.pka.cmd = CRYPTO_CMD_AARN; + rctx->csr.pka.start = 1; + + starfive_sec_write(sdev, STARFIVE_PKA_CACR_OFFSET, rctx->csr.pka.v); + starfive_pka_wait_done(ctx); + + if (mlen) { + rctx->csr.pka.v = 0; + rctx->csr.pka.cln_done = 1; + rctx->csr.pka.opsize = opsize; + rctx->csr.pka.exposize = opsize; + rctx->csr.pka.cmd = CRYPTO_CMD_AERN; + rctx->csr.pka.start = 1; + + starfive_sec_write(sdev, STARFIVE_PKA_CACR_OFFSET, + rctx->csr.pka.v); + starfive_pka_wait_done(ctx); + } + } + } + + for (loop = 0; loop <= opsize; loop++) { + unsigned int temp; + + temp = starfive_sec_read(sdev, STARFIVE_PKA_CAAR_OFFSET + 0x4 * loop); + result[opsize - loop] = temp; + } + + kfree(mta); + + return starfive_rsa_domain_transfer(ctx, result, result, 1, n, key_sz << 3); +} + +static int starfive_rsa_powm(struct starfive_sec_ctx *ctx, u8 *result, + u8 *de, u8 *n, int key_sz) +{ + return starfive_rsa_cpu_powm(ctx, (u32 *)result, de, (u32 *)n, key_sz); +} + +static int starfive_rsa_get_from_sg(struct starfive_sec_request_ctx *rctx, + size_t offset, size_t count, size_t data_offset) +{ + size_t of, ct, index; + struct scatterlist *sg = rctx->in_sg; + + of = offset; + ct = count; + + while (sg->length <= of) { + of -= sg->length; + + if (!sg_is_last(sg)) { + sg = sg_next(sg); + continue; + } else { + return -EBADE; + } + } + + index = data_offset; + while (ct > 0) { + if (sg->length - of >= ct) { + scatterwalk_map_and_copy(rctx->sdev->pka_data + index, sg, + of, ct, 0); + index = index + ct; + return index - data_offset; + } + + scatterwalk_map_and_copy(rctx->sdev->pka_data + index, + sg, of, sg->length - of, 0); + index += sg->length - of; + ct = ct - (sg->length - of); + + of = 0; + + if (!sg_is_last(sg)) + sg = sg_next(sg); + else + return -EBADE; + } + + return index - data_offset; +} + +static int 
starfive_rsa_enc_core(struct starfive_sec_ctx *ctx, int enc) +{ + struct starfive_sec_dev *sdev = ctx->sdev; + struct starfive_sec_request_ctx *rctx = ctx->rctx; + struct starfive_rsa_key *key = &ctx->rsa_key; + size_t total, count, data_offset; + int data_len; + int ret = 0; + unsigned int *info; + int loop; + + rctx->csr.pka.v = 0; + rctx->csr.pka.reset = 1; + starfive_sec_write(sdev, STARFIVE_PKA_CACR_OFFSET, rctx->csr.pka.v); + + if (starfive_pka_wait_done(ctx)) + dev_err(sdev->dev, "PKA reset timeout, casr = %x\n", + starfive_sec_read(sdev, STARFIVE_PKA_CASR_OFFSET)); + + rctx->offset = 0; + total = 0; + + while (total < rctx->total_in) { + count = min(sdev->data_buf_len, rctx->total_in); + count = min(count, key->key_sz); + memset(sdev->pka_data, 0, key->key_sz); + data_offset = key->key_sz - count; + + data_len = starfive_rsa_get_from_sg(rctx, rctx->offset, count, data_offset); + if (data_len < 0) + return data_len; + if (data_len != count) + return -EINVAL; + + if (enc) { + key->bitlen = key->e_bitlen; + ret = starfive_rsa_powm(ctx, sdev->pka_data + STARFIVE_RSA_KEYSZ_LEN, + key->e, key->n, key->key_sz); + } else { + key->bitlen = key->d_bitlen; + ret = starfive_rsa_powm(ctx, sdev->pka_data + STARFIVE_RSA_KEYSZ_LEN, + key->d, key->n, key->key_sz); + } + + if (ret) + return ret; + + info = (unsigned int *)(sdev->pka_data + STARFIVE_RSA_KEYSZ_LEN); + for (loop = 0; loop < key->key_sz >> 2; loop++) + dev_dbg(sdev->dev, "result[%d] = %x\n", loop, info[loop]); + + sg_copy_buffer(rctx->out_sg, sg_nents(rctx->out_sg), + sdev->pka_data + STARFIVE_RSA_KEYSZ_LEN, + key->key_sz, rctx->offset, 0); + + rctx->offset += data_len; + total += data_len; + } + + return ret; +} + +static int starfive_rsa_enc(struct akcipher_request *req) +{ + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); + struct starfive_sec_ctx *ctx = akcipher_tfm_ctx(tfm); + struct starfive_rsa_key *key = &ctx->rsa_key; + struct 
starfive_sec_request_ctx *rctx = akcipher_request_ctx(req); + int ret = 0; + + if (key->key_sz > STARFIVE_RSA_MAX_KEYSZ) { + akcipher_request_set_tfm(req, ctx->fallback.akcipher); + ret = crypto_akcipher_encrypt(req); + akcipher_request_set_tfm(req, tfm); + return ret; + } + + if (unlikely(!key->n || !key->e)) + return -EINVAL; + + if (req->dst_len < key->key_sz) { + req->dst_len = key->key_sz; + dev_err(ctx->sdev->dev, "Output buffer length less than parameter n\n"); + return -EOVERFLOW; + } + + mutex_lock(&ctx->sdev->lock); + + rctx->in_sg = req->src; + rctx->out_sg = req->dst; + rctx->sdev = ctx->sdev; + ctx->rctx = rctx; + rctx->total_in = req->src_len; + rctx->total_out = req->dst_len; + + ret = starfive_rsa_enc_core(ctx, 1); + + mutex_unlock(&ctx->sdev->lock); + + return ret; +} + +static int starfive_rsa_dec(struct akcipher_request *req) +{ + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); + struct starfive_sec_ctx *ctx = akcipher_tfm_ctx(tfm); + struct starfive_rsa_key *key = &ctx->rsa_key; + struct starfive_sec_request_ctx *rctx = akcipher_request_ctx(req); + int ret = 0; + + if (key->key_sz > STARFIVE_RSA_MAX_KEYSZ) { + akcipher_request_set_tfm(req, ctx->fallback.akcipher); + ret = crypto_akcipher_decrypt(req); + akcipher_request_set_tfm(req, tfm); + return ret; + } + + if (unlikely(!key->n || !key->d)) + return -EINVAL; + + if (req->dst_len < key->key_sz) { + req->dst_len = key->key_sz; + dev_err(ctx->sdev->dev, "Output buffer length less than parameter n\n"); + return -EOVERFLOW; + } + + mutex_lock(&ctx->sdev->lock); + + rctx->in_sg = req->src; + rctx->out_sg = req->dst; + rctx->sdev = ctx->sdev; + ctx->rctx = rctx; + rctx->total_in = req->src_len; + rctx->total_out = req->dst_len; + + ret = starfive_rsa_enc_core(ctx, 0); + + mutex_unlock(&ctx->sdev->lock); + + return ret; +} + +static int starfive_rsa_check_keysz(unsigned int len) +{ + unsigned int bitslen = len << 3; + + if (bitslen & 0x1f) + return -EINVAL; + + return 0; +} + 
+static int starfive_rsa_set_n(struct starfive_rsa_key *rsa_key, + const char *value, size_t vlen) +{ + const char *ptr = value; + int ret; + + while (!*ptr && vlen) { + ptr++; + vlen--; + } + rsa_key->key_sz = vlen; + + /* invalid key size provided */ + ret = starfive_rsa_check_keysz(rsa_key->key_sz); + if (ret) + return ret; + + ret = -ENOMEM; + rsa_key->n = kmemdup(ptr, rsa_key->key_sz, GFP_KERNEL); + if (!rsa_key->n) + goto err; + + return 0; + err: + rsa_key->key_sz = 0; + rsa_key->n = NULL; + starfive_rsa_free_key(rsa_key); + return ret; +} + +static int starfive_rsa_set_e(struct starfive_rsa_key *rsa_key, + const char *value, size_t vlen) +{ + const char *ptr = value; + unsigned char pt; + int loop; + + while (!*ptr && vlen) { + ptr++; + vlen--; + } + pt = *ptr; + + if (!rsa_key->key_sz || !vlen || vlen > rsa_key->key_sz) { + rsa_key->e = NULL; + return -EINVAL; + } + + rsa_key->e = kzalloc(rsa_key->key_sz, GFP_KERNEL); + if (!rsa_key->e) + return -ENOMEM; + + for (loop = 8; loop > 0; loop--) { + if (pt >> (loop - 1)) + break; + } + + rsa_key->e_bitlen = (vlen - 1) * 8 + loop; + + memcpy(rsa_key->e + (rsa_key->key_sz - vlen), ptr, vlen); + + return 0; +} + +static int starfive_rsa_set_d(struct starfive_rsa_key *rsa_key, + const char *value, size_t vlen) +{ + const char *ptr = value; + unsigned char pt; + int loop; + int ret; + + while (!*ptr && vlen) { + ptr++; + vlen--; + } + pt = *ptr; + + ret = -EINVAL; + if (!rsa_key->key_sz || !vlen || vlen > rsa_key->key_sz) + goto err; + + ret = -ENOMEM; + rsa_key->d = kzalloc(rsa_key->key_sz, GFP_KERNEL); + if (!rsa_key->d) + goto err; + + for (loop = 8; loop > 0; loop--) { + if (pt >> (loop - 1)) + break; + } + + rsa_key->d_bitlen = (vlen - 1) * 8 + loop; + + memcpy(rsa_key->d + (rsa_key->key_sz - vlen), ptr, vlen); + + return 0; + err: + 
rsa_key->d = NULL; + return ret; +} + +static int starfive_rsa_setkey(struct crypto_akcipher *tfm, const void *key, + unsigned int keylen, bool private) +{ + struct starfive_sec_ctx *ctx = akcipher_tfm_ctx(tfm); + struct rsa_key raw_key = {NULL}; + struct starfive_rsa_key *rsa_key = &ctx->rsa_key; + int ret; + + starfive_rsa_free_key(rsa_key); + + if (private) + ret = rsa_parse_priv_key(&raw_key, key, keylen); + else + ret = rsa_parse_pub_key(&raw_key, key, keylen); + if (ret < 0) + goto err; + + ret = starfive_rsa_set_n(rsa_key, raw_key.n, raw_key.n_sz); + if (ret < 0) + return ret; + + ret = starfive_rsa_set_e(rsa_key, raw_key.e, raw_key.e_sz); + if (ret < 0) + goto err; + + if (private) { + ret = starfive_rsa_set_d(rsa_key, raw_key.d, raw_key.d_sz); + if (ret < 0) + goto err; + } + + if (!rsa_key->n || !rsa_key->e) { + /* invalid key provided */ + ret = -EINVAL; + goto err; + } + if (private && !rsa_key->d) { + /* invalid private key provided */ + ret = -EINVAL; + goto err; + } + + return 0; + err: + starfive_rsa_free_key(rsa_key); + return ret; +} + +static int starfive_rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key, + unsigned int keylen) +{ + struct starfive_sec_ctx *ctx = akcipher_tfm_ctx(tfm); + int ret; + + ret = crypto_akcipher_set_pub_key(ctx->fallback.akcipher, key, keylen); + if (ret) + return ret; + + return starfive_rsa_setkey(tfm, key, keylen, false); +} + +static int starfive_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key, + unsigned int keylen) +{ + struct starfive_sec_ctx *ctx = akcipher_tfm_ctx(tfm); + int ret; + + ret = crypto_akcipher_set_priv_key(ctx->fallback.akcipher, key, keylen); + if (ret) + return ret; + + return starfive_rsa_setkey(tfm, key, keylen, true); +} + +static unsigned int starfive_rsa_max_size(struct crypto_akcipher *tfm) +{ + struct starfive_sec_ctx *ctx = akcipher_tfm_ctx(tfm); + + /* For key sizes > 2Kb, use software tfm */ + if (ctx->rsa_key.key_sz > STARFIVE_RSA_MAX_KEYSZ) + return 
crypto_akcipher_maxsize(ctx->fallback.akcipher); + + return ctx->rsa_key.key_sz; +} + +/* Per session pkc's driver context creation function */ +static int starfive_rsa_init_tfm(struct crypto_akcipher *tfm) +{ + struct starfive_sec_ctx *ctx = akcipher_tfm_ctx(tfm); + + ctx->fallback.akcipher = crypto_alloc_akcipher("rsa-generic", 0, 0); + if (IS_ERR(ctx->fallback.akcipher)) + return PTR_ERR(ctx->fallback.akcipher); + + ctx->sdev = starfive_sec_find_dev(ctx); + if (!ctx->sdev) { + crypto_free_akcipher(ctx->fallback.akcipher); + return -ENODEV; + } + + akcipher_set_reqsize(tfm, sizeof(struct starfive_sec_request_ctx)); + + return 0; +} + +/* Per session pkc's driver context cleanup function */ +static void starfive_rsa_exit_tfm(struct crypto_akcipher *tfm) +{ + struct starfive_sec_ctx *ctx = akcipher_tfm_ctx(tfm); + struct starfive_rsa_key *key = (struct starfive_rsa_key *)&ctx->rsa_key; + + crypto_free_akcipher(ctx->fallback.akcipher); + starfive_rsa_free_key(key); +} + +static struct akcipher_alg starfive_rsa = { + .encrypt = starfive_rsa_enc, + .decrypt = starfive_rsa_dec, + .sign = starfive_rsa_dec, + .verify = starfive_rsa_enc, + .set_pub_key = starfive_rsa_set_pub_key, + .set_priv_key = starfive_rsa_set_priv_key, + .max_size = starfive_rsa_max_size, + .init = starfive_rsa_init_tfm, + .exit = starfive_rsa_exit_tfm, + .reqsize = sizeof(struct starfive_sec_request_ctx), + .base = { + .cra_name = "rsa", + .cra_driver_name = "starfive-rsa", + .cra_flags = CRYPTO_ALG_TYPE_AKCIPHER | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_priority = 3000, + .cra_module = THIS_MODULE, + .cra_ctxsize = sizeof(struct starfive_sec_ctx), + }, +}; + +int starfive_pka_register_algs(void) +{ + return crypto_register_akcipher(&starfive_rsa); +} + +void starfive_pka_unregister_algs(void) +{ + crypto_unregister_akcipher(&starfive_rsa); +} diff --git a/drivers/crypto/starfive/starfive-regs.h b/drivers/crypto/starfive/starfive-regs.h index c53b0303fb66..af3967c37a12 100644 --- 
a/drivers/crypto/starfive/starfive-regs.h +++ b/drivers/crypto/starfive/starfive-regs.h @@ -11,6 +11,7 @@ #define STARFIVE_AES_REGS_OFFSET 0x100 #define STARFIVE_HASH_REGS_OFFSET 0x300 +#define STARFIVE_PKA_REGS_OFFSET 0x400 union starfive_alg_cr { u32 v; @@ -26,6 +27,70 @@ union starfive_alg_cr { }; }; +#define STARFIVE_PKA_CACR_OFFSET (STARFIVE_PKA_REGS_OFFSET + 0x0) +#define STARFIVE_PKA_CASR_OFFSET (STARFIVE_PKA_REGS_OFFSET + 0x4) +#define STARFIVE_PKA_CAAR_OFFSET (STARFIVE_PKA_REGS_OFFSET + 0x8) +#define STARFIVE_PKA_CAER_OFFSET (STARFIVE_PKA_REGS_OFFSET + 0x108) +#define STARFIVE_PKA_CANR_OFFSET (STARFIVE_PKA_REGS_OFFSET + 0x208) +#define STARFIVE_PKA_CAAFR_OFFSET (STARFIVE_PKA_REGS_OFFSET + 0x308) +#define STARFIVE_PKA_CAEFR_OFFSET (STARFIVE_PKA_REGS_OFFSET + 0x30c) +#define STARFIVE_PKA_CANFR_OFFSET (STARFIVE_PKA_REGS_OFFSET + 0x310) +#define STARFIVE_FIFO_COUNTER_OFFSET (STARFIVE_PKA_REGS_OFFSET + 0x314) + +/* R^2 mod N and N0' */ +#define CRYPTO_CMD_PRE 0x0 +/* (A + A) mod N, ==> A */ +#define CRYPTO_CMD_AAN 0x1 +/* A ^ E mod N ==> A */ +#define CRYPTO_CMD_AMEN 0x2 +/* A + E mod N ==> A */ +#define CRYPTO_CMD_AAEN 0x3 +/* A - E mod N ==> A */ +#define CRYPTO_CMD_ADEN 0x4 +/* A * R mod N ==> A */ +#define CRYPTO_CMD_ARN 0x5 +/* A * E * R mod N ==> A */ +#define CRYPTO_CMD_AERN 0x6 +/* A * A * R mod N ==> A */ +#define CRYPTO_CMD_AARN 0x7 +/* ECC2P ==> A */ +#define CRYPTO_CMD_ECC2P 0x8 +/* ECCPQ ==> A */ +#define CRYPTO_CMD_ECCPQ 0x9 + +union starfive_pka_cacr { + u32 v; + struct { + u32 start :1; + u32 reset :1; + u32 ie :1; + u32 rsvd_0 :1; + u32 fifo_mode :1; + u32 not_r2 :1; + u32 ecc_sub :1; + u32 pre_expf :1; + u32 cmd :4; + u32 rsvd_1 :1; + u32 ctrl_dummy :1; + u32 ctrl_false :1; + u32 cln_done :1; + u32 opsize :6; + u32 rsvd_2 :2; + u32 exposize :6; + u32 rsvd_3 :1; + u32 bigendian :1; + }; +}; + +union starfive_pka_casr { + u32 v; + struct { +#define STARFIVE_PKA_DONE_FLAGS BIT(0) + u32 done :1; + u32 rsvd_0 :31; + }; +}; + #define 
STARFIVE_AES_AESDIO0R (STARFIVE_AES_REGS_OFFSET + 0x0) #define STARFIVE_AES_KEY0 (STARFIVE_AES_REGS_OFFSET + 0x4) #define STARFIVE_AES_KEY1 (STARFIVE_AES_REGS_OFFSET + 0x8) diff --git a/drivers/crypto/starfive/starfive-str.h b/drivers/crypto/starfive/starfive-str.h index 396529a9a8f1..13e6bf637a34 100644 --- a/drivers/crypto/starfive/starfive-str.h +++ b/drivers/crypto/starfive/starfive-str.h @@ -10,6 +10,8 @@ #include #include #include +#include +#include #include "starfive-regs.h" @@ -19,6 +21,31 @@ #define STARFIVE_AES_IV_LEN AES_BLOCK_SIZE #define STARFIVE_AES_CTR_LEN AES_BLOCK_SIZE +struct starfive_rsa_key { + u8 *n; + u8 *e; + u8 *d; + u8 *p; + u8 *q; + u8 *dp; + u8 *dq; + u8 *qinv; + u8 *rinv; + u8 *rinv_p; + u8 *rinv_q; + u8 *mp; + u8 *rsqr; + u8 *rsqr_p; + u8 *rsqr_q; + u8 *pmp; + u8 *qmp; + int e_bitlen; + int d_bitlen; + int bitlen; + size_t key_sz; + bool crt_mode; +}; + struct starfive_sec_ctx { struct crypto_engine_ctx enginectx; struct starfive_sec_dev *sdev; @@ -28,9 +55,12 @@ struct starfive_sec_ctx { int keylen; struct scatterlist sg[2]; size_t hash_len_total; + size_t rsa_key_sz; + struct starfive_rsa_key rsa_key; u8 *buffer; union { + struct crypto_akcipher *akcipher; struct crypto_aead *aead; struct crypto_shash *shash; } fallback; @@ -49,6 +79,7 @@ struct starfive_sec_dev { phys_addr_t io_phys_base; void *aes_data; void *hash_data; + void *pka_data; size_t data_buf_len; int pages_count; @@ -88,6 +119,7 @@ struct starfive_sec_request_ctx { union { union starfive_aes_csr aes; union starfive_hash_csr hash; + union starfive_pka_cacr pka; } csr; struct scatterlist *in_sg; @@ -156,4 +188,7 @@ void starfive_hash_unregister_algs(void); int starfive_aes_register_algs(void); void starfive_aes_unregister_algs(void); +int starfive_pka_register_algs(void); +void starfive_pka_unregister_algs(void); + #endif From patchwork Wed Nov 30 05:52:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Id: 13059480
From: Jia Jie Ho
To: Herbert Xu, "David S. Miller", Rob Herring, Krzysztof Kozlowski
CC: Jia Jie Ho
Subject: [PATCH 5/6] dt-bindings: crypto: Add bindings for Starfive crypto driver
Date: Wed, 30 Nov 2022 13:52:13 +0800
Message-ID: <20221130055214.2416888-6-jiajie.ho@starfivetech.com>

Add documentation to describe StarFive crypto driver bindings.

Signed-off-by: Jia Jie Ho
Signed-off-by: Huan Feng
---
 .../bindings/crypto/starfive-crypto.yaml      | 109 ++++++++++++++++++
 1 file changed, 109 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/crypto/starfive-crypto.yaml

diff --git a/Documentation/devicetree/bindings/crypto/starfive-crypto.yaml b/Documentation/devicetree/bindings/crypto/starfive-crypto.yaml
new file mode 100644
index 000000000000..6b852f774c32
--- /dev/null
+++ b/Documentation/devicetree/bindings/crypto/starfive-crypto.yaml
@@ -0,0 +1,109 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/starfive-crypto.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: StarFive Crypto Controller Device Tree Bindings
+
+maintainers:
+  - Jia Jie Ho
+  - William Qiu
+
+properties:
+  compatible:
+    const: starfive,jh7110-crypto
+
+  reg:
+    maxItems: 1
+
+  reg-names:
+    items:
+      - const: secreg
+
+  clocks:
+    items:
+      - description: Hardware reference clock
+      - description: AHB reference clock
+
+  clock-names:
+    items:
+      - const: sec_hclk
+      - const: sec_ahb
+
+  interrupts:
+    items:
+      - description: Interrupt pin for algo completion
+      - description: Interrupt pin for DMA transfer completion
+
+  interrupt-names:
+    items:
+      - const: secirq
+      - const: dmairq
+
+  resets:
+    items:
+      - description: STG domain reset line
+
+  reset-names:
+    items:
+      - const: sec_hre
+
+  enable-side-channel-mitigation:
+    description: |
+      Enable the side-channel-mitigation feature for the AES module.
+      Enabling this feature will affect the speed performance of the
+      crypto engine.
+    type: boolean
+
+  enable-dma:
+    description: Enable data transfer using the dedicated DMA controller.
+    type: boolean
+
+  dmas:
+    items:
+      - description: TX DMA channel
+      - description: RX DMA channel
+
+  dma-names:
+    items:
+      - const: sec_m
+      - const: sec_p
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+
+additionalProperties: false
+
+examples:
+  - |
+    #include
+    #include
+
+    soc {
+      #address-cells = <2>;
+      #size-cells = <2>;
+
+      crypto: crypto@16000000 {
+        compatible = "starfive,jh7110-crypto";
+        reg = <0x0 0x16000000 0x0 0x4000>;
+        reg-names = "secreg";
+        clocks = <&stgcrg JH7110_STGCLK_SEC_HCLK>,
+                 <&stgcrg JH7110_STGCLK_SEC_MISCAHB>;
+        clock-names = "sec_hclk", "sec_ahb";
+        interrupts = <28>, <29>;
+        interrupt-names = "secirq", "dmairq";
+        resets = <&stgcrg JH7110_STGRST_SEC_TOP_HRESETN>;
+        reset-names = "sec_hre";
+        enable-side-channel-mitigation;
+        enable-dma;
+        dmas = <&sec_dma 1 2>,
+               <&sec_dma 0 2>;
+        dma-names = "sec_m", "sec_p";
+      };
+    };

From patchwork Wed Nov 30 05:52:14 2022
X-Patchwork-Id: 13059476
X-Patchwork-Delegate: mail@conchuod.ie
From: Jia Jie Ho
To: Herbert Xu, "David S. Miller", Rob Herring, Krzysztof Kozlowski
CC: Jia Jie Ho
Subject: [PATCH 6/6] riscv: dts: starfive: Add crypto and DMA node for VisionFive 2
Date: Wed, 30 Nov 2022 13:52:14 +0800
Message-ID: <20221130055214.2416888-7-jiajie.ho@starfivetech.com>

Add StarFive crypto IP and DMA controller nodes to the VisionFive 2 SoC.
Signed-off-by: Jia Jie Ho
Signed-off-by: Huan Feng
---
 .../jh7110-starfive-visionfive-v2.dts         |  8 +++++
 arch/riscv/boot/dts/starfive/jh7110.dtsi      | 36 +++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-v2.dts b/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-v2.dts
index 450e920236a5..da2aa4d597f3 100644
--- a/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-v2.dts
+++ b/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-v2.dts
@@ -115,3 +115,11 @@ &tdm_ext {
 &mclk_ext {
 	clock-frequency = <49152000>;
 };
+
+&sec_dma {
+	status = "okay";
+};
+
+&crypto {
+	status = "okay";
+};
diff --git a/arch/riscv/boot/dts/starfive/jh7110.dtsi b/arch/riscv/boot/dts/starfive/jh7110.dtsi
index 4ac159d79d66..745a5650882c 100644
--- a/arch/riscv/boot/dts/starfive/jh7110.dtsi
+++ b/arch/riscv/boot/dts/starfive/jh7110.dtsi
@@ -455,5 +455,41 @@ uart5: serial@12020000 {
 			reg-shift = <2>;
 			status = "disabled";
 		};
+
+		sec_dma: sec_dma@16008000 {
+			compatible = "arm,pl080", "arm,primecell";
+			arm,primecell-periphid = <0x00041080>;
+			reg = <0x0 0x16008000 0x0 0x4000>;
+			reg-names = "sec_dma";
+			interrupts = <29>;
+			clocks = <&stgcrg JH7110_STGCLK_SEC_HCLK>,
+				 <&stgcrg JH7110_STGCLK_SEC_MISCAHB>;
+			clock-names = "sec_hclk", "apb_pclk";
+			resets = <&stgcrg JH7110_STGRST_SEC_TOP_HRESETN>;
+			reset-names = "sec_hre";
+			lli-bus-interface-ahb1;
+			mem-bus-interface-ahb1;
+			memcpy-burst-size = <256>;
+			memcpy-bus-width = <32>;
+			#dma-cells = <2>;
+			status = "disabled";
+		};
+
+		crypto: crypto@16000000 {
+			compatible = "starfive,jh7110-crypto";
+			reg = <0x0 0x16000000 0x0 0x4000>;
+			reg-names = "secreg";
+			clocks = <&stgcrg JH7110_STGCLK_SEC_HCLK>,
+				 <&stgcrg JH7110_STGCLK_SEC_MISCAHB>;
+			clock-names = "sec_hclk", "sec_ahb";
+			resets = <&stgcrg JH7110_STGRST_SEC_TOP_HRESETN>;
+			reset-names = "sec_hre";
+			enable-side-channel-mitigation;
+			enable-dma;
+			dmas = <&sec_dma 1 2>,
+			       <&sec_dma 0 2>;
+			dma-names = "sec_m", "sec_p";
+			status = "disabled";
+		};
 	};
 };
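One more illustrative note for reviewers, back on patch 1/6: the driver builds PKA control words through the C bitfields of union starfive_pka_cacr in starfive-regs.h. A quick Python sketch of that packing (assuming the usual GCC little-endian layout, first-declared field at the least-significant bit; field names and widths are copied from the patch, the example values are hypothetical):

```python
# (name, width) pairs in declaration order, LSB first, copied from
# union starfive_pka_cacr in starfive-regs.h.
FIELDS = [
    ("start", 1), ("reset", 1), ("ie", 1), ("rsvd_0", 1),
    ("fifo_mode", 1), ("not_r2", 1), ("ecc_sub", 1), ("pre_expf", 1),
    ("cmd", 4), ("rsvd_1", 1), ("ctrl_dummy", 1), ("ctrl_false", 1),
    ("cln_done", 1), ("opsize", 6), ("rsvd_2", 2), ("exposize", 6),
    ("rsvd_3", 1), ("bigendian", 1),
]

def pack_cacr(**vals) -> int:
    """Pack named fields into the 32-bit CACR register value."""
    reg, shift = 0, 0
    for name, width in FIELDS:
        v = vals.get(name, 0)
        assert v < (1 << width), f"{name} overflows {width} bits"
        reg |= v << shift
        shift += width
    assert shift == 32  # the declared bitfields cover the whole register
    return reg

# e.g. start a modular exponentiation (CRYPTO_CMD_AMEN = 0x2) with the
# interrupt enable bit set: cmd lands at bits 8..11, start at bit 0,
# ie at bit 2.
cacr = pack_cacr(start=1, ie=1, cmd=0x2)
```

This kind of table makes it easy to double-check that the declared widths sum to exactly 32 bits, which they do for this union.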