From patchwork Tue Jan 16 09:01:31 2024
X-Patchwork-Submitter: Jia Jie Ho
X-Patchwork-Id: 13520597
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Jia Jie Ho
To: Herbert Xu, "David S. Miller", Rob Herring, Krzysztof Kozlowski,
 Conor Dooley, linux-crypto@vger.kernel.org, devicetree@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/5] dt-bindings: crypto: starfive: Add jh8100 compatible string
Date: Tue, 16 Jan 2024 17:01:31 +0800
Message-Id: <20240116090135.75737-2-jiajie.ho@starfivetech.com>
In-Reply-To: <20240116090135.75737-1-jiajie.ho@starfivetech.com>
References: <20240116090135.75737-1-jiajie.ho@starfivetech.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Add compatible string for StarFive JH8100 crypto hardware.
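Illustrative aside, not part of this patch: because the binding change below keeps
"starfive,jh7110-crypto" as a fallback compatible, software that only knows the JH7110
string keeps probing on JH8100 boards. A minimal sketch of a driver match table that
distinguishes the two strings anyway is shown here; the table name and the decision to
add a separate JH8100 entry are assumptions for illustration only.

/*
 * Hypothetical of_device_id table -- names are invented; the actual
 * driver changes for JH8100 are not part of this binding patch.
 */
#include <linux/mod_devicetable.h>

static const struct of_device_id example_starfive_crypto_of_match[] = {
        { .compatible = "starfive,jh7110-crypto" },
        { .compatible = "starfive,jh8100-crypto" },
        { /* sentinel */ }
};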
Signed-off-by: Jia Jie Ho
Acked-by: Conor Dooley
---
 .../devicetree/bindings/crypto/starfive,jh7110-crypto.yaml | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/crypto/starfive,jh7110-crypto.yaml b/Documentation/devicetree/bindings/crypto/starfive,jh7110-crypto.yaml
index 71a2876bd6e4..3b14320a107f 100644
--- a/Documentation/devicetree/bindings/crypto/starfive,jh7110-crypto.yaml
+++ b/Documentation/devicetree/bindings/crypto/starfive,jh7110-crypto.yaml
@@ -12,7 +12,11 @@ maintainers:

 properties:
   compatible:
-    const: starfive,jh7110-crypto
+    oneOf:
+      - items:
+          - const: starfive,jh8100-crypto
+          - const: starfive,jh7110-crypto
+      - const: starfive,jh7110-crypto

   reg:
     maxItems: 1
Miller" , Rob Herring , Krzysztof Kozlowski , Conor Dooley , linux-crypto@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 2/5] crypto: starfive: Update hash dma usage Date: Tue, 16 Jan 2024 17:01:32 +0800 Message-Id: <20240116090135.75737-3-jiajie.ho@starfivetech.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240116090135.75737-1-jiajie.ho@starfivetech.com> References: <20240116090135.75737-1-jiajie.ho@starfivetech.com> X-ClientProxiedBy: BJXPR01CA0054.CHNPR01.prod.partner.outlook.cn (2406:e500:c211:12::21) To SHXPR01MB0670.CHNPR01.prod.partner.outlook.cn (2406:e500:c311:26::16) Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: SHXPR01MB0670:EE_|SHXPR01MB0640:EE_ X-MS-Office365-Filtering-Correlation-Id: 95e27ee0-7403-4b1f-a7da-08dc1671c671 X-MS-Exchange-SenderADCheck: 1 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: IZig2DcspZK3DHYiSpLfQlR34MdGRBkun0x6UC6uaUkxA0tPiMiXrxOMFNTuX5Mh+Mg8HUDaDlrkO2Ud29ZhQGYu1lZmIAP8eGgY9zg8Bq5DsIVhDhhRLSsiHcv+tFIC4O8fEHQgzlNrmV7uKXzdXG/8FeqRh48ROnau/HzJ9vRA8m/d52AnOUc1irSdVnNicdD44aAejaKGztu1VUuQg83Yc1TfJ1DDzfAYooIenNXKatv9NAu2dHSl8tRrzb8DpM5di3jfFLV0kpSTDdqUTlFh9vqWZvEff1eGxTwfoVcm4dAdwALG+AYVjlYognMouhgABf4ivTTr43KLh9FJ07mWv6AJDRFiXlJ7oUtwZYtFTOO7OISUFrgF6407y6w7Ff8xeA4RsfhGhXtutmOWgbJ5ydEYjrccIVstJ6LamnuKZpBKmZ5qYx6tqXBntQVxalVXCbo2WC879LvDd5Y+R5g7xRtb2Ac3MoTMUXPGKwUrFM6o79ksKsuhACoKOUsTYuMMrd00HLnTpGs2VyYP46qWmv/AiveY7w5sDUAJBjEvGBozdKmKiDDKSLVGkXfa X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SHXPR01MB0670.CHNPR01.prod.partner.outlook.cn;PTR:;CAT:NONE;SFS:(13230031)(346002)(136003)(396003)(366004)(39830400003)(230922051799003)(186009)(451199024)(1800799012)(64100799003)(38100700002)(41320700001)(2906002)(30864003)(5660300002)(15650500001)(36756003)(40180700001)(8936002)(8676002)(86362001)(41300700001)(38350700005)(83380400001)(26005)(40160700002)(508600001)(66476007)(110136005)(66946007)(66556008)(1076003)(2616005)(6666004)(52116002);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: 
YHW9Qnjn20oIZTwfnoxAKyfPj3fPPaT9jrdsmbghvQ45NNuzoJZkMyoKS0dS2Hg/9W4LgbbgPEkyvA3+NULusuIsF/B0ksVNCJZt9gQM8bAo78GPA22v6o6TA0M5D0tz/vXf3jr8hn/YVNxpXQReS8+GK/7ytap82kQBWRSzfPQIPio9OsLXz5glQGhlKaKaQKKFSNtgJ9zjhO3kPDYaZu0QkzeWg1+ffU2vF9LcTh3NJKvGjzZYTJKcsQJhFNlPFrc9pTg0JrCZSU+kZcjZY1Z00E878y44k544/IhaXph9vYhblWAS82H14L4QvTfTWblFbhvAPn+0NKdcrt+5f0LDO6BgCs7Dlw8Pf/ptNpy5YNwR6LrMOTyT/hjyeJFAnVuia3B0LG65UAFO4nqnsPJlS4YKCNss/F7K74qLj0/CjbXRwjC1iutHiLucBTRuMjn6LArrr4tbX67cyFz0jvynlbSR1j4i1WtsFMj48nyStjiCaPC8QZYRDYIW6Z3/zK4lXC1/VlagokmJW88bmwwniEk/o0R5z21sxnmNXcWnaxfrn1uvzNuSio7xZktuCO10P2rLyrfh2zM1932DxCMf/Zn7796WbQ9Ur0rFzXiEidsT1Jdx+Ug5IRMAhBoMuw8pL4yLsr0LAeaMA/W+sL+Qh1e1xJbkRKHNBTQxQz1hIQYqwnnwFynQU+NGYd6lGOpkhms7jWWoQ4OCHSDMfAtrLWaTNIv1uE/UiF1g3wPKk6xz+TGO0onoMBhOVerQmlD/yGBcAdw87LKUo6ItyJPFvvlWVBHt6r84RKicLsHPQ0xf5rR2cszdJBkSGVm2vc8MyuqD3boZc3naWsSG1Y/oJjkE1M5wIfBkr5osfRsOw4hHrpLa1G1HQY9yE3gRwsDZQ4oQ+Ra52yiCzi7gIRxBjCIzLsZLt9atDStU3S0TIZ+ie2ML2U6Xt0n/rem/JfpV1rVo5pDwnCMrIEwSYPQ8RaWrrnLCbrdMvlHlxPa2fNAJ99HbmlQ0DHV7OOuI7ZoobJu6wjzMJXiwMUdh7rPppKF/487WHiqdRqMmj8MbTglzUdWt4IUA7mM9SAhrzAlIdvkshhmvBtMsYk14C/Kjec+KVU8YdwEG6ag8ZNNmnR3PuwYQqDMEe8LF/NyzI1mnstwqEZ4xSMFLJjseBxIe8HLqD55uOZDyJcR/lVpUYFOruol+0OAf3kcrzd0X/IXEfwoTkKsgBTKCwVAc36QTQQmnfK3Rt524pWFMPVPEHUi+fwc2FsDhkZRggJ9cCJv9BpsdymAuAYQy9mzWVd2FQxU9l7wqEzcKVAkCo0YWpSxNfyzpEoYEjErBnNaZCxHRHrAPLVcD82WKWKdfeKI84B+6IOUUrOi6hxXVnVrQ0bP2v+jIok1Qc6thR9PhLmdSvCmW+82T2paMZkqWbqLnCB10izG/XmBtRd6VFpuFNSAgl3aD0HZ4helS9A9WGiQaFPfAbhsikGl8GLWwduNH8YZECwrjDLCcmeF3r0ZONuB0+6lNUdtuHhAiUNTlxyBq1LV3IpaNYdof6V9wARqk4d8u+hg/TjeSMM61CUGMzM/BNe5OnTKNGw8Wf31kbb+vIYC+w245UbvH3FJFvw== X-OriginatorOrg: starfivetech.com X-MS-Exchange-CrossTenant-Network-Message-Id: 95e27ee0-7403-4b1f-a7da-08dc1671c671 X-MS-Exchange-CrossTenant-AuthSource: SHXPR01MB0670.CHNPR01.prod.partner.outlook.cn X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jan 2024 09:01:49.8064 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 06fe3fa3-1221-43d3-861b-5a4ee687a85c X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: kJ7Oc9YPbfFgujxjd7qvhAsHLbfJEpX8BTuqww+2v20Lzaaxvdyl0aYMsjw7D0ipefD8EUcbFrJFCjdYf8jS2Eh7lrH00kwn25WMTb00RyM= X-MS-Exchange-Transport-CrossTenantHeadersStamped: SHXPR01MB0640 Current hash uses sw fallback for non-word aligned input scatterlists. Add support for unaligned cases utilizing the data valid mask for dma. 
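Illustrative aside, not part of the patch: the change below moves from an
interrupt/tasklet-driven flow to dmaengine transfers that are awaited with a
completion. A minimal sketch of that generic dmaengine pattern, with placeholder
names (example_*), is shown here; it is a simplification of what the patch does,
not the driver code itself.

#include <linux/completion.h>
#include <linux/dmaengine.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/scatterlist.h>

static void example_dma_done(void *arg)
{
        complete(arg);          /* wake the waiter below */
}

/* Submit one scatterlist entry to a TX slave channel and wait for it. */
static int example_dma_xfer_one(struct dma_chan *chan, struct scatterlist *sg,
                                struct completion *done)
{
        struct dma_async_tx_descriptor *desc;

        desc = dmaengine_prep_slave_sg(chan, sg, 1, DMA_MEM_TO_DEV,
                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!desc)
                return -EINVAL;

        reinit_completion(done);
        desc->callback = example_dma_done;
        desc->callback_param = done;

        dmaengine_submit(desc);
        dma_async_issue_pending(chan);

        /* bounded wait so a stuck transfer cannot hang the engine thread */
        if (!wait_for_completion_timeout(done, msecs_to_jiffies(1000)))
                return -ETIMEDOUT;

        return 0;
}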
Signed-off-by: Jia Jie Ho
---
 drivers/crypto/starfive/jh7110-cryp.h |   1 +
 drivers/crypto/starfive/jh7110-hash.c | 257 ++++++++++----------
 2 files changed, 100 insertions(+), 158 deletions(-)

diff --git a/drivers/crypto/starfive/jh7110-cryp.h b/drivers/crypto/starfive/jh7110-cryp.h
index 6cdf6db5d904..4940cd1a3fbb 100644
--- a/drivers/crypto/starfive/jh7110-cryp.h
+++ b/drivers/crypto/starfive/jh7110-cryp.h
@@ -190,6 +190,7 @@ struct starfive_cryp_dev {
 	struct crypto_engine *engine;
 	struct tasklet_struct aes_done;
 	struct tasklet_struct hash_done;
+	struct completion dma_done;
 	size_t assoclen;
 	size_t total_in;
 	size_t total_out;
diff --git a/drivers/crypto/starfive/jh7110-hash.c b/drivers/crypto/starfive/jh7110-hash.c
index b6d1808012ca..74e151b5f875 100644
--- a/drivers/crypto/starfive/jh7110-hash.c
+++ b/drivers/crypto/starfive/jh7110-hash.c
@@ -86,62 +86,31 @@ static int starfive_hash_hmac_key(struct starfive_cryp_ctx *ctx)
 
 static void starfive_hash_start(void *param)
 {
-	struct starfive_cryp_ctx *ctx = param;
-	struct starfive_cryp_request_ctx *rctx = ctx->rctx;
-	struct starfive_cryp_dev *cryp = ctx->cryp;
-	union starfive_alg_cr alg_cr;
+	struct starfive_cryp_dev *cryp = param;
 	union starfive_hash_csr csr;
-	u32 stat;
-
-	dma_unmap_sg(cryp->dev, rctx->in_sg, rctx->in_sg_len, DMA_TO_DEVICE);
-
-	alg_cr.v = 0;
-	alg_cr.clear = 1;
-
-	writel(alg_cr.v, cryp->base + STARFIVE_ALG_CR_OFFSET);
+	u32 mask;
 
 	csr.v = readl(cryp->base + STARFIVE_HASH_SHACSR);
 	csr.firstb = 0;
 	csr.final = 1;
-
-	stat = readl(cryp->base + STARFIVE_IE_MASK_OFFSET);
-	stat &= ~STARFIVE_IE_MASK_HASH_DONE;
-	writel(stat, cryp->base + STARFIVE_IE_MASK_OFFSET);
+	csr.ie = 1;
 	writel(csr.v, cryp->base + STARFIVE_HASH_SHACSR);
+
+	mask = readl(cryp->base + STARFIVE_IE_MASK_OFFSET);
+	mask &= ~STARFIVE_IE_MASK_HASH_DONE;
+	writel(mask, cryp->base + STARFIVE_IE_MASK_OFFSET);
 }
 
-static int starfive_hash_xmit_dma(struct starfive_cryp_ctx *ctx)
+static void starfive_hash_dma_callback(void *param)
 {
-	struct starfive_cryp_request_ctx *rctx = ctx->rctx;
-	struct starfive_cryp_dev *cryp = ctx->cryp;
-	struct dma_async_tx_descriptor *in_desc;
-	union starfive_alg_cr alg_cr;
-	int total_len;
-	int ret;
-
-	if (!rctx->total) {
-		starfive_hash_start(ctx);
-		return 0;
-	}
+	struct starfive_cryp_dev *cryp = param;
 
-	writel(rctx->total, cryp->base + STARFIVE_DMA_IN_LEN_OFFSET);
-
-	total_len = rctx->total;
-	total_len = (total_len & 0x3) ? (((total_len >> 2) + 1) << 2) : total_len;
-	sg_dma_len(rctx->in_sg) = total_len;
-
-	alg_cr.v = 0;
-	alg_cr.start = 1;
-	alg_cr.hash_dma_en = 1;
-
-	writel(alg_cr.v, cryp->base + STARFIVE_ALG_CR_OFFSET);
-
-	ret = dma_map_sg(cryp->dev, rctx->in_sg, rctx->in_sg_len, DMA_TO_DEVICE);
-	if (!ret)
-		return dev_err_probe(cryp->dev, -EINVAL, "dma_map_sg() error\n");
+	complete(&cryp->dma_done);
+}
 
-	cryp->cfg_in.direction = DMA_MEM_TO_DEV;
-	cryp->cfg_in.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+static void starfive_hash_dma_init(struct starfive_cryp_dev *cryp)
+{
+	cryp->cfg_in.src_addr_width = DMA_SLAVE_BUSWIDTH_16_BYTES;
 	cryp->cfg_in.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
 	cryp->cfg_in.src_maxburst = cryp->dma_maxburst;
 	cryp->cfg_in.dst_maxburst = cryp->dma_maxburst;
@@ -149,50 +118,48 @@ static int starfive_hash_xmit_dma(struct starfive_cryp_ctx *ctx)
 
 	dmaengine_slave_config(cryp->tx, &cryp->cfg_in);
 
-	in_desc = dmaengine_prep_slave_sg(cryp->tx, rctx->in_sg,
-					  ret, DMA_MEM_TO_DEV,
-					  DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-
-	if (!in_desc)
-		return -EINVAL;
-
-	in_desc->callback = starfive_hash_start;
-	in_desc->callback_param = ctx;
-
-	dmaengine_submit(in_desc);
-	dma_async_issue_pending(cryp->tx);
-
-	return 0;
+	init_completion(&cryp->dma_done);
 }
 
-static int starfive_hash_xmit(struct starfive_cryp_ctx *ctx)
+static int starfive_hash_dma_xfer(struct starfive_cryp_dev *cryp,
+				  struct scatterlist *sg)
 {
-	struct starfive_cryp_request_ctx *rctx = ctx->rctx;
-	struct starfive_cryp_dev *cryp = ctx->cryp;
+	struct dma_async_tx_descriptor *in_desc;
+	union starfive_alg_cr alg_cr;
 	int ret = 0;
 
-	rctx->csr.hash.v = 0;
-	rctx->csr.hash.reset = 1;
-	writel(rctx->csr.hash.v, cryp->base + STARFIVE_HASH_SHACSR);
-
-	if (starfive_hash_wait_busy(ctx))
-		return dev_err_probe(cryp->dev, -ETIMEDOUT, "Error resetting engine.\n");
+	alg_cr.v = 0;
+	alg_cr.start = 1;
+	alg_cr.hash_dma_en = 1;
+	writel(alg_cr.v, cryp->base + STARFIVE_ALG_CR_OFFSET);
 
-	rctx->csr.hash.v = 0;
-	rctx->csr.hash.mode = ctx->hash_mode;
-	rctx->csr.hash.ie = 1;
+	writel(sg_dma_len(sg), cryp->base + STARFIVE_DMA_IN_LEN_OFFSET);
+	sg_dma_len(sg) = ALIGN(sg_dma_len(sg), sizeof(u32));
 
-	if (ctx->is_hmac) {
-		ret = starfive_hash_hmac_key(ctx);
-		if (ret)
-			return ret;
-	} else {
-		rctx->csr.hash.start = 1;
-		rctx->csr.hash.firstb = 1;
-		writel(rctx->csr.hash.v, cryp->base + STARFIVE_HASH_SHACSR);
+	in_desc = dmaengine_prep_slave_sg(cryp->tx, sg, 1, DMA_MEM_TO_DEV,
					  DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!in_desc) {
+		ret = -EINVAL;
+		goto end;
 	}
 
-	return starfive_hash_xmit_dma(ctx);
+	reinit_completion(&cryp->dma_done);
+	in_desc->callback = starfive_hash_dma_callback;
+	in_desc->callback_param = cryp;
+
+	dmaengine_submit(in_desc);
+	dma_async_issue_pending(cryp->tx);
+
+	if (!wait_for_completion_timeout(&cryp->dma_done,
					 msecs_to_jiffies(1000)))
+		ret = -ETIMEDOUT;
+
+end:
+	alg_cr.v = 0;
+	alg_cr.clear = 1;
+	writel(alg_cr.v, cryp->base + STARFIVE_ALG_CR_OFFSET);
+
+	return ret;
 }
 
 static int starfive_hash_copy_hash(struct ahash_request *req)
@@ -229,44 +196,56 @@ void starfive_hash_done_task(unsigned long param)
 	crypto_finalize_hash_request(cryp->engine, cryp->req.hreq, err);
 }
 
-static int starfive_hash_check_aligned(struct scatterlist *sg, size_t total, size_t align)
+static int starfive_hash_one_request(struct crypto_engine *engine, void *areq)
 {
-	int len = 0;
-
-	if (!total)
-		return 0;
+	struct ahash_request *req = container_of(areq, struct ahash_request,
						 base);
+	struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+	struct starfive_cryp_request_ctx *rctx = ctx->rctx;
+	struct starfive_cryp_dev *cryp = ctx->cryp;
+	struct scatterlist *tsg;
+	int ret, src_nents, i;
 
-	if (!IS_ALIGNED(total, align))
-		return -EINVAL;
+	writel(STARFIVE_HASH_RESET, cryp->base + STARFIVE_HASH_SHACSR);
 
-	while (sg) {
-		if (!IS_ALIGNED(sg->offset, sizeof(u32)))
-			return -EINVAL;
+	if (starfive_hash_wait_busy(ctx))
+		return dev_err_probe(cryp->dev, -ETIMEDOUT, "Error resetting hardware.\n");
 
-		if (!IS_ALIGNED(sg->length, align))
-			return -EINVAL;
+	rctx->csr.hash.v = 0;
+	rctx->csr.hash.mode = ctx->hash_mode;
 
-		len += sg->length;
-		sg = sg_next(sg);
+	if (ctx->is_hmac) {
+		ret = starfive_hash_hmac_key(ctx);
+		if (ret)
+			return ret;
+	} else {
+		rctx->csr.hash.start = 1;
+		rctx->csr.hash.firstb = 1;
+		writel(rctx->csr.hash.v, cryp->base + STARFIVE_HASH_SHACSR);
 	}
 
-	if (len != total)
-		return -EINVAL;
+	/* No input message, get digest and end. */
+	if (!rctx->total)
+		goto hash_start;
 
-	return 0;
-}
+	starfive_hash_dma_init(cryp);
 
-static int starfive_hash_one_request(struct crypto_engine *engine, void *areq)
-{
-	struct ahash_request *req = container_of(areq, struct ahash_request,
						 base);
-	struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
-	struct starfive_cryp_dev *cryp = ctx->cryp;
+	for_each_sg(rctx->in_sg, tsg, rctx->in_sg_len, i) {
+		src_nents = dma_map_sg(cryp->dev, tsg, 1, DMA_TO_DEVICE);
+		if (src_nents == 0)
+			return dev_err_probe(cryp->dev, -ENOMEM,
					     "dma_map_sg error\n");
 
-	if (!cryp)
-		return -ENODEV;
+		ret = starfive_hash_dma_xfer(cryp, tsg);
+		dma_unmap_sg(cryp->dev, tsg, 1, DMA_TO_DEVICE);
+		if (ret)
+			return ret;
+	}
+
+hash_start:
+	starfive_hash_start(cryp);
 
-	return starfive_hash_xmit(ctx);
+	return 0;
 }
 
 static int starfive_hash_init(struct ahash_request *req)
@@ -337,22 +316,6 @@ static int starfive_hash_finup(struct ahash_request *req)
 	return crypto_ahash_finup(&rctx->ahash_fbk_req);
 }
 
-static int starfive_hash_digest_fb(struct ahash_request *req)
-{
-	struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(tfm);
-
-	ahash_request_set_tfm(&rctx->ahash_fbk_req, ctx->ahash_fbk);
-	ahash_request_set_callback(&rctx->ahash_fbk_req, req->base.flags,
				   req->base.complete, req->base.data);
-
-	ahash_request_set_crypt(&rctx->ahash_fbk_req, req->src,
				req->result, req->nbytes);
-
-	return crypto_ahash_digest(&rctx->ahash_fbk_req);
-}
-
 static int starfive_hash_digest(struct ahash_request *req)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
@@ -370,9 +333,6 @@ static int starfive_hash_digest(struct ahash_request *req)
 	rctx->in_sg_len = sg_nents_for_len(rctx->in_sg, rctx->total);
 
 	ctx->rctx = rctx;
 
-	if (starfive_hash_check_aligned(rctx->in_sg, rctx->total, rctx->blksize))
-		return starfive_hash_digest_fb(req);
-
 	return crypto_transfer_hash_request_to_engine(cryp->engine, req);
 }
 
@@ -406,7 +366,8 @@ static int starfive_hash_import(struct ahash_request *req, const void *in)
 
 static int starfive_hash_init_tfm(struct crypto_ahash *hash,
				  const char *alg_name,
-				  unsigned int mode)
+				  unsigned int mode,
+				  bool is_hmac)
 {
 	struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(hash);
 
@@ -426,7 +387,7 @@ static int starfive_hash_init_tfm(struct crypto_ahash *hash,
 	crypto_ahash_set_reqsize(hash, sizeof(struct starfive_cryp_request_ctx) +
				 crypto_ahash_reqsize(ctx->ahash_fbk));
 
-	ctx->keylen = 0;
+	ctx->is_hmac = is_hmac;
 	ctx->hash_mode = mode;
 
 	return 0;
@@ -529,81 +490,61 @@ static int starfive_hash_setkey(struct crypto_ahash *hash,
 static int starfive_sha224_init_tfm(struct crypto_ahash *hash)
 {
 	return starfive_hash_init_tfm(hash, "sha224-generic",
-				      STARFIVE_HASH_SHA224);
+				      STARFIVE_HASH_SHA224, 0);
 }
 
 static int starfive_sha256_init_tfm(struct crypto_ahash *hash)
 {
 	return starfive_hash_init_tfm(hash, "sha256-generic",
-				      STARFIVE_HASH_SHA256);
+				      STARFIVE_HASH_SHA256, 0);
 }
 
 static int starfive_sha384_init_tfm(struct crypto_ahash *hash)
 {
 	return starfive_hash_init_tfm(hash, "sha384-generic",
-				      STARFIVE_HASH_SHA384);
+				      STARFIVE_HASH_SHA384, 0);
 }
 
 static int starfive_sha512_init_tfm(struct crypto_ahash *hash)
 {
 	return starfive_hash_init_tfm(hash, "sha512-generic",
-				      STARFIVE_HASH_SHA512);
+				      STARFIVE_HASH_SHA512, 0);
 }
 
 static int starfive_sm3_init_tfm(struct crypto_ahash *hash)
 {
 	return starfive_hash_init_tfm(hash, "sm3-generic",
-				      STARFIVE_HASH_SM3);
+				      STARFIVE_HASH_SM3, 0);
 }
 
 static int starfive_hmac_sha224_init_tfm(struct crypto_ahash *hash)
 {
-	struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(hash);
-
-	ctx->is_hmac = true;
-
 	return starfive_hash_init_tfm(hash, "hmac(sha224-generic)",
-				      STARFIVE_HASH_SHA224);
+				      STARFIVE_HASH_SHA224, 1);
 }
 
 static int starfive_hmac_sha256_init_tfm(struct crypto_ahash *hash)
 {
-	struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(hash);
-
-	ctx->is_hmac = true;
-
 	return starfive_hash_init_tfm(hash, "hmac(sha256-generic)",
-				      STARFIVE_HASH_SHA256);
+				      STARFIVE_HASH_SHA256, 1);
 }
 
 static int starfive_hmac_sha384_init_tfm(struct crypto_ahash *hash)
 {
-	struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(hash);
-
-	ctx->is_hmac = true;
-
 	return starfive_hash_init_tfm(hash, "hmac(sha384-generic)",
-				      STARFIVE_HASH_SHA384);
+				      STARFIVE_HASH_SHA384, 1);
 }
 
 static int starfive_hmac_sha512_init_tfm(struct crypto_ahash *hash)
 {
-	struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(hash);
-
-	ctx->is_hmac = true;
-
 	return starfive_hash_init_tfm(hash, "hmac(sha512-generic)",
-				      STARFIVE_HASH_SHA512);
+				      STARFIVE_HASH_SHA512, 1);
 }
 
 static int starfive_hmac_sm3_init_tfm(struct crypto_ahash *hash)
 {
-	struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(hash);
-
-	ctx->is_hmac = true;
-
 	return starfive_hash_init_tfm(hash, "hmac(sm3-generic)",
-				      STARFIVE_HASH_SM3);
+				      STARFIVE_HASH_SM3, 1);
 }
 
 static struct ahash_engine_alg algs_sha2_sm3[] = {

From patchwork Tue Jan 16 09:01:33 2024
X-Patchwork-Submitter: Jia Jie Ho
X-Patchwork-Id: 13520598
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Jia Jie Ho
To: Herbert Xu, "David S. Miller", Rob Herring, Krzysztof Kozlowski,
 Conor Dooley, linux-crypto@vger.kernel.org, devicetree@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/5] crypto: starfive: Use dma for aes requests
Date: Tue, 16 Jan 2024 17:01:33 +0800
Message-Id: <20240116090135.75737-4-jiajie.ho@starfivetech.com>
In-Reply-To: <20240116090135.75737-1-jiajie.ho@starfivetech.com>
References: <20240116090135.75737-1-jiajie.ho@starfivetech.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Convert the AES module to use DMA for data transfers, reducing CPU load and
keeping it compatible with future variants. The reqsize is increased to
allocate memory for the skcipher fallback request.
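Illustrative aside, not part of the patch: the conversion below marks the algorithms
with CRYPTO_ALG_NEED_FALLBACK and hands requests the hardware cannot process to a
software skcipher allocated at init time. A minimal sketch of that standard fallback
pattern, with placeholder names (example_*), is shown here; it assumes the transform's
reqsize has been enlarged so the sub-request fits in the request context, as the patch
itself does.

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/types.h>

struct example_ctx {
        struct crypto_skcipher *fbk;    /* software fallback transform */
};

static int example_init_fallback(struct example_ctx *ctx, const char *alg_name)
{
        ctx->fbk = crypto_alloc_skcipher(alg_name, 0, CRYPTO_ALG_NEED_FALLBACK);
        return PTR_ERR_OR_ZERO(ctx->fbk);
}

static int example_do_fallback(struct example_ctx *ctx,
                               struct skcipher_request *req, bool enc)
{
        /* sub-request is assumed to live in the request context (reqsize) */
        struct skcipher_request *subreq = skcipher_request_ctx(req);

        skcipher_request_set_tfm(subreq, ctx->fbk);
        skcipher_request_set_callback(subreq, req->base.flags,
                                      req->base.complete, req->base.data);
        skcipher_request_set_crypt(subreq, req->src, req->dst,
                                   req->cryptlen, req->iv);

        return enc ? crypto_skcipher_encrypt(subreq) :
                     crypto_skcipher_decrypt(subreq);
}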
Signed-off-by: Jia Jie Ho
---
 drivers/crypto/starfive/Kconfig       |   4 +
 drivers/crypto/starfive/jh7110-aes.c  | 585 +++++++++++++++++---------
 drivers/crypto/starfive/jh7110-cryp.c |   9 -
 drivers/crypto/starfive/jh7110-cryp.h |   5 +-
 4 files changed, 393 insertions(+), 210 deletions(-)

diff --git a/drivers/crypto/starfive/Kconfig b/drivers/crypto/starfive/Kconfig
index cb59357b58b2..0fe389e9f932 100644
--- a/drivers/crypto/starfive/Kconfig
+++ b/drivers/crypto/starfive/Kconfig
@@ -14,6 +14,10 @@ config CRYPTO_DEV_JH7110
 	select CRYPTO_RSA
 	select CRYPTO_AES
 	select CRYPTO_CCM
+	select CRYPTO_GCM
+	select CRYPTO_ECB
+	select CRYPTO_CBC
+	select CRYPTO_CTR
 	help
 	  Support for StarFive JH7110 crypto hardware acceleration engine.
 	  This module provides acceleration for public key algo,
diff --git a/drivers/crypto/starfive/jh7110-aes.c b/drivers/crypto/starfive/jh7110-aes.c
index 1ac15cc4ef3c..a6e30e1a2b5d 100644
--- a/drivers/crypto/starfive/jh7110-aes.c
+++ b/drivers/crypto/starfive/jh7110-aes.c
@@ -78,7 +78,7 @@ static inline int is_gcm(struct starfive_cryp_dev *cryp)
 	return (cryp->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_GCM;
 }
 
-static inline int is_encrypt(struct starfive_cryp_dev *cryp)
+static inline bool is_encrypt(struct starfive_cryp_dev *cryp)
 {
 	return cryp->flags & FLG_ENCRYPT;
 }
@@ -103,16 +103,6 @@ static void starfive_aes_aead_hw_start(struct starfive_cryp_ctx *ctx, u32 hw_mod
 	}
 }
 
-static inline void starfive_aes_set_ivlen(struct starfive_cryp_ctx *ctx)
-{
-	struct starfive_cryp_dev *cryp = ctx->cryp;
-
-	if (is_gcm(cryp))
-		writel(GCM_AES_IV_SIZE, cryp->base + STARFIVE_AES_IVLEN);
-	else
-		writel(AES_BLOCK_SIZE, cryp->base + STARFIVE_AES_IVLEN);
-}
-
 static inline void starfive_aes_set_alen(struct starfive_cryp_ctx *ctx)
 {
 	struct starfive_cryp_dev *cryp = ctx->cryp;
@@ -261,7 +251,6 @@ static int starfive_aes_hw_init(struct starfive_cryp_ctx *ctx)
 	rctx->csr.aes.mode = hw_mode;
 	rctx->csr.aes.cmode = !is_encrypt(cryp);
-	rctx->csr.aes.ie = 1;
 	rctx->csr.aes.stmode = STARFIVE_AES_MODE_XFB_1;
 
 	if (cryp->side_chan) {
@@ -279,7 +268,7 @@ static int starfive_aes_hw_init(struct starfive_cryp_ctx *ctx)
 	case STARFIVE_AES_MODE_GCM:
 		starfive_aes_set_alen(ctx);
 		starfive_aes_set_mlen(ctx);
-		starfive_aes_set_ivlen(ctx);
+		writel(GCM_AES_IV_SIZE, cryp->base + STARFIVE_AES_IVLEN);
 		starfive_aes_aead_hw_start(ctx, hw_mode);
 		starfive_aes_write_iv(ctx, (void *)cryp->req.areq->iv);
 		break;
@@ -300,28 +289,30 @@ static int starfive_aes_hw_init(struct starfive_cryp_ctx *ctx)
 	return cryp->err;
 }
 
-static int starfive_aes_read_authtag(struct starfive_cryp_dev *cryp)
+static int starfive_aes_read_authtag(struct starfive_cryp_ctx *ctx)
 {
-	int i, start_addr;
+	struct starfive_cryp_dev *cryp = ctx->cryp;
+	struct starfive_cryp_request_ctx *rctx = ctx->rctx;
+	int i;
 
 	if (starfive_aes_wait_busy(cryp))
 		return dev_err_probe(cryp->dev, -ETIMEDOUT,
				     "Timeout waiting for tag generation.");
 
-	start_addr = STARFIVE_AES_NONCE0;
-
-	if (is_gcm(cryp))
-		for (i = 0; i < AES_BLOCK_32; i++, start_addr += 4)
-			cryp->tag_out[i] = readl(cryp->base + start_addr);
-	else
+	if ((cryp->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_GCM) {
+		cryp->tag_out[0] = readl(cryp->base + STARFIVE_AES_NONCE0);
+		cryp->tag_out[1] = readl(cryp->base + STARFIVE_AES_NONCE1);
+		cryp->tag_out[2] = readl(cryp->base + STARFIVE_AES_NONCE2);
+		cryp->tag_out[3] = readl(cryp->base + STARFIVE_AES_NONCE3);
+	} else {
 		for (i = 0; i < AES_BLOCK_32; i++)
 			cryp->tag_out[i] = readl(cryp->base + STARFIVE_AES_AESDIO0R);
+	}
 
 	if (is_encrypt(cryp)) {
-		scatterwalk_copychunks(cryp->tag_out, &cryp->out_walk, cryp->authsize, 1);
+		scatterwalk_map_and_copy(cryp->tag_out, rctx->out_sg,
					 cryp->total_in, cryp->authsize, 1);
 	} else {
-		scatterwalk_copychunks(cryp->tag_in, &cryp->in_walk, cryp->authsize, 0);
-
 		if (crypto_memneq(cryp->tag_in, cryp->tag_out, cryp->authsize))
 			return dev_err_probe(cryp->dev, -EBADMSG, "Failed tag verification\n");
 	}
@@ -329,23 +320,18 @@ static int starfive_aes_read_authtag(struct starfive_cryp_dev *cryp)
 	return 0;
 }
 
-static void starfive_aes_finish_req(struct starfive_cryp_dev *cryp)
+static void starfive_aes_finish_req(struct starfive_cryp_ctx *ctx)
 {
-	union starfive_aes_csr csr;
+	struct starfive_cryp_dev *cryp = ctx->cryp;
 	int err = cryp->err;
 
 	if (!err && cryp->authsize)
-		err = starfive_aes_read_authtag(cryp);
+		err = starfive_aes_read_authtag(ctx);
 
 	if (!err && ((cryp->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_CBC ||
		     (cryp->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_CTR))
 		starfive_aes_get_iv(cryp, (void *)cryp->req.sreq->iv);
 
-	/* reset irq flags*/
-	csr.v = 0;
-	csr.aesrst = 1;
-	writel(csr.v, cryp->base + STARFIVE_AES_CSR);
-
 	if (cryp->authsize)
 		crypto_finalize_aead_request(cryp->engine, cryp->req.areq, err);
 	else
@@ -353,39 +339,6 @@ static void starfive_aes_finish_req(struct starfive_cryp_dev *cryp)
		err);
 }
 
-void starfive_aes_done_task(unsigned long param)
-{
-	struct starfive_cryp_dev *cryp = (struct starfive_cryp_dev *)param;
-	u32 block[AES_BLOCK_32];
-	u32 stat;
-	int i;
-
-	for (i = 0; i < AES_BLOCK_32; i++)
-		block[i] = readl(cryp->base + STARFIVE_AES_AESDIO0R);
-
-	scatterwalk_copychunks(block, &cryp->out_walk, min_t(size_t, AES_BLOCK_SIZE,
							     cryp->total_out), 1);
-
-	cryp->total_out -= min_t(size_t, AES_BLOCK_SIZE, cryp->total_out);
-
-	if (!cryp->total_out) {
-		starfive_aes_finish_req(cryp);
-		return;
-	}
-
-	memset(block, 0, AES_BLOCK_SIZE);
-	scatterwalk_copychunks(block, &cryp->in_walk, min_t(size_t, AES_BLOCK_SIZE,
							    cryp->total_in), 0);
-	cryp->total_in -= min_t(size_t, AES_BLOCK_SIZE, cryp->total_in);
-
-	for (i = 0; i < AES_BLOCK_32; i++)
-		writel(block[i], cryp->base + STARFIVE_AES_AESDIO0R);
-
-	stat = readl(cryp->base + STARFIVE_IE_MASK_OFFSET);
-	stat &= ~STARFIVE_IE_MASK_AES_DONE;
-	writel(stat, cryp->base + STARFIVE_IE_MASK_OFFSET);
-}
-
 static int starfive_aes_gcm_write_adata(struct starfive_cryp_ctx *ctx)
 {
 	struct starfive_cryp_dev *cryp = ctx->cryp;
@@ -451,60 +404,165 @@ static int starfive_aes_ccm_write_adata(struct starfive_cryp_ctx *ctx)
 	return 0;
 }
 
-static int starfive_aes_prepare_req(struct skcipher_request *req,
				    struct aead_request *areq)
+static void starfive_aes_dma_done(void *param)
 {
-	struct starfive_cryp_ctx *ctx;
-	struct starfive_cryp_request_ctx *rctx;
-	struct starfive_cryp_dev *cryp;
+	struct starfive_cryp_dev *cryp = param;
 
-	if (!req && !areq)
-		return -EINVAL;
+	complete(&cryp->dma_done);
+}
 
-	ctx = req ? crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)) :
		    crypto_aead_ctx(crypto_aead_reqtfm(areq));
+static void starfive_aes_dma_init(struct starfive_cryp_dev *cryp)
+{
+	cryp->cfg_in.direction = DMA_MEM_TO_DEV;
+	cryp->cfg_in.src_addr_width = DMA_SLAVE_BUSWIDTH_16_BYTES;
+	cryp->cfg_in.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	cryp->cfg_in.src_maxburst = cryp->dma_maxburst;
+	cryp->cfg_in.dst_maxburst = cryp->dma_maxburst;
+	cryp->cfg_in.dst_addr = cryp->phys_base + STARFIVE_ALG_FIFO_OFFSET;
 
-	cryp = ctx->cryp;
-	rctx = req ? skcipher_request_ctx(req) : aead_request_ctx(areq);
+	dmaengine_slave_config(cryp->tx, &cryp->cfg_in);
 
-	if (req) {
-		cryp->req.sreq = req;
-		cryp->total_in = req->cryptlen;
-		cryp->total_out = req->cryptlen;
-		cryp->assoclen = 0;
-		cryp->authsize = 0;
-	} else {
-		cryp->req.areq = areq;
-		cryp->assoclen = areq->assoclen;
-		cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(areq));
-		if (is_encrypt(cryp)) {
-			cryp->total_in = areq->cryptlen;
-			cryp->total_out = areq->cryptlen;
-		} else {
-			cryp->total_in = areq->cryptlen - cryp->authsize;
-			cryp->total_out = cryp->total_in;
-		}
-	}
+	cryp->cfg_out.direction = DMA_DEV_TO_MEM;
+	cryp->cfg_out.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	cryp->cfg_out.dst_addr_width = DMA_SLAVE_BUSWIDTH_16_BYTES;
+	cryp->cfg_out.src_maxburst = 4;
+	cryp->cfg_out.dst_maxburst = 4;
+	cryp->cfg_out.src_addr = cryp->phys_base + STARFIVE_ALG_FIFO_OFFSET;
 
-	rctx->in_sg = req ? req->src : areq->src;
-	scatterwalk_start(&cryp->in_walk, rctx->in_sg);
+	dmaengine_slave_config(cryp->rx, &cryp->cfg_out);
 
-	rctx->out_sg = req ? req->dst : areq->dst;
-	scatterwalk_start(&cryp->out_walk, rctx->out_sg);
+	init_completion(&cryp->dma_done);
+}
 
-	if (cryp->assoclen) {
-		rctx->adata = kzalloc(cryp->assoclen + AES_BLOCK_SIZE, GFP_KERNEL);
-		if (!rctx->adata)
-			return dev_err_probe(cryp->dev, -ENOMEM,
					     "Failed to alloc memory for adata");
+static int starfive_aes_dma_xfer(struct starfive_cryp_dev *cryp,
				 struct scatterlist *src,
				 struct scatterlist *dst,
				 int len)
+{
+	struct dma_async_tx_descriptor *in_desc, *out_desc;
+	union starfive_alg_cr alg_cr;
+	int ret = 0, in_save, out_save;
+
+	alg_cr.v = 0;
+	alg_cr.start = 1;
+	alg_cr.aes_dma_en = 1;
+	writel(alg_cr.v, cryp->base + STARFIVE_ALG_CR_OFFSET);
 
-		scatterwalk_copychunks(rctx->adata, &cryp->in_walk, cryp->assoclen, 0);
-		scatterwalk_copychunks(NULL, &cryp->out_walk, cryp->assoclen, 2);
+	in_save = sg_dma_len(src);
+	out_save = sg_dma_len(dst);
+
+	writel(ALIGN(len, AES_BLOCK_SIZE), cryp->base + STARFIVE_DMA_IN_LEN_OFFSET);
+	writel(ALIGN(len, AES_BLOCK_SIZE), cryp->base + STARFIVE_DMA_OUT_LEN_OFFSET);
+
+	sg_dma_len(src) = ALIGN(len, AES_BLOCK_SIZE);
+	sg_dma_len(dst) = ALIGN(len, AES_BLOCK_SIZE);
+
+	out_desc = dmaengine_prep_slave_sg(cryp->rx, dst, 1, DMA_DEV_TO_MEM,
					   DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!out_desc) {
+		ret = -EINVAL;
+		goto dma_err;
+	}
 
-	ctx->rctx = rctx;
+	out_desc->callback = starfive_aes_dma_done;
+	out_desc->callback_param = cryp;
+
+	reinit_completion(&cryp->dma_done);
+	dmaengine_submit(out_desc);
+	dma_async_issue_pending(cryp->rx);
+
+	in_desc = dmaengine_prep_slave_sg(cryp->tx, src, 1, DMA_MEM_TO_DEV,
					  DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!in_desc) {
+		ret = -EINVAL;
+		goto dma_err;
+	}
+
+	dmaengine_submit(in_desc);
+	dma_async_issue_pending(cryp->tx);
+
+	if (!wait_for_completion_timeout(&cryp->dma_done,
					 msecs_to_jiffies(1000)))
+		ret = -ETIMEDOUT;
 
-	return starfive_aes_hw_init(ctx);
+dma_err:
+	sg_dma_len(src) = in_save;
+	sg_dma_len(dst) = out_save;
+
+	alg_cr.v = 0;
+	alg_cr.clear = 1;
+	writel(alg_cr.v, cryp->base + STARFIVE_ALG_CR_OFFSET);
+
+	return ret;
+}
+
+static int starfive_aes_map_sg(struct starfive_cryp_dev *cryp,
			       struct scatterlist *src,
			       struct scatterlist *dst)
+{
+	struct scatterlist *stsg, *dtsg;
+	struct scatterlist _src[2], _dst[2];
+	unsigned int remain = cryp->total_in;
+	unsigned int len, src_nents, dst_nents;
+	int ret;
+
+	if (src == dst) {
+		for (stsg = src, dtsg = dst; remain > 0;
		     stsg = sg_next(stsg), dtsg = sg_next(dtsg)) {
+			src_nents = dma_map_sg(cryp->dev, stsg, 1, DMA_BIDIRECTIONAL);
+			if (src_nents == 0)
+				return dev_err_probe(cryp->dev, -ENOMEM,
						     "dma_map_sg error\n");
+
+			dst_nents = src_nents;
+			len = min(sg_dma_len(stsg), remain);
+
+			ret = starfive_aes_dma_xfer(cryp, stsg, dtsg, len);
+			dma_unmap_sg(cryp->dev, stsg, 1, DMA_BIDIRECTIONAL);
+			if (ret)
+				return ret;
+
+			remain -= len;
+		}
+	} else {
+		for (stsg = src, dtsg = dst;;) {
+			src_nents = dma_map_sg(cryp->dev, stsg, 1, DMA_TO_DEVICE);
+			if (src_nents == 0)
+				return dev_err_probe(cryp->dev, -ENOMEM,
						     "dma_map_sg src error\n");
+
+			dst_nents = dma_map_sg(cryp->dev, dtsg, 1, DMA_FROM_DEVICE);
+			if (dst_nents == 0)
+				return dev_err_probe(cryp->dev, -ENOMEM,
						     "dma_map_sg dst error\n");
+
+			len = min(sg_dma_len(stsg), sg_dma_len(dtsg));
+			len = min(len, remain);
+
+			ret = starfive_aes_dma_xfer(cryp, stsg, dtsg, len);
+			dma_unmap_sg(cryp->dev, stsg, 1, DMA_TO_DEVICE);
+			dma_unmap_sg(cryp->dev, dtsg, 1, DMA_FROM_DEVICE);
+			if (ret)
+				return ret;
+
+			remain -= len;
+			if (remain == 0)
+				break;
+
+			if (sg_dma_len(stsg) - len) {
+				stsg = scatterwalk_ffwd(_src, stsg, len);
+				dtsg = sg_next(dtsg);
+			} else if (sg_dma_len(dtsg) - len) {
+				dtsg = scatterwalk_ffwd(_dst, dtsg, len);
+				stsg = sg_next(stsg);
+			} else {
+				stsg = sg_next(stsg);
+				dtsg = sg_next(dtsg);
+			}
+		}
+	}
+
+	return 0;
 }
 
 static int starfive_aes_do_one_req(struct crypto_engine *engine, void *areq)
@@ -513,35 +571,38 @@ static int starfive_aes_do_one_req(struct crypto_engine *engine, void *areq)
		container_of(areq, struct skcipher_request, base);
 	struct starfive_cryp_ctx *ctx =
		crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	struct starfive_cryp_request_ctx *rctx = skcipher_request_ctx(req);
 	struct starfive_cryp_dev *cryp = ctx->cryp;
-	u32 block[AES_BLOCK_32];
-	u32 stat;
-	int err;
-	int i;
+	int ret;
 
-	err = starfive_aes_prepare_req(req, NULL);
-	if (err)
-		return err;
+	cryp->req.sreq = req;
+	cryp->total_in = req->cryptlen;
+	cryp->total_out = req->cryptlen;
+	cryp->assoclen = 0;
+	cryp->authsize = 0;
 
-	/*
-	 * Write first plain/ciphertext block to start the module
-	 * then let irq tasklet handle the rest of the data blocks.
-	 */
-	scatterwalk_copychunks(block, &cryp->in_walk, min_t(size_t, AES_BLOCK_SIZE,
							    cryp->total_in), 0);
-	cryp->total_in -= min_t(size_t, AES_BLOCK_SIZE, cryp->total_in);
+	rctx->in_sg = req->src;
+	rctx->out_sg = req->dst;
+
+	ctx->rctx = rctx;
+
+	ret = starfive_aes_hw_init(ctx);
+	if (ret)
+		return ret;
 
-	for (i = 0; i < AES_BLOCK_32; i++)
-		writel(block[i], cryp->base + STARFIVE_AES_AESDIO0R);
+	starfive_aes_dma_init(cryp);
 
-	stat = readl(cryp->base + STARFIVE_IE_MASK_OFFSET);
-	stat &= ~STARFIVE_IE_MASK_AES_DONE;
-	writel(stat, cryp->base + STARFIVE_IE_MASK_OFFSET);
+	ret = starfive_aes_map_sg(cryp, rctx->in_sg, rctx->out_sg);
+	if (ret)
+		return ret;
+
+	starfive_aes_finish_req(ctx);
 
 	return 0;
 }
 
-static int starfive_aes_init_tfm(struct crypto_skcipher *tfm)
+static int starfive_aes_init_tfm(struct crypto_skcipher *tfm,
				 const char *alg_name)
 {
 	struct starfive_cryp_ctx *ctx = crypto_skcipher_ctx(tfm);
 
@@ -549,12 +610,26 @@ static int starfive_aes_init_tfm(struct crypto_skcipher *tfm)
 	if (!ctx->cryp)
 		return -ENODEV;
 
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct starfive_cryp_request_ctx) +
+	ctx->skcipher_fbk = crypto_alloc_skcipher(alg_name, 0,
						  CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(ctx->skcipher_fbk))
+		return dev_err_probe(ctx->cryp->dev, PTR_ERR(ctx->skcipher_fbk),
				     "%s() failed to allocate fallback for %s\n",
				     __func__, alg_name);
+
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct starfive_cryp_ctx) +
				    sizeof(struct skcipher_request));
 
 	return 0;
 }
 
+static void starfive_aes_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct starfive_cryp_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	crypto_free_skcipher(ctx->skcipher_fbk);
+}
+
 static int starfive_aes_aead_do_one_req(struct crypto_engine *engine, void *areq)
 {
 	struct aead_request *req =
@@ -562,76 +637,96 @@ static int starfive_aes_aead_do_one_req(struct crypto_engine *engine, void *areq
 	struct starfive_cryp_ctx *ctx =
		crypto_aead_ctx(crypto_aead_reqtfm(req));
 	struct starfive_cryp_dev *cryp = ctx->cryp;
-	struct starfive_cryp_request_ctx *rctx;
-	u32 block[AES_BLOCK_32];
-	u32 stat;
-	int err;
-	int i;
+	struct starfive_cryp_request_ctx *rctx = aead_request_ctx(req);
+	struct scatterlist _src[2], _dst[2];
+	int ret;
+
+	cryp->req.areq = req;
+	cryp->assoclen = req->assoclen;
+	cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(req));
+
+	rctx->in_sg = scatterwalk_ffwd(_src, req->src, cryp->assoclen);
+	if (req->src == req->dst)
+		rctx->out_sg = rctx->in_sg;
+	else
+		rctx->out_sg = scatterwalk_ffwd(_dst, req->dst, cryp->assoclen);
 
-	err = starfive_aes_prepare_req(NULL, req);
-	if (err)
-		return err;
+	if (is_encrypt(cryp)) {
+		cryp->total_in = req->cryptlen;
+		cryp->total_out = req->cryptlen;
+	} else {
+		cryp->total_in = req->cryptlen - cryp->authsize;
+		cryp->total_out = cryp->total_in;
+		scatterwalk_map_and_copy(cryp->tag_in, req->src,
					 cryp->total_in + cryp->assoclen,
					 cryp->authsize, 0);
+	}
 
-	rctx = ctx->rctx;
+	if (cryp->assoclen) {
+		rctx->adata = kzalloc(cryp->assoclen + AES_BLOCK_SIZE, GFP_KERNEL);
+		if (!rctx->adata)
+			return dev_err_probe(cryp->dev, -ENOMEM,
					     "Failed to alloc memory for adata");
+
+		if (sg_copy_to_buffer(req->src, sg_nents_for_len(req->src, cryp->assoclen),
				      rctx->adata, cryp->assoclen) != cryp->assoclen)
+			return -EINVAL;
+	}
+
+	if (cryp->total_in)
+		sg_zero_buffer(rctx->in_sg, sg_nents(rctx->in_sg),
			       sg_dma_len(rctx->in_sg) - cryp->total_in,
			       cryp->total_in);
+
+	ctx->rctx = rctx;
+
+	ret = starfive_aes_hw_init(ctx);
+	if (ret)
+		return ret;
 
 	if (!cryp->assoclen)
 		goto write_text;
 
 	if ((cryp->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_CCM)
-		cryp->err = starfive_aes_ccm_write_adata(ctx);
+		ret = starfive_aes_ccm_write_adata(ctx);
 	else
-		cryp->err = starfive_aes_gcm_write_adata(ctx);
+		ret = starfive_aes_gcm_write_adata(ctx);
 
 	kfree(rctx->adata);
 
-	if (cryp->err)
-		return cryp->err;
+	if (ret)
+		return ret;
 
 write_text:
 	if (!cryp->total_in)
 		goto finish_req;
 
-	/*
-	 * Write first plain/ciphertext block to start the module
-	 * then let irq tasklet handle the rest of the data blocks.
-	 */
-	scatterwalk_copychunks(block, &cryp->in_walk, min_t(size_t, AES_BLOCK_SIZE,
							    cryp->total_in), 0);
-	cryp->total_in -= min_t(size_t, AES_BLOCK_SIZE, cryp->total_in);
-
-	for (i = 0; i < AES_BLOCK_32; i++)
-		writel(block[i], cryp->base + STARFIVE_AES_AESDIO0R);
+	starfive_aes_dma_init(cryp);
 
-	stat = readl(cryp->base + STARFIVE_IE_MASK_OFFSET);
-	stat &= ~STARFIVE_IE_MASK_AES_DONE;
-	writel(stat, cryp->base + STARFIVE_IE_MASK_OFFSET);
-
-	return 0;
+	ret = starfive_aes_map_sg(cryp, rctx->in_sg, rctx->out_sg);
+	if (ret)
+		return ret;
 
 finish_req:
-	starfive_aes_finish_req(cryp);
+	starfive_aes_finish_req(ctx);
+
 	return 0;
 }
 
-static int starfive_aes_aead_init_tfm(struct crypto_aead *tfm)
+static int starfive_aes_aead_init_tfm(struct crypto_aead *tfm,
				      const char *alg_name)
 {
 	struct starfive_cryp_ctx *ctx = crypto_aead_ctx(tfm);
-	struct starfive_cryp_dev *cryp = ctx->cryp;
-	struct crypto_tfm *aead = crypto_aead_tfm(tfm);
-	struct crypto_alg *alg = aead->__crt_alg;
 
 	ctx->cryp = starfive_cryp_find_dev(ctx);
 	if (!ctx->cryp)
 		return -ENODEV;
 
-	if (alg->cra_flags & CRYPTO_ALG_NEED_FALLBACK) {
-		ctx->aead_fbk = crypto_alloc_aead(alg->cra_name, 0,
						  CRYPTO_ALG_NEED_FALLBACK);
-		if (IS_ERR(ctx->aead_fbk))
-			return dev_err_probe(cryp->dev, PTR_ERR(ctx->aead_fbk),
					     "%s() failed to allocate fallback for %s\n",
					     __func__, alg->cra_name);
-	}
+	ctx->aead_fbk = crypto_alloc_aead(alg_name, 0,
					  CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(ctx->aead_fbk))
+		return dev_err_probe(ctx->cryp->dev, PTR_ERR(ctx->aead_fbk),
				     "%s() failed to allocate fallback for %s\n",
				     __func__, alg_name);
 
 	crypto_aead_set_reqsize(tfm, sizeof(struct starfive_cryp_ctx) +
				sizeof(struct aead_request));
@@ -646,6 +741,44 @@ static void starfive_aes_aead_exit_tfm(struct crypto_aead *tfm)
 	crypto_free_aead(ctx->aead_fbk);
 }
 
+static bool starfive_aes_check_unaligned(struct starfive_cryp_dev *cryp,
					 struct scatterlist *src,
					 struct scatterlist *dst)
+{
+	struct scatterlist *tsg;
+	int i;
+
+	for_each_sg(src, tsg, sg_nents(src), i)
+		if (!IS_ALIGNED(tsg->length, AES_BLOCK_SIZE) &&
		    !sg_is_last(tsg))
+			return true;
+
+	if (src != dst)
+		for_each_sg(dst, tsg, sg_nents(dst), i)
+			if (!IS_ALIGNED(tsg->length, AES_BLOCK_SIZE) &&
			    !sg_is_last(tsg))
+				return true;
+
+	return false;
+}
+
+static int starfive_aes_do_fallback(struct skcipher_request *req, bool enc)
+{
+	struct starfive_cryp_ctx *ctx =
		crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	struct skcipher_request *subreq = skcipher_request_ctx(req);
+
+	skcipher_request_set_tfm(subreq, ctx->skcipher_fbk);
+	skcipher_request_set_callback(subreq, req->base.flags,
				      req->base.complete,
				      req->base.data);
+	skcipher_request_set_crypt(subreq, req->src, req->dst,
				   req->cryptlen, req->iv);
+
+	return enc ? crypto_skcipher_encrypt(subreq) :
		     crypto_skcipher_decrypt(subreq);
+}
+
 static int starfive_aes_crypt(struct skcipher_request *req, unsigned long flags)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -660,32 +793,54 @@ static int starfive_aes_crypt(struct skcipher_request *req, unsigned long flags)
 	if (req->cryptlen & blocksize_align)
 		return -EINVAL;
 
+	if (starfive_aes_check_unaligned(cryp, req->src, req->dst))
+		return starfive_aes_do_fallback(req, is_encrypt(cryp));
+
 	return crypto_transfer_skcipher_request_to_engine(cryp->engine, req);
 }
 
+static int starfive_aes_aead_do_fallback(struct aead_request *req, bool enc)
+{
+	struct starfive_cryp_ctx *ctx =
		crypto_aead_ctx(crypto_aead_reqtfm(req));
+	struct aead_request *subreq = aead_request_ctx(req);
+
+	aead_request_set_tfm(subreq, ctx->aead_fbk);
+	aead_request_set_callback(subreq, req->base.flags,
				  req->base.complete,
				  req->base.data);
+	aead_request_set_crypt(subreq, req->src, req->dst,
			       req->cryptlen, req->iv);
+	aead_request_set_ad(subreq, req->assoclen);
+
+	return enc ? crypto_aead_encrypt(subreq) :
		     crypto_aead_decrypt(subreq);
+}
+
 static int starfive_aes_aead_crypt(struct aead_request *req, unsigned long flags)
 {
 	struct starfive_cryp_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
 	struct starfive_cryp_dev *cryp = ctx->cryp;
+	struct scatterlist *src, *dst, _src[2], _dst[2];
 
 	cryp->flags = flags;
 
-	/*
-	 * HW engine could not perform CCM tag verification on
-	 * non-blocksize aligned text, use fallback algo instead
+	/* aes-ccm does not support tag verification for non-aligned text,
	 * use fallback for ccm decryption instead.
	 */
-	if (ctx->aead_fbk && !is_encrypt(cryp)) {
-		struct aead_request *subreq = aead_request_ctx(req);
+	if (((cryp->flags & FLG_MODE_MASK) == STARFIVE_AES_MODE_CCM) &&
	    !is_encrypt(cryp))
+		return starfive_aes_aead_do_fallback(req, 0);
 
-		aead_request_set_tfm(subreq, ctx->aead_fbk);
-		aead_request_set_callback(subreq, req->base.flags,
					  req->base.complete, req->base.data);
-		aead_request_set_crypt(subreq, req->src,
				       req->dst, req->cryptlen, req->iv);
-		aead_request_set_ad(subreq, req->assoclen);
+	src = scatterwalk_ffwd(_src, req->src, req->assoclen);
 
-		return crypto_aead_decrypt(subreq);
-	}
+	if (req->src == req->dst)
+		dst = src;
+	else
+		dst = scatterwalk_ffwd(_dst, req->dst, req->assoclen);
+
+	if (starfive_aes_check_unaligned(cryp, src, dst))
+		return starfive_aes_aead_do_fallback(req, is_encrypt(cryp));
 
 	return crypto_transfer_aead_request_to_engine(cryp->engine, req);
 }
@@ -706,7 +861,7 @@ static int starfive_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	memcpy(ctx->key, key, keylen);
 	ctx->keylen = keylen;
 
-	return 0;
+	return crypto_skcipher_setkey(ctx->skcipher_fbk, key, keylen);
 }
 
 static int starfive_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key,
@@ -725,16 +880,20 @@ static int starfive_aes_aead_setkey(struct crypto_aead *tfm, const u8 *key,
 	memcpy(ctx->key, key, keylen);
 	ctx->keylen = keylen;
 
-	if (ctx->aead_fbk)
-		return crypto_aead_setkey(ctx->aead_fbk, key, keylen);
-
-	return 0;
+	return crypto_aead_setkey(ctx->aead_fbk, key, keylen);
 }
 
 static int starfive_aes_gcm_setauthsize(struct crypto_aead *tfm,
					unsigned int authsize)
 {
-	return crypto_gcm_check_authsize(authsize);
+	struct starfive_cryp_ctx *ctx = crypto_aead_ctx(tfm);
+	int ret;
+
+	ret = crypto_gcm_check_authsize(authsize);
+	if (ret)
+		return ret;
+
+	return crypto_aead_setauthsize(ctx->aead_fbk, authsize);
 }
 
 static int starfive_aes_ccm_setauthsize(struct crypto_aead *tfm,
@@ -820,9 +979,35 @@ static int starfive_aes_ccm_decrypt(struct aead_request *req)
 	return starfive_aes_aead_crypt(req, STARFIVE_AES_MODE_CCM);
 }
 
+static int starfive_aes_ecb_init_tfm(struct crypto_skcipher *tfm)
+{
+	return starfive_aes_init_tfm(tfm, "ecb(aes-generic)");
+}
+
+static int starfive_aes_cbc_init_tfm(struct crypto_skcipher *tfm)
+{
+	return starfive_aes_init_tfm(tfm, "cbc(aes-generic)");
+}
+
+static int starfive_aes_ctr_init_tfm(struct crypto_skcipher *tfm)
+{
+	return starfive_aes_init_tfm(tfm, "ctr(aes-generic)");
+}
+
+static int starfive_aes_ccm_init_tfm(struct crypto_aead *tfm)
+{
+	return starfive_aes_aead_init_tfm(tfm, "ccm_base(ctr(aes-generic),cbcmac(aes-generic))");
+}
+
+static int starfive_aes_gcm_init_tfm(struct crypto_aead *tfm)
+{
+	return starfive_aes_aead_init_tfm(tfm, "gcm_base(ctr(aes-generic),ghash-generic)");
+}
+
 static struct skcipher_engine_alg skcipher_algs[] = {
 {
-	.base.init			= starfive_aes_init_tfm,
+	.base.init			= starfive_aes_ecb_init_tfm,
+	.base.exit			= starfive_aes_exit_tfm,
	.base.setkey			= starfive_aes_setkey,
	.base.encrypt			= starfive_aes_ecb_encrypt,
	.base.decrypt			= starfive_aes_ecb_decrypt,
@@ -832,7 +1017,8 @@ static struct skcipher_engine_alg skcipher_algs[] = {
		.cra_name		= "ecb(aes)",
		.cra_driver_name	= "starfive-ecb-aes",
		.cra_priority		= 200,
-		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_flags		= CRYPTO_ALG_ASYNC |
					  CRYPTO_ALG_NEED_FALLBACK,
		.cra_blocksize		= AES_BLOCK_SIZE,
		.cra_ctxsize		= sizeof(struct starfive_cryp_ctx),
		.cra_alignmask		= 0xf,
@@ -842,7 +1028,8 @@ static struct skcipher_engine_alg skcipher_algs[] = {
		.do_one_request = starfive_aes_do_one_req,
	},
 }, {
-	.base.init			= starfive_aes_init_tfm,
+	.base.init			= starfive_aes_cbc_init_tfm,
+	.base.exit			= starfive_aes_exit_tfm,
	.base.setkey			= starfive_aes_setkey,
	.base.encrypt			= starfive_aes_cbc_encrypt,
	.base.decrypt			= starfive_aes_cbc_decrypt,
@@ -853,7 +1040,8 @@ static struct skcipher_engine_alg skcipher_algs[] = {
		.cra_name		= "cbc(aes)",
		.cra_driver_name	= "starfive-cbc-aes",
		.cra_priority		= 200,
-		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_flags		= CRYPTO_ALG_ASYNC |
					  CRYPTO_ALG_NEED_FALLBACK,
		.cra_blocksize		= AES_BLOCK_SIZE,
		.cra_ctxsize		= sizeof(struct starfive_cryp_ctx),
		.cra_alignmask		= 0xf,
@@ -863,7 +1051,8 @@ static struct skcipher_engine_alg skcipher_algs[] = {
		.do_one_request = starfive_aes_do_one_req,
	},
 }, {
-	.base.init			= starfive_aes_init_tfm,
+	.base.init			= starfive_aes_ctr_init_tfm,
+	.base.exit			= starfive_aes_exit_tfm,
	.base.setkey			= starfive_aes_setkey,
	.base.encrypt			= starfive_aes_ctr_encrypt,
	.base.decrypt			= starfive_aes_ctr_decrypt,
@@ -874,7 +1063,8 @@ static struct skcipher_engine_alg skcipher_algs[] = {
		.cra_name		= "ctr(aes)",
		.cra_driver_name	= "starfive-ctr-aes",
		.cra_priority		= 200,
-		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_flags		= CRYPTO_ALG_ASYNC |
					  CRYPTO_ALG_NEED_FALLBACK,
		.cra_blocksize		= 1,
		.cra_ctxsize		= sizeof(struct starfive_cryp_ctx),
		.cra_alignmask		= 0xf,
@@ -892,7 +1082,7 @@ static struct aead_engine_alg aead_algs[] = {
	.base.setauthsize		= starfive_aes_gcm_setauthsize,
	.base.encrypt			= starfive_aes_gcm_encrypt,
	.base.decrypt			= starfive_aes_gcm_decrypt,
-	.base.init			= starfive_aes_aead_init_tfm,
+	.base.init			= starfive_aes_gcm_init_tfm,
	.base.exit			= starfive_aes_aead_exit_tfm,
	.base.ivsize			= GCM_AES_IV_SIZE,
	.base.maxauthsize		= AES_BLOCK_SIZE,
@@ -900,7 +1090,8 @@ static struct aead_engine_alg aead_algs[] = {
		.cra_name		= "gcm(aes)",
		.cra_driver_name	= "starfive-gcm-aes",
		.cra_priority		= 200,
-		.cra_flags		= CRYPTO_ALG_ASYNC,
+		.cra_flags		= CRYPTO_ALG_ASYNC |
					  CRYPTO_ALG_NEED_FALLBACK,
		.cra_blocksize		= 1,
		.cra_ctxsize		= sizeof(struct starfive_cryp_ctx),
		.cra_alignmask		= 0xf,
@@ -914,7 +1105,7 @@ static struct aead_engine_alg aead_algs[] = {
	.base.setauthsize		= starfive_aes_ccm_setauthsize,
	.base.encrypt			= starfive_aes_ccm_encrypt,
	.base.decrypt			= starfive_aes_ccm_decrypt,
-	.base.init			= starfive_aes_aead_init_tfm,
+	.base.init			= starfive_aes_ccm_init_tfm,
	.base.exit			= starfive_aes_aead_exit_tfm,
	.base.ivsize			= AES_BLOCK_SIZE,
	.base.maxauthsize		= AES_BLOCK_SIZE,
diff --git a/drivers/crypto/starfive/jh7110-cryp.c b/drivers/crypto/starfive/jh7110-cryp.c
index 425fddf3a8ab..fe33e87f25ab 100644
--- a/drivers/crypto/starfive/jh7110-cryp.c
+++ b/drivers/crypto/starfive/jh7110-cryp.c
@@ -97,12 +97,6 @@ static irqreturn_t starfive_cryp_irq(int irq, void *priv)
 	mask = readl(cryp->base + STARFIVE_IE_MASK_OFFSET);
 	status = readl(cryp->base + STARFIVE_IE_FLAG_OFFSET);
 
-	if (status & STARFIVE_IE_FLAG_AES_DONE) {
-		mask |= STARFIVE_IE_MASK_AES_DONE;
-		writel(mask, cryp->base + STARFIVE_IE_MASK_OFFSET);
-		tasklet_schedule(&cryp->aes_done);
-	}
-
 	if (status & STARFIVE_IE_FLAG_HASH_DONE) {
 		mask |= STARFIVE_IE_MASK_HASH_DONE;
 		writel(mask, cryp->base + STARFIVE_IE_MASK_OFFSET);
@@ -131,7 +125,6 @@ static int starfive_cryp_probe(struct platform_device *pdev)
		return dev_err_probe(&pdev->dev, PTR_ERR(cryp->base),
				     "Error remapping memory for platform device\n");
 
-	tasklet_init(&cryp->aes_done, starfive_aes_done_task, (unsigned long)cryp);
 	tasklet_init(&cryp->hash_done, starfive_hash_done_task, (unsigned long)cryp);
 
 	cryp->phys_base = res->start;
@@ -219,7 +212,6 @@ static int starfive_cryp_probe(struct platform_device *pdev)
 	clk_disable_unprepare(cryp->ahb);
 	reset_control_assert(cryp->rst);
 
-	tasklet_kill(&cryp->aes_done);
 	tasklet_kill(&cryp->hash_done);
 
 	return ret;
@@ -233,7 +225,6 @@ static void starfive_cryp_remove(struct platform_device *pdev)
 	starfive_hash_unregister_algs();
 	starfive_rsa_unregister_algs();
 
-	tasklet_kill(&cryp->aes_done);
 	tasklet_kill(&cryp->hash_done);
 
 	crypto_engine_stop(cryp->engine);
diff --git a/drivers/crypto/starfive/jh7110-cryp.h b/drivers/crypto/starfive/jh7110-cryp.h
index 4940cd1a3fbb..ade2da468bba 100644
--- a/drivers/crypto/starfive/jh7110-cryp.h
+++ b/drivers/crypto/starfive/jh7110-cryp.h
@@ -168,6 +168,7 @@ struct starfive_cryp_ctx {
 	struct crypto_akcipher *akcipher_fbk;
 	struct crypto_ahash *ahash_fbk;
 	struct crypto_aead *aead_fbk;
+	struct crypto_skcipher *skcipher_fbk;
 };
 
 struct starfive_cryp_dev {
@@ -185,10 +186,7 @@ struct starfive_cryp_dev {
 	struct dma_chan *rx;
 	struct dma_slave_config cfg_in;
 	struct dma_slave_config cfg_out;
-	struct scatter_walk in_walk;
-	struct scatter_walk out_walk;
 	struct crypto_engine *engine;
-	struct tasklet_struct aes_done;
 	struct tasklet_struct hash_done;
 	struct completion dma_done;
 	size_t assoclen;
@@ -239,5 +237,4 @@ int starfive_aes_register_algs(void);
 void starfive_aes_unregister_algs(void);
 
 void starfive_hash_done_task(unsigned long param);
-void starfive_aes_done_task(unsigned long param);
 
 #endif

From patchwork Tue Jan 16 09:01:34 2024
X-Patchwork-Submitter: Jia Jie Ho
X-Patchwork-Id: 13520596
X-Patchwork-Delegate: herbert@gondor.apana.org.au
smtp.subspace.kernel.org (Postfix)
From: Jia Jie Ho
To: Herbert Xu, "David S. Miller", Rob Herring, Krzysztof Kozlowski, Conor Dooley, linux-crypto@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/5] crypto: starfive: Add sm3 support for JH8100
Date: Tue, 16 Jan 2024 17:01:34 +0800
Message-Id: <20240116090135.75737-5-jiajie.ho@starfivetech.com>
In-Reply-To: <20240116090135.75737-1-jiajie.ho@starfivetech.com>
References: <20240116090135.75737-1-jiajie.ho@starfivetech.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Add driver support for SM3 hash/HMAC for JH8100 SoC. JH8100 contains a
separate SM algo engine and a new dedicated DMA that supports 64-bit
address access.

Signed-off-by: Jia Jie Ho
---
 drivers/crypto/starfive/Kconfig       |  25 +-
 drivers/crypto/starfive/Makefile      |   3 +
 drivers/crypto/starfive/jh7110-cryp.c |  48 ++-
 drivers/crypto/starfive/jh7110-cryp.h |  59 +++
 drivers/crypto/starfive/jh7110-hash.c |  20 +-
 drivers/crypto/starfive/jh8100-sm3.c  | 532 ++++++++++++++++++++++++++
 6 files changed, 677 insertions(+), 10 deletions(-)
 create mode 100644 drivers/crypto/starfive/jh8100-sm3.c

diff --git a/drivers/crypto/starfive/Kconfig b/drivers/crypto/starfive/Kconfig
index 0fe389e9f932..e6bf02d0ed1f 100644
--- a/drivers/crypto/starfive/Kconfig
+++ b/drivers/crypto/starfive/Kconfig
@@ -5,7 +5,7 @@ config CRYPTO_DEV_JH7110
 	tristate "StarFive JH7110 cryptographic engine driver"
 	depends on (SOC_STARFIVE && AMBA_PL08X) || COMPILE_TEST
-	depends on HAS_DMA
+	depends on HAS_DMA && !CRYPTO_DEV_JH8100
 	select CRYPTO_ENGINE
 	select CRYPTO_HMAC
 	select CRYPTO_SHA256
@@ -24,3 +24,26 @@ config CRYPTO_DEV_JH7110
 	  skciphers, AEAD and hash functions.

 	  If you choose 'M' here, this module will be called jh7110-crypto.
+ +config CRYPTO_DEV_JH8100 + tristate "StarFive JH8100 cryptographic engine drivers" + depends on (SOC_STARFIVE && DW_AXI_DMAC) || COMPILE_TEST + depends on HAS_DMA + select CRYPTO_ENGINE + select CRYPTO_HMAC + select CRYPTO_SHA256 + select CRYPTO_SHA512 + select CRYPTO_SM3_GENERIC + select CRYPTO_RSA + select CRYPTO_AES + select CRYPTO_CCM + select CRYPTO_GCM + select CRYPTO_CBC + select CRYPTO_ECB + select CRYPTO_CTR + help + Support for StarFive JH8100 crypto hardware acceleration engine. + This module provides additional support for SM2 signature verification, + SM3 hash/hmac functions and SM4 skcipher. + + If you choose 'M' here, this module will be called jh8100-crypto. diff --git a/drivers/crypto/starfive/Makefile b/drivers/crypto/starfive/Makefile index 8c137afe58ad..67717fca3f5d 100644 --- a/drivers/crypto/starfive/Makefile +++ b/drivers/crypto/starfive/Makefile @@ -2,3 +2,6 @@ obj-$(CONFIG_CRYPTO_DEV_JH7110) += jh7110-crypto.o jh7110-crypto-objs := jh7110-cryp.o jh7110-hash.o jh7110-rsa.o jh7110-aes.o + +obj-$(CONFIG_CRYPTO_DEV_JH8100) += jh8100-crypto.o +jh8100-crypto-objs := jh7110-cryp.o jh7110-hash.o jh7110-rsa.o jh7110-aes.o jh8100-sm3.o diff --git a/drivers/crypto/starfive/jh7110-cryp.c b/drivers/crypto/starfive/jh7110-cryp.c index fe33e87f25ab..fb7c19705fbf 100644 --- a/drivers/crypto/starfive/jh7110-cryp.c +++ b/drivers/crypto/starfive/jh7110-cryp.c @@ -106,6 +106,26 @@ static irqreturn_t starfive_cryp_irq(int irq, void *priv) return IRQ_HANDLED; } +#ifdef CONFIG_CRYPTO_DEV_JH8100 +static irqreturn_t starfive_cryp_irq1(int irq, void *priv) +{ + u32 status; + u32 mask; + struct starfive_cryp_dev *cryp = (struct starfive_cryp_dev *)priv; + + mask = readl(cryp->base + STARFIVE_SM_IE_MASK_OFFSET); + status = readl(cryp->base + STARFIVE_SM_IE_FLAG_OFFSET); + + if (status & STARFIVE_SM_IE_FLAG_SM3_DONE) { + mask |= STARFIVE_SM_IE_MASK_SM3_DONE; + writel(mask, cryp->base + STARFIVE_SM_IE_MASK_OFFSET); + tasklet_schedule(&cryp->sm3_done); + } + + return IRQ_HANDLED; +} +#endif + static int starfive_cryp_probe(struct platform_device *pdev) { struct starfive_cryp_dev *cryp; @@ -156,6 +176,16 @@ static int starfive_cryp_probe(struct platform_device *pdev) return dev_err_probe(&pdev->dev, ret, "Failed to register interrupt handler\n"); +#ifdef CONFIG_CRYPTO_DEV_JH8100 + tasklet_init(&cryp->sm3_done, starfive_sm3_done_task, (unsigned long)cryp); + + irq = platform_get_irq(pdev, 1); + if (irq < 0) + return irq; + + ret = devm_request_irq(&pdev->dev, irq, starfive_cryp_irq1, 0, + pdev->name, (void *)cryp); +#endif clk_prepare_enable(cryp->hclk); clk_prepare_enable(cryp->ahb); reset_control_deassert(cryp->rst); @@ -191,8 +221,17 @@ static int starfive_cryp_probe(struct platform_device *pdev) if (ret) goto err_algs_rsa; +#ifdef CONFIG_CRYPTO_DEV_JH8100 + ret = starfive_sm3_register_algs(); + if (ret) + goto err_algs_sm3; +#endif return 0; +#ifdef CONFIG_CRYPTO_DEV_JH8100 +err_algs_sm3: + starfive_rsa_unregister_algs(); +#endif err_algs_rsa: starfive_hash_unregister_algs(); err_algs_hash: @@ -213,7 +252,9 @@ static int starfive_cryp_probe(struct platform_device *pdev) reset_control_assert(cryp->rst); tasklet_kill(&cryp->hash_done); - +#ifdef CONFIG_CRYPTO_DEV_JH8100 + tasklet_kill(&cryp->sm3_done); +#endif return ret; } @@ -226,7 +267,10 @@ static void starfive_cryp_remove(struct platform_device *pdev) starfive_rsa_unregister_algs(); tasklet_kill(&cryp->hash_done); - +#ifdef CONFIG_CRYPTO_DEV_JH8100 + starfive_sm3_unregister_algs(); + tasklet_kill(&cryp->sm3_done); +#endif 
crypto_engine_stop(cryp->engine); crypto_engine_exit(cryp->engine); diff --git a/drivers/crypto/starfive/jh7110-cryp.h b/drivers/crypto/starfive/jh7110-cryp.h index ade2da468bba..a675ee4bc6cf 100644 --- a/drivers/crypto/starfive/jh7110-cryp.h +++ b/drivers/crypto/starfive/jh7110-cryp.h @@ -19,12 +19,22 @@ #define STARFIVE_DMA_IN_LEN_OFFSET 0x10 #define STARFIVE_DMA_OUT_LEN_OFFSET 0x14 +#define STARFIVE_SM_ALG_CR_OFFSET 0x4000 +#define STARFIVE_SM_IE_MASK_OFFSET (STARFIVE_SM_ALG_CR_OFFSET + 0x4) +#define STARFIVE_SM_IE_FLAG_OFFSET (STARFIVE_SM_ALG_CR_OFFSET + 0x8) +#define STARFIVE_SM_DMA_IN_LEN_OFFSET (STARFIVE_SM_ALG_CR_OFFSET + 0xc) +#define STARFIVE_SM_DMA_OUT_LEN_OFFSET (STARFIVE_SM_ALG_CR_OFFSET + 0x10) +#define STARFIVE_SM_ALG_FIFO_IN_OFFSET (STARFIVE_SM_ALG_CR_OFFSET + 0x20) +#define STARFIVE_SM_ALG_FIFO_OUT_OFFSET (STARFIVE_SM_ALG_CR_OFFSET + 0x28) + #define STARFIVE_IE_MASK_AES_DONE 0x1 #define STARFIVE_IE_MASK_HASH_DONE 0x4 #define STARFIVE_IE_MASK_PKA_DONE 0x8 #define STARFIVE_IE_FLAG_AES_DONE 0x1 #define STARFIVE_IE_FLAG_HASH_DONE 0x4 #define STARFIVE_IE_FLAG_PKA_DONE 0x8 +#define STARFIVE_SM_IE_MASK_SM3_DONE 0x2 +#define STARFIVE_SM_IE_FLAG_SM3_DONE 0x2 #define STARFIVE_MSG_BUFFER_SIZE SZ_16K #define MAX_KEY_SIZE SHA512_BLOCK_SIZE @@ -68,6 +78,20 @@ union starfive_aes_csr { }; }; +union starfive_sm_alg_cr { + u32 v; + struct { + u32 start :1; + u32 sm4_dma_en :1; + u32 sm3_dma_en :1; + u32 rsvd_0 :1; + u32 alg_done :1; + u32 rsvd_1 :3; + u32 clear :1; + u32 rsvd_2 :23; + }; +}; + union starfive_hash_csr { u32 v; struct { @@ -132,6 +156,32 @@ union starfive_pka_casr { }; }; +union starfive_sm3_csr { + u32 v; + struct { + u32 start :1; + u32 reset :1; + u32 ie :1; + u32 firstb :1; +#define STARFIVE_SM3_MODE 0x0 + u32 mode :3; + u32 rsvd_0 :1; + u32 final :1; + u32 rsvd_1 :2; +#define STARFIVE_SM3_HMAC_FLAGS 0x800 + u32 hmac :1; + u32 rsvd_2 :1; +#define STARFIVE_SM3_KEY_DONE BIT(13) + u32 key_done :1; + u32 key_flag :1; + u32 hmac_done :1; +#define STARFIVE_SM3_BUSY BIT(16) + u32 busy :1; + u32 hashdone :1; + u32 rsvd_3 :14; + }; +}; + struct starfive_rsa_key { u8 *n; u8 *e; @@ -188,6 +238,7 @@ struct starfive_cryp_dev { struct dma_slave_config cfg_out; struct crypto_engine *engine; struct tasklet_struct hash_done; + struct tasklet_struct sm3_done; struct completion dma_done; size_t assoclen; size_t total_in; @@ -211,6 +262,7 @@ struct starfive_cryp_request_ctx { union starfive_hash_csr hash; union starfive_pka_cacr pka; union starfive_aes_csr aes; + union starfive_sm3_csr sm3; } csr; struct scatterlist *in_sg; @@ -237,4 +289,11 @@ int starfive_aes_register_algs(void); void starfive_aes_unregister_algs(void); void starfive_hash_done_task(unsigned long param); + +#if IS_REACHABLE(CONFIG_CRYPTO_DEV_JH8100) +int starfive_sm3_register_algs(void); +void starfive_sm3_unregister_algs(void); + +void starfive_sm3_done_task(unsigned long param); +#endif #endif diff --git a/drivers/crypto/starfive/jh7110-hash.c b/drivers/crypto/starfive/jh7110-hash.c index 74e151b5f875..45cf82e64fb8 100644 --- a/drivers/crypto/starfive/jh7110-hash.c +++ b/drivers/crypto/starfive/jh7110-hash.c @@ -511,12 +511,6 @@ static int starfive_sha512_init_tfm(struct crypto_ahash *hash) STARFIVE_HASH_SHA512, 0); } -static int starfive_sm3_init_tfm(struct crypto_ahash *hash) -{ - return starfive_hash_init_tfm(hash, "sm3-generic", - STARFIVE_HASH_SM3, 0); -} - static int starfive_hmac_sha224_init_tfm(struct crypto_ahash *hash) { return starfive_hash_init_tfm(hash, "hmac(sha224-generic)", @@ -541,11 +535,19 @@ static int 
starfive_hmac_sha512_init_tfm(struct crypto_ahash *hash) STARFIVE_HASH_SHA512, 1); } +#ifndef CONFIG_CRYPTO_DEV_JH8100 +static int starfive_sm3_init_tfm(struct crypto_ahash *hash) +{ + return starfive_hash_init_tfm(hash, "sm3-generic", + STARFIVE_HASH_SM3, 0); +} + static int starfive_hmac_sm3_init_tfm(struct crypto_ahash *hash) { return starfive_hash_init_tfm(hash, "hmac(sm3-generic)", STARFIVE_HASH_SM3, 1); } +#endif static struct ahash_engine_alg algs_sha2_sm3[] = { { @@ -776,7 +778,10 @@ static struct ahash_engine_alg algs_sha2_sm3[] = { .op = { .do_one_request = starfive_hash_one_request, }, -}, { +}, + +#ifndef CONFIG_CRYPTO_DEV_JH8100 +{ .base.init = starfive_hash_init, .base.update = starfive_hash_update, .base.final = starfive_hash_final, @@ -834,6 +839,7 @@ static struct ahash_engine_alg algs_sha2_sm3[] = { .do_one_request = starfive_hash_one_request, }, }, +#endif }; int starfive_hash_register_algs(void) diff --git a/drivers/crypto/starfive/jh8100-sm3.c b/drivers/crypto/starfive/jh8100-sm3.c new file mode 100644 index 000000000000..7289c5fba0d8 --- /dev/null +++ b/drivers/crypto/starfive/jh8100-sm3.c @@ -0,0 +1,532 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * SM3 Hash function and HMAC support for StarFive driver + * + * Copyright (c) 2022 - 2023 StarFive Technology + * + */ + +#include +#include +#include +#include +#include "jh7110-cryp.h" +#include +#include +#include + +#define STARFIVE_SM3_REGS_OFFSET 0x4200 +#define STARFIVE_SM3_CSR (STARFIVE_SM3_REGS_OFFSET + 0x0) +#define STARFIVE_SM3_WDR (STARFIVE_SM3_REGS_OFFSET + 0x4) +#define STARFIVE_SM3_RDR (STARFIVE_SM3_REGS_OFFSET + 0x8) +#define STARFIVE_SM3_WSR (STARFIVE_SM3_REGS_OFFSET + 0xC) +#define STARFIVE_SM3_WLEN3 (STARFIVE_SM3_REGS_OFFSET + 0x10) +#define STARFIVE_SM3_WLEN2 (STARFIVE_SM3_REGS_OFFSET + 0x14) +#define STARFIVE_SM3_WLEN1 (STARFIVE_SM3_REGS_OFFSET + 0x18) +#define STARFIVE_SM3_WLEN0 (STARFIVE_SM3_REGS_OFFSET + 0x1C) +#define STARFIVE_SM3_WKR (STARFIVE_SM3_REGS_OFFSET + 0x20) +#define STARFIVE_SM3_WKLEN (STARFIVE_SM3_REGS_OFFSET + 0x24) + +#define STARFIVE_SM3_BUFLEN SHA512_BLOCK_SIZE +#define STARFIVE_SM3_RESET 0x2 + +static inline int starfive_sm3_wait_busy(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + u32 status; + + return readl_relaxed_poll_timeout(cryp->base + STARFIVE_SM3_CSR, status, + !(status & STARFIVE_SM3_BUSY), 10, 100000); +} + +static inline int starfive_sm3_wait_key_done(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + u32 status; + + return readl_relaxed_poll_timeout(cryp->base + STARFIVE_SM3_CSR, status, + (status & STARFIVE_SM3_KEY_DONE), 10, 100000); +} + +static int starfive_sm3_hmac_key(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_request_ctx *rctx = ctx->rctx; + struct starfive_cryp_dev *cryp = ctx->cryp; + int klen = ctx->keylen, loop; + unsigned int *key = (unsigned int *)ctx->key; + unsigned char *cl; + + writel(ctx->keylen, cryp->base + STARFIVE_SM3_WKLEN); + + rctx->csr.sm3.hmac = 1; + rctx->csr.sm3.key_flag = 1; + + writel(rctx->csr.sm3.v, cryp->base + STARFIVE_SM3_CSR); + + for (loop = 0; loop < klen / sizeof(unsigned int); loop++, key++) + writel(*key, cryp->base + STARFIVE_SM3_WKR); + + if (klen & 0x3) { + cl = (unsigned char *)key; + for (loop = 0; loop < (klen & 0x3); loop++, cl++) + writeb(*cl, cryp->base + STARFIVE_SM3_WKR); + } + + if (starfive_sm3_wait_key_done(ctx)) + return dev_err_probe(cryp->dev, -ETIMEDOUT, + "starfive_sm3_wait_key_done error\n"); + + return 0; +} + +static void 
starfive_sm3_start(struct starfive_cryp_dev *cryp) +{ + union starfive_sm3_csr csr; + u32 mask; + + csr.v = readl(cryp->base + STARFIVE_SM3_CSR); + csr.firstb = 0; + csr.final = 1; + csr.ie = 1; + writel(csr.v, cryp->base + STARFIVE_SM3_CSR); + + mask = readl(cryp->base + STARFIVE_SM_IE_MASK_OFFSET); + mask &= ~STARFIVE_SM_IE_MASK_SM3_DONE; + writel(mask, cryp->base + STARFIVE_SM_IE_MASK_OFFSET); +} + +static void starfive_sm3_dma_callback(void *param) +{ + struct starfive_cryp_dev *cryp = param; + + complete(&cryp->dma_done); +} + +static void starfive_sm3_dma_init(struct starfive_cryp_dev *cryp) +{ + cryp->cfg_in.direction = DMA_MEM_TO_DEV; + cryp->cfg_in.src_addr_width = DMA_SLAVE_BUSWIDTH_8_BYTES; + cryp->cfg_in.dst_addr_width = DMA_SLAVE_BUSWIDTH_8_BYTES; + cryp->cfg_in.src_maxburst = cryp->dma_maxburst; + cryp->cfg_in.dst_maxburst = cryp->dma_maxburst; + cryp->cfg_in.dst_addr = cryp->phys_base + STARFIVE_SM_ALG_FIFO_IN_OFFSET; + + dmaengine_slave_config(cryp->tx, &cryp->cfg_in); + + init_completion(&cryp->dma_done); +} + +static int starfive_sm3_dma_xfer(struct starfive_cryp_dev *cryp, + struct scatterlist *sg) +{ + struct dma_async_tx_descriptor *in_desc; + union starfive_sm_alg_cr alg_cr; + int ret = 0; + + alg_cr.v = 0; + alg_cr.start = 1; + alg_cr.sm3_dma_en = 1; + writel(alg_cr.v, cryp->base + STARFIVE_SM_ALG_CR_OFFSET); + + writel(sg_dma_len(sg), cryp->base + STARFIVE_SM_DMA_IN_LEN_OFFSET); + sg_dma_len(sg) = ALIGN(sg_dma_len(sg), sizeof(u32)); + + in_desc = dmaengine_prep_slave_sg(cryp->tx, sg, 1, DMA_MEM_TO_DEV, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!in_desc) { + ret = -EINVAL; + goto end; + } + + reinit_completion(&cryp->dma_done); + in_desc->callback = starfive_sm3_dma_callback; + in_desc->callback_param = cryp; + + dmaengine_submit(in_desc); + dma_async_issue_pending(cryp->tx); + + if (!wait_for_completion_timeout(&cryp->dma_done, + msecs_to_jiffies(1000))) + ret = -ETIMEDOUT; + +end: + alg_cr.v = 0; + alg_cr.clear = 1; + writel(alg_cr.v, cryp->base + STARFIVE_SM_ALG_CR_OFFSET); + + return ret; +} + +static int starfive_sm3_copy_hash(struct ahash_request *req) +{ + struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req); + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req)); + int count, *data; + int mlen; + + if (!req->result) + return 0; + + mlen = rctx->digsize / sizeof(u32); + data = (u32 *)req->result; + + for (count = 0; count < mlen; count++) + data[count] = readl(ctx->cryp->base + STARFIVE_SM3_RDR); + + return 0; +} + +void starfive_sm3_done_task(unsigned long param) +{ + struct starfive_cryp_dev *cryp = (struct starfive_cryp_dev *)param; + int err; + + err = starfive_sm3_copy_hash(cryp->req.hreq); + + /* Reset to clear hash_done in irq register*/ + writel(STARFIVE_SM3_RESET, cryp->base + STARFIVE_SM3_CSR); + + crypto_finalize_hash_request(cryp->engine, cryp->req.hreq, err); +} + +static int starfive_sm3_one_request(struct crypto_engine *engine, void *areq) +{ + struct ahash_request *req = + container_of(areq, struct ahash_request, base); + struct starfive_cryp_ctx *ctx = + crypto_ahash_ctx(crypto_ahash_reqtfm(req)); + struct starfive_cryp_dev *cryp = ctx->cryp; + struct starfive_cryp_request_ctx *rctx = ctx->rctx; + struct scatterlist *tsg; + int ret, src_nents, i; + + rctx->csr.sm3.v = 0; + rctx->csr.sm3.reset = 1; + + writel(rctx->csr.sm3.v, cryp->base + STARFIVE_SM3_CSR); + + if (starfive_sm3_wait_busy(ctx)) + return dev_err_probe(cryp->dev, -ETIMEDOUT, + "Error resetting engine.\n"); + + rctx->csr.sm3.v = 0; + 
rctx->csr.sm3.mode = ctx->hash_mode; + + if (ctx->is_hmac) { + ret = starfive_sm3_hmac_key(ctx); + if (ret) + return ret; + } else { + rctx->csr.sm3.start = 1; + rctx->csr.sm3.firstb = 1; + writel(rctx->csr.sm3.v, cryp->base + STARFIVE_SM3_CSR); + } + + /* No input message, get digest and end. */ + if (!rctx->total) + goto hash_start; + + starfive_sm3_dma_init(cryp); + + for_each_sg(rctx->in_sg, tsg, rctx->in_sg_len, i) { + src_nents = dma_map_sg(cryp->dev, tsg, 1, DMA_TO_DEVICE); + if (src_nents == 0) + return dev_err_probe(cryp->dev, -ENOMEM, + "dma_map_sg error\n"); + + ret = starfive_sm3_dma_xfer(cryp, tsg); + dma_unmap_sg(cryp->dev, tsg, 1, DMA_TO_DEVICE); + if (ret) + return ret; + } + +hash_start: + starfive_sm3_start(cryp); + + return 0; +} + +static void starfive_sm3_set_ahash(struct ahash_request *req, + struct starfive_cryp_ctx *ctx, + struct starfive_cryp_request_ctx *rctx) +{ + ahash_request_set_tfm(&rctx->ahash_fbk_req, ctx->ahash_fbk); + ahash_request_set_callback(&rctx->ahash_fbk_req, + req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP, + req->base.complete, req->base.data); + ahash_request_set_crypt(&rctx->ahash_fbk_req, req->src, + req->result, req->nbytes); +} + +static int starfive_sm3_init(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req); + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(tfm); + + starfive_sm3_set_ahash(req, ctx, rctx); + + return crypto_ahash_init(&rctx->ahash_fbk_req); +} + +static int starfive_sm3_update(struct ahash_request *req) +{ + struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(tfm); + + starfive_sm3_set_ahash(req, ctx, rctx); + + return crypto_ahash_update(&rctx->ahash_fbk_req); +} + +static int starfive_sm3_final(struct ahash_request *req) +{ + struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(tfm); + + starfive_sm3_set_ahash(req, ctx, rctx); + + return crypto_ahash_final(&rctx->ahash_fbk_req); +} + +static int starfive_sm3_finup(struct ahash_request *req) +{ + struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(tfm); + + starfive_sm3_set_ahash(req, ctx, rctx); + + return crypto_ahash_finup(&rctx->ahash_fbk_req); +} + +static int starfive_sm3_digest(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(tfm); + struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req); + struct starfive_cryp_dev *cryp = ctx->cryp; + + memset(rctx, 0, sizeof(struct starfive_cryp_request_ctx)); + + cryp->req.hreq = req; + rctx->total = req->nbytes; + rctx->in_sg = req->src; + rctx->blksize = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); + rctx->digsize = crypto_ahash_digestsize(tfm); + rctx->in_sg_len = sg_nents_for_len(rctx->in_sg, rctx->total); + ctx->rctx = rctx; + + return crypto_transfer_hash_request_to_engine(cryp->engine, req); +} + +static int starfive_sm3_export(struct ahash_request *req, void *out) +{ + struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(tfm); + + 
ahash_request_set_tfm(&rctx->ahash_fbk_req, ctx->ahash_fbk); + ahash_request_set_callback(&rctx->ahash_fbk_req, + req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP, + req->base.complete, req->base.data); + + return crypto_ahash_export(&rctx->ahash_fbk_req, out); +} + +static int starfive_sm3_import(struct ahash_request *req, const void *in) +{ + struct starfive_cryp_request_ctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(tfm); + + ahash_request_set_tfm(&rctx->ahash_fbk_req, ctx->ahash_fbk); + ahash_request_set_callback(&rctx->ahash_fbk_req, + req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP, + req->base.complete, req->base.data); + + return crypto_ahash_import(&rctx->ahash_fbk_req, in); +} + +static int starfive_sm3_init_algo(struct crypto_ahash *hash, + const char *alg_name, + bool is_hmac) +{ + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(hash); + + ctx->cryp = starfive_cryp_find_dev(ctx); + if (!ctx->cryp) + return -ENODEV; + + ctx->ahash_fbk = crypto_alloc_ahash(alg_name, 0, + CRYPTO_ALG_NEED_FALLBACK); + + if (IS_ERR(ctx->ahash_fbk)) + return dev_err_probe(ctx->cryp->dev, PTR_ERR(ctx->ahash_fbk), + "starfive-sm3: Could not load fallback driver.\n"); + + crypto_ahash_set_statesize(hash, crypto_ahash_statesize(ctx->ahash_fbk)); + crypto_ahash_set_reqsize(hash, sizeof(struct starfive_cryp_request_ctx) + + crypto_ahash_reqsize(ctx->ahash_fbk)); + + ctx->keylen = 0; + ctx->hash_mode = STARFIVE_SM3_MODE; + ctx->is_hmac = is_hmac; + + return 0; +} + +static void starfive_sm3_exit_tfm(struct crypto_ahash *hash) +{ + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(hash); + + crypto_free_ahash(ctx->ahash_fbk); +} + +static int starfive_sm3_long_setkey(struct starfive_cryp_ctx *ctx, + const u8 *key, unsigned int keylen) +{ + struct crypto_wait wait; + struct ahash_request *req; + struct scatterlist sg; + struct crypto_ahash *ahash_tfm; + struct starfive_cryp_dev *cryp = ctx->cryp; + u8 *buf; + int ret; + + ahash_tfm = crypto_alloc_ahash("sm3-starfive", 0, 0); + if (IS_ERR(ahash_tfm)) + return PTR_ERR(ahash_tfm); + + req = ahash_request_alloc(ahash_tfm, GFP_KERNEL); + if (!req) { + ret = -ENOMEM; + goto err_free_ahash; + } + + crypto_init_wait(&wait); + ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, + crypto_req_done, &wait); + crypto_ahash_clear_flags(ahash_tfm, ~0); + + buf = devm_kzalloc(cryp->dev, keylen + STARFIVE_SM3_BUFLEN, GFP_KERNEL); + if (!buf) { + ret = -ENOMEM; + goto err_free_req; + } + + memcpy(buf, key, keylen); + sg_init_one(&sg, buf, keylen); + ahash_request_set_crypt(req, &sg, ctx->key, keylen); + + ret = crypto_wait_req(crypto_ahash_digest(req), &wait); + +err_free_req: + ahash_request_free(req); +err_free_ahash: + crypto_free_ahash(ahash_tfm); + return ret; +} + +static int starfive_sm3_setkey(struct crypto_ahash *hash, + const u8 *key, unsigned int keylen) +{ + struct starfive_cryp_ctx *ctx = crypto_ahash_ctx(hash); + unsigned int digestsize = crypto_ahash_digestsize(hash); + unsigned int blocksize = crypto_ahash_blocksize(hash); + + crypto_ahash_setkey(ctx->ahash_fbk, key, keylen); + + if (keylen <= blocksize) { + memcpy(ctx->key, key, keylen); + ctx->keylen = keylen; + return 0; + } + + ctx->keylen = digestsize; + + return starfive_sm3_long_setkey(ctx, key, keylen); +} + +static int starfive_sm3_init_tfm(struct crypto_ahash *hash) +{ + return starfive_sm3_init_algo(hash, "sm3-generic", 0); +} + +static int starfive_hmac_sm3_init_tfm(struct crypto_ahash *hash) +{ + return 
starfive_sm3_init_algo(hash, "hmac(sm3-generic)", 1); +} + +static struct ahash_engine_alg algs_sm3[] = { +{ + .base.init = starfive_sm3_init, + .base.update = starfive_sm3_update, + .base.final = starfive_sm3_final, + .base.finup = starfive_sm3_finup, + .base.digest = starfive_sm3_digest, + .base.export = starfive_sm3_export, + .base.import = starfive_sm3_import, + .base.init_tfm = starfive_sm3_init_tfm, + .base.exit_tfm = starfive_sm3_exit_tfm, + .base.halg = { + .digestsize = SM3_DIGEST_SIZE, + .statesize = sizeof(struct sm3_state), + .base = { + .cra_name = "sm3", + .cra_driver_name = "sm3-starfive", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SM3_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_cryp_ctx), + .cra_module = THIS_MODULE, + } + }, + .op = { + .do_one_request = starfive_sm3_one_request, + }, +}, { + .base.init = starfive_sm3_init, + .base.update = starfive_sm3_update, + .base.final = starfive_sm3_final, + .base.finup = starfive_sm3_finup, + .base.digest = starfive_sm3_digest, + .base.export = starfive_sm3_export, + .base.import = starfive_sm3_import, + .base.init_tfm = starfive_hmac_sm3_init_tfm, + .base.exit_tfm = starfive_sm3_exit_tfm, + .base.setkey = starfive_sm3_setkey, + .base.halg = { + .digestsize = SM3_DIGEST_SIZE, + .statesize = sizeof(struct sm3_state), + .base = { + .cra_name = "hmac(sm3)", + .cra_driver_name = "sm3-hmac-starfive", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_ASYNC | + CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SM3_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct starfive_cryp_ctx), + .cra_module = THIS_MODULE, + } + }, + .op = { + .do_one_request = starfive_sm3_one_request, + }, +}, +}; + +int starfive_sm3_register_algs(void) +{ + return crypto_engine_register_ahashes(algs_sm3, ARRAY_SIZE(algs_sm3)); +} + +void starfive_sm3_unregister_algs(void) +{ + crypto_engine_unregister_ahashes(algs_sm3, ARRAY_SIZE(algs_sm3)); +} From patchwork Tue Jan 16 09:01:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jia Jie Ho X-Patchwork-Id: 13520602 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from CHN02-BJS-obe.outbound.protection.partner.outlook.cn (mail-bjschn02on2068.outbound.protection.partner.outlook.cn [139.219.17.68]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 750A412B72; Tue, 16 Jan 2024 09:36:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=starfivetech.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=starfivetech.com ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=gt0IUt7z+zVhymEUzhBMa2ikLHHuA2vpw15VNXW5oQa3RL185zslYAy/YgpwPSRv3jvmbRFPY1v/0EYgAf2buyKhjgCjPMxu6JVXRQrPsDSlIMoOzvXgXdTgPi6gxJVty5coHqFCNjPIZNqzDDhqaDqmYkRtK2NhP/iTjZMrwfAycUQnbXGnRZHwSYOgCXc7vPZm9frDWSryO5XyhySbQ1gD7F0oK6dGMu8GDkWMOoLpH2Ivxlc/qi+cA/XfdOddnA0UwjOwnEAjFQTzgN2y6IWNNDvIyn7z9qfeBY0fOM1LQufZZlC+jEsSeihXN8uYFDdtPytYRLn4FoYZgkCeSA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=c+nl1gw73pl5y0tqj8KwCieiKgy9VRVJupsXITisgpU=; 
From: Jia Jie Ho
To: Herbert Xu, "David S. Miller", Rob Herring, Krzysztof Kozlowski, Conor Dooley, linux-crypto@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/5] crypto: starfive: Add sm4 support for JH8100
Date: Tue, 16 Jan 2024 17:01:35 +0800
Message-Id: <20240116090135.75737-6-jiajie.ho@starfivetech.com>
In-Reply-To: <20240116090135.75737-1-jiajie.ho@starfivetech.com>
References: <20240116090135.75737-1-jiajie.ho@starfivetech.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Add driver support for sm4 skcipher and aead for StarFive JH8100 SoC.
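
The new SM4 modes are reached through the standard kernel crypto API rather
than any driver-private interface, so in-kernel SM4 users pick up the
hardware acceleration transparently once this module is loaded. The following
is a rough usage sketch only, not part of this patch: it assumes the driver
ends up registering a "cbc(sm4)" skcipher analogous to the JH7110 "cbc(aes)"
entries earlier in the series (the actual algorithm names are defined in
jh8100-sm4.c below), and the helper name is purely illustrative.

#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <crypto/skcipher.h>
#include <crypto/sm4.h>

static int sm4_cbc_encrypt_one_block(const u8 *key, unsigned int keylen,
				     u8 *iv, u8 *buf)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	/*
	 * Picks the highest-priority "cbc(sm4)" implementation in the
	 * system; with the hardware driver loaded that should be the
	 * accelerated one, otherwise the generic software cipher.
	 */
	tfm = crypto_alloc_skcipher("cbc(sm4)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_skcipher_setkey(tfm, key, keylen);
	if (ret)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto out_free_tfm;
	}

	/*
	 * buf is encrypted in place and must be DMA-able memory
	 * (e.g. from kmalloc()), not stack memory, since the backend
	 * may hand it straight to its DMA engine.
	 */
	sg_init_one(&sg, buf, SM4_BLOCK_SIZE);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, SM4_BLOCK_SIZE, iv);

	/* Wait synchronously for the (possibly asynchronous) engine. */
	ret = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return ret;
}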
Signed-off-by: Jia Jie Ho --- drivers/crypto/starfive/Kconfig | 1 + drivers/crypto/starfive/Makefile | 2 +- drivers/crypto/starfive/jh7110-cryp.c | 8 + drivers/crypto/starfive/jh7110-cryp.h | 39 + drivers/crypto/starfive/jh8100-sm4.c | 1107 +++++++++++++++++++++++++ 5 files changed, 1156 insertions(+), 1 deletion(-) create mode 100644 drivers/crypto/starfive/jh8100-sm4.c diff --git a/drivers/crypto/starfive/Kconfig b/drivers/crypto/starfive/Kconfig index e6bf02d0ed1f..740bb70c5607 100644 --- a/drivers/crypto/starfive/Kconfig +++ b/drivers/crypto/starfive/Kconfig @@ -34,6 +34,7 @@ config CRYPTO_DEV_JH8100 select CRYPTO_SHA256 select CRYPTO_SHA512 select CRYPTO_SM3_GENERIC + select CRYPTO_SM4_GENERIC select CRYPTO_RSA select CRYPTO_AES select CRYPTO_CCM diff --git a/drivers/crypto/starfive/Makefile b/drivers/crypto/starfive/Makefile index 67717fca3f5d..8370f20427fd 100644 --- a/drivers/crypto/starfive/Makefile +++ b/drivers/crypto/starfive/Makefile @@ -4,4 +4,4 @@ obj-$(CONFIG_CRYPTO_DEV_JH7110) += jh7110-crypto.o jh7110-crypto-objs := jh7110-cryp.o jh7110-hash.o jh7110-rsa.o jh7110-aes.o obj-$(CONFIG_CRYPTO_DEV_JH8100) += jh8100-crypto.o -jh8100-crypto-objs := jh7110-cryp.o jh7110-hash.o jh7110-rsa.o jh7110-aes.o jh8100-sm3.o +jh8100-crypto-objs := jh7110-cryp.o jh7110-hash.o jh7110-rsa.o jh7110-aes.o jh8100-sm3.o jh8100-sm4.o diff --git a/drivers/crypto/starfive/jh7110-cryp.c b/drivers/crypto/starfive/jh7110-cryp.c index fb7c19705fbf..63b801cd6555 100644 --- a/drivers/crypto/starfive/jh7110-cryp.c +++ b/drivers/crypto/starfive/jh7110-cryp.c @@ -225,10 +225,17 @@ static int starfive_cryp_probe(struct platform_device *pdev) ret = starfive_sm3_register_algs(); if (ret) goto err_algs_sm3; + + ret = starfive_sm4_register_algs(); + if (ret) + goto err_algs_sm4; #endif + return 0; #ifdef CONFIG_CRYPTO_DEV_JH8100 +err_algs_sm4: + starfive_sm3_unregister_algs(); err_algs_sm3: starfive_rsa_unregister_algs(); #endif @@ -269,6 +276,7 @@ static void starfive_cryp_remove(struct platform_device *pdev) tasklet_kill(&cryp->hash_done); #ifdef CONFIG_CRYPTO_DEV_JH8100 starfive_sm3_unregister_algs(); + starfive_sm4_unregister_algs(); tasklet_kill(&cryp->sm3_done); #endif crypto_engine_stop(cryp->engine); diff --git a/drivers/crypto/starfive/jh7110-cryp.h b/drivers/crypto/starfive/jh7110-cryp.h index a675ee4bc6cf..af7fa8aeb5d0 100644 --- a/drivers/crypto/starfive/jh7110-cryp.h +++ b/drivers/crypto/starfive/jh7110-cryp.h @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include @@ -182,6 +183,40 @@ union starfive_sm3_csr { }; }; +union starfive_sm4_csr { + u32 v; + struct { + u32 cmode :1; + u32 rsvd_0 :1; + u32 ie :1; + u32 sm4rst :1; + u32 rsvd_1 :1; +#define STARFIVE_SM4_DONE BIT(5) + u32 sm4done :1; +#define STARFIVE_SM4_KEY_DONE BIT(6) + u32 krdy :1; + u32 busy :1; + u32 vsm4_start :1; + u32 delay_sm4 :1; +#define STARFIVE_SM4_CCM_START BIT(10) + u32 ccm_start :1; +#define STARFIVE_SM4_GCM_START BIT(11) + u32 gcm_start :1; + u32 rsvd_2 :4; +#define STARFIVE_SM4_MODE_XFB_1 0x0 +#define STARFIVE_SM4_MODE_XFB_128 0x5 + u32 stmode :3; + u32 rsvd_3 :2; +#define STARFIVE_SM4_MODE_ECB 0x0 +#define STARFIVE_SM4_MODE_CBC 0x1 +#define STARFIVE_SM4_MODE_CTR 0x4 +#define STARFIVE_SM4_MODE_CCM 0x5 +#define STARFIVE_SM4_MODE_GCM 0x6 + u32 mode :3; + u32 rsvd_4 :8; + }; +}; + struct starfive_rsa_key { u8 *n; u8 *e; @@ -263,6 +298,7 @@ struct starfive_cryp_request_ctx { union starfive_pka_cacr pka; union starfive_aes_csr aes; union starfive_sm3_csr sm3; + union starfive_sm4_csr sm4; } csr; struct 
scatterlist *in_sg; @@ -294,6 +330,9 @@ void starfive_hash_done_task(unsigned long param); int starfive_sm3_register_algs(void); void starfive_sm3_unregister_algs(void); +int starfive_sm4_register_algs(void); +void starfive_sm4_unregister_algs(void); + void starfive_sm3_done_task(unsigned long param); #endif #endif diff --git a/drivers/crypto/starfive/jh8100-sm4.c b/drivers/crypto/starfive/jh8100-sm4.c new file mode 100644 index 000000000000..ccde5fb793cc --- /dev/null +++ b/drivers/crypto/starfive/jh8100-sm4.c @@ -0,0 +1,1107 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * StarFive SM4 acceleration driver + * + * Copyright (c) 2022 - 2023 StarFive Technology + */ + +#include +#include +#include +#include +#include +#include "jh7110-cryp.h" +#include + +#define STARFIVE_SM4_REGS_OFFSET 0x4100 +#define STARFIVE_SM4_SM4DIO0R (STARFIVE_SM4_REGS_OFFSET + 0x0) +#define STARFIVE_SM4_KEY0 (STARFIVE_SM4_REGS_OFFSET + 0x4) +#define STARFIVE_SM4_KEY1 (STARFIVE_SM4_REGS_OFFSET + 0x8) +#define STARFIVE_SM4_KEY2 (STARFIVE_SM4_REGS_OFFSET + 0xC) +#define STARFIVE_SM4_KEY3 (STARFIVE_SM4_REGS_OFFSET + 0x10) +#define STARFIVE_SM4_IV0 (STARFIVE_SM4_REGS_OFFSET + 0x14) +#define STARFIVE_SM4_IV1 (STARFIVE_SM4_REGS_OFFSET + 0x18) +#define STARFIVE_SM4_IV2 (STARFIVE_SM4_REGS_OFFSET + 0x1c) +#define STARFIVE_SM4_IV3 (STARFIVE_SM4_REGS_OFFSET + 0x20) +#define STARFIVE_SM4_CSR (STARFIVE_SM4_REGS_OFFSET + 0x24) +#define STARFIVE_SM4_NONCE0 (STARFIVE_SM4_REGS_OFFSET + 0x30) +#define STARFIVE_SM4_NONCE1 (STARFIVE_SM4_REGS_OFFSET + 0x34) +#define STARFIVE_SM4_NONCE2 (STARFIVE_SM4_REGS_OFFSET + 0x38) +#define STARFIVE_SM4_NONCE3 (STARFIVE_SM4_REGS_OFFSET + 0x3c) +#define STARFIVE_SM4_ALEN0 (STARFIVE_SM4_REGS_OFFSET + 0x40) +#define STARFIVE_SM4_ALEN1 (STARFIVE_SM4_REGS_OFFSET + 0x44) +#define STARFIVE_SM4_MLEN0 (STARFIVE_SM4_REGS_OFFSET + 0x48) +#define STARFIVE_SM4_MLEN1 (STARFIVE_SM4_REGS_OFFSET + 0x4c) +#define STARFIVE_SM4_IVLEN (STARFIVE_SM4_REGS_OFFSET + 0x50) + +#define FLG_MODE_MASK GENMASK(2, 0) +#define FLG_ENCRYPT BIT(4) + +/* Misc */ +#define CCM_B0_ADATA 0x40 +#define SM4_BLOCK_32 (SM4_BLOCK_SIZE / sizeof(u32)) + +static inline int starfive_sm4_wait_done(struct starfive_cryp_dev *cryp) +{ + u32 status; + + return readl_relaxed_poll_timeout(cryp->base + STARFIVE_SM4_CSR, status, + status & STARFIVE_SM4_DONE, 10, 100000); +} + +static inline int starfive_sm4_wait_keydone(struct starfive_cryp_dev *cryp) +{ + u32 status; + + return readl_relaxed_poll_timeout(cryp->base + STARFIVE_SM4_CSR, status, + status & STARFIVE_SM4_KEY_DONE, 10, 100000); +} + +static inline int is_encrypt(struct starfive_cryp_dev *cryp) +{ + return cryp->flags & FLG_ENCRYPT; +} + +static int starfive_sm4_aead_write_key(struct starfive_cryp_ctx *ctx, u32 hw_mode) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + unsigned int value; + u32 *key = (u32 *)ctx->key; + + writel(key[0], cryp->base + STARFIVE_SM4_KEY0); + writel(key[1], cryp->base + STARFIVE_SM4_KEY1); + writel(key[2], cryp->base + STARFIVE_SM4_KEY2); + writel(key[3], cryp->base + STARFIVE_SM4_KEY3); + + value = readl(ctx->cryp->base + STARFIVE_SM4_CSR); + + if (hw_mode == STARFIVE_SM4_MODE_GCM) + value |= STARFIVE_SM4_GCM_START; + else + value |= STARFIVE_SM4_CCM_START; + + writel(value, cryp->base + STARFIVE_SM4_CSR); + + if (starfive_sm4_wait_keydone(cryp)) + return -ETIMEDOUT; + + return 0; +} + +static inline void starfive_sm4_set_alen(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + + writel(upper_32_bits(cryp->assoclen), cryp->base + 
STARFIVE_SM4_ALEN0); + writel(lower_32_bits(cryp->assoclen), cryp->base + STARFIVE_SM4_ALEN1); +} + +static inline void starfive_sm4_set_mlen(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + + writel(upper_32_bits(cryp->total_in), cryp->base + STARFIVE_SM4_MLEN0); + writel(lower_32_bits(cryp->total_in), cryp->base + STARFIVE_SM4_MLEN1); +} + +static inline int starfive_sm4_ccm_check_iv(const u8 *iv) +{ + /* 2 <= L <= 8, so 1 <= L' <= 7. */ + if (iv[0] < 1 || iv[0] > 7) + return -EINVAL; + + return 0; +} + +static inline void starfive_sm4_write_iv(struct starfive_cryp_ctx *ctx, u32 *iv) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + + writel(iv[0], cryp->base + STARFIVE_SM4_IV0); + writel(iv[1], cryp->base + STARFIVE_SM4_IV1); + writel(iv[2], cryp->base + STARFIVE_SM4_IV2); + writel(iv[3], cryp->base + STARFIVE_SM4_IV3); +} + +static inline void starfive_sm4_get_iv(struct starfive_cryp_dev *cryp, u32 *iv) +{ + iv[0] = readl(cryp->base + STARFIVE_SM4_IV0); + iv[1] = readl(cryp->base + STARFIVE_SM4_IV1); + iv[2] = readl(cryp->base + STARFIVE_SM4_IV2); + iv[3] = readl(cryp->base + STARFIVE_SM4_IV3); +} + +static inline void starfive_sm4_write_nonce(struct starfive_cryp_ctx *ctx, u32 *nonce) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + + writel(nonce[0], cryp->base + STARFIVE_SM4_NONCE0); + writel(nonce[1], cryp->base + STARFIVE_SM4_NONCE1); + writel(nonce[2], cryp->base + STARFIVE_SM4_NONCE2); + writel(nonce[3], cryp->base + STARFIVE_SM4_NONCE3); +} + +static int starfive_sm4_write_key(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + u32 *key = (u32 *)ctx->key; + + writel(key[0], cryp->base + STARFIVE_SM4_KEY0); + writel(key[1], cryp->base + STARFIVE_SM4_KEY1); + writel(key[2], cryp->base + STARFIVE_SM4_KEY2); + writel(key[3], cryp->base + STARFIVE_SM4_KEY3); + + if (starfive_sm4_wait_keydone(cryp)) + return -ETIMEDOUT; + + return 0; +} + +static int starfive_sm4_ccm_init(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + u8 iv[SM4_BLOCK_SIZE], b0[SM4_BLOCK_SIZE]; + unsigned int textlen; + + memcpy(iv, cryp->req.areq->iv, SM4_BLOCK_SIZE); + memset(iv + SM4_BLOCK_SIZE - 1 - iv[0], 0, iv[0] + 1); + + /* Build B0 */ + memcpy(b0, iv, SM4_BLOCK_SIZE); + + b0[0] |= (8 * ((cryp->authsize - 2) / 2)); + + if (cryp->assoclen) + b0[0] |= CCM_B0_ADATA; + + textlen = cryp->total_in; + + b0[SM4_BLOCK_SIZE - 2] = textlen >> 8; + b0[SM4_BLOCK_SIZE - 1] = textlen & 0xFF; + + starfive_sm4_write_nonce(ctx, (u32 *)b0); + + return 0; +} + +static int starfive_sm4_hw_init(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_request_ctx *rctx = ctx->rctx; + struct starfive_cryp_dev *cryp = ctx->cryp; + u32 hw_mode; + int ret = 0; + + /* reset */ + rctx->csr.sm4.v = 0; + rctx->csr.sm4.sm4rst = 1; + writel(rctx->csr.sm4.v, cryp->base + STARFIVE_SM4_CSR); + + /* csr setup */ + hw_mode = cryp->flags & FLG_MODE_MASK; + + rctx->csr.sm4.v = 0; + rctx->csr.sm4.mode = hw_mode; + rctx->csr.sm4.cmode = !is_encrypt(cryp); + rctx->csr.sm4.stmode = STARFIVE_SM4_MODE_XFB_1; + + if (cryp->side_chan) { + rctx->csr.sm4.delay_sm4 = 1; + rctx->csr.sm4.vsm4_start = 1; + } + + writel(rctx->csr.sm4.v, cryp->base + STARFIVE_SM4_CSR); + + switch (hw_mode) { + case STARFIVE_SM4_MODE_GCM: + starfive_sm4_set_alen(ctx); + starfive_sm4_set_mlen(ctx); + writel(GCM_AES_IV_SIZE, cryp->base + STARFIVE_SM4_IVLEN); + ret = starfive_sm4_aead_write_key(ctx, hw_mode); + if (ret) + return ret; + + starfive_sm4_write_iv(ctx, (void 
*)cryp->req.areq->iv); + break; + case STARFIVE_SM4_MODE_CCM: + starfive_sm4_set_alen(ctx); + starfive_sm4_set_mlen(ctx); + starfive_sm4_ccm_init(ctx); + ret = starfive_sm4_aead_write_key(ctx, hw_mode); + if (ret) + return ret; + break; + case STARFIVE_SM4_MODE_CBC: + case STARFIVE_SM4_MODE_CTR: + starfive_sm4_write_iv(ctx, (void *)cryp->req.sreq->iv); + ret = starfive_sm4_write_key(ctx); + if (ret) + return ret; + break; + case STARFIVE_SM4_MODE_ECB: + ret = starfive_sm4_write_key(ctx); + if (ret) + return ret; + break; + default: + return -EINVAL; + } + + return 0; +} + +static int starfive_sm4_read_authtag(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + struct starfive_cryp_request_ctx *rctx = ctx->rctx; + int i; + + if ((cryp->flags & FLG_MODE_MASK) == STARFIVE_SM4_MODE_GCM) { + cryp->tag_out[0] = readl(cryp->base + STARFIVE_SM4_NONCE0); + cryp->tag_out[1] = readl(cryp->base + STARFIVE_SM4_NONCE1); + cryp->tag_out[2] = readl(cryp->base + STARFIVE_SM4_NONCE2); + cryp->tag_out[3] = readl(cryp->base + STARFIVE_SM4_NONCE3); + } else { + for (i = 0; i < SM4_BLOCK_32; i++) + cryp->tag_out[i] = readl(cryp->base + STARFIVE_SM4_SM4DIO0R); + } + + if (is_encrypt(cryp)) { + scatterwalk_map_and_copy(cryp->tag_out, rctx->out_sg, + cryp->total_in, cryp->authsize, 1); + } else { + if (crypto_memneq(cryp->tag_in, cryp->tag_out, cryp->authsize)) + return dev_err_probe(cryp->dev, -EBADMSG, + "Failed tag verification\n"); + } + + return 0; +} + +static void starfive_sm4_finish_req(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + int err = 0; + + if (cryp->authsize) + err = starfive_sm4_read_authtag(ctx); + + if ((cryp->flags & FLG_MODE_MASK) == STARFIVE_SM4_MODE_CBC || + (cryp->flags & FLG_MODE_MASK) == STARFIVE_SM4_MODE_CTR) + starfive_sm4_get_iv(cryp, (void *)cryp->req.sreq->iv); + + if (cryp->authsize) + crypto_finalize_aead_request(cryp->engine, cryp->req.areq, err); + else + crypto_finalize_skcipher_request(cryp->engine, cryp->req.sreq, + err); +} + +static int starfive_sm4_gcm_write_adata(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + struct starfive_cryp_request_ctx *rctx = ctx->rctx; + u32 *buffer; + int total_len, loop; + + total_len = ALIGN(cryp->assoclen, SM4_BLOCK_SIZE) / sizeof(unsigned int); + buffer = (u32 *)rctx->adata; + + for (loop = 0; loop < total_len; loop += 4) { + writel(*buffer, cryp->base + STARFIVE_SM4_NONCE0); + buffer++; + writel(*buffer, cryp->base + STARFIVE_SM4_NONCE1); + buffer++; + writel(*buffer, cryp->base + STARFIVE_SM4_NONCE2); + buffer++; + writel(*buffer, cryp->base + STARFIVE_SM4_NONCE3); + buffer++; + + if (starfive_sm4_wait_done(cryp)) + return dev_err_probe(cryp->dev, -ETIMEDOUT, + "Timeout processing gcm aad block"); + } + + return 0; +} + +static int starfive_sm4_ccm_write_adata(struct starfive_cryp_ctx *ctx) +{ + struct starfive_cryp_dev *cryp = ctx->cryp; + struct starfive_cryp_request_ctx *rctx = ctx->rctx; + u32 *buffer; + int total_len, loop; + + buffer = (u32 *)rctx->adata; + total_len = ALIGN(cryp->assoclen + 2, SM4_BLOCK_SIZE) / sizeof(unsigned int); + + for (loop = 0; loop < total_len; loop += 4) { + writel(*buffer, cryp->base + STARFIVE_SM4_SM4DIO0R); + buffer++; + writel(*buffer, cryp->base + STARFIVE_SM4_SM4DIO0R); + buffer++; + writel(*buffer, cryp->base + STARFIVE_SM4_SM4DIO0R); + buffer++; + writel(*buffer, cryp->base + STARFIVE_SM4_SM4DIO0R); + buffer++; + + if (starfive_sm4_wait_done(cryp)) + return dev_err_probe(cryp->dev, -ETIMEDOUT, + 
"Timeout processing ccm aad block"); + } + + return 0; +} + +static void starfive_sm4_dma_done(void *param) +{ + struct starfive_cryp_dev *cryp = param; + + complete(&cryp->dma_done); +} + +static void starfive_sm4_dma_init(struct starfive_cryp_dev *cryp) +{ + cryp->cfg_in.direction = DMA_MEM_TO_DEV; + cryp->cfg_in.src_addr_width = DMA_SLAVE_BUSWIDTH_8_BYTES; + cryp->cfg_in.dst_addr_width = DMA_SLAVE_BUSWIDTH_8_BYTES; + cryp->cfg_in.src_maxburst = cryp->dma_maxburst; + cryp->cfg_in.dst_maxburst = cryp->dma_maxburst; + cryp->cfg_in.dst_addr = cryp->phys_base + STARFIVE_SM_ALG_FIFO_IN_OFFSET; + + dmaengine_slave_config(cryp->tx, &cryp->cfg_in); + + cryp->cfg_out.direction = DMA_DEV_TO_MEM; + cryp->cfg_out.src_addr_width = DMA_SLAVE_BUSWIDTH_8_BYTES; + cryp->cfg_out.dst_addr_width = DMA_SLAVE_BUSWIDTH_8_BYTES; + cryp->cfg_out.src_maxburst = cryp->dma_maxburst; + cryp->cfg_out.dst_maxburst = cryp->dma_maxburst; + cryp->cfg_out.src_addr = cryp->phys_base + STARFIVE_SM_ALG_FIFO_OUT_OFFSET; + + dmaengine_slave_config(cryp->rx, &cryp->cfg_out); + + init_completion(&cryp->dma_done); +} + +static int starfive_sm4_dma_xfer(struct starfive_cryp_dev *cryp, + struct scatterlist *src, + struct scatterlist *dst, + int len) +{ + struct dma_async_tx_descriptor *in_desc, *out_desc; + union starfive_sm_alg_cr alg_cr; + int ret = 0, in_save, out_save; + + alg_cr.v = 0; + alg_cr.start = 1; + alg_cr.sm4_dma_en = 1; + writel(alg_cr.v, cryp->base + STARFIVE_SM_ALG_CR_OFFSET); + + in_save = sg_dma_len(src); + out_save = sg_dma_len(dst); + + writel(ALIGN(len, SM4_BLOCK_SIZE), cryp->base + STARFIVE_SM_DMA_IN_LEN_OFFSET); + writel(ALIGN(len, SM4_BLOCK_SIZE), cryp->base + STARFIVE_SM_DMA_OUT_LEN_OFFSET); + + sg_dma_len(src) = ALIGN(len, SM4_BLOCK_SIZE); + sg_dma_len(dst) = ALIGN(len, SM4_BLOCK_SIZE); + + out_desc = dmaengine_prep_slave_sg(cryp->rx, dst, 1, DMA_DEV_TO_MEM, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!out_desc) { + ret = -EINVAL; + goto dma_err; + } + + out_desc->callback = starfive_sm4_dma_done; + out_desc->callback_param = cryp; + + reinit_completion(&cryp->dma_done); + dmaengine_submit(out_desc); + dma_async_issue_pending(cryp->rx); + + in_desc = dmaengine_prep_slave_sg(cryp->tx, src, 1, DMA_MEM_TO_DEV, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!in_desc) { + ret = -EINVAL; + goto dma_err; + } + + dmaengine_submit(in_desc); + dma_async_issue_pending(cryp->tx); + + if (!wait_for_completion_timeout(&cryp->dma_done, + msecs_to_jiffies(1000))) + ret = -ETIMEDOUT; + +dma_err: + sg_dma_len(src) = in_save; + sg_dma_len(dst) = out_save; + + alg_cr.v = 0; + alg_cr.clear = 1; + writel(alg_cr.v, cryp->base + STARFIVE_SM_ALG_CR_OFFSET); + + return ret; +} + +static int starfive_sm4_map_sg(struct starfive_cryp_dev *cryp, + struct scatterlist *src, + struct scatterlist *dst) +{ + struct scatterlist *stsg, *dtsg; + struct scatterlist _src[2], _dst[2]; + unsigned int remain = cryp->total_in; + unsigned int len, src_nents, dst_nents; + int ret; + + if (src == dst) { + for (stsg = src, dtsg = dst; remain > 0; + stsg = sg_next(stsg), dtsg = sg_next(dtsg)) { + src_nents = dma_map_sg(cryp->dev, stsg, 1, DMA_BIDIRECTIONAL); + if (src_nents == 0) + return dev_err_probe(cryp->dev, -ENOMEM, + "dma_map_sg error\n"); + + dst_nents = src_nents; + + len = min(sg_dma_len(stsg), remain); + + ret = starfive_sm4_dma_xfer(cryp, stsg, dtsg, len); + dma_unmap_sg(cryp->dev, stsg, 1, DMA_BIDIRECTIONAL); + if (ret) + return ret; + + remain -= len; + } + } else { + for (stsg = src, dtsg = dst;;) { + src_nents = dma_map_sg(cryp->dev, stsg, 
+		for (stsg = src, dtsg = dst;;) {
+			src_nents = dma_map_sg(cryp->dev, stsg, 1, DMA_TO_DEVICE);
+			if (src_nents == 0)
+				return dev_err_probe(cryp->dev, -ENOMEM,
+						     "dma_map_sg src error\n");
+
+			dst_nents = dma_map_sg(cryp->dev, dtsg, 1, DMA_FROM_DEVICE);
+			if (dst_nents == 0)
+				return dev_err_probe(cryp->dev, -ENOMEM,
+						     "dma_map_sg dst error\n");
+
+			len = min(sg_dma_len(stsg), sg_dma_len(dtsg));
+			len = min(len, remain);
+
+			ret = starfive_sm4_dma_xfer(cryp, stsg, dtsg, len);
+			dma_unmap_sg(cryp->dev, stsg, 1, DMA_TO_DEVICE);
+			dma_unmap_sg(cryp->dev, dtsg, 1, DMA_FROM_DEVICE);
+			if (ret)
+				return ret;
+
+			remain -= len;
+			if (remain == 0)
+				break;
+
+			if (sg_dma_len(stsg) - len) {
+				stsg = scatterwalk_ffwd(_src, stsg, len);
+				dtsg = sg_next(dtsg);
+			} else if (sg_dma_len(dtsg) - len) {
+				dtsg = scatterwalk_ffwd(_dst, dtsg, len);
+				stsg = sg_next(stsg);
+			} else {
+				stsg = sg_next(stsg);
+				dtsg = sg_next(dtsg);
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int starfive_sm4_do_one_req(struct crypto_engine *engine, void *areq)
+{
+	struct skcipher_request *req =
+		container_of(areq, struct skcipher_request, base);
+	struct starfive_cryp_ctx *ctx =
+		crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	struct starfive_cryp_dev *cryp = ctx->cryp;
+	struct starfive_cryp_request_ctx *rctx = skcipher_request_ctx(req);
+	int ret;
+
+	cryp->req.sreq = req;
+	cryp->total_in = req->cryptlen;
+	cryp->total_out = req->cryptlen;
+	cryp->assoclen = 0;
+	cryp->authsize = 0;
+
+	rctx->in_sg = req->src;
+	rctx->out_sg = req->dst;
+
+	ctx->rctx = rctx;
+
+	ret = starfive_sm4_hw_init(ctx);
+	if (ret)
+		return ret;
+
+	starfive_sm4_dma_init(cryp);
+
+	ret = starfive_sm4_map_sg(cryp, rctx->in_sg, rctx->out_sg);
+	if (ret)
+		return ret;
+
+	starfive_sm4_finish_req(ctx);
+
+	return 0;
+}
+
+static int starfive_sm4_init_tfm(struct crypto_skcipher *tfm,
+				 const char *alg_name)
+{
+	struct starfive_cryp_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	ctx->cryp = starfive_cryp_find_dev(ctx);
+	if (!ctx->cryp)
+		return -ENODEV;
+
+	ctx->skcipher_fbk = crypto_alloc_skcipher(alg_name, 0,
+						  CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(ctx->skcipher_fbk))
+		return dev_err_probe(ctx->cryp->dev, PTR_ERR(ctx->skcipher_fbk),
+				     "%s() failed to allocate fallback for %s\n",
+				     __func__, alg_name);
+
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct starfive_cryp_request_ctx) +
+				    sizeof(struct skcipher_request));
+
+	return 0;
+}
+
+static void starfive_sm4_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct starfive_cryp_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	crypto_free_skcipher(ctx->skcipher_fbk);
+}
+
+static int starfive_sm4_aead_do_one_req(struct crypto_engine *engine, void *areq)
+{
+	struct aead_request *req =
+		container_of(areq, struct aead_request, base);
+	struct starfive_cryp_ctx *ctx =
+		crypto_aead_ctx(crypto_aead_reqtfm(req));
+	struct starfive_cryp_dev *cryp = ctx->cryp;
+	struct starfive_cryp_request_ctx *rctx = aead_request_ctx(req);
+	struct scatterlist _dst[2], _src[2];
+	int ret;
+
+	cryp->req.areq = req;
+	cryp->assoclen = req->assoclen;
+	cryp->authsize = crypto_aead_authsize(crypto_aead_reqtfm(req));
+
+	if (is_encrypt(cryp)) {
+		cryp->total_in = req->cryptlen;
+		cryp->total_out = req->cryptlen;
+	} else {
+		cryp->total_in = req->cryptlen - cryp->authsize;
+		cryp->total_out = cryp->total_in;
+		scatterwalk_map_and_copy(cryp->tag_in, req->src,
+					 cryp->total_in + cryp->assoclen,
+					 cryp->authsize, 0);
+	}
+
+	if (cryp->assoclen) {
+		if ((cryp->flags & FLG_MODE_MASK) == STARFIVE_SM4_MODE_CCM) {
+			rctx->adata = kzalloc(cryp->assoclen + 2 + SM4_BLOCK_SIZE, GFP_KERNEL);
+			if (!rctx->adata)
+				return -ENOMEM;
+
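+			/*
+			 * kzalloc() sizes the buffer one block past assoclen + 2,
+			 * so the block-aligned AAD writes in
+			 * starfive_sm4_ccm_write_adata() stay in bounds and the
+			 * tail is already zero padded.
+			 */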
+			/* Prepend two zero bytes to the CCM AAD. */
+			rctx->adata[0] = 0;
+			rctx->adata[1] = 0;
+
+			sg_copy_to_buffer(req->src,
+					  sg_nents_for_len(req->src, cryp->assoclen),
+					  &rctx->adata[2], cryp->assoclen);
+		} else {
+			rctx->adata = kzalloc(cryp->assoclen + SM4_BLOCK_SIZE, GFP_KERNEL);
+			if (!rctx->adata)
+				return dev_err_probe(cryp->dev, -ENOMEM,
+						     "Failed to alloc memory for adata");
+
+			sg_copy_to_buffer(req->src,
+					  sg_nents_for_len(req->src, cryp->assoclen),
+					  rctx->adata, cryp->assoclen);
+		}
+	}
+
+	rctx->in_sg = scatterwalk_ffwd(_src, req->src, cryp->assoclen);
+	if (req->src == req->dst)
+		rctx->out_sg = rctx->in_sg;
+	else
+		rctx->out_sg = scatterwalk_ffwd(_dst, req->dst, cryp->assoclen);
+
+	if (cryp->total_in)
+		sg_zero_buffer(rctx->in_sg, sg_nents(rctx->in_sg),
+			       sg_dma_len(rctx->in_sg) - cryp->total_in,
+			       cryp->total_in);
+
+	ctx->rctx = rctx;
+
+	ret = starfive_sm4_hw_init(ctx);
+	if (ret)
+		return ret;
+
+	if (!cryp->assoclen)
+		goto write_text;
+
+	if ((cryp->flags & FLG_MODE_MASK) == STARFIVE_SM4_MODE_CCM)
+		ret = starfive_sm4_ccm_write_adata(ctx);
+	else
+		ret = starfive_sm4_gcm_write_adata(ctx);
+
+	kfree(rctx->adata);
+
+	if (ret)
+		return ret;
+
+write_text:
+	if (!cryp->total_in)
+		goto finish_req;
+
+	starfive_sm4_dma_init(cryp);
+
+	ret = starfive_sm4_map_sg(cryp, rctx->in_sg, rctx->out_sg);
+	if (ret)
+		return ret;
+
+finish_req:
+	starfive_sm4_finish_req(ctx);
+	return 0;
+}
+
+static int starfive_sm4_aead_init_tfm(struct crypto_aead *tfm,
+				      const char *alg_name)
+{
+	struct starfive_cryp_ctx *ctx = crypto_aead_ctx(tfm);
+
+	ctx->cryp = starfive_cryp_find_dev(ctx);
+	if (!ctx->cryp)
+		return -ENODEV;
+
+	ctx->aead_fbk = crypto_alloc_aead(alg_name, 0,
+					  CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(ctx->aead_fbk))
+		return dev_err_probe(ctx->cryp->dev, PTR_ERR(ctx->aead_fbk),
+				     "%s() failed to allocate fallback for %s\n",
+				     __func__, alg_name);
+
+	crypto_aead_set_reqsize(tfm, sizeof(struct starfive_cryp_ctx) +
+				sizeof(struct aead_request));
+
+	return 0;
+}
+
+static void starfive_sm4_aead_exit_tfm(struct crypto_aead *tfm)
+{
+	struct starfive_cryp_ctx *ctx = crypto_aead_ctx(tfm);
+
+	crypto_free_aead(ctx->aead_fbk);
+}
+
+static bool starfive_sm4_check_unaligned(struct starfive_cryp_dev *cryp,
+					 struct scatterlist *src,
+					 struct scatterlist *dst)
+{
+	struct scatterlist *tsg;
+	int i;
+
+	for_each_sg(src, tsg, sg_nents(src), i)
+		if (!IS_ALIGNED(tsg->length, SM4_BLOCK_SIZE) &&
+		    !sg_is_last(tsg))
+			return true;
+
+	if (src != dst)
+		for_each_sg(dst, tsg, sg_nents(dst), i)
+			if (!IS_ALIGNED(tsg->length, SM4_BLOCK_SIZE) &&
+			    !sg_is_last(tsg))
+				return true;
+
+	return false;
+}
+
+static int starfive_sm4_do_fallback(struct skcipher_request *req, bool enc)
+{
+	struct starfive_cryp_ctx *ctx =
+		crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+	struct skcipher_request *subreq = skcipher_request_ctx(req);
+
+	skcipher_request_set_tfm(subreq, ctx->skcipher_fbk);
+	skcipher_request_set_callback(subreq, req->base.flags,
+				      req->base.complete,
+				      req->base.data);
+	skcipher_request_set_crypt(subreq, req->src, req->dst,
+				   req->cryptlen, req->iv);
+
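+	/* The fallback tfm already holds the key set in starfive_sm4_setkey(). */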
+	return enc ? crypto_skcipher_encrypt(subreq) :
+		     crypto_skcipher_decrypt(subreq);
+}
+
+static int starfive_sm4_crypt(struct skcipher_request *req, unsigned long flags)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct starfive_cryp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct starfive_cryp_dev *cryp = ctx->cryp;
+	unsigned int blocksize_align = crypto_skcipher_blocksize(tfm) - 1;
+
+	cryp->flags = flags;
+
+	if ((cryp->flags & FLG_MODE_MASK) == STARFIVE_SM4_MODE_ECB ||
+	    (cryp->flags & FLG_MODE_MASK) == STARFIVE_SM4_MODE_CBC)
+		if (req->cryptlen & blocksize_align)
+			return -EINVAL;
+
+	if (starfive_sm4_check_unaligned(cryp, req->src, req->dst))
+		return starfive_sm4_do_fallback(req, is_encrypt(cryp));
+
+	return crypto_transfer_skcipher_request_to_engine(cryp->engine, req);
+}
+
+static int starfive_sm4_aead_do_fallback(struct aead_request *req, bool enc)
+{
+	struct starfive_cryp_ctx *ctx =
+		crypto_aead_ctx(crypto_aead_reqtfm(req));
+	struct aead_request *subreq = aead_request_ctx(req);
+
+	aead_request_set_tfm(subreq, ctx->aead_fbk);
+	aead_request_set_callback(subreq, req->base.flags,
+				  req->base.complete,
+				  req->base.data);
+	aead_request_set_crypt(subreq, req->src, req->dst,
+			       req->cryptlen, req->iv);
+	aead_request_set_ad(subreq, req->assoclen);
+
+	return enc ? crypto_aead_encrypt(subreq) :
+		     crypto_aead_decrypt(subreq);
+}
+
+static int starfive_sm4_aead_crypt(struct aead_request *req, unsigned long flags)
+{
+	struct starfive_cryp_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
+	struct starfive_cryp_dev *cryp = ctx->cryp;
+	struct scatterlist *src, *dst, _src[2], _dst[2];
+
+	cryp->flags = flags;
+
+	/* sm4-ccm hardware does not support tag verification for
+	 * non-block-aligned text, so use the fallback for ccm decryption.
+	 */
+	if (((cryp->flags & FLG_MODE_MASK) == STARFIVE_SM4_MODE_CCM) &&
+	    !is_encrypt(cryp))
+		return starfive_sm4_aead_do_fallback(req, 0);
+
+	src = scatterwalk_ffwd(_src, req->src, req->assoclen);
+
+	if (req->src == req->dst)
+		dst = src;
+	else
+		dst = scatterwalk_ffwd(_dst, req->dst, req->assoclen);
+
+	if (starfive_sm4_check_unaligned(cryp, src, dst))
+		return starfive_sm4_aead_do_fallback(req, is_encrypt(cryp));
+
+	return crypto_transfer_aead_request_to_engine(cryp->engine, req);
+}
+
+static int starfive_sm4_setkey(struct crypto_skcipher *tfm, const u8 *key,
+			       unsigned int keylen)
+{
+	struct starfive_cryp_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	if (!key || !keylen)
+		return -EINVAL;
+
+	if (keylen != SM4_KEY_SIZE)
+		return -EINVAL;
+
+	memcpy(ctx->key, key, keylen);
+	ctx->keylen = keylen;
+
+	return crypto_skcipher_setkey(ctx->skcipher_fbk, key, keylen);
+}
+
+static int starfive_sm4_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+				    unsigned int keylen)
+{
+	struct starfive_cryp_ctx *ctx = crypto_aead_ctx(tfm);
+
+	if (!key || !keylen)
+		return -EINVAL;
+
+	if (keylen != SM4_KEY_SIZE)
+		return -EINVAL;
+
+	memcpy(ctx->key, key, keylen);
+	ctx->keylen = keylen;
+
+	return crypto_aead_setkey(ctx->aead_fbk, key, keylen);
+}
+
+static int starfive_sm4_gcm_setauthsize(struct crypto_aead *tfm,
+					unsigned int authsize)
+{
+	struct starfive_cryp_ctx *ctx = crypto_aead_ctx(tfm);
+	int ret;
+
+	ret = crypto_gcm_check_authsize(authsize);
+	if (ret)
+		return ret;
+
+	return crypto_aead_setauthsize(ctx->aead_fbk, authsize);
+}
+
+static int starfive_sm4_ccm_setauthsize(struct crypto_aead *tfm,
+					unsigned int authsize)
+{
+	struct starfive_cryp_ctx *ctx = crypto_aead_ctx(tfm);
+
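+	/* CCM permits only the even tag lengths 4..16 defined by RFC 3610. */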
+	switch (authsize) {
+	case 4:
+	case 6:
+	case 8:
+	case 10:
+	case 12:
+	case 14:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return crypto_aead_setauthsize(ctx->aead_fbk, authsize);
+}
+
+static int starfive_sm4_ecb_encrypt(struct skcipher_request *req)
+{
+	return starfive_sm4_crypt(req, STARFIVE_SM4_MODE_ECB | FLG_ENCRYPT);
+}
+
+static int starfive_sm4_ecb_decrypt(struct skcipher_request *req)
+{
+	return starfive_sm4_crypt(req, STARFIVE_SM4_MODE_ECB);
+}
+
+static int starfive_sm4_cbc_encrypt(struct skcipher_request *req)
+{
+	return starfive_sm4_crypt(req, STARFIVE_SM4_MODE_CBC | FLG_ENCRYPT);
+}
+
+static int starfive_sm4_cbc_decrypt(struct skcipher_request *req)
+{
+	return starfive_sm4_crypt(req, STARFIVE_SM4_MODE_CBC);
+}
+
+static int starfive_sm4_ctr_encrypt(struct skcipher_request *req)
+{
+	return starfive_sm4_crypt(req, STARFIVE_SM4_MODE_CTR | FLG_ENCRYPT);
+}
+
+static int starfive_sm4_ctr_decrypt(struct skcipher_request *req)
+{
+	return starfive_sm4_crypt(req, STARFIVE_SM4_MODE_CTR);
+}
+
+static int starfive_sm4_gcm_encrypt(struct aead_request *req)
+{
+	return starfive_sm4_aead_crypt(req, STARFIVE_SM4_MODE_GCM | FLG_ENCRYPT);
+}
+
+static int starfive_sm4_gcm_decrypt(struct aead_request *req)
+{
+	return starfive_sm4_aead_crypt(req, STARFIVE_SM4_MODE_GCM);
+}
+
+static int starfive_sm4_ccm_encrypt(struct aead_request *req)
+{
+	int ret;
+
+	ret = starfive_sm4_ccm_check_iv(req->iv);
+	if (ret)
+		return ret;
+
+	return starfive_sm4_aead_crypt(req, STARFIVE_SM4_MODE_CCM | FLG_ENCRYPT);
+}
+
+static int starfive_sm4_ccm_decrypt(struct aead_request *req)
+{
+	int ret;
+
+	ret = starfive_sm4_ccm_check_iv(req->iv);
+	if (ret)
+		return ret;
+
+	return starfive_sm4_aead_crypt(req, STARFIVE_SM4_MODE_CCM);
+}
+
+static int starfive_sm4_ecb_init_tfm(struct crypto_skcipher *tfm)
+{
+	return starfive_sm4_init_tfm(tfm, "ecb(sm4-generic)");
+}
+
+static int starfive_sm4_cbc_init_tfm(struct crypto_skcipher *tfm)
+{
+	return starfive_sm4_init_tfm(tfm, "cbc(sm4-generic)");
+}
+
+static int starfive_sm4_ctr_init_tfm(struct crypto_skcipher *tfm)
+{
+	return starfive_sm4_init_tfm(tfm, "ctr(sm4-generic)");
+}
+
+static int starfive_sm4_ccm_aead_init_tfm(struct crypto_aead *tfm)
+{
+	return starfive_sm4_aead_init_tfm(tfm, "ccm_base(ctr(sm4-generic),cbcmac(sm4-generic))");
+}
+
+static int starfive_sm4_gcm_aead_init_tfm(struct crypto_aead *tfm)
+{
+	return starfive_sm4_aead_init_tfm(tfm, "gcm_base(ctr(sm4-generic),ghash-generic)");
+}
+
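+/*
+ * ECB/CBC/CTR are registered as asynchronous skciphers; each transform
+ * keeps a software fallback for request shapes the DMA path cannot handle.
+ */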
+static struct skcipher_engine_alg skcipher_sm4[] = {
+{
+	.base.init = starfive_sm4_ecb_init_tfm,
+	.base.exit = starfive_sm4_exit_tfm,
+	.base.setkey = starfive_sm4_setkey,
+	.base.encrypt = starfive_sm4_ecb_encrypt,
+	.base.decrypt = starfive_sm4_ecb_decrypt,
+	.base.min_keysize = SM4_KEY_SIZE,
+	.base.max_keysize = SM4_KEY_SIZE,
+	.base.base = {
+		.cra_name = "ecb(sm4)",
+		.cra_driver_name = "starfive-ecb-sm4",
+		.cra_priority = 200,
+		.cra_flags = CRYPTO_ALG_ASYNC |
+			     CRYPTO_ALG_NEED_FALLBACK,
+		.cra_blocksize = SM4_BLOCK_SIZE,
+		.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
+		.cra_alignmask = 0xf,
+		.cra_module = THIS_MODULE,
+	},
+	.op = {
+		.do_one_request = starfive_sm4_do_one_req,
+	},
+}, {
+	.base.init = starfive_sm4_ctr_init_tfm,
+	.base.exit = starfive_sm4_exit_tfm,
+	.base.setkey = starfive_sm4_setkey,
+	.base.encrypt = starfive_sm4_ctr_encrypt,
+	.base.decrypt = starfive_sm4_ctr_decrypt,
+	.base.min_keysize = SM4_KEY_SIZE,
+	.base.max_keysize = SM4_KEY_SIZE,
+	.base.ivsize = SM4_BLOCK_SIZE,
+	.base.base = {
+		.cra_name = "ctr(sm4)",
+		.cra_driver_name = "starfive-ctr-sm4",
+		.cra_priority = 200,
+		.cra_flags = CRYPTO_ALG_ASYNC |
+			     CRYPTO_ALG_NEED_FALLBACK,
+		.cra_blocksize = 1,
+		.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
+		.cra_alignmask = 0xf,
+		.cra_module = THIS_MODULE,
+	},
+	.op = {
+		.do_one_request = starfive_sm4_do_one_req,
+	},
+}, {
+	.base.init = starfive_sm4_cbc_init_tfm,
+	.base.exit = starfive_sm4_exit_tfm,
+	.base.setkey = starfive_sm4_setkey,
+	.base.encrypt = starfive_sm4_cbc_encrypt,
+	.base.decrypt = starfive_sm4_cbc_decrypt,
+	.base.min_keysize = SM4_KEY_SIZE,
+	.base.max_keysize = SM4_KEY_SIZE,
+	.base.ivsize = SM4_BLOCK_SIZE,
+	.base.base = {
+		.cra_name = "cbc(sm4)",
+		.cra_driver_name = "starfive-cbc-sm4",
+		.cra_priority = 200,
+		.cra_flags = CRYPTO_ALG_ASYNC |
+			     CRYPTO_ALG_NEED_FALLBACK,
+		.cra_blocksize = SM4_BLOCK_SIZE,
+		.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
+		.cra_alignmask = 0xf,
+		.cra_module = THIS_MODULE,
+	},
+	.op = {
+		.do_one_request = starfive_sm4_do_one_req,
+	},
+},
+};
+
+static struct aead_engine_alg aead_sm4[] = {
+{
+	.base.setkey = starfive_sm4_aead_setkey,
+	.base.setauthsize = starfive_sm4_gcm_setauthsize,
+	.base.encrypt = starfive_sm4_gcm_encrypt,
+	.base.decrypt = starfive_sm4_gcm_decrypt,
+	.base.init = starfive_sm4_gcm_aead_init_tfm,
+	.base.exit = starfive_sm4_aead_exit_tfm,
+	.base.ivsize = GCM_AES_IV_SIZE,
+	.base.maxauthsize = SM4_BLOCK_SIZE,
+	.base.base = {
+		.cra_name = "gcm(sm4)",
+		.cra_driver_name = "starfive-gcm-sm4",
+		.cra_priority = 200,
+		.cra_flags = CRYPTO_ALG_ASYNC |
+			     CRYPTO_ALG_NEED_FALLBACK,
+		.cra_blocksize = 1,
+		.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
+		.cra_alignmask = 0xf,
+		.cra_module = THIS_MODULE,
+	},
+	.op = {
+		.do_one_request = starfive_sm4_aead_do_one_req,
+	},
+}, {
+	.base.setkey = starfive_sm4_aead_setkey,
+	.base.setauthsize = starfive_sm4_ccm_setauthsize,
+	.base.encrypt = starfive_sm4_ccm_encrypt,
+	.base.decrypt = starfive_sm4_ccm_decrypt,
+	.base.init = starfive_sm4_ccm_aead_init_tfm,
+	.base.exit = starfive_sm4_aead_exit_tfm,
+	.base.ivsize = SM4_BLOCK_SIZE,
+	.base.maxauthsize = SM4_BLOCK_SIZE,
+	.base.base = {
+		.cra_name = "ccm(sm4)",
+		.cra_driver_name = "starfive-ccm-sm4",
+		.cra_priority = 200,
+		.cra_flags = CRYPTO_ALG_ASYNC |
+			     CRYPTO_ALG_NEED_FALLBACK,
+		.cra_blocksize = 1,
+		.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
+		.cra_alignmask = 0xf,
+		.cra_module = THIS_MODULE,
+	},
+	.op = {
+		.do_one_request = starfive_sm4_aead_do_one_req,
+	},
+},
+};
+
+int starfive_sm4_register_algs(void)
+{
+	int ret;
+
+	ret = crypto_engine_register_skciphers(skcipher_sm4, ARRAY_SIZE(skcipher_sm4));
+	if (ret)
+		return ret;
+
+	ret = crypto_engine_register_aeads(aead_sm4, ARRAY_SIZE(aead_sm4));
+	if (ret)
+		crypto_engine_unregister_skciphers(skcipher_sm4, ARRAY_SIZE(skcipher_sm4));
+
+	return ret;
+}
+
+void starfive_sm4_unregister_algs(void)
+{
+	crypto_engine_unregister_aeads(aead_sm4, ARRAY_SIZE(aead_sm4));
+	crypto_engine_unregister_skciphers(skcipher_sm4, ARRAY_SIZE(skcipher_sm4));
+}