From patchwork Thu Mar 4 06:35:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Meng Yu X-Patchwork-Id: 12115551 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73896C433DB for ; Thu, 4 Mar 2021 06:40:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4799A64EFE for ; Thu, 4 Mar 2021 06:40:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235374AbhCDGj2 (ORCPT ); Thu, 4 Mar 2021 01:39:28 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:13470 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235362AbhCDGjC (ORCPT ); Thu, 4 Mar 2021 01:39:02 -0500 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.59]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4Drh3T2vMZzjW6X; Thu, 4 Mar 2021 14:36:37 +0800 (CST) Received: from localhost.localdomain (10.67.165.24) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Thu, 4 Mar 2021 14:38:13 +0800 From: Meng Yu To: , , , , , CC: , , , , Subject: [PATCH v10 1/7] crypto: hisilicon/hpre - add version adapt to new algorithms Date: Thu, 4 Mar 2021 14:35:44 +0800 Message-ID: <1614839750-29670-2-git-send-email-yumeng18@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> References: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org A new generation of the accelerator, Kunpeng930, has been released, and the corresponding driver needs to be updated to support its new algorithms. To stay compatible with Kunpeng920, add the parameter 'struct hisi_qm *qm' to sec_algs_(un)register so the driver can identify the chip's version.
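The change itself is mechanical: each register/unregister callback gains a 'struct hisi_qm *qm' argument so the registration path can branch on the hardware revision. A minimal sketch of the intended pattern (not the driver's actual code), assuming the QM_HW_V3 version check that is introduced later in this series and a hypothetical register_v3_only_algs() helper:

static int example_register_to_crypto(struct hisi_qm *qm)
{
	int ret;

	/* algorithms available on both Kunpeng920 and Kunpeng930 */
	ret = crypto_register_akcipher(&rsa);
	if (ret)
		return ret;

	/* Kunpeng930 (hardware v3) only */
	if (qm->ver >= QM_HW_V3) {
		ret = register_v3_only_algs();	/* hypothetical helper */
		if (ret)
			crypto_unregister_akcipher(&rsa);
	}

	return ret;
}
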
Signed-off-by: Meng Yu Reviewed-by: Zaibo Xu Reviewed-by: Longfang Liu --- drivers/crypto/hisilicon/hpre/hpre.h | 5 +++-- drivers/crypto/hisilicon/hpre/hpre_crypto.c | 4 ++-- drivers/crypto/hisilicon/qm.c | 4 ++-- drivers/crypto/hisilicon/qm.h | 4 ++-- drivers/crypto/hisilicon/sec2/sec.h | 4 ++-- drivers/crypto/hisilicon/sec2/sec_crypto.c | 4 ++-- drivers/crypto/hisilicon/sec2/sec_crypto.h | 4 ++-- drivers/crypto/hisilicon/zip/zip.h | 4 ++-- drivers/crypto/hisilicon/zip/zip_crypto.c | 4 ++-- 9 files changed, 19 insertions(+), 18 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h index 181c109..cc50f23 100644 --- a/drivers/crypto/hisilicon/hpre/hpre.h +++ b/drivers/crypto/hisilicon/hpre/hpre.h @@ -93,7 +93,8 @@ struct hpre_sqe { }; struct hisi_qp *hpre_create_qp(void); -int hpre_algs_register(void); -void hpre_algs_unregister(void); +int hpre_algs_register(struct hisi_qm *qm); +void hpre_algs_unregister(struct hisi_qm *qm); + #endif diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index a87f990..d89b2f5 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -1154,7 +1154,7 @@ static struct kpp_alg dh = { }; #endif -int hpre_algs_register(void) +int hpre_algs_register(struct hisi_qm *qm) { int ret; @@ -1171,7 +1171,7 @@ int hpre_algs_register(void) return ret; } -void hpre_algs_unregister(void) +void hpre_algs_unregister(struct hisi_qm *qm) { crypto_unregister_akcipher(&rsa); #ifdef CONFIG_CRYPTO_DH diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index 13cb421..bc23174 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -4084,7 +4084,7 @@ int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list) mutex_unlock(&qm_list->lock); if (flag) { - ret = qm_list->register_to_crypto(); + ret = qm_list->register_to_crypto(qm); if (ret) { mutex_lock(&qm_list->lock); list_del(&qm->list); @@ -4115,7 +4115,7 @@ void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list) mutex_unlock(&qm_list->lock); if (list_empty(&qm_list->list)) - qm_list->unregister_from_crypto(); + qm_list->unregister_from_crypto(qm); } EXPORT_SYMBOL_GPL(hisi_qm_alg_unregister); diff --git a/drivers/crypto/hisilicon/qm.h b/drivers/crypto/hisilicon/qm.h index 54967c6..f91110f 100644 --- a/drivers/crypto/hisilicon/qm.h +++ b/drivers/crypto/hisilicon/qm.h @@ -199,8 +199,8 @@ struct hisi_qm_err_ini { struct hisi_qm_list { struct mutex lock; struct list_head list; - int (*register_to_crypto)(void); - void (*unregister_from_crypto)(void); + int (*register_to_crypto)(struct hisi_qm *qm); + void (*unregister_from_crypto)(struct hisi_qm *qm); }; struct hisi_qm { diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h index 0849191..17ddb20 100644 --- a/drivers/crypto/hisilicon/sec2/sec.h +++ b/drivers/crypto/hisilicon/sec2/sec.h @@ -183,6 +183,6 @@ struct sec_dev { void sec_destroy_qps(struct hisi_qp **qps, int qp_num); struct hisi_qp **sec_create_qps(void); -int sec_register_to_crypto(void); -void sec_unregister_from_crypto(void); +int sec_register_to_crypto(struct hisi_qm *qm); +void sec_unregister_from_crypto(struct hisi_qm *qm); #endif diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c index 2eaa516..f835514 100644 --- a/drivers/crypto/hisilicon/sec2/sec_crypto.c +++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c @@ 
-1634,7 +1634,7 @@ static struct aead_alg sec_aeads[] = { AES_BLOCK_SIZE, AES_BLOCK_SIZE, SHA512_DIGEST_SIZE), }; -int sec_register_to_crypto(void) +int sec_register_to_crypto(struct hisi_qm *qm) { int ret; @@ -1651,7 +1651,7 @@ int sec_register_to_crypto(void) return ret; } -void sec_unregister_from_crypto(void) +void sec_unregister_from_crypto(struct hisi_qm *qm) { crypto_unregister_skciphers(sec_skciphers, ARRAY_SIZE(sec_skciphers)); diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.h b/drivers/crypto/hisilicon/sec2/sec_crypto.h index b2786e1..0e933e7 100644 --- a/drivers/crypto/hisilicon/sec2/sec_crypto.h +++ b/drivers/crypto/hisilicon/sec2/sec_crypto.h @@ -211,6 +211,6 @@ struct sec_sqe { struct sec_sqe_type2 type2; }; -int sec_register_to_crypto(void); -void sec_unregister_from_crypto(void); +int sec_register_to_crypto(struct hisi_qm *qm); +void sec_unregister_from_crypto(struct hisi_qm *qm); #endif diff --git a/drivers/crypto/hisilicon/zip/zip.h b/drivers/crypto/hisilicon/zip/zip.h index 92397f9..9ed7461 100644 --- a/drivers/crypto/hisilicon/zip/zip.h +++ b/drivers/crypto/hisilicon/zip/zip.h @@ -62,6 +62,6 @@ struct hisi_zip_sqe { }; int zip_create_qps(struct hisi_qp **qps, int ctx_num, int node); -int hisi_zip_register_to_crypto(void); -void hisi_zip_unregister_from_crypto(void); +int hisi_zip_register_to_crypto(struct hisi_qm *qm); +void hisi_zip_unregister_from_crypto(struct hisi_qm *qm); #endif diff --git a/drivers/crypto/hisilicon/zip/zip_crypto.c b/drivers/crypto/hisilicon/zip/zip_crypto.c index 08b4660..41f6966 100644 --- a/drivers/crypto/hisilicon/zip/zip_crypto.c +++ b/drivers/crypto/hisilicon/zip/zip_crypto.c @@ -665,7 +665,7 @@ static struct acomp_alg hisi_zip_acomp_gzip = { } }; -int hisi_zip_register_to_crypto(void) +int hisi_zip_register_to_crypto(struct hisi_qm *qm) { int ret; @@ -684,7 +684,7 @@ int hisi_zip_register_to_crypto(void) return ret; } -void hisi_zip_unregister_from_crypto(void) +void hisi_zip_unregister_from_crypto(struct hisi_qm *qm) { crypto_unregister_acomp(&hisi_zip_acomp_gzip); crypto_unregister_acomp(&hisi_zip_acomp_zlib); From patchwork Thu Mar 4 06:35:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Meng Yu X-Patchwork-Id: 12115553 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0F516C433E0 for ; Thu, 4 Mar 2021 06:40:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DE24964EFE for ; Thu, 4 Mar 2021 06:40:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235379AbhCDGja (ORCPT ); Thu, 4 Mar 2021 01:39:30 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:13055 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235356AbhCDGjD (ORCPT ); Thu, 4 Mar 2021 01:39:03 -0500 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4Drh2y2wXGzMhnw; Thu, 4 Mar 2021 14:36:10 +0800 
(CST) Received: from localhost.localdomain (10.67.165.24) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Thu, 4 Mar 2021 14:38:13 +0800 From: Meng Yu To: , , , , , CC: , , , , Subject: [PATCH v10 2/7] crypto: hisilicon/hpre - add algorithm type Date: Thu, 4 Mar 2021 14:35:45 +0800 Message-ID: <1614839750-29670-3-git-send-email-yumeng18@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> References: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Algorithm type is brought in to get hardware HPRE queue to support different algorithms. Signed-off-by: Meng Yu Reviewed-by: Zaibo Xu --- drivers/crypto/hisilicon/hpre/hpre.h | 10 +++++++++- drivers/crypto/hisilicon/hpre/hpre_crypto.c | 12 ++++++------ drivers/crypto/hisilicon/hpre/hpre_main.c | 11 +++++++++-- 3 files changed, 24 insertions(+), 9 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h index cc50f23..02193e1 100644 --- a/drivers/crypto/hisilicon/hpre/hpre.h +++ b/drivers/crypto/hisilicon/hpre/hpre.h @@ -10,6 +10,14 @@ #define HPRE_PF_DEF_Q_NUM 64 #define HPRE_PF_DEF_Q_BASE 0 +/* + * type used in qm sqc DW6. + * 0 - Algorithm which has been supported in V2, like RSA, DH and so on; + * 1 - ECC algorithm in V3. + */ +#define HPRE_V2_ALG_TYPE 0 +#define HPRE_V3_ECC_ALG_TYPE 1 + enum { HPRE_CLUSTER0, HPRE_CLUSTER1, @@ -92,7 +100,7 @@ struct hpre_sqe { __le32 rsvd1[_HPRE_SQE_ALIGN_EXT]; }; -struct hisi_qp *hpre_create_qp(void); +struct hisi_qp *hpre_create_qp(u8 type); int hpre_algs_register(struct hisi_qm *qm); void hpre_algs_unregister(struct hisi_qm *qm); diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index d89b2f5..712bea9 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -152,12 +152,12 @@ static void hpre_rm_req_from_ctx(struct hpre_asym_request *hpre_req) } } -static struct hisi_qp *hpre_get_qp_and_start(void) +static struct hisi_qp *hpre_get_qp_and_start(u8 type) { struct hisi_qp *qp; int ret; - qp = hpre_create_qp(); + qp = hpre_create_qp(type); if (!qp) { pr_err("Can not create hpre qp!\n"); return ERR_PTR(-ENODEV); @@ -422,11 +422,11 @@ static void hpre_alg_cb(struct hisi_qp *qp, void *resp) req->cb(ctx, resp); } -static int hpre_ctx_init(struct hpre_ctx *ctx) +static int hpre_ctx_init(struct hpre_ctx *ctx, u8 type) { struct hisi_qp *qp; - qp = hpre_get_qp_and_start(); + qp = hpre_get_qp_and_start(type); if (IS_ERR(qp)) return PTR_ERR(qp); @@ -674,7 +674,7 @@ static int hpre_dh_init_tfm(struct crypto_kpp *tfm) { struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); - return hpre_ctx_init(ctx); + return hpre_ctx_init(ctx, HPRE_V2_ALG_TYPE); } static void hpre_dh_exit_tfm(struct crypto_kpp *tfm) @@ -1100,7 +1100,7 @@ static int hpre_rsa_init_tfm(struct crypto_akcipher *tfm) return PTR_ERR(ctx->rsa.soft_tfm); } - ret = hpre_ctx_init(ctx); + ret = hpre_ctx_init(ctx, HPRE_V2_ALG_TYPE); if (ret) crypto_free_akcipher(ctx->rsa.soft_tfm); diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c index e7a2c70..76f0a87 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_main.c +++ b/drivers/crypto/hisilicon/hpre/hpre_main.c @@ -226,13 +226,20 @@ static u32 vfs_num; module_param_cb(vfs_num, 
&vfs_num_ops, &vfs_num, 0444); MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)"); -struct hisi_qp *hpre_create_qp(void) +struct hisi_qp *hpre_create_qp(u8 type) { int node = cpu_to_node(smp_processor_id()); struct hisi_qp *qp = NULL; int ret; - ret = hisi_qm_alloc_qps_node(&hpre_devices, 1, 0, node, &qp); + if (type != HPRE_V2_ALG_TYPE && type != HPRE_V3_ECC_ALG_TYPE) + return NULL; + + /* + * type: 0 - RSA/DH. algorithm supported in V2, + * 1 - ECC algorithm in V3. + */ + ret = hisi_qm_alloc_qps_node(&hpre_devices, 1, type, node, &qp); if (!ret) return qp; From patchwork Thu Mar 4 06:35:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Meng Yu X-Patchwork-Id: 12115557 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28A74C433E6 for ; Thu, 4 Mar 2021 06:40:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EDB8364EFD for ; Thu, 4 Mar 2021 06:40:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235376AbhCDGj3 (ORCPT ); Thu, 4 Mar 2021 01:39:29 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:13056 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235355AbhCDGjC (ORCPT ); Thu, 4 Mar 2021 01:39:02 -0500 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4Drh2y2brKzMhp4; Thu, 4 Mar 2021 14:36:10 +0800 (CST) Received: from localhost.localdomain (10.67.165.24) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Thu, 4 Mar 2021 14:38:13 +0800 From: Meng Yu To: , , , , , CC: , , , , Subject: [PATCH v10 3/7] crypto: move curve_id of ECDH from the key to algorithm name Date: Thu, 4 Mar 2021 14:35:46 +0800 Message-ID: <1614839750-29670-4-git-send-email-yumeng18@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> References: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org 1. crypto and crypto/atmel-ecc: Move curve id of ECDH from the key into the algorithm name instead in crypto and atmel-ecc, so ECDH algorithm name change form 'ecdh' to 'ecdh-nist-pxxx', and we cannot use 'curve_id' in 'struct ecdh'; 2. crypto/testmgr and net/bluetooth: Modify 'testmgr.c', 'testmgr.h' and 'net/bluetooth' to adapt the modification. 
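For callers, the practical effect is that the curve is now chosen when the tfm is allocated rather than through 'curve_id' at set_secret() time, which is exactly how the net/bluetooth users are converted below. A rough caller-side sketch, assuming a hypothetical alloc_p256_ecdh() helper and trimming error handling to the essentials:

#include <crypto/kpp.h>
#include <crypto/ecdh.h>
#include <linux/err.h>
#include <linux/slab.h>

static struct crypto_kpp *alloc_p256_ecdh(const u8 *key, unsigned int key_size)
{
	/* 'curve_id' no longer exists in struct ecdh; only the key is left */
	struct ecdh p = { .key = (char *)key, .key_size = key_size };
	struct crypto_kpp *tfm;
	unsigned int buf_len;
	char *buf;
	int err = -ENOMEM;

	/* the curve is part of the algorithm name now; this used to be "ecdh" */
	tfm = crypto_alloc_kpp("ecdh-nist-p256", 0, 0);
	if (IS_ERR(tfm))
		return tfm;

	buf_len = crypto_ecdh_key_len(&p);
	buf = kmalloc(buf_len, GFP_KERNEL);
	if (!buf)
		goto out_free_tfm;

	err = crypto_ecdh_encode_key(buf, buf_len, &p);
	if (!err)
		err = crypto_kpp_set_secret(tfm, buf, buf_len);
	kfree(buf);
	if (err)
		goto out_free_tfm;

	return tfm;

out_free_tfm:
	crypto_free_kpp(tfm);
	return ERR_PTR(err);
}
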
Signed-off-by: Meng Yu Reviewed-by: Zaibo Xu Reported-by: kernel test robot --- crypto/ecdh.c | 72 +++++++++++++++++++++++++++++++-------------- crypto/ecdh_helper.c | 4 +-- crypto/testmgr.c | 13 ++++++-- crypto/testmgr.h | 34 ++++++++++----------- drivers/crypto/atmel-ecc.c | 28 +++++------------- include/crypto/ecdh.h | 2 -- net/bluetooth/ecdh_helper.c | 2 -- net/bluetooth/selftest.c | 2 +- net/bluetooth/smp.c | 6 ++-- 9 files changed, 89 insertions(+), 74 deletions(-) diff --git a/crypto/ecdh.c b/crypto/ecdh.c index 96f80c8..04a427b 100644 --- a/crypto/ecdh.c +++ b/crypto/ecdh.c @@ -23,33 +23,16 @@ static inline struct ecdh_ctx *ecdh_get_ctx(struct crypto_kpp *tfm) return kpp_tfm_ctx(tfm); } -static unsigned int ecdh_supported_curve(unsigned int curve_id) -{ - switch (curve_id) { - case ECC_CURVE_NIST_P192: return ECC_CURVE_NIST_P192_DIGITS; - case ECC_CURVE_NIST_P256: return ECC_CURVE_NIST_P256_DIGITS; - default: return 0; - } -} - static int ecdh_set_secret(struct crypto_kpp *tfm, const void *buf, unsigned int len) { struct ecdh_ctx *ctx = ecdh_get_ctx(tfm); struct ecdh params; - unsigned int ndigits; if (crypto_ecdh_decode_key(buf, len, ¶ms) < 0 || - params.key_size > sizeof(ctx->private_key)) + params.key_size > sizeof(u64) * ctx->ndigits) return -EINVAL; - ndigits = ecdh_supported_curve(params.curve_id); - if (!ndigits) - return -EINVAL; - - ctx->curve_id = params.curve_id; - ctx->ndigits = ndigits; - if (!params.key || !params.key_size) return ecc_gen_privkey(ctx->curve_id, ctx->ndigits, ctx->private_key); @@ -140,13 +123,24 @@ static unsigned int ecdh_max_size(struct crypto_kpp *tfm) return ctx->ndigits << (ECC_DIGITS_TO_BYTES_SHIFT + 1); } -static struct kpp_alg ecdh = { +static int ecdh_nist_p192_init_tfm(struct crypto_kpp *tfm) +{ + struct ecdh_ctx *ctx = ecdh_get_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P192; + ctx->ndigits = ECC_CURVE_NIST_P192_DIGITS; + + return 0; +} + +static struct kpp_alg ecdh_nist_p192 = { .set_secret = ecdh_set_secret, .generate_public_key = ecdh_compute_value, .compute_shared_secret = ecdh_compute_value, .max_size = ecdh_max_size, + .init = ecdh_nist_p192_init_tfm, .base = { - .cra_name = "ecdh", + .cra_name = "ecdh-nist-p192", .cra_driver_name = "ecdh-generic", .cra_priority = 100, .cra_module = THIS_MODULE, @@ -154,14 +148,48 @@ static struct kpp_alg ecdh = { }, }; +static int ecdh_nist_p256_init_tfm(struct crypto_kpp *tfm) +{ + struct ecdh_ctx *ctx = ecdh_get_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P256; + ctx->ndigits = ECC_CURVE_NIST_P256_DIGITS; + + return 0; +} + +static struct kpp_alg ecdh_nist_p256 = { + .set_secret = ecdh_set_secret, + .generate_public_key = ecdh_compute_value, + .compute_shared_secret = ecdh_compute_value, + .max_size = ecdh_max_size, + .init = ecdh_nist_p256_init_tfm, + .base = { + .cra_name = "ecdh-nist-p256", + .cra_driver_name = "ecdh-generic", + .cra_priority = 100, + .cra_module = THIS_MODULE, + .cra_ctxsize = sizeof(struct ecdh_ctx), + }, +}; + +static bool ecdh_nist_p192_registered; + static int ecdh_init(void) { - return crypto_register_kpp(&ecdh); + int ret; + + ret = crypto_register_kpp(&ecdh_nist_p192); + ecdh_nist_p192_registered = ret == 0; + + return crypto_register_kpp(&ecdh_nist_p256); } static void ecdh_exit(void) { - crypto_unregister_kpp(&ecdh); + if (ecdh_nist_p192_registered) + crypto_unregister_kpp(&ecdh_nist_p192); + crypto_unregister_kpp(&ecdh_nist_p256); } subsys_initcall(ecdh_init); diff --git a/crypto/ecdh_helper.c b/crypto/ecdh_helper.c index fca63b5..f18f902 100644 --- 
a/crypto/ecdh_helper.c +++ b/crypto/ecdh_helper.c @@ -10,7 +10,7 @@ #include #include -#define ECDH_KPP_SECRET_MIN_SIZE (sizeof(struct kpp_secret) + 2 * sizeof(short)) +#define ECDH_KPP_SECRET_MIN_SIZE (sizeof(struct kpp_secret) + sizeof(short)) static inline u8 *ecdh_pack_data(void *dst, const void *src, size_t sz) { @@ -46,7 +46,6 @@ int crypto_ecdh_encode_key(char *buf, unsigned int len, return -EINVAL; ptr = ecdh_pack_data(ptr, &secret, sizeof(secret)); - ptr = ecdh_pack_data(ptr, ¶ms->curve_id, sizeof(params->curve_id)); ptr = ecdh_pack_data(ptr, ¶ms->key_size, sizeof(params->key_size)); ecdh_pack_data(ptr, params->key, params->key_size); @@ -70,7 +69,6 @@ int crypto_ecdh_decode_key(const char *buf, unsigned int len, if (unlikely(len < secret.len)) return -EINVAL; - ptr = ecdh_unpack_data(¶ms->curve_id, ptr, sizeof(params->curve_id)); ptr = ecdh_unpack_data(¶ms->key_size, ptr, sizeof(params->key_size)); if (secret.len != crypto_ecdh_key_len(params)) return -EINVAL; diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 9335999..c290ba0 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -4904,11 +4904,20 @@ static const struct alg_test_desc alg_test_descs[] = { } }, { #endif - .alg = "ecdh", +#ifndef CONFIG_CRYPTO_FIPS + .alg = "ecdh-nist-p192", .test = alg_test_kpp, .fips_allowed = 1, .suite = { - .kpp = __VECS(ecdh_tv_template) + .kpp = __VECS(ecdh_p192_tv_template) + } + }, { +#endif + .alg = "ecdh-nist-p256", + .test = alg_test_kpp, + .fips_allowed = 1, + .suite = { + .kpp = __VECS(ecdh_p256_tv_template) } }, { .alg = "ecrdsa", diff --git a/crypto/testmgr.h b/crypto/testmgr.h index ced56ea..6426603 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -2261,19 +2261,17 @@ static const struct kpp_testvec curve25519_tv_template[] = { } }; -static const struct kpp_testvec ecdh_tv_template[] = { - { #ifndef CONFIG_CRYPTO_FIPS +static const struct kpp_testvec ecdh_p192_tv_template[] = { + { .secret = #ifdef __LITTLE_ENDIAN "\x02\x00" /* type */ - "\x20\x00" /* len */ - "\x01\x00" /* curve_id */ + "\x1e\x00" /* len */ "\x18\x00" /* key_size */ #else "\x00\x02" /* type */ - "\x00\x20" /* len */ - "\x00\x01" /* curve_id */ + "\x00\x1e" /* len */ "\x00\x18" /* key_size */ #endif "\xb5\x05\xb1\x71\x1e\xbf\x8c\xda" @@ -2301,18 +2299,20 @@ static const struct kpp_testvec ecdh_tv_template[] = { .b_public_size = 48, .expected_a_public_size = 48, .expected_ss_size = 24 - }, { + } +}; #endif + +static const struct kpp_testvec ecdh_p256_tv_template[] = { + { .secret = #ifdef __LITTLE_ENDIAN "\x02\x00" /* type */ - "\x28\x00" /* len */ - "\x02\x00" /* curve_id */ + "\x26\x00" /* len */ "\x20\x00" /* key_size */ #else "\x00\x02" /* type */ - "\x00\x28" /* len */ - "\x00\x02" /* curve_id */ + "\x00\x26" /* len */ "\x00\x20" /* key_size */ #endif "\x24\xd1\x21\xeb\xe5\xcf\x2d\x83" @@ -2350,25 +2350,21 @@ static const struct kpp_testvec ecdh_tv_template[] = { .secret = #ifdef __LITTLE_ENDIAN "\x02\x00" /* type */ - "\x08\x00" /* len */ - "\x02\x00" /* curve_id */ + "\x06\x00" /* len */ "\x00\x00", /* key_size */ #else "\x00\x02" /* type */ - "\x00\x08" /* len */ - "\x00\x02" /* curve_id */ + "\x00\x06" /* len */ "\x00\x00", /* key_size */ #endif .b_secret = #ifdef __LITTLE_ENDIAN "\x02\x00" /* type */ - "\x28\x00" /* len */ - "\x02\x00" /* curve_id */ + "\x26\x00" /* len */ "\x20\x00" /* key_size */ #else "\x00\x02" /* type */ - "\x00\x28" /* len */ - "\x00\x02" /* curve_id */ + "\x00\x26" /* len */ "\x00\x20" /* key_size */ #endif "\x24\xd1\x21\xeb\xe5\xcf\x2d\x83" diff --git 
a/drivers/crypto/atmel-ecc.c b/drivers/crypto/atmel-ecc.c index 9bd8e51..515946c 100644 --- a/drivers/crypto/atmel-ecc.c +++ b/drivers/crypto/atmel-ecc.c @@ -34,7 +34,6 @@ static struct atmel_ecc_driver_data driver_data; * of the user to not call set_secret() while * generate_public_key() or compute_shared_secret() are in flight. * @curve_id : elliptic curve id - * @n_sz : size in bytes of the n prime * @do_fallback: true when the device doesn't support the curve or when the user * wants to use its own private key. */ @@ -43,7 +42,6 @@ struct atmel_ecdh_ctx { struct crypto_kpp *fallback; const u8 *public_key; unsigned int curve_id; - size_t n_sz; bool do_fallback; }; @@ -51,7 +49,6 @@ static void atmel_ecdh_done(struct atmel_i2c_work_data *work_data, void *areq, int status) { struct kpp_request *req = areq; - struct atmel_ecdh_ctx *ctx = work_data->ctx; struct atmel_i2c_cmd *cmd = &work_data->cmd; size_t copied, n_sz; @@ -59,7 +56,7 @@ static void atmel_ecdh_done(struct atmel_i2c_work_data *work_data, void *areq, goto free_work_data; /* might want less than we've got */ - n_sz = min_t(size_t, ctx->n_sz, req->dst_len); + n_sz = min_t(size_t, ATMEL_ECC_NIST_P256_N_SIZE, req->dst_len); /* copy the shared secret */ copied = sg_copy_from_buffer(req->dst, sg_nents_for_len(req->dst, n_sz), @@ -73,14 +70,6 @@ static void atmel_ecdh_done(struct atmel_i2c_work_data *work_data, void *areq, kpp_request_complete(req, status); } -static unsigned int atmel_ecdh_supported_curve(unsigned int curve_id) -{ - if (curve_id == ECC_CURVE_NIST_P256) - return ATMEL_ECC_NIST_P256_N_SIZE; - - return 0; -} - /* * A random private key is generated and stored in the device. The device * returns the pair public key. @@ -104,8 +93,7 @@ static int atmel_ecdh_set_secret(struct crypto_kpp *tfm, const void *buf, return -EINVAL; } - ctx->n_sz = atmel_ecdh_supported_curve(params.curve_id); - if (!ctx->n_sz || params.key_size) { + if (params.key_size) { /* fallback to ecdh software implementation */ ctx->do_fallback = true; return crypto_kpp_set_secret(ctx->fallback, buf, len); @@ -125,7 +113,6 @@ static int atmel_ecdh_set_secret(struct crypto_kpp *tfm, const void *buf, goto free_cmd; ctx->do_fallback = false; - ctx->curve_id = params.curve_id; atmel_i2c_init_genkey_cmd(cmd, DATA_SLOT_2); @@ -263,6 +250,7 @@ static int atmel_ecdh_init_tfm(struct crypto_kpp *tfm) struct crypto_kpp *fallback; struct atmel_ecdh_ctx *ctx = kpp_tfm_ctx(tfm); + ctx->curve_id = ECC_CURVE_NIST_P256; ctx->client = atmel_ecc_i2c_client_alloc(); if (IS_ERR(ctx->client)) { pr_err("tfm - i2c_client binding failed\n"); @@ -306,7 +294,7 @@ static unsigned int atmel_ecdh_max_size(struct crypto_kpp *tfm) return ATMEL_ECC_PUBKEY_SIZE; } -static struct kpp_alg atmel_ecdh = { +static struct kpp_alg atmel_ecdh_nist_p256 = { .set_secret = atmel_ecdh_set_secret, .generate_public_key = atmel_ecdh_generate_public_key, .compute_shared_secret = atmel_ecdh_compute_shared_secret, @@ -315,7 +303,7 @@ static struct kpp_alg atmel_ecdh = { .max_size = atmel_ecdh_max_size, .base = { .cra_flags = CRYPTO_ALG_NEED_FALLBACK, - .cra_name = "ecdh", + .cra_name = "ecdh-nist-p256", .cra_driver_name = "atmel-ecdh", .cra_priority = ATMEL_ECC_PRIORITY, .cra_module = THIS_MODULE, @@ -340,14 +328,14 @@ static int atmel_ecc_probe(struct i2c_client *client, &driver_data.i2c_client_list); spin_unlock(&driver_data.i2c_list_lock); - ret = crypto_register_kpp(&atmel_ecdh); + ret = crypto_register_kpp(&atmel_ecdh_nist_p256); if (ret) { spin_lock(&driver_data.i2c_list_lock); 
list_del(&i2c_priv->i2c_client_list_node); spin_unlock(&driver_data.i2c_list_lock); dev_err(&client->dev, "%s alg registration failed\n", - atmel_ecdh.base.cra_driver_name); + atmel_ecdh_nist_p256.base.cra_driver_name); } else { dev_info(&client->dev, "atmel ecc algorithms registered in /proc/crypto\n"); } @@ -365,7 +353,7 @@ static int atmel_ecc_remove(struct i2c_client *client) return -EBUSY; } - crypto_unregister_kpp(&atmel_ecdh); + crypto_unregister_kpp(&atmel_ecdh_nist_p256); spin_lock(&driver_data.i2c_list_lock); list_del(&i2c_priv->i2c_client_list_node); diff --git a/include/crypto/ecdh.h b/include/crypto/ecdh.h index a5b805b..deaaa48 100644 --- a/include/crypto/ecdh.h +++ b/include/crypto/ecdh.h @@ -29,12 +29,10 @@ /** * struct ecdh - define an ECDH private key * - * @curve_id: ECC curve the key is based on. * @key: Private ECDH key * @key_size: Size of the private ECDH key */ struct ecdh { - unsigned short curve_id; char *key; unsigned short key_size; }; diff --git a/net/bluetooth/ecdh_helper.c b/net/bluetooth/ecdh_helper.c index 3226fe0..989401f 100644 --- a/net/bluetooth/ecdh_helper.c +++ b/net/bluetooth/ecdh_helper.c @@ -126,8 +126,6 @@ int set_ecdh_privkey(struct crypto_kpp *tfm, const u8 private_key[32]) int err; struct ecdh p = {0}; - p.curve_id = ECC_CURVE_NIST_P256; - if (private_key) { tmp = kmalloc(32, GFP_KERNEL); if (!tmp) diff --git a/net/bluetooth/selftest.c b/net/bluetooth/selftest.c index f71c6fa..f49604d 100644 --- a/net/bluetooth/selftest.c +++ b/net/bluetooth/selftest.c @@ -205,7 +205,7 @@ static int __init test_ecdh(void) calltime = ktime_get(); - tfm = crypto_alloc_kpp("ecdh", 0, 0); + tfm = crypto_alloc_kpp("ecdh-nist-p256", 0, 0); if (IS_ERR(tfm)) { BT_ERR("Unable to create ECDH crypto context"); err = PTR_ERR(tfm); diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c index c659c46..5de73de 100644 --- a/net/bluetooth/smp.c +++ b/net/bluetooth/smp.c @@ -1387,7 +1387,7 @@ static struct smp_chan *smp_chan_create(struct l2cap_conn *conn) goto zfree_smp; } - smp->tfm_ecdh = crypto_alloc_kpp("ecdh", 0, 0); + smp->tfm_ecdh = crypto_alloc_kpp("ecdh-nist-p256", 0, 0); if (IS_ERR(smp->tfm_ecdh)) { BT_ERR("Unable to create ECDH crypto context"); goto free_shash; @@ -3282,7 +3282,7 @@ static struct l2cap_chan *smp_add_cid(struct hci_dev *hdev, u16 cid) return ERR_CAST(tfm_cmac); } - tfm_ecdh = crypto_alloc_kpp("ecdh", 0, 0); + tfm_ecdh = crypto_alloc_kpp("ecdh-nist-p256", 0, 0); if (IS_ERR(tfm_ecdh)) { BT_ERR("Unable to create ECDH crypto context"); crypto_free_shash(tfm_cmac); @@ -3807,7 +3807,7 @@ int __init bt_selftest_smp(void) return PTR_ERR(tfm_cmac); } - tfm_ecdh = crypto_alloc_kpp("ecdh", 0, 0); + tfm_ecdh = crypto_alloc_kpp("ecdh-nist-p256", 0, 0); if (IS_ERR(tfm_ecdh)) { BT_ERR("Unable to create ECDH crypto context"); crypto_free_shash(tfm_cmac); From patchwork Thu Mar 4 06:35:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Meng Yu X-Patchwork-Id: 12115565 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with 
ESMTP id 716E8C433E0 for ; Thu, 4 Mar 2021 06:40:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3CB8364F04 for ; Thu, 4 Mar 2021 06:40:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235469AbhCDGkA (ORCPT ); Thu, 4 Mar 2021 01:40:00 -0500 Received: from szxga04-in.huawei.com ([45.249.212.190]:12684 "EHLO szxga04-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235373AbhCDGje (ORCPT ); Thu, 4 Mar 2021 01:39:34 -0500 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4Drh334x2nzlSXZ; Thu, 4 Mar 2021 14:36:15 +0800 (CST) Received: from localhost.localdomain (10.67.165.24) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Thu, 4 Mar 2021 14:38:13 +0800 From: Meng Yu To: , , , , , CC: , , , , Subject: [PATCH v10 4/7] crypto: and expose ecc curves Date: Thu, 4 Mar 2021 14:35:47 +0800 Message-ID: <1614839750-29670-5-git-send-email-yumeng18@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> References: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Move 'ecc_get_curve' to 'include/crypto/ecc_curve.h', so everyone in kernel tree can easily get ecc curve params; Signed-off-by: Meng Yu Reviewed-by: Zaibo Xu --- crypto/ecc.c | 5 ++++- crypto/ecc.h | 37 ++------------------------------ include/crypto/ecc_curve.h | 53 ++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 59 insertions(+), 36 deletions(-) create mode 100644 include/crypto/ecc_curve.h diff --git a/crypto/ecc.c b/crypto/ecc.c index c80aa25..4b55ad0 100644 --- a/crypto/ecc.c +++ b/crypto/ecc.c @@ -24,6 +24,7 @@ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ +#include #include #include #include @@ -42,7 +43,8 @@ typedef struct { u64 m_high; } uint128_t; -static inline const struct ecc_curve *ecc_get_curve(unsigned int curve_id) + +const struct ecc_curve *ecc_get_curve(unsigned int curve_id) { switch (curve_id) { /* In FIPS mode only allow P256 and higher */ @@ -54,6 +56,7 @@ static inline const struct ecc_curve *ecc_get_curve(unsigned int curve_id) return NULL; } } +EXPORT_SYMBOL(ecc_get_curve); static u64 *ecc_alloc_digits_space(unsigned int ndigits) { diff --git a/crypto/ecc.h b/crypto/ecc.h index d4e546b..38a81d4 100644 --- a/crypto/ecc.h +++ b/crypto/ecc.h @@ -26,6 +26,8 @@ #ifndef _CRYPTO_ECC_H #define _CRYPTO_ECC_H +#include + /* One digit is u64 qword. */ #define ECC_CURVE_NIST_P192_DIGITS 3 #define ECC_CURVE_NIST_P256_DIGITS 4 @@ -33,44 +35,9 @@ #define ECC_DIGITS_TO_BYTES_SHIFT 3 -/** - * struct ecc_point - elliptic curve point in affine coordinates - * - * @x: X coordinate in vli form. - * @y: Y coordinate in vli form. - * @ndigits: Length of vlis in u64 qwords. - */ -struct ecc_point { - u64 *x; - u64 *y; - u8 ndigits; -}; - #define ECC_POINT_INIT(x, y, ndigits) (struct ecc_point) { x, y, ndigits } /** - * struct ecc_curve - definition of elliptic curve - * - * @name: Short name of the curve. - * @g: Generator point of the curve. - * @p: Prime number, if Barrett's reduction is used for this curve - * pre-calculated value 'mu' is appended to the @p after ndigits. 
- * Use of Barrett's reduction is heuristically determined in - * vli_mmod_fast(). - * @n: Order of the curve group. - * @a: Curve parameter a. - * @b: Curve parameter b. - */ -struct ecc_curve { - char *name; - struct ecc_point g; - u64 *p; - u64 *n; - u64 *a; - u64 *b; -}; - -/** * ecc_is_key_valid() - Validate a given ECDH private key * * @curve_id: id representing the curve to use diff --git a/include/crypto/ecc_curve.h b/include/crypto/ecc_curve.h new file mode 100644 index 0000000..19a35da --- /dev/null +++ b/include/crypto/ecc_curve.h @@ -0,0 +1,53 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2021 HiSilicon */ + +#ifndef _CRYTO_ECC_CURVE_H +#define _CRYTO_ECC_CURVE_H + +#include + +/** + * struct ecc_point - elliptic curve point in affine coordinates + * + * @x: X coordinate in vli form. + * @y: Y coordinate in vli form. + * @ndigits: Length of vlis in u64 qwords. + */ +struct ecc_point { + u64 *x; + u64 *y; + u8 ndigits; +}; + +/** + * struct ecc_curve - definition of elliptic curve + * + * @name: Short name of the curve. + * @g: Generator point of the curve. + * @p: Prime number, if Barrett's reduction is used for this curve + * pre-calculated value 'mu' is appended to the @p after ndigits. + * Use of Barrett's reduction is heuristically determined in + * vli_mmod_fast(). + * @n: Order of the curve group. + * @a: Curve parameter a. + * @b: Curve parameter b. + */ +struct ecc_curve { + char *name; + struct ecc_point g; + u64 *p; + u64 *n; + u64 *a; + u64 *b; +}; + +/** + * ecc_get_curve() - get elliptic curve; + * @curve_id: Curves IDs: + * defined in 'include/crypto/ecdh.h'; + * + * Returns curve if get curve succssful, NULL otherwise + */ +const struct ecc_curve *ecc_get_curve(unsigned int curve_id); + +#endif From patchwork Thu Mar 4 06:35:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Meng Yu X-Patchwork-Id: 12115559 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D1BCBC4332E for ; Thu, 4 Mar 2021 06:40:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AB25B64F18 for ; Thu, 4 Mar 2021 06:40:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235386AbhCDGja (ORCPT ); Thu, 4 Mar 2021 01:39:30 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:13471 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235364AbhCDGjG (ORCPT ); Thu, 4 Mar 2021 01:39:06 -0500 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4Drh3Z4FTKzjW5P; Thu, 4 Mar 2021 14:36:42 +0800 (CST) Received: from localhost.localdomain (10.67.165.24) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Thu, 4 Mar 2021 14:38:14 +0800 From: Meng Yu To: , , , , , CC: , , , , Subject: [PATCH v10 5/7] crypto: hisilicon/hpre - add 'ECDH' algorithm Date: Thu, 4 Mar 2021 14:35:48 +0800 Message-ID: 
<1614839750-29670-6-git-send-email-yumeng18@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> References: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org 1. Enable 'ECDH' algorithm in Kunpeng 930; 2. HPRE ECDH Support: ecdh-nist-p192, ecdh-nist-p256. Signed-off-by: Meng Yu Reviewed-by: Zaibo Xu --- drivers/crypto/hisilicon/hpre/hpre.h | 2 +- drivers/crypto/hisilicon/hpre/hpre_crypto.c | 515 +++++++++++++++++++++++++++- drivers/crypto/hisilicon/hpre/hpre_main.c | 1 + 3 files changed, 513 insertions(+), 5 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h index 02193e1..50e6b2e 100644 --- a/drivers/crypto/hisilicon/hpre/hpre.h +++ b/drivers/crypto/hisilicon/hpre/hpre.h @@ -83,6 +83,7 @@ enum hpre_alg_type { HPRE_ALG_KG_CRT = 0x3, HPRE_ALG_DH_G2 = 0x4, HPRE_ALG_DH = 0x5, + HPRE_ALG_ECC_MUL = 0xD, }; struct hpre_sqe { @@ -104,5 +105,4 @@ struct hisi_qp *hpre_create_qp(u8 type); int hpre_algs_register(struct hisi_qm *qm); void hpre_algs_unregister(struct hisi_qm *qm); - #endif diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index 712bea9..a6010b1 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -2,6 +2,8 @@ /* Copyright (c) 2019 HiSilicon Limited. */ #include #include +#include +#include #include #include #include @@ -36,6 +38,13 @@ struct hpre_ctx; #define HPRE_DFX_SEC_TO_US 1000000 #define HPRE_DFX_US_TO_NS 1000 +/* size in bytes of the n prime */ +#define HPRE_ECC_NIST_P192_N_SIZE 24 +#define HPRE_ECC_NIST_P256_N_SIZE 32 + +/* size in bytes */ +#define HPRE_ECC_HW256_KSZ_B 32 + typedef void (*hpre_cb)(struct hpre_ctx *ctx, void *sqe); struct hpre_rsa_ctx { @@ -61,14 +70,25 @@ struct hpre_dh_ctx { * else if base if the counterpart public key we * compute the shared secret * ZZ = yb^xa mod p; [RFC2631 sec 2.1.1] + * low address: d--->n, please refer to Hisilicon HPRE UM */ - char *xa_p; /* low address: d--->n, please refer to Hisilicon HPRE UM */ + char *xa_p; dma_addr_t dma_xa_p; char *g; /* m */ dma_addr_t dma_g; }; +struct hpre_ecdh_ctx { + /* low address: p->a->k->b */ + unsigned char *p; + dma_addr_t dma_p; + + /* low address: x->y */ + unsigned char *g; + dma_addr_t dma_g; +}; + struct hpre_ctx { struct hisi_qp *qp; struct hpre_asym_request **req_list; @@ -80,7 +100,10 @@ struct hpre_ctx { union { struct hpre_rsa_ctx rsa; struct hpre_dh_ctx dh; + struct hpre_ecdh_ctx ecdh; }; + /* for ecc algorithms */ + unsigned int curve_id; }; struct hpre_asym_request { @@ -91,6 +114,7 @@ struct hpre_asym_request { union { struct akcipher_request *rsa; struct kpp_request *dh; + struct kpp_request *ecdh; } areq; int err; int req_id; @@ -1115,6 +1139,416 @@ static void hpre_rsa_exit_tfm(struct crypto_akcipher *tfm) crypto_free_akcipher(ctx->rsa.soft_tfm); } +static void hpre_key_to_big_end(u8 *data, int len) +{ + int i, j; + u8 tmp; + + for (i = 0; i < len / 2; i++) { + j = len - i - 1; + tmp = data[j]; + data[j] = data[i]; + data[i] = tmp; + } +} + +static void hpre_ecc_clear_ctx(struct hpre_ctx *ctx, bool is_clear_all, + bool is_ecdh) +{ + struct device *dev = HPRE_DEV(ctx); + unsigned int sz = ctx->key_sz; + unsigned int shift = sz << 1; + + if (is_clear_all) + hisi_qm_stop_qp(ctx->qp); + + if (is_ecdh && ctx->ecdh.p) { + 
/* ecdh: p->a->k->b */ + memzero_explicit(ctx->ecdh.p + shift, sz); + dma_free_coherent(dev, sz << 3, ctx->ecdh.p, ctx->ecdh.dma_p); + ctx->ecdh.p = NULL; + } + + hpre_ctx_clear(ctx, is_clear_all); +} + +static unsigned int hpre_ecdh_supported_curve(unsigned short id) +{ + switch (id) { + case ECC_CURVE_NIST_P192: + case ECC_CURVE_NIST_P256: + return HPRE_ECC_HW256_KSZ_B; + default: + break; + } + + return 0; +} + +static void fill_curve_param(void *addr, u64 *param, unsigned int cur_sz, u8 ndigits) +{ + unsigned int sz = cur_sz - (ndigits - 1) * sizeof(u64); + u8 i = 0; + + while (i < ndigits - 1) { + memcpy(addr + sizeof(u64) * i, ¶m[i], sizeof(u64)); + i++; + } + + memcpy(addr + sizeof(u64) * i, ¶m[ndigits - 1], sz); + hpre_key_to_big_end((u8 *)addr, cur_sz); +} + +static int hpre_ecdh_fill_curve(struct hpre_ctx *ctx, struct ecdh *params, + unsigned int cur_sz) +{ + unsigned int shifta = ctx->key_sz << 1; + unsigned int shiftb = ctx->key_sz << 2; + void *p = ctx->ecdh.p + ctx->key_sz - cur_sz; + void *a = ctx->ecdh.p + shifta - cur_sz; + void *b = ctx->ecdh.p + shiftb - cur_sz; + void *x = ctx->ecdh.g + ctx->key_sz - cur_sz; + void *y = ctx->ecdh.g + shifta - cur_sz; + const struct ecc_curve *curve = ecc_get_curve(ctx->curve_id); + char *n; + + if (unlikely(!curve)) + return -EINVAL; + + n = kzalloc(ctx->key_sz, GFP_KERNEL); + if (!n) + return -ENOMEM; + + fill_curve_param(p, curve->p, cur_sz, curve->g.ndigits); + fill_curve_param(a, curve->a, cur_sz, curve->g.ndigits); + fill_curve_param(b, curve->b, cur_sz, curve->g.ndigits); + fill_curve_param(x, curve->g.x, cur_sz, curve->g.ndigits); + fill_curve_param(y, curve->g.y, cur_sz, curve->g.ndigits); + fill_curve_param(n, curve->n, cur_sz, curve->g.ndigits); + + if (params->key_size == cur_sz && memcmp(params->key, n, cur_sz) >= 0) { + kfree(n); + return -EINVAL; + } + + kfree(n); + return 0; +} + +static unsigned int hpre_ecdh_get_curvesz(unsigned short id) +{ + switch (id) { + case ECC_CURVE_NIST_P192: + return HPRE_ECC_NIST_P192_N_SIZE; + case ECC_CURVE_NIST_P256: + return HPRE_ECC_NIST_P256_N_SIZE; + default: + break; + } + + return 0; +} + +static int hpre_ecdh_set_param(struct hpre_ctx *ctx, struct ecdh *params) +{ + struct device *dev = HPRE_DEV(ctx); + unsigned int sz, shift, curve_sz; + int ret; + + ctx->key_sz = hpre_ecdh_supported_curve(ctx->curve_id); + if (!ctx->key_sz) + return -EINVAL; + + curve_sz = hpre_ecdh_get_curvesz(ctx->curve_id); + if (!curve_sz || params->key_size > curve_sz) + return -EINVAL; + + sz = ctx->key_sz; + + if (!ctx->ecdh.p) { + ctx->ecdh.p = dma_alloc_coherent(dev, sz << 3, &ctx->ecdh.dma_p, + GFP_KERNEL); + if (!ctx->ecdh.p) + return -ENOMEM; + } + + shift = sz << 2; + ctx->ecdh.g = ctx->ecdh.p + shift; + ctx->ecdh.dma_g = ctx->ecdh.dma_p + shift; + + ret = hpre_ecdh_fill_curve(ctx, params, curve_sz); + if (ret) { + dev_err(dev, "failed to fill curve_param, ret = %d!\n", ret); + dma_free_coherent(dev, sz << 3, ctx->ecdh.p, ctx->ecdh.dma_p); + ctx->ecdh.p = NULL; + return ret; + } + + return 0; +} + +static bool hpre_key_is_zero(char *key, unsigned short key_sz) +{ + int i; + + for (i = 0; i < key_sz; i++) + if (key[i]) + return false; + + return true; +} + +static int hpre_ecdh_set_secret(struct crypto_kpp *tfm, const void *buf, + unsigned int len) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + struct device *dev = HPRE_DEV(ctx); + unsigned int sz, sz_shift; + struct ecdh params; + int ret; + + if (crypto_ecdh_decode_key(buf, len, ¶ms) < 0) { + dev_err(dev, "failed to decode ecdh key!\n"); + return 
-EINVAL; + } + + if (hpre_key_is_zero(params.key, params.key_size)) { + dev_err(dev, "Invalid hpre key!\n"); + return -EINVAL; + } + + hpre_ecc_clear_ctx(ctx, false, true); + + ret = hpre_ecdh_set_param(ctx, ¶ms); + if (ret < 0) { + dev_err(dev, "failed to set hpre param, ret = %d!\n", ret); + return ret; + } + + sz = ctx->key_sz; + sz_shift = (sz << 1) + sz - params.key_size; + memcpy(ctx->ecdh.p + sz_shift, params.key, params.key_size); + + return 0; +} + +static void hpre_ecdh_hw_data_clr_all(struct hpre_ctx *ctx, + struct hpre_asym_request *req, + struct scatterlist *dst, + struct scatterlist *src) +{ + struct device *dev = HPRE_DEV(ctx); + struct hpre_sqe *sqe = &req->req; + dma_addr_t dma; + + dma = le64_to_cpu(sqe->in); + if (unlikely(!dma)) + return; + + if (src && req->src) + dma_free_coherent(dev, ctx->key_sz << 2, req->src, dma); + + dma = le64_to_cpu(sqe->out); + if (unlikely(!dma)) + return; + + if (req->dst) + dma_free_coherent(dev, ctx->key_sz << 1, req->dst, dma); + if (dst) + dma_unmap_single(dev, dma, ctx->key_sz << 1, DMA_FROM_DEVICE); +} + +static void hpre_ecdh_cb(struct hpre_ctx *ctx, void *resp) +{ + unsigned int curve_sz = hpre_ecdh_get_curvesz(ctx->curve_id); + struct hpre_dfx *dfx = ctx->hpre->debug.dfx; + struct hpre_asym_request *req = NULL; + struct kpp_request *areq; + u64 overtime_thrhld; + char *p; + int ret; + + ret = hpre_alg_res_post_hf(ctx, resp, (void **)&req); + areq = req->areq.ecdh; + areq->dst_len = ctx->key_sz << 1; + + overtime_thrhld = atomic64_read(&dfx[HPRE_OVERTIME_THRHLD].value); + if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld)) + atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value); + + p = sg_virt(areq->dst); + memmove(p, p + ctx->key_sz - curve_sz, curve_sz); + memmove(p + curve_sz, p + areq->dst_len - curve_sz, curve_sz); + + hpre_ecdh_hw_data_clr_all(ctx, req, areq->dst, areq->src); + kpp_request_complete(areq, ret); + + atomic64_inc(&dfx[HPRE_RECV_CNT].value); +} + +static int hpre_ecdh_msg_request_set(struct hpre_ctx *ctx, + struct kpp_request *req) +{ + struct hpre_asym_request *h_req; + struct hpre_sqe *msg; + int req_id; + void *tmp; + + if (req->dst_len < ctx->key_sz << 1) { + req->dst_len = ctx->key_sz << 1; + return -EINVAL; + } + + tmp = kpp_request_ctx(req); + h_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + h_req->cb = hpre_ecdh_cb; + h_req->areq.ecdh = req; + msg = &h_req->req; + memset(msg, 0, sizeof(*msg)); + msg->key = cpu_to_le64(ctx->ecdh.dma_p); + + msg->dw0 |= cpu_to_le32(0x1U << HPRE_SQE_DONE_SHIFT); + msg->task_len1 = (ctx->key_sz >> HPRE_BITS_2_BYTES_SHIFT) - 1; + h_req->ctx = ctx; + + req_id = hpre_add_req_to_ctx(h_req); + if (req_id < 0) + return -EBUSY; + + msg->tag = cpu_to_le16((u16)req_id); + return 0; +} + +static int hpre_ecdh_src_data_init(struct hpre_asym_request *hpre_req, + struct scatterlist *data, unsigned int len) +{ + struct hpre_sqe *msg = &hpre_req->req; + struct hpre_ctx *ctx = hpre_req->ctx; + struct device *dev = HPRE_DEV(ctx); + unsigned int tmpshift; + dma_addr_t dma = 0; + void *ptr; + int shift; + + /* Src_data include gx and gy. 
*/ + shift = ctx->key_sz - (len >> 1); + if (unlikely(shift < 0)) + return -EINVAL; + + ptr = dma_alloc_coherent(dev, ctx->key_sz << 2, &dma, GFP_KERNEL); + if (unlikely(!ptr)) + return -ENOMEM; + + tmpshift = ctx->key_sz << 1; + scatterwalk_map_and_copy(ptr + tmpshift, data, 0, len, 0); + memcpy(ptr + shift, ptr + tmpshift, len >> 1); + memcpy(ptr + ctx->key_sz + shift, ptr + tmpshift + (len >> 1), len >> 1); + + hpre_req->src = ptr; + msg->in = cpu_to_le64(dma); + return 0; +} + +static int hpre_ecdh_dst_data_init(struct hpre_asym_request *hpre_req, + struct scatterlist *data, unsigned int len) +{ + struct hpre_sqe *msg = &hpre_req->req; + struct hpre_ctx *ctx = hpre_req->ctx; + struct device *dev = HPRE_DEV(ctx); + dma_addr_t dma = 0; + + if (unlikely(!data || !sg_is_last(data) || len != ctx->key_sz << 1)) { + dev_err(dev, "data or data length is illegal!\n"); + return -EINVAL; + } + + hpre_req->dst = NULL; + dma = dma_map_single(dev, sg_virt(data), len, DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(dev, dma))) { + dev_err(dev, "dma map data err!\n"); + return -ENOMEM; + } + + msg->out = cpu_to_le64(dma); + return 0; +} + +static int hpre_ecdh_compute_value(struct kpp_request *req) +{ + struct crypto_kpp *tfm = crypto_kpp_reqtfm(req); + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + struct device *dev = HPRE_DEV(ctx); + void *tmp = kpp_request_ctx(req); + struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + struct hpre_sqe *msg = &hpre_req->req; + int ret; + + ret = hpre_ecdh_msg_request_set(ctx, req); + if (unlikely(ret)) { + dev_err(dev, "failed to set ecdh request, ret = %d!\n", ret); + return ret; + } + + if (req->src) { + ret = hpre_ecdh_src_data_init(hpre_req, req->src, req->src_len); + if (unlikely(ret)) { + dev_err(dev, "failed to init src data, ret = %d!\n", ret); + goto clear_all; + } + } else { + msg->in = cpu_to_le64(ctx->ecdh.dma_g); + } + + ret = hpre_ecdh_dst_data_init(hpre_req, req->dst, req->dst_len); + if (unlikely(ret)) { + dev_err(dev, "failed to init dst data, ret = %d!\n", ret); + goto clear_all; + } + + msg->dw0 = cpu_to_le32(le32_to_cpu(msg->dw0) | HPRE_ALG_ECC_MUL); + ret = hpre_send(ctx, msg); + if (likely(!ret)) + return -EINPROGRESS; + +clear_all: + hpre_rm_req_from_ctx(hpre_req); + hpre_ecdh_hw_data_clr_all(ctx, hpre_req, req->dst, req->src); + return ret; +} + +static unsigned int hpre_ecdh_max_size(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + /* max size is the pub_key_size, include x and y */ + return ctx->key_sz << 1; +} + +static int hpre_ecdh_nist_p192_init_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P192; + + return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); +} + +static int hpre_ecdh_nist_p256_init_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P256; + + return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); +} + +static void hpre_ecdh_exit_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + hpre_ecc_clear_ctx(ctx, true, true); +} + static struct akcipher_alg rsa = { .sign = hpre_rsa_dec, .verify = hpre_rsa_enc, @@ -1154,6 +1588,63 @@ static struct kpp_alg dh = { }; #endif +static struct kpp_alg ecdh_nist_p192 = { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p192_init_tfm, + .exit = hpre_ecdh_exit_tfm, + 
.reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p192", + .cra_driver_name = "hpre-ecdh", + .cra_module = THIS_MODULE, + }, +}; + +static struct kpp_alg ecdh_nist_p256 = { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p256_init_tfm, + .exit = hpre_ecdh_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p256", + .cra_driver_name = "hpre-ecdh", + .cra_module = THIS_MODULE, + }, +}; + +static int hpre_register_ecdh(void) +{ + int ret; + + ret = crypto_register_kpp(&ecdh_nist_p192); + if (ret) + return ret; + + ret = crypto_register_kpp(&ecdh_nist_p256); + if (ret) { + crypto_unregister_kpp(&ecdh_nist_p192); + return ret; + } + + return 0; +} + +static void hpre_unregister_ecdh(void) +{ + crypto_unregister_kpp(&ecdh_nist_p256); + crypto_unregister_kpp(&ecdh_nist_p192); +} + int hpre_algs_register(struct hisi_qm *qm) { int ret; @@ -1164,17 +1655,33 @@ int hpre_algs_register(struct hisi_qm *qm) return ret; #ifdef CONFIG_CRYPTO_DH ret = crypto_register_kpp(&dh); - if (ret) + if (ret) { crypto_unregister_akcipher(&rsa); + return ret; + } #endif - return ret; + if (qm->ver >= QM_HW_V3) { + ret = hpre_register_ecdh(); + if (ret) { +#ifdef CONFIG_CRYPTO_DH + crypto_unregister_kpp(&dh); +#endif + crypto_unregister_akcipher(&rsa); + return ret; + } + } + + return 0; } void hpre_algs_unregister(struct hisi_qm *qm) { - crypto_unregister_akcipher(&rsa); + if (qm->ver >= QM_HW_V3) + hpre_unregister_ecdh(); + #ifdef CONFIG_CRYPTO_DH crypto_unregister_kpp(&dh); #endif + crypto_unregister_akcipher(&rsa); } diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c index 76f0a87..87e8f4d 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_main.c +++ b/drivers/crypto/hisilicon/hpre/hpre_main.c @@ -1082,4 +1082,5 @@ module_exit(hpre_exit); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Zaibo Xu "); +MODULE_AUTHOR("Meng Yu "); MODULE_DESCRIPTION("Driver for HiSilicon HPRE accelerator"); From patchwork Thu Mar 4 06:35:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Meng Yu X-Patchwork-Id: 12115555 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C4559C4332B for ; Thu, 4 Mar 2021 06:40:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 882DD64F13 for ; Thu, 4 Mar 2021 06:40:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235394AbhCDGja (ORCPT ); Thu, 4 Mar 2021 01:39:30 -0500 Received: from szxga04-in.huawei.com ([45.249.212.190]:12683 "EHLO szxga04-in.huawei.com" rhost-flags-OK-OK-OK-OK) by 
vger.kernel.org with ESMTP id S235372AbhCDGjN (ORCPT ); Thu, 4 Mar 2021 01:39:13 -0500 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4Drh334P0kzlSXW; Thu, 4 Mar 2021 14:36:15 +0800 (CST) Received: from localhost.localdomain (10.67.165.24) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Thu, 4 Mar 2021 14:38:14 +0800 From: Meng Yu To: , , , , , CC: , , , , Subject: [PATCH v10 6/7] crypto: add curve25519 params and expose them Date: Thu, 4 Mar 2021 14:35:49 +0800 Message-ID: <1614839750-29670-7-git-send-email-yumeng18@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> References: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org 1. Add curve 25519 parameters in 'crypto/ecc_curve_defs.h'; 2. Add curve25519 interface 'ecc_get_curve25519_param' in 'include/crypto/ecc_curve.h', to make its parameters be exposed to everyone in kernel tree. Signed-off-by: Meng Yu Reviewed-by: Zaibo Xu --- crypto/ecc.c | 6 ++++++ crypto/ecc_curve_defs.h | 17 +++++++++++++++++ include/crypto/ecc_curve.h | 7 +++++++ 3 files changed, 30 insertions(+) diff --git a/crypto/ecc.c b/crypto/ecc.c index 4b55ad0..0798a18 100644 --- a/crypto/ecc.c +++ b/crypto/ecc.c @@ -43,6 +43,12 @@ typedef struct { u64 m_high; } uint128_t; +/* Returns curv25519 curve param */ +const struct ecc_curve *ecc_get_curve25519(void) +{ + return &ecc_25519; +} +EXPORT_SYMBOL(ecc_get_curve25519); const struct ecc_curve *ecc_get_curve(unsigned int curve_id) { diff --git a/crypto/ecc_curve_defs.h b/crypto/ecc_curve_defs.h index 69be6c7..d7769cc 100644 --- a/crypto/ecc_curve_defs.h +++ b/crypto/ecc_curve_defs.h @@ -54,4 +54,21 @@ static struct ecc_curve nist_p256 = { .b = nist_p256_b }; +/* curve25519 */ +static u64 curve25519_g_x[] = { 0x0000000000000009, 0x0000000000000000, + 0x0000000000000000, 0x0000000000000000 }; +static u64 curve25519_p[] = { 0xffffffffffffffed, 0xffffffffffffffff, + 0xffffffffffffffff, 0x7fffffffffffffff }; +static u64 curve25519_a[] = { 0x000000000001DB41, 0x0000000000000000, + 0x0000000000000000, 0x0000000000000000 }; +static const struct ecc_curve ecc_25519 = { + .name = "curve25519", + .g = { + .x = curve25519_g_x, + .ndigits = 4, + }, + .p = curve25519_p, + .a = curve25519_a, +}; + #endif diff --git a/include/crypto/ecc_curve.h b/include/crypto/ecc_curve.h index 19a35da..7096478 100644 --- a/include/crypto/ecc_curve.h +++ b/include/crypto/ecc_curve.h @@ -50,4 +50,11 @@ struct ecc_curve { */ const struct ecc_curve *ecc_get_curve(unsigned int curve_id); +/** + * ecc_get_curve25519() - get curve25519 curve; + * + * Returns curve25519 + */ +const struct ecc_curve *ecc_get_curve25519(void); + #endif From patchwork Thu Mar 4 06:35:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Meng Yu X-Patchwork-Id: 12115561 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C548C4321A for ; Thu, 4 Mar 2021 06:40:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 606E864F02 for ; Thu, 4 Mar 2021 06:40:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234419AbhCDGja (ORCPT ); Thu, 4 Mar 2021 01:39:30 -0500 Received: from szxga04-in.huawei.com ([45.249.212.190]:12685 "EHLO szxga04-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235365AbhCDGjH (ORCPT ); Thu, 4 Mar 2021 01:39:07 -0500 Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4Drh334fRczlSXY; Thu, 4 Mar 2021 14:36:15 +0800 (CST) Received: from localhost.localdomain (10.67.165.24) by DGGEMS414-HUB.china.huawei.com (10.3.19.214) with Microsoft SMTP Server id 14.3.498.0; Thu, 4 Mar 2021 14:38:15 +0800 From: Meng Yu To: , , , , , CC: , , , , Subject: [PATCH v10 7/7] crypto: hisilicon/hpre - add 'CURVE25519' algorithm Date: Thu, 4 Mar 2021 14:35:50 +0800 Message-ID: <1614839750-29670-8-git-send-email-yumeng18@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> References: <1614839750-29670-1-git-send-email-yumeng18@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Enable 'CURVE25519' algorithm in Kunpeng 930. Signed-off-by: Meng Yu Reviewed-by: Zaibo Xu Reported-by: kernel test robot --- drivers/crypto/hisilicon/Kconfig | 1 + drivers/crypto/hisilicon/hpre/hpre.h | 2 + drivers/crypto/hisilicon/hpre/hpre_crypto.c | 366 +++++++++++++++++++++++++++- 3 files changed, 361 insertions(+), 8 deletions(-) diff --git a/drivers/crypto/hisilicon/Kconfig b/drivers/crypto/hisilicon/Kconfig index 8431926..c45adb1 100644 --- a/drivers/crypto/hisilicon/Kconfig +++ b/drivers/crypto/hisilicon/Kconfig @@ -65,6 +65,7 @@ config CRYPTO_DEV_HISI_HPRE depends on UACCE || UACCE=n depends on ARM64 || (COMPILE_TEST && 64BIT) depends on ACPI + select CRYPTO_LIB_CURVE25519_GENERIC select CRYPTO_DEV_HISI_QM select CRYPTO_DH select CRYPTO_RSA diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h index 50e6b2e..92892e3 100644 --- a/drivers/crypto/hisilicon/hpre/hpre.h +++ b/drivers/crypto/hisilicon/hpre/hpre.h @@ -84,6 +84,8 @@ enum hpre_alg_type { HPRE_ALG_DH_G2 = 0x4, HPRE_ALG_DH = 0x5, HPRE_ALG_ECC_MUL = 0xD, + /* shared by x25519 and x448, but x448 is not supported now */ + HPRE_ALG_CURVE25519_MUL = 0x10, }; struct hpre_sqe { diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index a6010b1..53068d2 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (c) 2019 HiSilicon Limited. 
*/ #include +#include #include #include #include @@ -89,6 +90,16 @@ struct hpre_ecdh_ctx { dma_addr_t dma_g; }; +struct hpre_curve25519_ctx { + /* low address: p->a->k */ + unsigned char *p; + dma_addr_t dma_p; + + /* gx coordinate */ + unsigned char *g; + dma_addr_t dma_g; +}; + struct hpre_ctx { struct hisi_qp *qp; struct hpre_asym_request **req_list; @@ -101,6 +112,7 @@ struct hpre_ctx { struct hpre_rsa_ctx rsa; struct hpre_dh_ctx dh; struct hpre_ecdh_ctx ecdh; + struct hpre_curve25519_ctx curve25519; }; /* for ecc algorithms */ unsigned int curve_id; @@ -115,6 +127,7 @@ struct hpre_asym_request { struct akcipher_request *rsa; struct kpp_request *dh; struct kpp_request *ecdh; + struct kpp_request *curve25519; } areq; int err; int req_id; @@ -437,7 +450,6 @@ static void hpre_alg_cb(struct hisi_qp *qp, void *resp) struct hpre_sqe *sqe = resp; struct hpre_asym_request *req = ctx->req_list[le16_to_cpu(sqe->tag)]; - if (unlikely(!req)) { atomic64_inc(&dfx[HPRE_INVALID_REQ_CNT].value); return; @@ -1167,6 +1179,12 @@ static void hpre_ecc_clear_ctx(struct hpre_ctx *ctx, bool is_clear_all, memzero_explicit(ctx->ecdh.p + shift, sz); dma_free_coherent(dev, sz << 3, ctx->ecdh.p, ctx->ecdh.dma_p); ctx->ecdh.p = NULL; + } else if (!is_ecdh && ctx->curve25519.p) { + /* curve25519: p->a->k */ + memzero_explicit(ctx->curve25519.p + shift, sz); + dma_free_coherent(dev, sz << 2, ctx->curve25519.p, + ctx->curve25519.dma_p); + ctx->curve25519.p = NULL; } hpre_ctx_clear(ctx, is_clear_all); @@ -1549,6 +1567,312 @@ static void hpre_ecdh_exit_tfm(struct crypto_kpp *tfm) hpre_ecc_clear_ctx(ctx, true, true); } +static void hpre_curve25519_fill_curve(struct hpre_ctx *ctx, const void *buf, + unsigned int len) +{ + u8 secret[CURVE25519_KEY_SIZE] = { 0 }; + unsigned int sz = ctx->key_sz; + const struct ecc_curve *curve; + unsigned int shift = sz << 1; + void *p; + + /* + * The key from 'buf' is in little-endian, we should preprocess it as + * the description in rfc7748: "k[0] &= 248, k[31] &= 127, k[31] |= 64", + * then convert it to big endian. Only in this way, the result can be + * the same as the software curve-25519 that exists in crypto. 
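+	 * (That is, the clamp clears the low three bits of k[0] and bit 255,
+	 * and sets bit 254, as specified by the decodeScalar25519 procedure
+	 * in RFC 7748.)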
+ */ + memcpy(secret, buf, len); + curve25519_clamp_secret(secret); + hpre_key_to_big_end(secret, CURVE25519_KEY_SIZE); + + p = ctx->curve25519.p + sz - len; + + curve = ecc_get_curve25519(); + + /* fill curve parameters */ + fill_curve_param(p, curve->p, len, curve->g.ndigits); + fill_curve_param(p + sz, curve->a, len, curve->g.ndigits); + memcpy(p + shift, secret, len); + fill_curve_param(p + shift + sz, curve->g.x, len, curve->g.ndigits); + memzero_explicit(secret, CURVE25519_KEY_SIZE); +} + +static int hpre_curve25519_set_param(struct hpre_ctx *ctx, const void *buf, + unsigned int len) +{ + struct device *dev = HPRE_DEV(ctx); + unsigned int sz = ctx->key_sz; + unsigned int shift = sz << 1; + + /* p->a->k->gx */ + if (!ctx->curve25519.p) { + ctx->curve25519.p = dma_alloc_coherent(dev, sz << 2, + &ctx->curve25519.dma_p, + GFP_KERNEL); + if (!ctx->curve25519.p) + return -ENOMEM; + } + + ctx->curve25519.g = ctx->curve25519.p + shift + sz; + ctx->curve25519.dma_g = ctx->curve25519.dma_p + shift + sz; + + hpre_curve25519_fill_curve(ctx, buf, len); + + return 0; +} + +static int hpre_curve25519_set_secret(struct crypto_kpp *tfm, const void *buf, + unsigned int len) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + struct device *dev = HPRE_DEV(ctx); + int ret = -EINVAL; + + if (len != CURVE25519_KEY_SIZE || + !crypto_memneq(buf, curve25519_null_point, CURVE25519_KEY_SIZE)) { + dev_err(dev, "key is null or key len is not 32bytes!\n"); + return ret; + } + + /* Free old secret if any */ + hpre_ecc_clear_ctx(ctx, false, false); + + ctx->key_sz = CURVE25519_KEY_SIZE; + ret = hpre_curve25519_set_param(ctx, buf, CURVE25519_KEY_SIZE); + if (ret) { + dev_err(dev, "failed to set curve25519 param, ret = %d!\n", ret); + hpre_ecc_clear_ctx(ctx, false, false); + return ret; + } + + return 0; +} + +static void hpre_curve25519_hw_data_clr_all(struct hpre_ctx *ctx, + struct hpre_asym_request *req, + struct scatterlist *dst, + struct scatterlist *src) +{ + struct device *dev = HPRE_DEV(ctx); + struct hpre_sqe *sqe = &req->req; + dma_addr_t dma; + + dma = le64_to_cpu(sqe->in); + if (unlikely(!dma)) + return; + + if (src && req->src) + dma_free_coherent(dev, ctx->key_sz, req->src, dma); + + dma = le64_to_cpu(sqe->out); + if (unlikely(!dma)) + return; + + if (req->dst) + dma_free_coherent(dev, ctx->key_sz, req->dst, dma); + if (dst) + dma_unmap_single(dev, dma, ctx->key_sz, DMA_FROM_DEVICE); +} + +static void hpre_curve25519_cb(struct hpre_ctx *ctx, void *resp) +{ + struct hpre_dfx *dfx = ctx->hpre->debug.dfx; + struct hpre_asym_request *req = NULL; + struct kpp_request *areq; + u64 overtime_thrhld; + int ret; + + ret = hpre_alg_res_post_hf(ctx, resp, (void **)&req); + areq = req->areq.curve25519; + areq->dst_len = ctx->key_sz; + + overtime_thrhld = atomic64_read(&dfx[HPRE_OVERTIME_THRHLD].value); + if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld)) + atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value); + + hpre_key_to_big_end(sg_virt(areq->dst), CURVE25519_KEY_SIZE); + + hpre_curve25519_hw_data_clr_all(ctx, req, areq->dst, areq->src); + kpp_request_complete(areq, ret); + + atomic64_inc(&dfx[HPRE_RECV_CNT].value); +} + +static int hpre_curve25519_msg_request_set(struct hpre_ctx *ctx, + struct kpp_request *req) +{ + struct hpre_asym_request *h_req; + struct hpre_sqe *msg; + int req_id; + void *tmp; + + if (unlikely(req->dst_len < ctx->key_sz)) { + req->dst_len = ctx->key_sz; + return -EINVAL; + } + + tmp = kpp_request_ctx(req); + h_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + h_req->cb = hpre_curve25519_cb; + 
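+	/*
+	 * Descriptive note: the completion path in hpre_alg_cb() looks this
+	 * request up by the BD tag and invokes ->cb; the original kpp_request
+	 * saved in ->areq below is what that callback completes.
+	 */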
h_req->areq.curve25519 = req; + msg = &h_req->req; + memset(msg, 0, sizeof(*msg)); + msg->key = cpu_to_le64(ctx->curve25519.dma_p); + + msg->dw0 |= cpu_to_le32(0x1U << HPRE_SQE_DONE_SHIFT); + msg->task_len1 = (ctx->key_sz >> HPRE_BITS_2_BYTES_SHIFT) - 1; + h_req->ctx = ctx; + + req_id = hpre_add_req_to_ctx(h_req); + if (req_id < 0) + return -EBUSY; + + msg->tag = cpu_to_le16((u16)req_id); + return 0; +} + +static int hpre_curve25519_src_init(struct hpre_asym_request *hpre_req, + struct scatterlist *data, unsigned int len) +{ + struct hpre_sqe *msg = &hpre_req->req; + struct hpre_ctx *ctx = hpre_req->ctx; + struct device *dev = HPRE_DEV(ctx); + u8 p[CURVE25519_KEY_SIZE] = { 0 }; + const struct ecc_curve *curve; + dma_addr_t dma = 0; + u8 *ptr; + + if (len != CURVE25519_KEY_SIZE) { + dev_err(dev, "sourc_data len is not 32bytes, len = %u!\n", len); + return -EINVAL; + } + + ptr = dma_alloc_coherent(dev, ctx->key_sz, &dma, GFP_KERNEL); + if (unlikely(!ptr)) + return -ENOMEM; + + scatterwalk_map_and_copy(ptr, data, 0, len, 0); + + if (!crypto_memneq(ptr, curve25519_null_point, CURVE25519_KEY_SIZE)) { + dev_err(dev, "gx is null!\n"); + goto err; + } + + /* + * Src_data(gx) is in little-endian order, MSB in the final byte should + * be masked as discribed in RFC7748, then transform it to big-endian + * form, then hisi_hpre can use the data. + */ + ptr[31] &= 0x7f; + hpre_key_to_big_end(ptr, CURVE25519_KEY_SIZE); + + curve = ecc_get_curve25519(); + + fill_curve_param(p, curve->p, CURVE25519_KEY_SIZE, curve->g.ndigits); + if (memcmp(ptr, p, ctx->key_sz) >= 0) { + dev_err(dev, "gx is out of p!\n"); + goto err; + } + + hpre_req->src = ptr; + msg->in = cpu_to_le64(dma); + return 0; + +err: + dma_free_coherent(dev, ctx->key_sz, ptr, dma); + return -EINVAL; +} + +static int hpre_curve25519_dst_init(struct hpre_asym_request *hpre_req, + struct scatterlist *data, unsigned int len) +{ + struct hpre_sqe *msg = &hpre_req->req; + struct hpre_ctx *ctx = hpre_req->ctx; + struct device *dev = HPRE_DEV(ctx); + dma_addr_t dma = 0; + + if (!data || !sg_is_last(data) || len != ctx->key_sz) { + dev_err(dev, "data or data length is illegal!\n"); + return -EINVAL; + } + + hpre_req->dst = NULL; + dma = dma_map_single(dev, sg_virt(data), len, DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(dev, dma))) { + dev_err(dev, "dma map data err!\n"); + return -ENOMEM; + } + + msg->out = cpu_to_le64(dma); + return 0; +} + +static int hpre_curve25519_compute_value(struct kpp_request *req) +{ + struct crypto_kpp *tfm = crypto_kpp_reqtfm(req); + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + struct device *dev = HPRE_DEV(ctx); + void *tmp = kpp_request_ctx(req); + struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + struct hpre_sqe *msg = &hpre_req->req; + int ret; + + ret = hpre_curve25519_msg_request_set(ctx, req); + if (unlikely(ret)) { + dev_err(dev, "failed to set curve25519 request, ret = %d!\n", ret); + return ret; + } + + if (req->src) { + ret = hpre_curve25519_src_init(hpre_req, req->src, req->src_len); + if (unlikely(ret)) { + dev_err(dev, "failed to init src data, ret = %d!\n", + ret); + goto clear_all; + } + } else { + msg->in = cpu_to_le64(ctx->curve25519.dma_g); + } + + ret = hpre_curve25519_dst_init(hpre_req, req->dst, req->dst_len); + if (unlikely(ret)) { + dev_err(dev, "failed to init dst data, ret = %d!\n", ret); + goto clear_all; + } + + msg->dw0 = cpu_to_le32(le32_to_cpu(msg->dw0) | HPRE_ALG_CURVE25519_MUL); + ret = hpre_send(ctx, msg); + if (likely(!ret)) + return -EINPROGRESS; + +clear_all: + 
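+	/*
+	 * Descriptive note: the error unwind releases the request id taken in
+	 * hpre_curve25519_msg_request_set() and then frees/unmaps any DMA
+	 * buffers that were set up for req->src and req->dst.
+	 */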
hpre_rm_req_from_ctx(hpre_req); + hpre_curve25519_hw_data_clr_all(ctx, hpre_req, req->dst, req->src); + return ret; +} + +static unsigned int hpre_curve25519_max_size(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + return ctx->key_sz; +} + +static int hpre_curve25519_init_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); +} + +static void hpre_curve25519_exit_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + hpre_ecc_clear_ctx(ctx, true, false); +} + static struct akcipher_alg rsa = { .sign = hpre_rsa_dec, .verify = hpre_rsa_enc, @@ -1622,6 +1946,24 @@ static struct kpp_alg ecdh_nist_p256 = { }, }; +static struct kpp_alg curve25519_alg = { + .set_secret = hpre_curve25519_set_secret, + .generate_public_key = hpre_curve25519_compute_value, + .compute_shared_secret = hpre_curve25519_compute_value, + .max_size = hpre_curve25519_max_size, + .init = hpre_curve25519_init_tfm, + .exit = hpre_curve25519_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "curve25519", + .cra_driver_name = "hpre-curve25519", + .cra_module = THIS_MODULE, + }, +}; + + static int hpre_register_ecdh(void) { int ret; @@ -1663,22 +2005,30 @@ int hpre_algs_register(struct hisi_qm *qm) if (qm->ver >= QM_HW_V3) { ret = hpre_register_ecdh(); + if (ret) + goto reg_err; + ret = crypto_register_kpp(&curve25519_alg); if (ret) { -#ifdef CONFIG_CRYPTO_DH - crypto_unregister_kpp(&dh); -#endif - crypto_unregister_akcipher(&rsa); - return ret; + hpre_unregister_ecdh(); + goto reg_err; } } - return 0; + +reg_err: +#ifdef CONFIG_CRYPTO_DH + crypto_unregister_kpp(&dh); +#endif + crypto_unregister_akcipher(&rsa); + return ret; } void hpre_algs_unregister(struct hisi_qm *qm) { - if (qm->ver >= QM_HW_V3) + if (qm->ver >= QM_HW_V3) { + crypto_unregister_kpp(&curve25519_alg); hpre_unregister_ecdh(); + } #ifdef CONFIG_CRYPTO_DH crypto_unregister_kpp(&dh);
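The series itself does not show a caller, so here is a rough, hedged sketch of how the new "curve25519" algorithm would be reached once registered. The function name demo_x25519() and the buffer handling are assumptions, not part of the patches; the buffers are assumed to be 32-byte kmalloc()'ed (DMA-able) memory, since the driver maps the output scatterlist for DMA.

#include <linux/err.h>
#include <linux/scatterlist.h>
#include <crypto/curve25519.h>
#include <crypto/kpp.h>

/*
 * Hypothetical consumer: derive an X25519 shared secret through the
 * generic kpp API.  The allocation may resolve to "hpre-curve25519" on
 * Kunpeng 930 hardware, or to the software implementation otherwise.
 */
static int demo_x25519(const u8 *key, u8 *peer_pub, u8 *out)
{
	DECLARE_CRYPTO_WAIT(wait);
	struct scatterlist src, dst;
	struct kpp_request *req;
	struct crypto_kpp *tfm;
	int ret;

	tfm = crypto_alloc_kpp("curve25519", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* Raw 32-byte private key, as expected by curve25519 kpp drivers. */
	ret = crypto_kpp_set_secret(tfm, key, CURVE25519_KEY_SIZE);
	if (ret)
		goto free_tfm;

	req = kpp_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto free_tfm;
	}

	sg_init_one(&src, peer_pub, CURVE25519_KEY_SIZE);
	sg_init_one(&dst, out, CURVE25519_KEY_SIZE);
	kpp_request_set_input(req, &src, CURVE25519_KEY_SIZE);
	kpp_request_set_output(req, &dst, CURVE25519_KEY_SIZE);
	kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				 crypto_req_done, &wait);

	ret = crypto_wait_req(crypto_kpp_compute_shared_secret(req), &wait);

	kpp_request_free(req);
free_tfm:
	crypto_free_kpp(tfm);
	return ret;
}

For public-key generation the caller would leave req->src unset and call crypto_kpp_generate_public_key() instead; as the code in hpre_curve25519_compute_value() above shows, the driver then feeds the stored base point (ctx->curve25519.dma_g) to the hardware.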