From patchwork Fri Sep 9 09:46:55 2022
X-Patchwork-Submitter: Weili Qian
X-Patchwork-Id: 12971424
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Weili Qian
Subject: [PATCH 01/10] crypto: hisilicon/qm - get hardware features from hardware registers
Date: Fri, 9 Sep 2022 17:46:55 +0800
Message-ID: <20220909094704.32099-2-qianweili@huawei.com>
In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com>
References: <20220909094704.32099-1-qianweili@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Before hardware V3, the hardware does not provide feature registers, so the
driver resolves hardware differences based on the hardware version. As a
result, the driver cannot support new hardware. Hardware V3 and later
versions support obtaining hardware features, such as power-gating
management and doorbell isolation, through hardware registers. To be
compatible with later hardware versions, the features of the current device
are obtained by reading the hardware registers instead of checking the
hardware version.
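The core of this patch is the table-driven lookup hisi_qm_get_hw_info() in
the diff below. As a rough standalone sketch of the idea (not the driver
code itself: read_reg() is a hypothetical stand-in for readl() on
qm->io_base, and struct cap_info mirrors the hisi_qm_cap_info layout from
the diff):

    #include <stdbool.h>
    #include <stdint.h>

    /* Mirrors struct hisi_qm_cap_info from the diff below. */
    struct cap_info {
            uint32_t offset;  /* feature register offset */
            uint32_t shift;   /* bit offset within the register */
            uint32_t mask;    /* field mask, applied after shifting */
            uint32_t v1_val;  /* hard-coded value for hardware V1 */
            uint32_t v2_val;  /* hard-coded value for hardware V2 */
            uint32_t v3_val;  /* default when the register is not read */
    };

    /* Hypothetical MMIO accessor standing in for readl(qm->io_base + off). */
    extern uint32_t read_reg(uint32_t offset);

    static uint32_t get_hw_info(unsigned int hw_ver, const struct cap_info *t,
                                bool is_read)
    {
            /* V1/V2 have no feature registers: use per-version defaults. */
            if (hw_ver == 1)
                    return t->v1_val;
            if (hw_ver == 2)
                    return t->v2_val;
            /* V3+: read the register unless the caller wants the default. */
            if (!is_read)
                    return t->v3_val;
            return (read_reg(t->offset) >> t->shift) & t->mask;
    }

Each capability then becomes one row in a table, e.g.
{QM_SUPPORT_SVA_PREFETCH, 0x3100, 0, BIT(14), 0x0, 0x0, 0x1}, instead of a
qm->ver comparison scattered through the drivers.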
Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/hpre/hpre_main.c | 4 +- drivers/crypto/hisilicon/qm.c | 196 +++++++++++++++------- drivers/crypto/hisilicon/sec2/sec_main.c | 4 +- drivers/crypto/hisilicon/zip/zip_main.c | 4 +- include/linux/hisi_acc_qm.h | 31 +++- 5 files changed, 170 insertions(+), 69 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c index 8e0e87cede6b..d484105b2716 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_main.c +++ b/drivers/crypto/hisilicon/hpre/hpre_main.c @@ -457,7 +457,7 @@ static void hpre_open_sva_prefetch(struct hisi_qm *qm) u32 val; int ret; - if (qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) return; /* Enable prefetch */ @@ -478,7 +478,7 @@ static void hpre_close_sva_prefetch(struct hisi_qm *qm) u32 val; int ret; - if (qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) return; val = readl_relaxed(qm->io_base + HPRE_PREFETCH_CFG); diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index 54bbd7fa57cc..fa77b469761b 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -87,9 +87,7 @@ #define QM_DB_CMD_SHIFT_V1 16 #define QM_DB_INDEX_SHIFT_V1 32 #define QM_DB_PRIORITY_SHIFT_V1 48 -#define QM_QUE_ISO_CFG_V 0x0030 #define QM_PAGE_SIZE 0x0034 -#define QM_QUE_ISO_EN 0x100154 #define QM_CAPBILITY 0x100158 #define QM_QP_NUN_MASK GENMASK(10, 0) #define QM_QP_DB_INTERVAL 0x10000 @@ -206,6 +204,8 @@ #define MAX_WAIT_COUNTS 1000 #define QM_CACHE_WB_START 0x204 #define QM_CACHE_WB_DONE 0x208 +#define QM_FUNC_CAPS_REG 0x3100 +#define QM_CAPBILITY_VERSION GENMASK(7, 0) #define PCI_BAR_2 2 #define PCI_BAR_4 4 @@ -330,6 +330,22 @@ enum qm_mb_cmd { QM_VF_GET_QOS, }; +static const struct hisi_qm_cap_info qm_cap_info_comm[] = { + {QM_SUPPORT_DB_ISOLATION, 0x30, 0, BIT(0), 0x0, 0x0, 0x0}, + {QM_SUPPORT_FUNC_QOS, 0x3100, 0, BIT(8), 0x0, 0x0, 0x1}, + {QM_SUPPORT_STOP_QP, 0x3100, 0, BIT(9), 0x0, 0x0, 0x1}, + {QM_SUPPORT_MB_COMMAND, 0x3100, 0, BIT(11), 0x0, 0x0, 0x1}, + {QM_SUPPORT_SVA_PREFETCH, 0x3100, 0, BIT(14), 0x0, 0x0, 0x1}, +}; + +static const struct hisi_qm_cap_info qm_cap_info_pf[] = { + {QM_SUPPORT_RPM, 0x3100, 0, BIT(13), 0x0, 0x0, 0x1}, +}; + +static const struct hisi_qm_cap_info qm_cap_info_vf[] = { + {QM_SUPPORT_RPM, 0x3100, 0, BIT(12), 0x0, 0x0, 0x0}, +}; + struct qm_cqe { __le32 rsvd0; __le16 cmd_id; @@ -427,10 +443,7 @@ struct hisi_qm_hw_ops { void (*hw_error_init)(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe); void (*hw_error_uninit)(struct hisi_qm *qm); enum acc_err_result (*hw_error_handle)(struct hisi_qm *qm); - int (*stop_qp)(struct hisi_qp *qp); int (*set_msi)(struct hisi_qm *qm, bool set); - int (*ping_all_vfs)(struct hisi_qm *qm, u64 cmd); - int (*ping_pf)(struct hisi_qm *qm, u64 cmd); }; struct qm_dfx_item { @@ -841,6 +854,36 @@ static int qm_dev_mem_reset(struct hisi_qm *qm) POLL_TIMEOUT); } +/** + * hisi_qm_get_hw_info() - Get device information. + * @qm: The qm which want to get information. + * @info_table: Array for storing device information. + * @index: Index in info_table. + * @is_read: Whether read from reg, 0: not support read from reg. + * + * This function returns device information the caller needs. 
+ */ +u32 hisi_qm_get_hw_info(struct hisi_qm *qm, + const struct hisi_qm_cap_info *info_table, + u32 index, bool is_read) +{ + u32 val; + + switch (qm->ver) { + case QM_HW_V1: + return info_table[index].v1_val; + case QM_HW_V2: + return info_table[index].v2_val; + default: + if (!is_read) + return info_table[index].v3_val; + + val = readl(qm->io_base + info_table[index].offset); + return (val >> info_table[index].shift) & info_table[index].mask; + } +} +EXPORT_SYMBOL_GPL(hisi_qm_get_hw_info); + static u32 qm_get_irq_num_v1(struct hisi_qm *qm) { return QM_IRQ_NUM_V1; @@ -867,7 +910,7 @@ static int qm_pm_get_sync(struct hisi_qm *qm) struct device *dev = &qm->pdev->dev; int ret; - if (qm->fun_type == QM_HW_VF || qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_RPM, &qm->caps)) return 0; ret = pm_runtime_resume_and_get(dev); @@ -883,7 +926,7 @@ static void qm_pm_put_sync(struct hisi_qm *qm) { struct device *dev = &qm->pdev->dev; - if (qm->fun_type == QM_HW_VF || qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_RPM, &qm->caps)) return; pm_runtime_mark_last_busy(dev); @@ -1164,7 +1207,7 @@ static void qm_init_prefetch(struct hisi_qm *qm) struct device *dev = &qm->pdev->dev; u32 page_type = 0x0; - if (qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) return; switch (PAGE_SIZE) { @@ -1283,7 +1326,7 @@ static void qm_vft_data_cfg(struct hisi_qm *qm, enum vft_type type, u32 base, } break; case SHAPER_VFT: - if (qm->ver >= QM_HW_V3) { + if (factor) { tmp = factor->cir_b | (factor->cir_u << QM_SHAPER_FACTOR_CIR_U_SHIFT) | (factor->cir_s << QM_SHAPER_FACTOR_CIR_S_SHIFT) | @@ -1301,10 +1344,13 @@ static void qm_vft_data_cfg(struct hisi_qm *qm, enum vft_type type, u32 base, static int qm_set_vft_common(struct hisi_qm *qm, enum vft_type type, u32 fun_num, u32 base, u32 number) { - struct qm_shaper_factor *factor = &qm->factor[fun_num]; + struct qm_shaper_factor *factor = NULL; unsigned int val; int ret; + if (type == SHAPER_VFT && test_bit(QM_SUPPORT_FUNC_QOS, &qm->caps)) + factor = &qm->factor[fun_num]; + ret = readl_relaxed_poll_timeout(qm->io_base + QM_VFT_CFG_RDY, val, val & BIT(0), POLL_PERIOD, POLL_TIMEOUT); @@ -1362,7 +1408,7 @@ static int qm_set_sqc_cqc_vft(struct hisi_qm *qm, u32 fun_num, u32 base, } /* init default shaper qos val */ - if (qm->ver >= QM_HW_V3) { + if (test_bit(QM_SUPPORT_FUNC_QOS, &qm->caps)) { ret = qm_shaper_init_vft(qm, fun_num); if (ret) goto back_sqc_cqc; @@ -2466,7 +2512,7 @@ static int qm_wait_vf_prepare_finish(struct hisi_qm *qm) u64 val; u32 i; - if (!qm->vfs_num || qm->ver < QM_HW_V3) + if (!qm->vfs_num || !test_bit(QM_SUPPORT_MB_COMMAND, &qm->caps)) return 0; while (true) { @@ -2751,10 +2797,7 @@ static const struct hisi_qm_hw_ops qm_hw_ops_v3 = { .hw_error_init = qm_hw_error_init_v3, .hw_error_uninit = qm_hw_error_uninit_v3, .hw_error_handle = qm_hw_error_handle_v2, - .stop_qp = qm_stop_qp, .set_msi = qm_set_msi_v3, - .ping_all_vfs = qm_ping_all_vfs, - .ping_pf = qm_ping_pf, }; static void *qm_get_avail_sqe(struct hisi_qp *qp) @@ -3051,8 +3094,8 @@ static int qm_drain_qp(struct hisi_qp *qp) return 0; /* Kunpeng930 supports drain qp by device */ - if (qm->ops->stop_qp) { - ret = qm->ops->stop_qp(qp); + if (test_bit(QM_SUPPORT_STOP_QP, &qm->caps)) { + ret = qm_stop_qp(qp); if (ret) dev_err(dev, "Failed to stop qp(%u)!\n", qp->qp_id); return ret; @@ -3282,7 +3325,7 @@ static int hisi_qm_uacce_mmap(struct uacce_queue *q, if (qm->ver == QM_HW_V1) { if (sz > PAGE_SIZE * QM_DOORBELL_PAGE_NR) return -EINVAL; - } else if (qm->ver == QM_HW_V2 || 
!qm->use_db_isolation) { + } else if (!test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps)) { if (sz > PAGE_SIZE * (QM_DOORBELL_PAGE_NR + QM_DOORBELL_SQ_CQ_BASE_V2 / PAGE_SIZE)) return -EINVAL; @@ -3436,7 +3479,7 @@ static int qm_alloc_uacce(struct hisi_qm *qm) if (qm->ver == QM_HW_V1) mmio_page_nr = QM_DOORBELL_PAGE_NR; - else if (qm->ver == QM_HW_V2 || !qm->use_db_isolation) + else if (!test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps)) mmio_page_nr = QM_DOORBELL_PAGE_NR + QM_DOORBELL_SQ_CQ_BASE_V2 / PAGE_SIZE; else @@ -3598,7 +3641,7 @@ static void hisi_qm_pre_init(struct hisi_qm *qm) init_rwsem(&qm->qps_lock); qm->qp_in_used = 0; qm->misc_ctl = false; - if (qm->fun_type == QM_HW_PF && qm->ver > QM_HW_V2) { + if (test_bit(QM_SUPPORT_RPM, &qm->caps)) { if (!acpi_device_power_manageable(ACPI_COMPANION(&pdev->dev))) dev_info(&pdev->dev, "_PS0 and _PR0 are not defined"); } @@ -3608,7 +3651,7 @@ static void qm_cmd_uninit(struct hisi_qm *qm) { u32 val; - if (qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_MB_COMMAND, &qm->caps)) return; val = readl(qm->io_base + QM_IFC_INT_MASK); @@ -3620,7 +3663,7 @@ static void qm_cmd_init(struct hisi_qm *qm) { u32 val; - if (qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_MB_COMMAND, &qm->caps)) return; /* Clear communication interrupt source */ @@ -3636,7 +3679,7 @@ static void qm_put_pci_res(struct hisi_qm *qm) { struct pci_dev *pdev = qm->pdev; - if (qm->use_db_isolation) + if (test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps)) iounmap(qm->db_io_base); iounmap(qm->io_base); @@ -3686,7 +3729,9 @@ static void hisi_qm_memory_uninit(struct hisi_qm *qm) } idr_destroy(&qm->qp_idr); - kfree(qm->factor); + + if (test_bit(QM_SUPPORT_FUNC_QOS, &qm->caps)) + kfree(qm->factor); } /** @@ -4469,12 +4514,10 @@ static int qm_vf_read_qos(struct hisi_qm *qm) qm->mb_qos = 0; /* vf ping pf to get function qos */ - if (qm->ops->ping_pf) { - ret = qm->ops->ping_pf(qm, QM_VF_GET_QOS); - if (ret) { - pci_err(qm->pdev, "failed to send cmd to PF to get qos!\n"); - return ret; - } + ret = qm_ping_pf(qm, QM_VF_GET_QOS); + if (ret) { + pci_err(qm->pdev, "failed to send cmd to PF to get qos!\n"); + return ret; } while (true) { @@ -4646,14 +4689,14 @@ static const struct file_operations qm_algqos_fops = { * hisi_qm_set_algqos_init() - Initialize function qos debugfs files. * @qm: The qm for which we want to add debugfs files. * - * Create function qos debugfs files. + * Create function qos debugfs files, VF ping PF to get function qos. 
*/ static void hisi_qm_set_algqos_init(struct hisi_qm *qm) { if (qm->fun_type == QM_HW_PF) debugfs_create_file("alg_qos", 0644, qm->debug.debug_root, qm, &qm_algqos_fops); - else + else if (test_bit(QM_SUPPORT_MB_COMMAND, &qm->caps)) debugfs_create_file("alg_qos", 0444, qm->debug.debug_root, qm, &qm_algqos_fops); } @@ -4701,7 +4744,7 @@ void hisi_qm_debug_init(struct hisi_qm *qm) &qm_atomic64_ops); } - if (qm->ver >= QM_HW_V3) + if (test_bit(QM_SUPPORT_FUNC_QOS, &qm->caps)) hisi_qm_set_algqos_init(qm); } EXPORT_SYMBOL_GPL(hisi_qm_debug_init); @@ -4824,7 +4867,9 @@ int hisi_qm_sriov_disable(struct pci_dev *pdev, bool is_frozen) pci_disable_sriov(pdev); /* clear vf function shaper configure array */ - memset(qm->factor + 1, 0, sizeof(struct qm_shaper_factor) * total_vfs); + if (test_bit(QM_SUPPORT_FUNC_QOS, &qm->caps)) + memset(qm->factor + 1, 0, sizeof(struct qm_shaper_factor) * total_vfs); + ret = qm_clear_vft_config(qm); if (ret) return ret; @@ -5048,8 +5093,8 @@ static int qm_try_stop_vfs(struct hisi_qm *qm, u64 cmd, return 0; /* Kunpeng930 supports to notify VFs to stop before PF reset */ - if (qm->ops->ping_all_vfs) { - ret = qm->ops->ping_all_vfs(qm, cmd); + if (test_bit(QM_SUPPORT_MB_COMMAND, &qm->caps)) { + ret = qm_ping_all_vfs(qm, cmd); if (ret) pci_err(pdev, "failed to send cmd to all VFs before PF reset!\n"); } else { @@ -5240,8 +5285,8 @@ static int qm_try_start_vfs(struct hisi_qm *qm, enum qm_mb_cmd cmd) } /* Kunpeng930 supports to notify VFs to start after PF reset. */ - if (qm->ops->ping_all_vfs) { - ret = qm->ops->ping_all_vfs(qm, cmd); + if (test_bit(QM_SUPPORT_MB_COMMAND, &qm->caps)) { + ret = qm_ping_all_vfs(qm, cmd); if (ret) pci_warn(pdev, "failed to send cmd to all VFs after PF reset!\n"); } else { @@ -5687,7 +5732,7 @@ static void qm_pf_reset_vf_prepare(struct hisi_qm *qm, hisi_qm_set_hw_reset(qm, QM_RESET_STOP_RX_OFFSET); out: pci_save_state(pdev); - ret = qm->ops->ping_pf(qm, cmd); + ret = qm_ping_pf(qm, cmd); if (ret) dev_warn(&pdev->dev, "PF responds timeout in reset prepare!\n"); } @@ -5705,7 +5750,7 @@ static void qm_pf_reset_vf_done(struct hisi_qm *qm) cmd = QM_VF_START_FAIL; } - ret = qm->ops->ping_pf(qm, cmd); + ret = qm_ping_pf(qm, cmd); if (ret) dev_warn(&pdev->dev, "PF responds timeout in reset done!\n"); @@ -5910,7 +5955,7 @@ static int qm_get_qp_num(struct hisi_qm *qm) qm->ctrl_qp_num = readl(qm->io_base + QM_CAPBILITY) & QM_QP_NUN_MASK; - if (qm->use_db_isolation) + if (test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps)) qm->max_qp_num = (readl(qm->io_base + QM_CAPBILITY) >> QM_QP_MAX_NUM_SHIFT) & QM_QP_NUN_MASK; else @@ -5926,6 +5971,39 @@ static int qm_get_qp_num(struct hisi_qm *qm) return 0; } +static void qm_get_hw_caps(struct hisi_qm *qm) +{ + const struct hisi_qm_cap_info *cap_info = qm->fun_type == QM_HW_PF ? + qm_cap_info_pf : qm_cap_info_vf; + u32 size = qm->fun_type == QM_HW_PF ? ARRAY_SIZE(qm_cap_info_pf) : + ARRAY_SIZE(qm_cap_info_vf); + u32 val, i; + + /* Doorbell isolate register is a independent register. 
*/ + val = hisi_qm_get_hw_info(qm, qm_cap_info_comm, QM_SUPPORT_DB_ISOLATION, true); + if (val) + set_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps); + + if (qm->ver >= QM_HW_V3) { + val = readl(qm->io_base + QM_FUNC_CAPS_REG); + qm->cap_ver = val & QM_CAPBILITY_VERSION; + } + + /* Get PF/VF common capbility */ + for (i = 1; i < ARRAY_SIZE(qm_cap_info_comm); i++) { + val = hisi_qm_get_hw_info(qm, qm_cap_info_comm, i, qm->cap_ver); + if (val) + set_bit(qm_cap_info_comm[i].type, &qm->caps); + } + + /* Get PF/VF different capbility */ + for (i = 0; i < size; i++) { + val = hisi_qm_get_hw_info(qm, cap_info, i, qm->cap_ver); + if (val) + set_bit(cap_info[i].type, &qm->caps); + } +} + static int qm_get_pci_res(struct hisi_qm *qm) { struct pci_dev *pdev = qm->pdev; @@ -5945,16 +6023,8 @@ static int qm_get_pci_res(struct hisi_qm *qm) goto err_request_mem_regions; } - if (qm->ver > QM_HW_V2) { - if (qm->fun_type == QM_HW_PF) - qm->use_db_isolation = readl(qm->io_base + - QM_QUE_ISO_EN) & BIT(0); - else - qm->use_db_isolation = readl(qm->io_base + - QM_QUE_ISO_CFG_V) & BIT(0); - } - - if (qm->use_db_isolation) { + qm_get_hw_caps(qm); + if (test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps)) { qm->db_interval = QM_QP_DB_INTERVAL; qm->db_phys_base = pci_resource_start(pdev, PCI_BAR_4); qm->db_io_base = ioremap(qm->db_phys_base, @@ -5978,7 +6048,7 @@ static int qm_get_pci_res(struct hisi_qm *qm) return 0; err_db_ioremap: - if (qm->use_db_isolation) + if (test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps)) iounmap(qm->db_io_base); err_ioremap: iounmap(qm->io_base); @@ -6095,12 +6165,15 @@ static int hisi_qm_memory_init(struct hisi_qm *qm) int ret, total_func, i; size_t off = 0; - total_func = pci_sriov_get_totalvfs(qm->pdev) + 1; - qm->factor = kcalloc(total_func, sizeof(struct qm_shaper_factor), GFP_KERNEL); - if (!qm->factor) - return -ENOMEM; - for (i = 0; i < total_func; i++) - qm->factor[i].func_qos = QM_QOS_MAX_VAL; + if (test_bit(QM_SUPPORT_FUNC_QOS, &qm->caps)) { + total_func = pci_sriov_get_totalvfs(qm->pdev) + 1; + qm->factor = kcalloc(total_func, sizeof(struct qm_shaper_factor), GFP_KERNEL); + if (!qm->factor) + return -ENOMEM; + + for (i = 0; i < total_func; i++) + qm->factor[i].func_qos = QM_QOS_MAX_VAL; + } #define QM_INIT_BUF(qm, type, num) do { \ (qm)->type = ((qm)->qdma.va + (off)); \ @@ -6136,7 +6209,8 @@ static int hisi_qm_memory_init(struct hisi_qm *qm) dma_free_coherent(dev, qm->qdma.size, qm->qdma.va, qm->qdma.dma); err_destroy_idr: idr_destroy(&qm->qp_idr); - kfree(qm->factor); + if (test_bit(QM_SUPPORT_FUNC_QOS, &qm->caps)) + kfree(qm->factor); return ret; } @@ -6279,7 +6353,7 @@ void hisi_qm_pm_init(struct hisi_qm *qm) { struct device *dev = &qm->pdev->dev; - if (qm->fun_type == QM_HW_VF || qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_RPM, &qm->caps)) return; pm_runtime_set_autosuspend_delay(dev, QM_AUTOSUSPEND_DELAY); @@ -6298,7 +6372,7 @@ void hisi_qm_pm_uninit(struct hisi_qm *qm) { struct device *dev = &qm->pdev->dev; - if (qm->fun_type == QM_HW_VF || qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_RPM, &qm->caps)) return; pm_runtime_get_noresume(dev); diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c index 2c0be91c0b09..1ec3b06345fd 100644 --- a/drivers/crypto/hisilicon/sec2/sec_main.c +++ b/drivers/crypto/hisilicon/sec2/sec_main.c @@ -415,7 +415,7 @@ static void sec_open_sva_prefetch(struct hisi_qm *qm) u32 val; int ret; - if (qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) return; /* Enable prefetch */ @@ -435,7 +435,7 
@@ static void sec_close_sva_prefetch(struct hisi_qm *qm) u32 val; int ret; - if (qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) return; val = readl_relaxed(qm->io_base + SEC_PREFETCH_CFG); diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c index 04c8a4c65d77..36809bae6334 100644 --- a/drivers/crypto/hisilicon/zip/zip_main.c +++ b/drivers/crypto/hisilicon/zip/zip_main.c @@ -348,7 +348,7 @@ static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm) u32 val; int ret; - if (qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) return; /* Enable prefetch */ @@ -368,7 +368,7 @@ static void hisi_zip_close_sva_prefetch(struct hisi_qm *qm) u32 val; int ret; - if (qm->ver < QM_HW_V3) + if (!test_bit(QM_SUPPORT_SVA_PREFETCH, &qm->caps)) return; val = readl_relaxed(qm->io_base + HZIP_PREFETCH_CFG); diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h index 116e8bd68c99..851c962ba473 100644 --- a/include/linux/hisi_acc_qm.h +++ b/include/linux/hisi_acc_qm.h @@ -168,6 +168,15 @@ enum qm_vf_state { QM_NOT_READY, }; +enum qm_cap_bits { + QM_SUPPORT_DB_ISOLATION = 0x0, + QM_SUPPORT_FUNC_QOS, + QM_SUPPORT_STOP_QP, + QM_SUPPORT_MB_COMMAND, + QM_SUPPORT_SVA_PREFETCH, + QM_SUPPORT_RPM, +}; + struct dfx_diff_registers { u32 *regs; u32 reg_offset; @@ -258,6 +267,18 @@ struct hisi_qm_err_ini { void (*err_info_init)(struct hisi_qm *qm); }; +struct hisi_qm_cap_info { + u32 type; + /* Register offset */ + u32 offset; + /* Bit offset in register */ + u32 shift; + u32 mask; + u32 v1_val; + u32 v2_val; + u32 v3_val; +}; + struct hisi_qm_list { struct mutex lock; struct list_head list; @@ -278,6 +299,9 @@ struct hisi_qm { struct pci_dev *pdev; void __iomem *io_base; void __iomem *db_io_base; + + /* Capbility version, 0: not supports */ + u32 cap_ver; u32 sqe_size; u32 qp_base; u32 qp_num; @@ -304,6 +328,8 @@ struct hisi_qm { struct hisi_qm_err_info err_info; struct hisi_qm_err_status err_status; unsigned long misc_ctl; /* driver removing and reset sched */ + /* Device capability bit */ + unsigned long caps; struct rw_semaphore qps_lock; struct idr qp_idr; @@ -326,8 +352,6 @@ struct hisi_qm { bool use_sva; bool is_frozen; - /* doorbell isolation enable */ - bool use_db_isolation; resource_size_t phys_base; resource_size_t db_phys_base; struct uacce_device *uacce; @@ -501,6 +525,9 @@ void hisi_qm_pm_init(struct hisi_qm *qm); int hisi_qm_get_dfx_access(struct hisi_qm *qm); void hisi_qm_put_dfx_access(struct hisi_qm *qm); void hisi_qm_regs_dump(struct seq_file *s, struct debugfs_regset32 *regset); +u32 hisi_qm_get_hw_info(struct hisi_qm *qm, + const struct hisi_qm_cap_info *info_table, + u32 index, bool is_read); /* Used by VFIO ACC live migration driver */ struct pci_driver *hisi_sec_get_pf_driver(void); From patchwork Fri Sep 9 09:46:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weili Qian X-Patchwork-Id: 12971423 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9A63EECAAD3 for ; Fri, 9 Sep 2022 09:49:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231617AbiIIJt6 (ORCPT ); Fri, 9 Sep 2022 05:49:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38284 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231628AbiIIJtw (ORCPT ); Fri, 9 Sep 2022 05:49:52 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3FFA9FF7; Fri, 9 Sep 2022 02:49:48 -0700 (PDT) Received: from dggemv703-chm.china.huawei.com (unknown [172.30.72.55]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4MPB2S3XnlzmV89; Fri, 9 Sep 2022 17:46:08 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by dggemv703-chm.china.huawei.com (10.3.19.46) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:46 +0800 Received: from localhost.localdomain (10.69.192.56) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:45 +0800 From: Weili Qian To: CC: , , , , Weili Qian Subject: [PATCH 02/10] crypto: hisilicon/qm - get qp num and depth from hardware registers Date: Fri, 9 Sep 2022 17:46:56 +0800 Message-ID: <20220909094704.32099-3-qianweili@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com> References: <20220909094704.32099-1-qianweili@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Hardware V3 and later versions can obtain qp num and depth supported by the hardware from registers. To be compatible with later hardware versions, get qp num and depth from registers instead of fixed marcos. 
Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/hpre/hpre_crypto.c | 4 +- drivers/crypto/hisilicon/qm.c | 166 ++++++++++++-------- drivers/crypto/hisilicon/sec2/sec.h | 5 +- drivers/crypto/hisilicon/sec2/sec_crypto.c | 148 ++++++++++------- drivers/crypto/hisilicon/sec2/sec_main.c | 1 - drivers/crypto/hisilicon/zip/zip_crypto.c | 6 +- drivers/crypto/hisilicon/zip/zip_main.c | 1 - include/linux/hisi_acc_qm.h | 5 +- 8 files changed, 202 insertions(+), 134 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index 3ba6f15deafc..2aa91a4f77da 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -147,7 +147,7 @@ static int hpre_alloc_req_id(struct hpre_ctx *ctx) int id; spin_lock_irqsave(&ctx->req_lock, flags); - id = idr_alloc(&ctx->req_idr, NULL, 0, QM_Q_DEPTH, GFP_ATOMIC); + id = idr_alloc(&ctx->req_idr, NULL, 0, ctx->qp->sq_depth, GFP_ATOMIC); spin_unlock_irqrestore(&ctx->req_lock, flags); return id; @@ -488,7 +488,7 @@ static int hpre_ctx_init(struct hpre_ctx *ctx, u8 type) qp->qp_ctx = ctx; qp->req_cb = hpre_alg_cb; - ret = hpre_ctx_set(ctx, qp, QM_Q_DEPTH); + ret = hpre_ctx_set(ctx, qp, qp->sq_depth); if (ret) hpre_stop_qp_and_put(qp); diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index fa77b469761b..b3216ee627e5 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -78,6 +78,9 @@ #define QM_EQ_OVERFLOW 1 #define QM_CQE_ERROR 2 +#define QM_XQ_DEPTH_SHIFT 16 +#define QM_XQ_DEPTH_MASK GENMASK(15, 0) + #define QM_DOORBELL_CMD_SQ 0 #define QM_DOORBELL_CMD_CQ 1 #define QM_DOORBELL_CMD_EQ 2 @@ -88,8 +91,6 @@ #define QM_DB_INDEX_SHIFT_V1 32 #define QM_DB_PRIORITY_SHIFT_V1 48 #define QM_PAGE_SIZE 0x0034 -#define QM_CAPBILITY 0x100158 -#define QM_QP_NUN_MASK GENMASK(10, 0) #define QM_QP_DB_INTERVAL 0x10000 #define QM_MEM_START_INIT 0x100040 @@ -222,7 +223,6 @@ #define WAIT_PERIOD 20 #define REMOVE_WAIT_DELAY 10 #define QM_SQE_ADDR_MASK GENMASK(7, 0) -#define QM_EQ_DEPTH (1024 * 2) #define QM_DRIVER_REMOVING 0 #define QM_RST_SCHED 1 @@ -271,8 +271,8 @@ ((buf_sz) << QM_CQ_BUF_SIZE_SHIFT) | \ ((cqe_sz) << QM_CQ_CQE_SIZE_SHIFT)) -#define QM_MK_CQC_DW3_V2(cqe_sz) \ - ((QM_Q_DEPTH - 1) | ((cqe_sz) << QM_CQ_CQE_SIZE_SHIFT)) +#define QM_MK_CQC_DW3_V2(cqe_sz, cq_depth) \ + ((((u32)cq_depth) - 1) | ((cqe_sz) << QM_CQ_CQE_SIZE_SHIFT)) #define QM_MK_SQC_W13(priority, orders, alg_type) \ (((priority) << QM_SQ_PRIORITY_SHIFT) | \ @@ -285,8 +285,8 @@ ((buf_sz) << QM_SQ_BUF_SIZE_SHIFT) | \ ((u32)ilog2(sqe_sz) << QM_SQ_SQE_SIZE_SHIFT)) -#define QM_MK_SQC_DW3_V2(sqe_sz) \ - ((QM_Q_DEPTH - 1) | ((u32)ilog2(sqe_sz) << QM_SQ_SQE_SIZE_SHIFT)) +#define QM_MK_SQC_DW3_V2(sqe_sz, sq_depth) \ + ((((u32)sq_depth) - 1) | ((u32)ilog2(sqe_sz) << QM_SQ_SQE_SIZE_SHIFT)) #define INIT_QC_COMMON(qc, base, pasid) do { \ (qc)->head = 0; \ @@ -330,6 +330,13 @@ enum qm_mb_cmd { QM_VF_GET_QOS, }; +enum qm_basic_type { + QM_TOTAL_QP_NUM_CAP = 0x0, + QM_FUNC_MAX_QP_CAP, + QM_XEQ_DEPTH_CAP, + QM_QP_DEPTH_CAP, +}; + static const struct hisi_qm_cap_info qm_cap_info_comm[] = { {QM_SUPPORT_DB_ISOLATION, 0x30, 0, BIT(0), 0x0, 0x0, 0x0}, {QM_SUPPORT_FUNC_QOS, 0x3100, 0, BIT(8), 0x0, 0x0, 0x1}, @@ -346,6 +353,13 @@ static const struct hisi_qm_cap_info qm_cap_info_vf[] = { {QM_SUPPORT_RPM, 0x3100, 0, BIT(12), 0x0, 0x0, 0x0}, }; +static const struct hisi_qm_cap_info qm_basic_info[] = { + {QM_TOTAL_QP_NUM_CAP, 0x100158, 0, GENMASK(10, 0), 0x1000, 0x400, 0x400}, + 
{QM_FUNC_MAX_QP_CAP, 0x100158, 11, GENMASK(10, 0), 0x1000, 0x400, 0x400}, + {QM_XEQ_DEPTH_CAP, 0x3104, 0, GENMASK(15, 0), 0x800, 0x4000800, 0x4000800}, + {QM_QP_DEPTH_CAP, 0x3108, 0, GENMASK(31, 0), 0x4000400, 0x4000400, 0x4000400}, +}; + struct qm_cqe { __le32 rsvd0; __le16 cmd_id; @@ -884,6 +898,16 @@ u32 hisi_qm_get_hw_info(struct hisi_qm *qm, } EXPORT_SYMBOL_GPL(hisi_qm_get_hw_info); +static void qm_get_xqc_depth(struct hisi_qm *qm, u16 *low_bits, + u16 *high_bits, enum qm_basic_type type) +{ + u32 depth; + + depth = hisi_qm_get_hw_info(qm, qm_basic_info, type, qm->cap_ver); + *high_bits = depth & QM_XQ_DEPTH_MASK; + *low_bits = (depth >> QM_XQ_DEPTH_SHIFT) & QM_XQ_DEPTH_MASK; +} + static u32 qm_get_irq_num_v1(struct hisi_qm *qm) { return QM_IRQ_NUM_V1; @@ -935,7 +959,7 @@ static void qm_pm_put_sync(struct hisi_qm *qm) static void qm_cq_head_update(struct hisi_qp *qp) { - if (qp->qp_status.cq_head == QM_Q_DEPTH - 1) { + if (qp->qp_status.cq_head == qp->cq_depth - 1) { qp->qp_status.cqc_phase = !qp->qp_status.cqc_phase; qp->qp_status.cq_head = 0; } else { @@ -967,6 +991,7 @@ static int qm_get_complete_eqe_num(struct hisi_qm_poll_data *poll_data) { struct hisi_qm *qm = poll_data->qm; struct qm_eqe *eqe = qm->eqe + qm->status.eq_head; + u16 eq_depth = qm->eq_depth; int eqe_num = 0; u16 cqn; @@ -975,7 +1000,7 @@ static int qm_get_complete_eqe_num(struct hisi_qm_poll_data *poll_data) poll_data->qp_finish_id[eqe_num] = cqn; eqe_num++; - if (qm->status.eq_head == QM_EQ_DEPTH - 1) { + if (qm->status.eq_head == eq_depth - 1) { qm->status.eqc_phase = !qm->status.eqc_phase; eqe = qm->eqe; qm->status.eq_head = 0; @@ -984,7 +1009,7 @@ static int qm_get_complete_eqe_num(struct hisi_qm_poll_data *poll_data) qm->status.eq_head++; } - if (eqe_num == (QM_EQ_DEPTH >> 1) - 1) + if (eqe_num == (eq_depth >> 1) - 1) break; } @@ -1124,6 +1149,7 @@ static irqreturn_t qm_aeq_thread(int irq, void *data) { struct hisi_qm *qm = data; struct qm_aeqe *aeqe = qm->aeqe + qm->status.aeq_head; + u16 aeq_depth = qm->aeq_depth; u32 type, qp_id; while (QM_AEQE_PHASE(aeqe) == qm->status.aeqc_phase) { @@ -1148,7 +1174,7 @@ static irqreturn_t qm_aeq_thread(int irq, void *data) break; } - if (qm->status.aeq_head == QM_Q_DEPTH - 1) { + if (qm->status.aeq_head == aeq_depth - 1) { qm->status.aeqc_phase = !qm->status.aeqc_phase; aeqe = qm->aeqe; qm->status.aeq_head = 0; @@ -2050,7 +2076,7 @@ static int qm_eqc_aeqc_dump(struct hisi_qm *qm, char *s, size_t size, } static int q_dump_param_parse(struct hisi_qm *qm, char *s, - u32 *e_id, u32 *q_id) + u32 *e_id, u32 *q_id, u16 q_depth) { struct device *dev = &qm->pdev->dev; unsigned int qp_num = qm->qp_num; @@ -2076,8 +2102,8 @@ static int q_dump_param_parse(struct hisi_qm *qm, char *s, } ret = kstrtou32(presult, 0, e_id); - if (ret || *e_id >= QM_Q_DEPTH) { - dev_err(dev, "Please input sqe num (0-%d)", QM_Q_DEPTH - 1); + if (ret || *e_id >= q_depth) { + dev_err(dev, "Please input sqe num (0-%u)", q_depth - 1); return -EINVAL; } @@ -2091,21 +2117,22 @@ static int q_dump_param_parse(struct hisi_qm *qm, char *s, static int qm_sq_dump(struct hisi_qm *qm, char *s) { + u16 sq_depth = qm->qp_array->cq_depth; void *sqe, *sqe_curr; struct hisi_qp *qp; u32 qp_id, sqe_id; int ret; - ret = q_dump_param_parse(qm, s, &sqe_id, &qp_id); + ret = q_dump_param_parse(qm, s, &sqe_id, &qp_id, sq_depth); if (ret) return ret; - sqe = kzalloc(qm->sqe_size * QM_Q_DEPTH, GFP_KERNEL); + sqe = kzalloc(qm->sqe_size * sq_depth, GFP_KERNEL); if (!sqe) return -ENOMEM; qp = &qm->qp_array[qp_id]; - memcpy(sqe, qp->sqe, 
qm->sqe_size * QM_Q_DEPTH); + memcpy(sqe, qp->sqe, qm->sqe_size * sq_depth); sqe_curr = sqe + (u32)(sqe_id * qm->sqe_size); memset(sqe_curr + qm->debug.sqe_mask_offset, QM_SQE_ADDR_MASK, qm->debug.sqe_mask_len); @@ -2124,7 +2151,7 @@ static int qm_cq_dump(struct hisi_qm *qm, char *s) u32 qp_id, cqe_id; int ret; - ret = q_dump_param_parse(qm, s, &cqe_id, &qp_id); + ret = q_dump_param_parse(qm, s, &cqe_id, &qp_id, qm->qp_array->cq_depth); if (ret) return ret; @@ -2150,11 +2177,11 @@ static int qm_eq_aeq_dump(struct hisi_qm *qm, const char *s, if (ret) return -EINVAL; - if (!strcmp(name, "EQE") && xeqe_id >= QM_EQ_DEPTH) { - dev_err(dev, "Please input eqe num (0-%d)", QM_EQ_DEPTH - 1); + if (!strcmp(name, "EQE") && xeqe_id >= qm->eq_depth) { + dev_err(dev, "Please input eqe num (0-%u)", qm->eq_depth - 1); return -EINVAL; - } else if (!strcmp(name, "AEQE") && xeqe_id >= QM_Q_DEPTH) { - dev_err(dev, "Please input aeqe num (0-%d)", QM_Q_DEPTH - 1); + } else if (!strcmp(name, "AEQE") && xeqe_id >= qm->aeq_depth) { + dev_err(dev, "Please input aeqe num (0-%u)", qm->eq_depth - 1); return -EINVAL; } @@ -2805,7 +2832,7 @@ static void *qm_get_avail_sqe(struct hisi_qp *qp) struct hisi_qp_status *qp_status = &qp->qp_status; u16 sq_tail = qp_status->sq_tail; - if (unlikely(atomic_read(&qp->qp_status.used) == QM_Q_DEPTH - 1)) + if (unlikely(atomic_read(&qp->qp_status.used) == qp->sq_depth - 1)) return NULL; return qp->sqe + sq_tail * qp->qm->sqe_size; @@ -2846,7 +2873,7 @@ static struct hisi_qp *qm_create_qp_nolock(struct hisi_qm *qm, u8 alg_type) qp = &qm->qp_array[qp_id]; hisi_qm_unset_hw_reset(qp); - memset(qp->cqe, 0, sizeof(struct qm_cqe) * QM_Q_DEPTH); + memset(qp->cqe, 0, sizeof(struct qm_cqe) * qp->cq_depth); qp->event_cb = NULL; qp->req_cb = NULL; @@ -2927,9 +2954,9 @@ static int qm_sq_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid) INIT_QC_COMMON(sqc, qp->sqe_dma, pasid); if (ver == QM_HW_V1) { sqc->dw3 = cpu_to_le32(QM_MK_SQC_DW3_V1(0, 0, 0, qm->sqe_size)); - sqc->w8 = cpu_to_le16(QM_Q_DEPTH - 1); + sqc->w8 = cpu_to_le16(qp->sq_depth - 1); } else { - sqc->dw3 = cpu_to_le32(QM_MK_SQC_DW3_V2(qm->sqe_size)); + sqc->dw3 = cpu_to_le32(QM_MK_SQC_DW3_V2(qm->sqe_size, qp->sq_depth)); sqc->w8 = 0; /* rand_qc */ } sqc->cq_num = cpu_to_le16(qp_id); @@ -2970,9 +2997,9 @@ static int qm_cq_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid) if (ver == QM_HW_V1) { cqc->dw3 = cpu_to_le32(QM_MK_CQC_DW3_V1(0, 0, 0, QM_QC_CQE_SIZE)); - cqc->w8 = cpu_to_le16(QM_Q_DEPTH - 1); + cqc->w8 = cpu_to_le16(qp->cq_depth - 1); } else { - cqc->dw3 = cpu_to_le32(QM_MK_CQC_DW3_V2(QM_QC_CQE_SIZE)); + cqc->dw3 = cpu_to_le32(QM_MK_CQC_DW3_V2(QM_QC_CQE_SIZE, qp->cq_depth)); cqc->w8 = 0; /* rand_qc */ } cqc->dw6 = cpu_to_le32(1 << QM_CQ_PHASE_SHIFT | 1 << QM_CQ_FLAG_SHIFT); @@ -3059,13 +3086,14 @@ static void qp_stop_fail_cb(struct hisi_qp *qp) { int qp_used = atomic_read(&qp->qp_status.used); u16 cur_tail = qp->qp_status.sq_tail; - u16 cur_head = (cur_tail + QM_Q_DEPTH - qp_used) % QM_Q_DEPTH; + u16 sq_depth = qp->sq_depth; + u16 cur_head = (cur_tail + sq_depth - qp_used) % sq_depth; struct hisi_qm *qm = qp->qm; u16 pos; int i; for (i = 0; i < qp_used; i++) { - pos = (i + cur_head) % QM_Q_DEPTH; + pos = (i + cur_head) % sq_depth; qp->req_cb(qp, qp->sqe + (u32)(qm->sqe_size * pos)); atomic_dec(&qp->qp_status.used); } @@ -3213,7 +3241,7 @@ int hisi_qp_send(struct hisi_qp *qp, const void *msg) { struct hisi_qp_status *qp_status = &qp->qp_status; u16 sq_tail = qp_status->sq_tail; - u16 sq_tail_next = (sq_tail + 1) % QM_Q_DEPTH; + u16 
sq_tail_next = (sq_tail + 1) % qp->sq_depth; void *sqe = qm_get_avail_sqe(qp); if (unlikely(atomic_read(&qp->qp_status.flags) == QP_STOP || @@ -3442,6 +3470,7 @@ static int qm_alloc_uacce(struct hisi_qm *qm) struct uacce_device *uacce; unsigned long mmio_page_nr; unsigned long dus_page_nr; + u16 sq_depth, cq_depth; struct uacce_interface interface = { .flags = UACCE_DEV_SVA, .ops = &uacce_qm_ops, @@ -3485,9 +3514,11 @@ static int qm_alloc_uacce(struct hisi_qm *qm) else mmio_page_nr = qm->db_interval / PAGE_SIZE; + qm_get_xqc_depth(qm, &sq_depth, &cq_depth, QM_QP_DEPTH_CAP); + /* Add one more page for device or qp status */ - dus_page_nr = (PAGE_SIZE - 1 + qm->sqe_size * QM_Q_DEPTH + - sizeof(struct qm_cqe) * QM_Q_DEPTH + PAGE_SIZE) >> + dus_page_nr = (PAGE_SIZE - 1 + qm->sqe_size * sq_depth + + sizeof(struct qm_cqe) * cq_depth + PAGE_SIZE) >> PAGE_SHIFT; uacce->qf_pg_num[UACCE_QFRT_MMIO] = mmio_page_nr; @@ -3592,10 +3623,11 @@ static void hisi_qp_memory_uninit(struct hisi_qm *qm, int num) kfree(qm->qp_array); } -static int hisi_qp_memory_init(struct hisi_qm *qm, size_t dma_size, int id) +static int hisi_qp_memory_init(struct hisi_qm *qm, size_t dma_size, int id, + u16 sq_depth, u16 cq_depth) { struct device *dev = &qm->pdev->dev; - size_t off = qm->sqe_size * QM_Q_DEPTH; + size_t off = qm->sqe_size * sq_depth; struct hisi_qp *qp; int ret = -ENOMEM; @@ -3615,6 +3647,8 @@ static int hisi_qp_memory_init(struct hisi_qm *qm, size_t dma_size, int id) qp->cqe = qp->qdma.va + off; qp->cqe_dma = qp->qdma.dma + off; qp->qdma.size = dma_size; + qp->sq_depth = sq_depth; + qp->cq_depth = cq_depth; qp->qm = qm; qp->qp_id = id; @@ -3858,7 +3892,7 @@ static int qm_eq_ctx_cfg(struct hisi_qm *qm) eqc->base_h = cpu_to_le32(upper_32_bits(qm->eqe_dma)); if (qm->ver == QM_HW_V1) eqc->dw3 = cpu_to_le32(QM_EQE_AEQE_SIZE); - eqc->dw6 = cpu_to_le32((QM_EQ_DEPTH - 1) | (1 << QM_EQC_PHASE_SHIFT)); + eqc->dw6 = cpu_to_le32(((u32)qm->eq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT)); eqc_dma = dma_map_single(dev, eqc, sizeof(struct qm_eqc), DMA_TO_DEVICE); @@ -3887,7 +3921,7 @@ static int qm_aeq_ctx_cfg(struct hisi_qm *qm) aeqc->base_l = cpu_to_le32(lower_32_bits(qm->aeqe_dma)); aeqc->base_h = cpu_to_le32(upper_32_bits(qm->aeqe_dma)); - aeqc->dw6 = cpu_to_le32((QM_Q_DEPTH - 1) | (1 << QM_EQC_PHASE_SHIFT)); + aeqc->dw6 = cpu_to_le32(((u32)qm->aeq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT)); aeqc_dma = dma_map_single(dev, aeqc, sizeof(struct qm_aeqc), DMA_TO_DEVICE); @@ -5947,19 +5981,21 @@ EXPORT_SYMBOL_GPL(hisi_qm_alg_unregister); static int qm_get_qp_num(struct hisi_qm *qm) { - if (qm->ver == QM_HW_V1) - qm->ctrl_qp_num = QM_QNUM_V1; - else if (qm->ver == QM_HW_V2) - qm->ctrl_qp_num = QM_QNUM_V2; - else - qm->ctrl_qp_num = readl(qm->io_base + QM_CAPBILITY) & - QM_QP_NUN_MASK; + bool is_db_isolation; - if (test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps)) - qm->max_qp_num = (readl(qm->io_base + QM_CAPBILITY) >> - QM_QP_MAX_NUM_SHIFT) & QM_QP_NUN_MASK; - else - qm->max_qp_num = qm->ctrl_qp_num; + /* VF's qp_num assigned by PF in v2, and VF can get qp_num by vft. 
*/ + if (qm->fun_type == QM_HW_VF) { + if (qm->ver != QM_HW_V1) + /* v2 starts to support get vft by mailbox */ + return hisi_qm_get_vft(qm, &qm->qp_base, &qm->qp_num); + + return 0; + } + + is_db_isolation = test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps); + qm->ctrl_qp_num = hisi_qm_get_hw_info(qm, qm_basic_info, QM_TOTAL_QP_NUM_CAP, true); + qm->max_qp_num = hisi_qm_get_hw_info(qm, qm_basic_info, + QM_FUNC_MAX_QP_CAP, is_db_isolation); /* check if qp number is valid */ if (qm->qp_num > qm->max_qp_num) { @@ -6039,11 +6075,9 @@ static int qm_get_pci_res(struct hisi_qm *qm) qm->db_interval = 0; } - if (qm->fun_type == QM_HW_PF) { - ret = qm_get_qp_num(qm); - if (ret) - goto err_db_ioremap; - } + ret = qm_get_qp_num(qm); + if (ret) + goto err_db_ioremap; return 0; @@ -6126,6 +6160,7 @@ static int hisi_qm_init_work(struct hisi_qm *qm) static int hisi_qp_alloc_memory(struct hisi_qm *qm) { struct device *dev = &qm->pdev->dev; + u16 sq_depth, cq_depth; size_t qp_dma_size; int i, ret; @@ -6139,13 +6174,14 @@ static int hisi_qp_alloc_memory(struct hisi_qm *qm) return -ENOMEM; } + qm_get_xqc_depth(qm, &sq_depth, &cq_depth, QM_QP_DEPTH_CAP); + /* one more page for device or qp statuses */ - qp_dma_size = qm->sqe_size * QM_Q_DEPTH + - sizeof(struct qm_cqe) * QM_Q_DEPTH; + qp_dma_size = qm->sqe_size * sq_depth + sizeof(struct qm_cqe) * cq_depth; qp_dma_size = PAGE_ALIGN(qp_dma_size) + PAGE_SIZE; for (i = 0; i < qm->qp_num; i++) { qm->poll_data[i].qm = qm; - ret = hisi_qp_memory_init(qm, qp_dma_size, i); + ret = hisi_qp_memory_init(qm, qp_dma_size, i, sq_depth, cq_depth); if (ret) goto err_init_qp_mem; @@ -6182,8 +6218,9 @@ static int hisi_qm_memory_init(struct hisi_qm *qm) } while (0) idr_init(&qm->qp_idr); - qm->qdma.size = QMC_ALIGN(sizeof(struct qm_eqe) * QM_EQ_DEPTH) + - QMC_ALIGN(sizeof(struct qm_aeqe) * QM_Q_DEPTH) + + qm_get_xqc_depth(qm, &qm->eq_depth, &qm->aeq_depth, QM_XEQ_DEPTH_CAP); + qm->qdma.size = QMC_ALIGN(sizeof(struct qm_eqe) * qm->eq_depth) + + QMC_ALIGN(sizeof(struct qm_aeqe) * qm->aeq_depth) + QMC_ALIGN(sizeof(struct qm_sqc) * qm->qp_num) + QMC_ALIGN(sizeof(struct qm_cqc) * qm->qp_num); qm->qdma.va = dma_alloc_coherent(dev, qm->qdma.size, &qm->qdma.dma, @@ -6194,8 +6231,8 @@ static int hisi_qm_memory_init(struct hisi_qm *qm) goto err_destroy_idr; } - QM_INIT_BUF(qm, eqe, QM_EQ_DEPTH); - QM_INIT_BUF(qm, aeqe, QM_Q_DEPTH); + QM_INIT_BUF(qm, eqe, qm->eq_depth); + QM_INIT_BUF(qm, aeqe, qm->aeq_depth); QM_INIT_BUF(qm, sqc, qm->qp_num); QM_INIT_BUF(qm, cqc, qm->qp_num); @@ -6257,13 +6294,6 @@ int hisi_qm_init(struct hisi_qm *qm) if (ret) goto err_pci_init; - if (qm->fun_type == QM_HW_VF && qm->ver != QM_HW_V1) { - /* v2 starts to support get vft by mailbox */ - ret = hisi_qm_get_vft(qm, &qm->qp_base, &qm->qp_num); - if (ret) - goto err_irq_register; - } - if (qm->fun_type == QM_HW_PF) { qm_disable_clock_gate(qm); ret = qm_dev_mem_reset(qm); diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h index d2a0bc93e752..04e034abc5e8 100644 --- a/drivers/crypto/hisilicon/sec2/sec.h +++ b/drivers/crypto/hisilicon/sec2/sec.h @@ -17,6 +17,7 @@ struct sec_alg_res { dma_addr_t a_ivin_dma; u8 *out_mac; dma_addr_t out_mac_dma; + u16 depth; }; /* Cipher request of SEC private */ @@ -115,9 +116,9 @@ struct sec_cipher_ctx { /* SEC queue context which defines queue's relatives */ struct sec_qp_ctx { struct hisi_qp *qp; - struct sec_req *req_list[QM_Q_DEPTH]; + struct sec_req **req_list; struct idr req_idr; - struct sec_alg_res res[QM_Q_DEPTH]; + struct sec_alg_res *res; 
struct sec_ctx *ctx; spinlock_t req_lock; struct list_head backlog; diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c index 77c9f13cf69a..db2165e9fcbf 100644 --- a/drivers/crypto/hisilicon/sec2/sec_crypto.c +++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c @@ -59,14 +59,14 @@ #define SEC_ICV_MASK 0x000E #define SEC_SQE_LEN_RATE_MASK 0x3 -#define SEC_TOTAL_IV_SZ (SEC_IV_SIZE * QM_Q_DEPTH) +#define SEC_TOTAL_IV_SZ(depth) (SEC_IV_SIZE * (depth)) #define SEC_SGL_SGE_NR 128 #define SEC_CIPHER_AUTH 0xfe #define SEC_AUTH_CIPHER 0x1 #define SEC_MAX_MAC_LEN 64 #define SEC_MAX_AAD_LEN 65535 #define SEC_MAX_CCM_AAD_LEN 65279 -#define SEC_TOTAL_MAC_SZ (SEC_MAX_MAC_LEN * QM_Q_DEPTH) +#define SEC_TOTAL_MAC_SZ(depth) (SEC_MAX_MAC_LEN * (depth)) #define SEC_PBUF_SZ 512 #define SEC_PBUF_IV_OFFSET SEC_PBUF_SZ @@ -74,11 +74,11 @@ #define SEC_PBUF_PKG (SEC_PBUF_SZ + SEC_IV_SIZE + \ SEC_MAX_MAC_LEN * 2) #define SEC_PBUF_NUM (PAGE_SIZE / SEC_PBUF_PKG) -#define SEC_PBUF_PAGE_NUM (QM_Q_DEPTH / SEC_PBUF_NUM) -#define SEC_PBUF_LEFT_SZ (SEC_PBUF_PKG * (QM_Q_DEPTH - \ - SEC_PBUF_PAGE_NUM * SEC_PBUF_NUM)) -#define SEC_TOTAL_PBUF_SZ (PAGE_SIZE * SEC_PBUF_PAGE_NUM + \ - SEC_PBUF_LEFT_SZ) +#define SEC_PBUF_PAGE_NUM(depth) ((depth) / SEC_PBUF_NUM) +#define SEC_PBUF_LEFT_SZ(depth) (SEC_PBUF_PKG * ((depth) - \ + SEC_PBUF_PAGE_NUM(depth) * SEC_PBUF_NUM)) +#define SEC_TOTAL_PBUF_SZ(depth) (PAGE_SIZE * SEC_PBUF_PAGE_NUM(depth) + \ + SEC_PBUF_LEFT_SZ(depth)) #define SEC_SQE_LEN_RATE 4 #define SEC_SQE_CFLAG 2 @@ -128,9 +128,7 @@ static int sec_alloc_req_id(struct sec_req *req, struct sec_qp_ctx *qp_ctx) int req_id; spin_lock_bh(&qp_ctx->req_lock); - - req_id = idr_alloc_cyclic(&qp_ctx->req_idr, NULL, - 0, QM_Q_DEPTH, GFP_ATOMIC); + req_id = idr_alloc_cyclic(&qp_ctx->req_idr, NULL, 0, qp_ctx->qp->sq_depth, GFP_ATOMIC); spin_unlock_bh(&qp_ctx->req_lock); if (unlikely(req_id < 0)) { dev_err(req->ctx->dev, "alloc req id fail!\n"); @@ -148,7 +146,7 @@ static void sec_free_req_id(struct sec_req *req) struct sec_qp_ctx *qp_ctx = req->qp_ctx; int req_id = req->req_id; - if (unlikely(req_id < 0 || req_id >= QM_Q_DEPTH)) { + if (unlikely(req_id < 0 || req_id >= qp_ctx->qp->sq_depth)) { dev_err(req->ctx->dev, "free request id invalid!\n"); return; } @@ -300,14 +298,15 @@ static int sec_bd_send(struct sec_ctx *ctx, struct sec_req *req) /* Get DMA memory resources */ static int sec_alloc_civ_resource(struct device *dev, struct sec_alg_res *res) { + u16 q_depth = res->depth; int i; - res->c_ivin = dma_alloc_coherent(dev, SEC_TOTAL_IV_SZ, + res->c_ivin = dma_alloc_coherent(dev, SEC_TOTAL_IV_SZ(q_depth), &res->c_ivin_dma, GFP_KERNEL); if (!res->c_ivin) return -ENOMEM; - for (i = 1; i < QM_Q_DEPTH; i++) { + for (i = 1; i < q_depth; i++) { res[i].c_ivin_dma = res->c_ivin_dma + i * SEC_IV_SIZE; res[i].c_ivin = res->c_ivin + i * SEC_IV_SIZE; } @@ -318,20 +317,21 @@ static int sec_alloc_civ_resource(struct device *dev, struct sec_alg_res *res) static void sec_free_civ_resource(struct device *dev, struct sec_alg_res *res) { if (res->c_ivin) - dma_free_coherent(dev, SEC_TOTAL_IV_SZ, + dma_free_coherent(dev, SEC_TOTAL_IV_SZ(res->depth), res->c_ivin, res->c_ivin_dma); } static int sec_alloc_aiv_resource(struct device *dev, struct sec_alg_res *res) { + u16 q_depth = res->depth; int i; - res->a_ivin = dma_alloc_coherent(dev, SEC_TOTAL_IV_SZ, + res->a_ivin = dma_alloc_coherent(dev, SEC_TOTAL_IV_SZ(q_depth), &res->a_ivin_dma, GFP_KERNEL); if (!res->a_ivin) return -ENOMEM; - for (i = 1; i < QM_Q_DEPTH; i++) { 
+ for (i = 1; i < q_depth; i++) { res[i].a_ivin_dma = res->a_ivin_dma + i * SEC_IV_SIZE; res[i].a_ivin = res->a_ivin + i * SEC_IV_SIZE; } @@ -342,20 +342,21 @@ static int sec_alloc_aiv_resource(struct device *dev, struct sec_alg_res *res) static void sec_free_aiv_resource(struct device *dev, struct sec_alg_res *res) { if (res->a_ivin) - dma_free_coherent(dev, SEC_TOTAL_IV_SZ, + dma_free_coherent(dev, SEC_TOTAL_IV_SZ(res->depth), res->a_ivin, res->a_ivin_dma); } static int sec_alloc_mac_resource(struct device *dev, struct sec_alg_res *res) { + u16 q_depth = res->depth; int i; - res->out_mac = dma_alloc_coherent(dev, SEC_TOTAL_MAC_SZ << 1, + res->out_mac = dma_alloc_coherent(dev, SEC_TOTAL_MAC_SZ(q_depth) << 1, &res->out_mac_dma, GFP_KERNEL); if (!res->out_mac) return -ENOMEM; - for (i = 1; i < QM_Q_DEPTH; i++) { + for (i = 1; i < q_depth; i++) { res[i].out_mac_dma = res->out_mac_dma + i * (SEC_MAX_MAC_LEN << 1); res[i].out_mac = res->out_mac + i * (SEC_MAX_MAC_LEN << 1); @@ -367,14 +368,14 @@ static int sec_alloc_mac_resource(struct device *dev, struct sec_alg_res *res) static void sec_free_mac_resource(struct device *dev, struct sec_alg_res *res) { if (res->out_mac) - dma_free_coherent(dev, SEC_TOTAL_MAC_SZ << 1, + dma_free_coherent(dev, SEC_TOTAL_MAC_SZ(res->depth) << 1, res->out_mac, res->out_mac_dma); } static void sec_free_pbuf_resource(struct device *dev, struct sec_alg_res *res) { if (res->pbuf) - dma_free_coherent(dev, SEC_TOTAL_PBUF_SZ, + dma_free_coherent(dev, SEC_TOTAL_PBUF_SZ(res->depth), res->pbuf, res->pbuf_dma); } @@ -384,10 +385,12 @@ static void sec_free_pbuf_resource(struct device *dev, struct sec_alg_res *res) */ static int sec_alloc_pbuf_resource(struct device *dev, struct sec_alg_res *res) { + u16 q_depth = res->depth; + int size = SEC_PBUF_PAGE_NUM(q_depth); int pbuf_page_offset; int i, j, k; - res->pbuf = dma_alloc_coherent(dev, SEC_TOTAL_PBUF_SZ, + res->pbuf = dma_alloc_coherent(dev, SEC_TOTAL_PBUF_SZ(q_depth), &res->pbuf_dma, GFP_KERNEL); if (!res->pbuf) return -ENOMEM; @@ -400,11 +403,11 @@ static int sec_alloc_pbuf_resource(struct device *dev, struct sec_alg_res *res) * So we need SEC_PBUF_PAGE_NUM numbers of PAGE * for the SEC_TOTAL_PBUF_SZ */ - for (i = 0; i <= SEC_PBUF_PAGE_NUM; i++) { + for (i = 0; i <= size; i++) { pbuf_page_offset = PAGE_SIZE * i; for (j = 0; j < SEC_PBUF_NUM; j++) { k = i * SEC_PBUF_NUM + j; - if (k == QM_Q_DEPTH) + if (k == q_depth) break; res[k].pbuf = res->pbuf + j * SEC_PBUF_PKG + pbuf_page_offset; @@ -470,36 +473,29 @@ static void sec_alg_resource_free(struct sec_ctx *ctx, sec_free_mac_resource(dev, qp_ctx->res); } -static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx, - int qp_ctx_id, int alg_type) +static int sec_alloc_qp_ctx_resource(struct hisi_qm *qm, struct sec_ctx *ctx, + struct sec_qp_ctx *qp_ctx) { + u16 q_depth = qp_ctx->qp->sq_depth; struct device *dev = ctx->dev; - struct sec_qp_ctx *qp_ctx; - struct hisi_qp *qp; int ret = -ENOMEM; - qp_ctx = &ctx->qp_ctx[qp_ctx_id]; - qp = ctx->qps[qp_ctx_id]; - qp->req_type = 0; - qp->qp_ctx = qp_ctx; - qp_ctx->qp = qp; - qp_ctx->ctx = ctx; - - qp->req_cb = sec_req_cb; + qp_ctx->req_list = kcalloc(q_depth, sizeof(struct sec_req *), GFP_KERNEL); + if (!qp_ctx->req_list) + return ret; - spin_lock_init(&qp_ctx->req_lock); - idr_init(&qp_ctx->req_idr); - INIT_LIST_HEAD(&qp_ctx->backlog); + qp_ctx->res = kcalloc(q_depth, sizeof(struct sec_alg_res), GFP_KERNEL); + if (!qp_ctx->res) + goto err_free_req_list; + qp_ctx->res->depth = q_depth; - qp_ctx->c_in_pool = 
hisi_acc_create_sgl_pool(dev, QM_Q_DEPTH, - SEC_SGL_SGE_NR); + qp_ctx->c_in_pool = hisi_acc_create_sgl_pool(dev, q_depth, SEC_SGL_SGE_NR); if (IS_ERR(qp_ctx->c_in_pool)) { dev_err(dev, "fail to create sgl pool for input!\n"); - goto err_destroy_idr; + goto err_free_res; } - qp_ctx->c_out_pool = hisi_acc_create_sgl_pool(dev, QM_Q_DEPTH, - SEC_SGL_SGE_NR); + qp_ctx->c_out_pool = hisi_acc_create_sgl_pool(dev, q_depth, SEC_SGL_SGE_NR); if (IS_ERR(qp_ctx->c_out_pool)) { dev_err(dev, "fail to create sgl pool for output!\n"); goto err_free_c_in_pool; @@ -509,34 +505,72 @@ static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx, if (ret) goto err_free_c_out_pool; - ret = hisi_qm_start_qp(qp, 0); - if (ret < 0) - goto err_queue_free; - return 0; -err_queue_free: - sec_alg_resource_free(ctx, qp_ctx); err_free_c_out_pool: hisi_acc_free_sgl_pool(dev, qp_ctx->c_out_pool); err_free_c_in_pool: hisi_acc_free_sgl_pool(dev, qp_ctx->c_in_pool); -err_destroy_idr: - idr_destroy(&qp_ctx->req_idr); +err_free_res: + kfree(qp_ctx->res); +err_free_req_list: + kfree(qp_ctx->req_list); return ret; } -static void sec_release_qp_ctx(struct sec_ctx *ctx, - struct sec_qp_ctx *qp_ctx) +static void sec_free_qp_ctx_resource(struct sec_ctx *ctx, struct sec_qp_ctx *qp_ctx) { struct device *dev = ctx->dev; - hisi_qm_stop_qp(qp_ctx->qp); sec_alg_resource_free(ctx, qp_ctx); - hisi_acc_free_sgl_pool(dev, qp_ctx->c_out_pool); hisi_acc_free_sgl_pool(dev, qp_ctx->c_in_pool); + kfree(qp_ctx->res); + kfree(qp_ctx->req_list); +} + +static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx, + int qp_ctx_id, int alg_type) +{ + struct sec_qp_ctx *qp_ctx; + struct hisi_qp *qp; + int ret; + qp_ctx = &ctx->qp_ctx[qp_ctx_id]; + qp = ctx->qps[qp_ctx_id]; + qp->req_type = 0; + qp->qp_ctx = qp_ctx; + qp_ctx->qp = qp; + qp_ctx->ctx = ctx; + + qp->req_cb = sec_req_cb; + + spin_lock_init(&qp_ctx->req_lock); + idr_init(&qp_ctx->req_idr); + INIT_LIST_HEAD(&qp_ctx->backlog); + + ret = sec_alloc_qp_ctx_resource(qm, ctx, qp_ctx); + if (ret) + goto err_destroy_idr; + + ret = hisi_qm_start_qp(qp, 0); + if (ret < 0) + goto err_resource_free; + + return 0; + +err_resource_free: + sec_free_qp_ctx_resource(ctx, qp_ctx); +err_destroy_idr: + idr_destroy(&qp_ctx->req_idr); + return ret; +} + +static void sec_release_qp_ctx(struct sec_ctx *ctx, + struct sec_qp_ctx *qp_ctx) +{ + hisi_qm_stop_qp(qp_ctx->qp); + sec_free_qp_ctx_resource(ctx, qp_ctx); idr_destroy(&qp_ctx->req_idr); } @@ -559,7 +593,7 @@ static int sec_ctx_base_init(struct sec_ctx *ctx) ctx->pbuf_supported = ctx->sec->iommu_used; /* Half of queue depth is taken as fake requests limit in the queue. 
*/ - ctx->fake_req_limit = QM_Q_DEPTH >> 1; + ctx->fake_req_limit = ctx->qps[0]->sq_depth >> 1; ctx->qp_ctx = kcalloc(sec->ctx_q_num, sizeof(struct sec_qp_ctx), GFP_KERNEL); if (!ctx->qp_ctx) { diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c index 1ec3b06345fd..ea7f85f72b10 100644 --- a/drivers/crypto/hisilicon/sec2/sec_main.c +++ b/drivers/crypto/hisilicon/sec2/sec_main.c @@ -27,7 +27,6 @@ #define SEC_BD_ERR_CHK_EN3 0xffffbfff #define SEC_SQE_SIZE 128 -#define SEC_SQ_SIZE (SEC_SQE_SIZE * QM_Q_DEPTH) #define SEC_PF_DEF_Q_NUM 256 #define SEC_PF_DEF_Q_BASE 0 #define SEC_CTX_Q_NUM_DEF 2 diff --git a/drivers/crypto/hisilicon/zip/zip_crypto.c b/drivers/crypto/hisilicon/zip/zip_crypto.c index a6c914d527eb..a7f6884c3ab3 100644 --- a/drivers/crypto/hisilicon/zip/zip_crypto.c +++ b/drivers/crypto/hisilicon/zip/zip_crypto.c @@ -599,12 +599,13 @@ static void hisi_zip_ctx_exit(struct hisi_zip_ctx *hisi_zip_ctx) static int hisi_zip_create_req_q(struct hisi_zip_ctx *ctx) { + u16 q_depth = ctx->qp_ctx[0].qp->sq_depth; struct hisi_zip_req_q *req_q; int i, ret; for (i = 0; i < HZIP_CTX_Q_NUM; i++) { req_q = &ctx->qp_ctx[i].req_q; - req_q->size = QM_Q_DEPTH; + req_q->size = q_depth; req_q->req_bitmap = bitmap_zalloc(req_q->size, GFP_KERNEL); if (!req_q->req_bitmap) { @@ -650,6 +651,7 @@ static void hisi_zip_release_req_q(struct hisi_zip_ctx *ctx) static int hisi_zip_create_sgl_pool(struct hisi_zip_ctx *ctx) { + u16 q_depth = ctx->qp_ctx[0].qp->sq_depth; struct hisi_zip_qp_ctx *tmp; struct device *dev; int i; @@ -657,7 +659,7 @@ static int hisi_zip_create_sgl_pool(struct hisi_zip_ctx *ctx) for (i = 0; i < HZIP_CTX_Q_NUM; i++) { tmp = &ctx->qp_ctx[i]; dev = &tmp->qp->qm->pdev->dev; - tmp->sgl_pool = hisi_acc_create_sgl_pool(dev, QM_Q_DEPTH << 1, + tmp->sgl_pool = hisi_acc_create_sgl_pool(dev, q_depth << 1, sgl_sge_nr); if (IS_ERR(tmp->sgl_pool)) { if (i == 1) diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c index 36809bae6334..24d5226e0a0a 100644 --- a/drivers/crypto/hisilicon/zip/zip_main.c +++ b/drivers/crypto/hisilicon/zip/zip_main.c @@ -82,7 +82,6 @@ #define HZIP_CORE_NUM (HZIP_COMP_CORE_NUM + \ HZIP_DECOMP_CORE_NUM) #define HZIP_SQE_SIZE 128 -#define HZIP_SQ_SIZE (HZIP_SQE_SIZE * QM_Q_DEPTH) #define HZIP_PF_DEF_Q_NUM 64 #define HZIP_PF_DEF_Q_BASE 0 diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h index 851c962ba473..371c0e7ffd27 100644 --- a/include/linux/hisi_acc_qm.h +++ b/include/linux/hisi_acc_qm.h @@ -109,7 +109,6 @@ QM_MAILBOX_TIMEOUT | QM_FLR_TIMEOUT) #define QM_BASE_CE QM_ECC_1BIT -#define QM_Q_DEPTH 1024 #define QM_MIN_QNUM 2 #define HISI_ACC_SGL_SGE_NR_MAX 255 #define QM_SHAPER_CFG 0x100164 @@ -310,6 +309,8 @@ struct hisi_qm { u32 max_qp_num; u32 vfs_num; u32 db_interval; + u16 eq_depth; + u16 aeq_depth; struct list_head list; struct hisi_qm_list *qm_list; @@ -375,6 +376,8 @@ struct hisi_qp_ops { struct hisi_qp { u32 qp_id; + u16 sq_depth; + u16 cq_depth; u8 alg_type; u8 req_type; From patchwork Fri Sep 9 09:46:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weili Qian X-Patchwork-Id: 12971425 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9853C6FA86 for ; Fri, 9 Sep 2022 
From: Weili Qian
Subject: [PATCH 03/10] crypto: hisilicon/qm - add UACCE_CMD_QM_SET_QP_INFO support
Date: Fri, 9 Sep 2022 17:46:57 +0800
Message-ID: <20220909094704.32099-4-qianweili@huawei.com>
In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com>
References: <20220909094704.32099-1-qianweili@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

To be compatible with accelerator devices of different versions, add the
'UACCE_CMD_QM_SET_QP_INFO' ioctl so that userspace can obtain queue
information, including the queue depths and buffer descriptor (BD) size.
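The uapi half of the change (struct hisi_qp_info and the ioctl number)
appears at the end of the diff below. A hypothetical userspace sketch of
querying a queue; the device path and error handling are illustrative
assumptions, not part of the patch:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/types.h>

    /* From the uapi diff below (include/uapi/misc/uacce/hisi_qm.h). */
    struct hisi_qp_info {
            __u32 sqe_size;
            __u16 sq_depth;
            __u16 cq_depth;
            __u64 reserved;
    };
    #define UACCE_CMD_QM_SET_QP_INFO _IOWR('H', 11, struct hisi_qp_info)

    int main(void)
    {
            struct hisi_qp_info info = { 0 };
            /* Illustrative path; real uacce queue nodes vary by device. */
            int fd = open("/dev/hisi_zip-0", O_RDWR);

            if (fd < 0 || ioctl(fd, UACCE_CMD_QM_SET_QP_INFO, &info) < 0) {
                    perror("UACCE_CMD_QM_SET_QP_INFO");
                    return 1;
            }
            printf("sqe size %u, sq depth %u, cq depth %u\n",
                   info.sqe_size, info.sq_depth, info.cq_depth);
            close(fd);
            return 0;
    }

The kernel fills in sqe_size and the sq/cq depths and copies the struct
back, so userspace no longer has to hard-code a queue depth per hardware
version.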
Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/qm.c | 21 ++++++++++++++++++--- include/uapi/misc/uacce/hisi_qm.h | 17 ++++++++++++++++- 2 files changed, 34 insertions(+), 4 deletions(-) diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index b3216ee627e5..e227a3e50600 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -3430,6 +3430,7 @@ static long hisi_qm_uacce_ioctl(struct uacce_queue *q, unsigned int cmd, unsigned long arg) { struct hisi_qp *qp = q->priv; + struct hisi_qp_info qp_info; struct hisi_qp_ctx qp_ctx; if (cmd == UACCE_CMD_QM_SET_QP_CTX) { @@ -3446,11 +3447,25 @@ static long hisi_qm_uacce_ioctl(struct uacce_queue *q, unsigned int cmd, if (copy_to_user((void __user *)arg, &qp_ctx, sizeof(struct hisi_qp_ctx))) return -EFAULT; - } else { - return -EINVAL; + + return 0; + } else if (cmd == UACCE_CMD_QM_SET_QP_INFO) { + if (copy_from_user(&qp_info, (void __user *)arg, + sizeof(struct hisi_qp_info))) + return -EFAULT; + + qp_info.sqe_size = qp->qm->sqe_size; + qp_info.sq_depth = qp->sq_depth; + qp_info.cq_depth = qp->cq_depth; + + if (copy_to_user((void __user *)arg, &qp_info, + sizeof(struct hisi_qp_info))) + return -EFAULT; + + return 0; } - return 0; + return -EINVAL; } static const struct uacce_ops uacce_qm_ops = { diff --git a/include/uapi/misc/uacce/hisi_qm.h b/include/uapi/misc/uacce/hisi_qm.h index 1faef5ff87ef..3e66dbc2f323 100644 --- a/include/uapi/misc/uacce/hisi_qm.h +++ b/include/uapi/misc/uacce/hisi_qm.h @@ -14,11 +14,26 @@ struct hisi_qp_ctx { __u16 qc_type; }; +/** + * struct hisi_qp_info - User data for hisi qp. + * @sqe_size: Submission queue element size + * @sq_depth: The number of sqe + * @cq_depth: The number of cqe + * @reserved: Reserved data + */ +struct hisi_qp_info { + __u32 sqe_size; + __u16 sq_depth; + __u16 cq_depth; + __u64 reserved; +}; + #define HISI_QM_API_VER_BASE "hisi_qm_v1" #define HISI_QM_API_VER2_BASE "hisi_qm_v2" #define HISI_QM_API_VER3_BASE "hisi_qm_v3" /* UACCE_CMD_QM_SET_QP_CTX: Set qp algorithm type */ #define UACCE_CMD_QM_SET_QP_CTX _IOWR('H', 10, struct hisi_qp_ctx) - +/* UACCE_CMD_QM_SET_QP_INFO: Set qp depth and BD size */ +#define UACCE_CMD_QM_SET_QP_INFO _IOWR('H', 11, struct hisi_qp_info) #endif From patchwork Fri Sep 9 09:46:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weili Qian X-Patchwork-Id: 12971429 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F5D2C6FA82 for ; Fri, 9 Sep 2022 09:50:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231843AbiIIJuC (ORCPT ); Fri, 9 Sep 2022 05:50:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231720AbiIIJty (ORCPT ); Fri, 9 Sep 2022 05:49:54 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A90BE26AF9; Fri, 9 Sep 2022 02:49:49 -0700 (PDT) Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.54]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4MPB1S4Sd3zZcHj; Fri, 9 Sep 2022 17:45:16 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by 
dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:48 +0800 Received: from localhost.localdomain (10.69.192.56) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:47 +0800 From: Weili Qian To: CC: , , , , Weili Qian Subject: [PATCH 04/10] crypto: hisilicon/qm - get error type from hardware registers Date: Fri, 9 Sep 2022 17:46:58 +0800 Message-ID: <20220909094704.32099-5-qianweili@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com> References: <20220909094704.32099-1-qianweili@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Hardware V3 and later versions support reading the error type from hardware registers. To be compatible with later hardware versions, get the error type from these registers instead of from fixed macros. Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/hpre/hpre_main.c | 68 ++++++++++++--- drivers/crypto/hisilicon/qm.c | 101 ++++++++++------------ drivers/crypto/hisilicon/sec2/sec.h | 11 +++ drivers/crypto/hisilicon/sec2/sec_main.c | 51 ++++++++--- drivers/crypto/hisilicon/zip/zip_main.c | 74 ++++++++++++---- include/linux/hisi_acc_qm.h | 27 +----- 6 files changed, 206 insertions(+), 126 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c index d484105b2716..36177c437c56 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_main.c +++ b/drivers/crypto/hisilicon/hpre/hpre_main.c @@ -53,9 +53,7 @@ #define HPRE_CORE_IS_SCHD_OFFSET 0x90 #define HPRE_RAS_CE_ENB 0x301410 -#define HPRE_HAC_RAS_CE_ENABLE (BIT(0) | BIT(22) | BIT(23)) #define HPRE_RAS_NFE_ENB 0x301414 -#define HPRE_HAC_RAS_NFE_ENABLE 0x3ffffe #define HPRE_RAS_FE_ENB 0x301418 #define HPRE_OOO_SHUTDOWN_SEL 0x301a3c #define HPRE_HAC_RAS_FE_ENABLE 0 @@ -147,6 +145,28 @@ static const char * const hpre_debug_file_name[] = { [HPRE_CLUSTER_CTRL] = "cluster_ctrl", }; +enum hpre_cap_type { + HPRE_QM_NFE_MASK_CAP, + HPRE_QM_RESET_MASK_CAP, + HPRE_QM_OOO_SHUTDOWN_MASK_CAP, + HPRE_QM_CE_MASK_CAP, + HPRE_NFE_MASK_CAP, + HPRE_RESET_MASK_CAP, + HPRE_OOO_SHUTDOWN_MASK_CAP, + HPRE_CE_MASK_CAP, +}; + +static const struct hisi_qm_cap_info hpre_basic_info[] = { + {HPRE_QM_NFE_MASK_CAP, 0x3124, 0, GENMASK(31, 0), 0x0, 0x1C37, 0x7C37}, + {HPRE_QM_RESET_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0xC37, 0x6C37}, + {HPRE_QM_OOO_SHUTDOWN_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0x4, 0x6C37}, + {HPRE_QM_CE_MASK_CAP, 0x312C, 0, GENMASK(31, 0), 0x0, 0x8, 0x8}, + {HPRE_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0xFFFFFE}, + {HPRE_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0xBFFFFE}, + {HPRE_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x22, 0xBFFFFE}, + {HPRE_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x1, 0x1}, +}; + static const struct hpre_hw_error hpre_hw_errors[] = { { .int_msk = BIT(0), @@ -630,7 +650,8 @@ static void hpre_master_ooo_ctrl(struct hisi_qm *qm, bool enable) val1 = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB); if (enable) { val1 |= HPRE_AM_OOO_SHUTDOWN_ENABLE; - val2 = HPRE_HAC_RAS_NFE_ENABLE; + val2 = hisi_qm_get_hw_info(qm, hpre_basic_info,
+ HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); } else { val1 &= ~HPRE_AM_OOO_SHUTDOWN_ENABLE; val2 = 0x0; @@ -644,21 +665,30 @@ static void hpre_master_ooo_ctrl(struct hisi_qm *qm, bool enable) static void hpre_hw_error_disable(struct hisi_qm *qm) { - /* disable hpre hw error interrupts */ - writel(HPRE_CORE_INT_DISABLE, qm->io_base + HPRE_INT_MASK); + u32 ce, nfe; + ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver); + nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver); + + /* disable hpre hw error interrupts */ + writel(ce | nfe | HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_INT_MASK); /* disable HPRE block master OOO when nfe occurs on Kunpeng930 */ hpre_master_ooo_ctrl(qm, false); } static void hpre_hw_error_enable(struct hisi_qm *qm) { + u32 ce, nfe; + + ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CE_MASK_CAP, qm->cap_ver); + nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver); + /* clear HPRE hw error source if having */ - writel(HPRE_CORE_INT_DISABLE, qm->io_base + HPRE_HAC_SOURCE_INT); + writel(ce | nfe | HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_HAC_SOURCE_INT); /* configure error type */ - writel(HPRE_HAC_RAS_CE_ENABLE, qm->io_base + HPRE_RAS_CE_ENB); - writel(HPRE_HAC_RAS_NFE_ENABLE, qm->io_base + HPRE_RAS_NFE_ENB); + writel(ce, qm->io_base + HPRE_RAS_CE_ENB); + writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB); writel(HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_RAS_FE_ENB); /* enable HPRE block master OOO when nfe occurs on Kunpeng930 */ @@ -1125,7 +1155,11 @@ static u32 hpre_get_hw_err_status(struct hisi_qm *qm) static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts) { + u32 nfe; + writel(err_sts, qm->io_base + HPRE_HAC_SOURCE_INT); + nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver); + writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB); } static void hpre_open_axi_master_ooo(struct hisi_qm *qm) @@ -1143,14 +1177,20 @@ static void hpre_err_info_init(struct hisi_qm *qm) { struct hisi_qm_err_info *err_info = &qm->err_info; - err_info->ce = QM_BASE_CE; - err_info->fe = 0; - err_info->ecc_2bits_mask = HPRE_CORE_ECC_2BIT_ERR | - HPRE_OOO_ECC_2BIT_ERR; - err_info->dev_ce_mask = HPRE_HAC_RAS_CE_ENABLE; + err_info->fe = HPRE_HAC_RAS_FE_ENABLE; + err_info->ce = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_CE_MASK_CAP, qm->cap_ver); + err_info->nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_QM_NFE_MASK_CAP, qm->cap_ver); + err_info->ecc_2bits_mask = HPRE_CORE_ECC_2BIT_ERR | HPRE_OOO_ECC_2BIT_ERR; + err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, + HPRE_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); + err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, + HPRE_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); + err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, + HPRE_QM_RESET_MASK_CAP, qm->cap_ver); + err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, + HPRE_RESET_MASK_CAP, qm->cap_ver); err_info->msi_wr_port = HPRE_WR_MSI_PORT; err_info->acpi_rst = "HRST"; - err_info->nfe = QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT; } static const struct hisi_qm_err_ini hpre_err_ini = { diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index e227a3e50600..f62e48ccef2b 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -126,7 +126,6 @@ #define QM_DFX_CNT_CLR_CE 0x100118 #define QM_ABNORMAL_INT_SOURCE 0x100000 -#define QM_ABNORMAL_INT_SOURCE_CLR GENMASK(14, 0) #define 
QM_ABNORMAL_INT_MASK 0x100004 #define QM_ABNORMAL_INT_MASK_VALUE 0x7fff #define QM_ABNORMAL_INT_STATUS 0x100008 @@ -144,8 +143,10 @@ #define QM_RAS_NFE_ENABLE 0x1000f4 #define QM_RAS_CE_THRESHOLD 0x1000f8 #define QM_RAS_CE_TIMES_PER_IRQ 1 -#define QM_RAS_MSI_INT_SEL 0x1040f4 #define QM_OOO_SHUTDOWN_SEL 0x1040f8 +#define QM_ECC_MBIT BIT(2) +#define QM_DB_TIMEOUT BIT(10) +#define QM_OF_FIFO_OF BIT(11) #define QM_RESET_WAIT_TIMEOUT 400 #define QM_PEH_VENDOR_ID 0x1000d8 @@ -454,7 +455,7 @@ struct hisi_qm_hw_ops { u8 cmd, u16 index, u8 priority); u32 (*get_irq_num)(struct hisi_qm *qm); int (*debug_init)(struct hisi_qm *qm); - void (*hw_error_init)(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe); + void (*hw_error_init)(struct hisi_qm *qm); void (*hw_error_uninit)(struct hisi_qm *qm); enum acc_err_result (*hw_error_handle)(struct hisi_qm *qm); int (*set_msi)(struct hisi_qm *qm, bool set); @@ -651,22 +652,17 @@ static u32 qm_get_dev_err_status(struct hisi_qm *qm) } /* Check if the error causes the master ooo block */ -static int qm_check_dev_error(struct hisi_qm *qm) +static bool qm_check_dev_error(struct hisi_qm *qm) { u32 val, dev_val; if (qm->fun_type == QM_HW_VF) - return 0; - - val = qm_get_hw_error_status(qm); - dev_val = qm_get_dev_err_status(qm); + return false; - if (qm->ver < QM_HW_V3) - return (val & QM_ECC_MBIT) || - (dev_val & qm->err_info.ecc_2bits_mask); + val = qm_get_hw_error_status(qm) & qm->err_info.qm_shutdown_mask; + dev_val = qm_get_dev_err_status(qm) & qm->err_info.dev_shutdown_mask; - return (val & readl(qm->io_base + QM_OOO_SHUTDOWN_SEL)) || - (dev_val & (~qm->err_info.dev_ce_mask)); + return val || dev_val; } static int qm_wait_reset_finish(struct hisi_qm *qm) @@ -2346,58 +2342,65 @@ static void qm_create_debugfs_file(struct hisi_qm *qm, struct dentry *dir, file->debug = &qm->debug; } -static void qm_hw_error_init_v1(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe) +static void qm_hw_error_init_v1(struct hisi_qm *qm) { writel(QM_ABNORMAL_INT_MASK_VALUE, qm->io_base + QM_ABNORMAL_INT_MASK); } -static void qm_hw_error_cfg(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe) +static void qm_hw_error_cfg(struct hisi_qm *qm) { - qm->error_mask = ce | nfe | fe; + struct hisi_qm_err_info *err_info = &qm->err_info; + + qm->error_mask = err_info->nfe | err_info->ce | err_info->fe; /* clear QM hw residual error source */ - writel(QM_ABNORMAL_INT_SOURCE_CLR, - qm->io_base + QM_ABNORMAL_INT_SOURCE); + writel(qm->error_mask, qm->io_base + QM_ABNORMAL_INT_SOURCE); /* configure error type */ - writel(ce, qm->io_base + QM_RAS_CE_ENABLE); + writel(err_info->ce, qm->io_base + QM_RAS_CE_ENABLE); writel(QM_RAS_CE_TIMES_PER_IRQ, qm->io_base + QM_RAS_CE_THRESHOLD); - writel(nfe, qm->io_base + QM_RAS_NFE_ENABLE); - writel(fe, qm->io_base + QM_RAS_FE_ENABLE); + writel(err_info->nfe, qm->io_base + QM_RAS_NFE_ENABLE); + writel(err_info->fe, qm->io_base + QM_RAS_FE_ENABLE); } -static void qm_hw_error_init_v2(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe) +static void qm_hw_error_init_v2(struct hisi_qm *qm) { - u32 irq_enable = ce | nfe | fe; - u32 irq_unmask = ~irq_enable; + u32 irq_unmask; - qm_hw_error_cfg(qm, ce, nfe, fe); + qm_hw_error_cfg(qm); + irq_unmask = ~qm->error_mask; irq_unmask &= readl(qm->io_base + QM_ABNORMAL_INT_MASK); writel(irq_unmask, qm->io_base + QM_ABNORMAL_INT_MASK); } static void qm_hw_error_uninit_v2(struct hisi_qm *qm) { - writel(QM_ABNORMAL_INT_MASK_VALUE, qm->io_base + QM_ABNORMAL_INT_MASK); + u32 irq_mask = qm->error_mask; + + irq_mask |= readl(qm->io_base + QM_ABNORMAL_INT_MASK); + 
writel(irq_mask, qm->io_base + QM_ABNORMAL_INT_MASK); } -static void qm_hw_error_init_v3(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe) +static void qm_hw_error_init_v3(struct hisi_qm *qm) { - u32 irq_enable = ce | nfe | fe; - u32 irq_unmask = ~irq_enable; + u32 irq_unmask; - qm_hw_error_cfg(qm, ce, nfe, fe); + qm_hw_error_cfg(qm); /* enable close master ooo when hardware error happened */ - writel(nfe & (~QM_DB_RANDOM_INVALID), qm->io_base + QM_OOO_SHUTDOWN_SEL); + writel(qm->err_info.qm_shutdown_mask, qm->io_base + QM_OOO_SHUTDOWN_SEL); + irq_unmask = ~qm->error_mask; irq_unmask &= readl(qm->io_base + QM_ABNORMAL_INT_MASK); writel(irq_unmask, qm->io_base + QM_ABNORMAL_INT_MASK); } static void qm_hw_error_uninit_v3(struct hisi_qm *qm) { - writel(QM_ABNORMAL_INT_MASK_VALUE, qm->io_base + QM_ABNORMAL_INT_MASK); + u32 irq_mask = qm->error_mask; + + irq_mask |= readl(qm->io_base + QM_ABNORMAL_INT_MASK); + writel(irq_mask, qm->io_base + QM_ABNORMAL_INT_MASK); /* disable close master ooo when hardware error happened */ writel(0x0, qm->io_base + QM_OOO_SHUTDOWN_SEL); @@ -2442,7 +2445,7 @@ static void qm_log_hw_error(struct hisi_qm *qm, u32 error_status) static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm) { - u32 error_status, tmp, val; + u32 error_status, tmp; /* read err sts */ tmp = readl(qm->io_base + QM_ABNORMAL_INT_STATUS); @@ -2453,17 +2456,11 @@ static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm) qm->err_status.is_qm_ecc_mbit = true; qm_log_hw_error(qm, error_status); - val = error_status | QM_DB_RANDOM_INVALID | QM_BASE_CE; - /* ce error does not need to be reset */ - if (val == (QM_DB_RANDOM_INVALID | QM_BASE_CE)) { - writel(error_status, qm->io_base + - QM_ABNORMAL_INT_SOURCE); - writel(qm->err_info.nfe, - qm->io_base + QM_RAS_NFE_ENABLE); - return ACC_ERR_RECOVERED; - } + if (error_status & qm->err_info.qm_reset_mask) + return ACC_ERR_NEED_RESET; - return ACC_ERR_NEED_RESET; + writel(error_status, qm->io_base + QM_ABNORMAL_INT_SOURCE); + writel(qm->err_info.nfe, qm->io_base + QM_RAS_NFE_ENABLE); } return ACC_ERR_RECOVERED; @@ -4202,14 +4199,12 @@ DEFINE_DEBUGFS_ATTRIBUTE(qm_atomic64_ops, qm_debugfs_atomic64_get, static void qm_hw_error_init(struct hisi_qm *qm) { - struct hisi_qm_err_info *err_info = &qm->err_info; - if (!qm->ops->hw_error_init) { dev_err(&qm->pdev->dev, "QM doesn't support hw error handling!\n"); return; } - qm->ops->hw_error_init(qm, err_info->ce, err_info->nfe, err_info->fe); + qm->ops->hw_error_init(qm); } static void qm_hw_error_uninit(struct hisi_qm *qm) @@ -4963,17 +4958,11 @@ static enum acc_err_result qm_dev_err_handle(struct hisi_qm *qm) if (qm->err_ini->log_dev_hw_err) qm->err_ini->log_dev_hw_err(qm, err_sts); - /* ce error does not need to be reset */ - if ((err_sts | qm->err_info.dev_ce_mask) == - qm->err_info.dev_ce_mask) { - if (qm->err_ini->clear_dev_hw_err_status) - qm->err_ini->clear_dev_hw_err_status(qm, - err_sts); - - return ACC_ERR_RECOVERED; - } + if (err_sts & qm->err_info.dev_reset_mask) + return ACC_ERR_NEED_RESET; - return ACC_ERR_NEED_RESET; + if (qm->err_ini->clear_dev_hw_err_status) + qm->err_ini->clear_dev_hw_err_status(qm, err_sts); } return ACC_ERR_RECOVERED; diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h index 04e034abc5e8..895ba9b47554 100644 --- a/drivers/crypto/hisilicon/sec2/sec.h +++ b/drivers/crypto/hisilicon/sec2/sec.h @@ -192,6 +192,17 @@ struct sec_dev { bool iommu_used; }; +enum sec_cap_type { + SEC_QM_NFE_MASK_CAP = 0x0, + SEC_QM_RESET_MASK_CAP, + 
SEC_QM_OOO_SHUTDOWN_MASK_CAP, + SEC_QM_CE_MASK_CAP, + SEC_NFE_MASK_CAP, + SEC_RESET_MASK_CAP, + SEC_OOO_SHUTDOWN_MASK_CAP, + SEC_CE_MASK_CAP, +}; + void sec_destroy_qps(struct hisi_qp **qps, int qp_num); struct hisi_qp **sec_create_qps(void); int sec_register_to_crypto(struct hisi_qm *qm); diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c index ea7f85f72b10..e814f94dd0d4 100644 --- a/drivers/crypto/hisilicon/sec2/sec_main.c +++ b/drivers/crypto/hisilicon/sec2/sec_main.c @@ -41,16 +41,12 @@ #define SEC_ECC_NUM 16 #define SEC_ECC_MASH 0xFF #define SEC_CORE_INT_DISABLE 0x0 -#define SEC_CORE_INT_ENABLE 0x7c1ff -#define SEC_CORE_INT_CLEAR 0x7c1ff #define SEC_SAA_ENABLE 0x17f #define SEC_RAS_CE_REG 0x301050 #define SEC_RAS_FE_REG 0x301054 #define SEC_RAS_NFE_REG 0x301058 -#define SEC_RAS_CE_ENB_MSK 0x88 #define SEC_RAS_FE_ENB_MSK 0x0 -#define SEC_RAS_NFE_ENB_MSK 0x7c177 #define SEC_OOO_SHUTDOWN_SEL 0x301014 #define SEC_RAS_DISABLE 0x0 #define SEC_MEM_START_INIT_REG 0x301100 @@ -136,6 +132,17 @@ static struct hisi_qm_list sec_devices = { .unregister_from_crypto = sec_unregister_from_crypto, }; +static const struct hisi_qm_cap_info sec_basic_info[] = { + {SEC_QM_NFE_MASK_CAP, 0x3124, 0, GENMASK(31, 0), 0x0, 0x1C77, 0x7C77}, + {SEC_QM_RESET_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0xC77, 0x6C77}, + {SEC_QM_OOO_SHUTDOWN_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0x4, 0x6C77}, + {SEC_QM_CE_MASK_CAP, 0x312C, 0, GENMASK(31, 0), 0x0, 0x8, 0x8}, + {SEC_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x177, 0x60177}, + {SEC_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x177, 0x177}, + {SEC_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x4, 0x177}, + {SEC_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x88, 0xC088}, +}; + static const struct sec_hw_error sec_hw_errors[] = { { .int_msk = BIT(0), @@ -575,7 +582,8 @@ static void sec_master_ooo_ctrl(struct hisi_qm *qm, bool enable) val1 = readl(qm->io_base + SEC_CONTROL_REG); if (enable) { val1 |= SEC_AXI_SHUTDOWN_ENABLE; - val2 = SEC_RAS_NFE_ENB_MSK; + val2 = hisi_qm_get_hw_info(qm, sec_basic_info, + SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); } else { val1 &= SEC_AXI_SHUTDOWN_DISABLE; val2 = 0x0; @@ -589,25 +597,30 @@ static void sec_master_ooo_ctrl(struct hisi_qm *qm, bool enable) static void sec_hw_error_enable(struct hisi_qm *qm) { + u32 ce, nfe; + if (qm->ver == QM_HW_V1) { writel(SEC_CORE_INT_DISABLE, qm->io_base + SEC_CORE_INT_MASK); pci_info(qm->pdev, "V1 not support hw error handle\n"); return; } + ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_CE_MASK_CAP, qm->cap_ver); + nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver); + /* clear SEC hw error source if having */ - writel(SEC_CORE_INT_CLEAR, qm->io_base + SEC_CORE_INT_SOURCE); + writel(ce | nfe | SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_CORE_INT_SOURCE); /* enable RAS int */ - writel(SEC_RAS_CE_ENB_MSK, qm->io_base + SEC_RAS_CE_REG); + writel(ce, qm->io_base + SEC_RAS_CE_REG); writel(SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_RAS_FE_REG); - writel(SEC_RAS_NFE_ENB_MSK, qm->io_base + SEC_RAS_NFE_REG); + writel(nfe, qm->io_base + SEC_RAS_NFE_REG); /* enable SEC block master OOO when nfe occurs on Kunpeng930 */ sec_master_ooo_ctrl(qm, true); /* enable SEC hw error interrupts */ - writel(SEC_CORE_INT_ENABLE, qm->io_base + SEC_CORE_INT_MASK); + writel(ce | nfe | SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_CORE_INT_MASK); } static void sec_hw_error_disable(struct hisi_qm *qm) @@ -938,7 +951,11 @@ static u32 sec_get_hw_err_status(struct 
hisi_qm *qm) static void sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts) { + u32 nfe; + writel(err_sts, qm->io_base + SEC_CORE_INT_SOURCE); + nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver); + writel(nfe, qm->io_base + SEC_RAS_NFE_REG); } static void sec_open_axi_master_ooo(struct hisi_qm *qm) @@ -954,14 +971,20 @@ static void sec_err_info_init(struct hisi_qm *qm) { struct hisi_qm_err_info *err_info = &qm->err_info; - err_info->ce = QM_BASE_CE; - err_info->fe = 0; + err_info->fe = SEC_RAS_FE_ENB_MSK; + err_info->ce = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_CE_MASK_CAP, qm->cap_ver); + err_info->nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_QM_NFE_MASK_CAP, qm->cap_ver); err_info->ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC; - err_info->dev_ce_mask = SEC_RAS_CE_ENB_MSK; + err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info, + SEC_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); + err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, sec_basic_info, + SEC_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); + err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info, + SEC_QM_RESET_MASK_CAP, qm->cap_ver); + err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, sec_basic_info, + SEC_RESET_MASK_CAP, qm->cap_ver); err_info->msi_wr_port = BIT(0); err_info->acpi_rst = "SRST"; - err_info->nfe = QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT | - QM_ACC_WB_NOT_READY_TIMEOUT; } static const struct hisi_qm_err_ini sec_err_ini = { diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c index 24d5226e0a0a..6ed10ec05a73 100644 --- a/drivers/crypto/hisilicon/zip/zip_main.c +++ b/drivers/crypto/hisilicon/zip/zip_main.c @@ -69,11 +69,10 @@ #define HZIP_CORE_INT_STATUS_M_ECC BIT(1) #define HZIP_CORE_SRAM_ECC_ERR_INFO 0x301148 #define HZIP_CORE_INT_RAS_CE_ENB 0x301160 -#define HZIP_CORE_INT_RAS_CE_ENABLE 0x1 #define HZIP_CORE_INT_RAS_NFE_ENB 0x301164 #define HZIP_CORE_INT_RAS_FE_ENB 0x301168 +#define HZIP_CORE_INT_RAS_FE_ENB_MASK 0x0 #define HZIP_OOO_SHUTDOWN_SEL 0x30120C -#define HZIP_CORE_INT_RAS_NFE_ENABLE 0x1FFE #define HZIP_SRAM_ECC_ERR_NUM_SHIFT 16 #define HZIP_SRAM_ECC_ERR_ADDR_SHIFT 24 #define HZIP_CORE_INT_MASK_ALL GENMASK(12, 0) @@ -186,6 +185,28 @@ struct hisi_zip_ctrl { struct ctrl_debug_file files[HZIP_DEBUG_FILE_NUM]; }; +enum zip_cap_type { + ZIP_QM_NFE_MASK_CAP = 0x0, + ZIP_QM_RESET_MASK_CAP, + ZIP_QM_OOO_SHUTDOWN_MASK_CAP, + ZIP_QM_CE_MASK_CAP, + ZIP_NFE_MASK_CAP, + ZIP_RESET_MASK_CAP, + ZIP_OOO_SHUTDOWN_MASK_CAP, + ZIP_CE_MASK_CAP, +}; + +static struct hisi_qm_cap_info zip_basic_cap_info[] = { + {ZIP_QM_NFE_MASK_CAP, 0x3124, 0, GENMASK(31, 0), 0x0, 0x1C57, 0x7C77}, + {ZIP_QM_RESET_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0xC57, 0x6C77}, + {ZIP_QM_OOO_SHUTDOWN_MASK_CAP, 0x3128, 0, GENMASK(31, 0), 0x0, 0x4, 0x6C77}, + {ZIP_QM_CE_MASK_CAP, 0x312C, 0, GENMASK(31, 0), 0x0, 0x8, 0x8}, + {ZIP_NFE_MASK_CAP, 0x3130, 0, GENMASK(31, 0), 0x0, 0x7FE, 0x1FFE}, + {ZIP_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x7FE, 0x7FE}, + {ZIP_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x2, 0x7FE}, + {ZIP_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x1, 0x1}, +}; + enum { HZIP_COMP_CORE0, HZIP_COMP_CORE1, @@ -457,7 +478,8 @@ static void hisi_zip_master_ooo_ctrl(struct hisi_qm *qm, bool enable) val1 = readl(qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL); if (enable) { val1 |= HZIP_AXI_SHUTDOWN_ENABLE; - val2 = HZIP_CORE_INT_RAS_NFE_ENABLE; + val2 = hisi_qm_get_hw_info(qm, zip_basic_cap_info, + ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); 
} else { val1 &= ~HZIP_AXI_SHUTDOWN_ENABLE; val2 = 0x0; @@ -471,6 +493,8 @@ static void hisi_zip_master_ooo_ctrl(struct hisi_qm *qm, bool enable) static void hisi_zip_hw_error_enable(struct hisi_qm *qm) { + u32 nfe, ce; + if (qm->ver == QM_HW_V1) { writel(HZIP_CORE_INT_MASK_ALL, qm->io_base + HZIP_CORE_INT_MASK_REG); @@ -478,17 +502,17 @@ static void hisi_zip_hw_error_enable(struct hisi_qm *qm) return; } + nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver); + ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CE_MASK_CAP, qm->cap_ver); + /* clear ZIP hw error source if having */ - writel(HZIP_CORE_INT_MASK_ALL, qm->io_base + HZIP_CORE_INT_SOURCE); + writel(ce | nfe | HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_SOURCE); /* configure error type */ - writel(HZIP_CORE_INT_RAS_CE_ENABLE, - qm->io_base + HZIP_CORE_INT_RAS_CE_ENB); - writel(0x0, qm->io_base + HZIP_CORE_INT_RAS_FE_ENB); - writel(HZIP_CORE_INT_RAS_NFE_ENABLE, - qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB); + writel(ce, qm->io_base + HZIP_CORE_INT_RAS_CE_ENB); + writel(HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_RAS_FE_ENB); + writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB); - /* enable ZIP block master OOO when nfe occurs on Kunpeng930 */ hisi_zip_master_ooo_ctrl(qm, true); /* enable ZIP hw error interrupts */ @@ -497,10 +521,13 @@ static void hisi_zip_hw_error_enable(struct hisi_qm *qm) static void hisi_zip_hw_error_disable(struct hisi_qm *qm) { + u32 nfe, ce; + /* disable ZIP hw error interrupts */ - writel(HZIP_CORE_INT_MASK_ALL, qm->io_base + HZIP_CORE_INT_MASK_REG); + nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver); + ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CE_MASK_CAP, qm->cap_ver); + writel(ce | nfe | HZIP_CORE_INT_RAS_FE_ENB_MASK, qm->io_base + HZIP_CORE_INT_MASK_REG); - /* disable ZIP block master OOO when nfe occurs on Kunpeng930 */ hisi_zip_master_ooo_ctrl(qm, false); } @@ -900,7 +927,11 @@ static u32 hisi_zip_get_hw_err_status(struct hisi_qm *qm) static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts) { + u32 nfe; + writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE); + nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver); + writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB); } static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm) @@ -934,16 +965,21 @@ static void hisi_zip_err_info_init(struct hisi_qm *qm) { struct hisi_qm_err_info *err_info = &qm->err_info; - err_info->ce = QM_BASE_CE; - err_info->fe = 0; + err_info->fe = HZIP_CORE_INT_RAS_FE_ENB_MASK; + err_info->ce = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_QM_CE_MASK_CAP, qm->cap_ver); + err_info->nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, + ZIP_QM_NFE_MASK_CAP, qm->cap_ver); err_info->ecc_2bits_mask = HZIP_CORE_INT_STATUS_M_ECC; - err_info->dev_ce_mask = HZIP_CORE_INT_RAS_CE_ENABLE; + err_info->qm_shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, + ZIP_QM_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); + err_info->dev_shutdown_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, + ZIP_OOO_SHUTDOWN_MASK_CAP, qm->cap_ver); + err_info->qm_reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, + ZIP_QM_RESET_MASK_CAP, qm->cap_ver); + err_info->dev_reset_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, + ZIP_RESET_MASK_CAP, qm->cap_ver); err_info->msi_wr_port = HZIP_WR_PORT; err_info->acpi_rst = "ZRST"; - err_info->nfe = QM_BASE_NFE | QM_ACC_WB_NOT_READY_TIMEOUT; - - if (qm->ver >= QM_HW_V3) - 
err_info->nfe |= QM_ACC_DO_TASK_TIMEOUT; } static const struct hisi_qm_err_ini hisi_zip_err_ini = { diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h index 371c0e7ffd27..e230c7c46110 100644 --- a/include/linux/hisi_acc_qm.h +++ b/include/linux/hisi_acc_qm.h @@ -87,28 +87,6 @@ #define PEH_AXUSER_CFG 0x401001 #define PEH_AXUSER_CFG_ENABLE 0xffffffff -#define QM_AXI_RRESP BIT(0) -#define QM_AXI_BRESP BIT(1) -#define QM_ECC_MBIT BIT(2) -#define QM_ECC_1BIT BIT(3) -#define QM_ACC_GET_TASK_TIMEOUT BIT(4) -#define QM_ACC_DO_TASK_TIMEOUT BIT(5) -#define QM_ACC_WB_NOT_READY_TIMEOUT BIT(6) -#define QM_SQ_CQ_VF_INVALID BIT(7) -#define QM_CQ_VF_INVALID BIT(8) -#define QM_SQ_VF_INVALID BIT(9) -#define QM_DB_TIMEOUT BIT(10) -#define QM_OF_FIFO_OF BIT(11) -#define QM_DB_RANDOM_INVALID BIT(12) -#define QM_MAILBOX_TIMEOUT BIT(13) -#define QM_FLR_TIMEOUT BIT(14) - -#define QM_BASE_NFE (QM_AXI_RRESP | QM_AXI_BRESP | QM_ECC_MBIT | \ - QM_ACC_GET_TASK_TIMEOUT | QM_DB_TIMEOUT | \ - QM_OF_FIFO_OF | QM_DB_RANDOM_INVALID | \ - QM_MAILBOX_TIMEOUT | QM_FLR_TIMEOUT) -#define QM_BASE_CE QM_ECC_1BIT - #define QM_MIN_QNUM 2 #define HISI_ACC_SGL_SGE_NR_MAX 255 #define QM_SHAPER_CFG 0x100164 @@ -240,7 +218,10 @@ struct hisi_qm_err_info { char *acpi_rst; u32 msi_wr_port; u32 ecc_2bits_mask; - u32 dev_ce_mask; + u32 qm_shutdown_mask; + u32 dev_shutdown_mask; + u32 qm_reset_mask; + u32 dev_reset_mask; u32 ce; u32 nfe; u32 fe; From patchwork Fri Sep 9 09:46:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weili Qian X-Patchwork-Id: 12971431 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9340AECAAA1 for ; Fri, 9 Sep 2022 09:50:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232196AbiIIJu0 (ORCPT ); Fri, 9 Sep 2022 05:50:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39258 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231970AbiIIJuD (ORCPT ); Fri, 9 Sep 2022 05:50:03 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9AF2C4CA1C; Fri, 9 Sep 2022 02:49:59 -0700 (PDT) Received: from dggemv703-chm.china.huawei.com (unknown [172.30.72.54]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4MPB1d5t21zZcmy; Fri, 9 Sep 2022 17:45:25 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by dggemv703-chm.china.huawei.com (10.3.19.46) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:48 +0800 Received: from localhost.localdomain (10.69.192.56) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:48 +0800 From: Weili Qian To: CC: , , , , Weili Qian Subject: [PATCH 05/10] crypto: hisilicon/qm - support get device irq information from hardware registers Date: Fri, 9 Sep 2022 17:46:59 +0800 Message-ID: <20220909094704.32099-6-qianweili@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com> References: <20220909094704.32099-1-qianweili@huawei.com> MIME-Version: 1.0 
X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Support getting device irq information from hardware registers instead of from fixed macros. Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/qm.c | 294 ++++++++++++++++++++++------------ 1 file changed, 195 insertions(+), 99 deletions(-) diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index f62e48ccef2b..e4a506d02d23 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -22,15 +22,11 @@ #define QM_VF_AEQ_INT_MASK 0x4 #define QM_VF_EQ_INT_SOURCE 0x8 #define QM_VF_EQ_INT_MASK 0xc -#define QM_IRQ_NUM_V1 1 -#define QM_IRQ_NUM_PF_V2 4 -#define QM_IRQ_NUM_VF_V2 2 -#define QM_IRQ_NUM_VF_V3 3 -#define QM_EQ_EVENT_IRQ_VECTOR 0 -#define QM_AEQ_EVENT_IRQ_VECTOR 1 -#define QM_CMD_EVENT_IRQ_VECTOR 2 -#define QM_ABNORMAL_EVENT_IRQ_VECTOR 3 +#define QM_IRQ_VECTOR_MASK GENMASK(15, 0) +#define QM_IRQ_TYPE_MASK GENMASK(15, 0) +#define QM_IRQ_TYPE_SHIFT 16 +#define QM_ABN_IRQ_TYPE_MASK GENMASK(7, 0) /* mailbox */ #define QM_MB_PING_ALL_VFS 0xffff @@ -336,6 +332,12 @@ enum qm_basic_type { QM_FUNC_MAX_QP_CAP, QM_XEQ_DEPTH_CAP, QM_QP_DEPTH_CAP, + QM_EQ_IRQ_TYPE_CAP, + QM_AEQ_IRQ_TYPE_CAP, + QM_ABN_IRQ_TYPE_CAP, + QM_PF2VF_IRQ_TYPE_CAP, + QM_PF_IRQ_NUM_CAP, + QM_VF_IRQ_NUM_CAP, }; static const struct hisi_qm_cap_info qm_cap_info_comm[] = { @@ -359,6 +361,12 @@ static const struct hisi_qm_cap_info qm_basic_info[] = { {QM_FUNC_MAX_QP_CAP, 0x100158, 11, GENMASK(10, 0), 0x1000, 0x400, 0x400}, {QM_XEQ_DEPTH_CAP, 0x3104, 0, GENMASK(15, 0), 0x800, 0x4000800, 0x4000800}, {QM_QP_DEPTH_CAP, 0x3108, 0, GENMASK(31, 0), 0x4000400, 0x4000400, 0x4000400}, + {QM_EQ_IRQ_TYPE_CAP, 0x310c, 0, GENMASK(31, 0), 0x10000, 0x10000, 0x10000}, + {QM_AEQ_IRQ_TYPE_CAP, 0x3110, 0, GENMASK(31, 0), 0x0, 0x10001, 0x10001}, + {QM_ABN_IRQ_TYPE_CAP, 0x3114, 0, GENMASK(31, 0), 0x0, 0x10003, 0x10003}, + {QM_PF2VF_IRQ_TYPE_CAP, 0x3118, 0, GENMASK(31, 0), 0x0, 0x0, 0x10002}, + {QM_PF_IRQ_NUM_CAP, 0x311c, 16, GENMASK(15, 0), 0x1, 0x4, 0x4}, + {QM_VF_IRQ_NUM_CAP, 0x311c, 0, GENMASK(15, 0), 0x1, 0x2, 0x3}, }; struct qm_cqe { @@ -453,7 +461,6 @@ struct hisi_qm_hw_ops { int (*get_vft)(struct hisi_qm *qm, u32 *base, u32 *number); void (*qm_db)(struct hisi_qm *qm, u16 qn, u8 cmd, u16 index, u8 priority); - u32 (*get_irq_num)(struct hisi_qm *qm); int (*debug_init)(struct hisi_qm *qm); void (*hw_error_init)(struct hisi_qm *qm); void (*hw_error_uninit)(struct hisi_qm *qm); enum acc_err_result (*hw_error_handle)(struct hisi_qm *qm); int (*set_msi)(struct hisi_qm *qm, bool set); @@ -562,6 +569,8 @@ static struct qm_typical_qos_table shaper_cbs_s[] = { {50100, 100000, 19} }; +static void qm_irqs_unregister(struct hisi_qm *qm); + static bool qm_avail_state(struct hisi_qm *qm, enum qm_state new) { enum qm_state curr = atomic_read(&qm->status.flags); @@ -904,25 +913,12 @@ static void qm_get_xqc_depth(struct hisi_qm *qm, u16 *low_bits, *low_bits = (depth >> QM_XQ_DEPTH_SHIFT) & QM_XQ_DEPTH_MASK; } -static u32 qm_get_irq_num_v1(struct hisi_qm *qm) -{ - return QM_IRQ_NUM_V1; -} - -static u32 qm_get_irq_num_v2(struct hisi_qm *qm) -{ - if (qm->fun_type == QM_HW_PF) - return QM_IRQ_NUM_PF_V2; - else - return QM_IRQ_NUM_VF_V2; -} - -static u32 qm_get_irq_num_v3(struct hisi_qm *qm) +static u32 qm_get_irq_num(struct hisi_qm *qm) { if (qm->fun_type == QM_HW_PF) - return QM_IRQ_NUM_PF_V2; + return hisi_qm_get_hw_info(qm, qm_basic_info, QM_PF_IRQ_NUM_CAP, qm->cap_ver); - return
QM_IRQ_NUM_VF_V3; + return hisi_qm_get_hw_info(qm, qm_basic_info, QM_VF_IRQ_NUM_CAP, qm->cap_ver); } static int qm_pm_get_sync(struct hisi_qm *qm) @@ -1196,24 +1192,6 @@ static irqreturn_t qm_aeq_irq(int irq, void *data) return IRQ_WAKE_THREAD; } -static void qm_irq_unregister(struct hisi_qm *qm) -{ - struct pci_dev *pdev = qm->pdev; - - free_irq(pci_irq_vector(pdev, QM_EQ_EVENT_IRQ_VECTOR), qm); - - if (qm->ver > QM_HW_V1) { - free_irq(pci_irq_vector(pdev, QM_AEQ_EVENT_IRQ_VECTOR), qm); - - if (qm->fun_type == QM_HW_PF) - free_irq(pci_irq_vector(pdev, - QM_ABNORMAL_EVENT_IRQ_VECTOR), qm); - } - - if (qm->ver > QM_HW_V2) - free_irq(pci_irq_vector(pdev, QM_CMD_EVENT_IRQ_VECTOR), qm); -} - static void qm_init_qp_status(struct hisi_qp *qp) { struct hisi_qp_status *qp_status = &qp->qp_status; @@ -2799,7 +2777,6 @@ static int qm_set_msi_v3(struct hisi_qm *qm, bool set) static const struct hisi_qm_hw_ops qm_hw_ops_v1 = { .qm_db = qm_db_v1, - .get_irq_num = qm_get_irq_num_v1, .hw_error_init = qm_hw_error_init_v1, .set_msi = qm_set_msi, }; @@ -2807,7 +2784,6 @@ static const struct hisi_qm_hw_ops qm_hw_ops_v1 = { static const struct hisi_qm_hw_ops qm_hw_ops_v2 = { .get_vft = qm_get_vft_v2, .qm_db = qm_db_v2, - .get_irq_num = qm_get_irq_num_v2, .hw_error_init = qm_hw_error_init_v2, .hw_error_uninit = qm_hw_error_uninit_v2, .hw_error_handle = qm_hw_error_handle_v2, @@ -2817,7 +2793,6 @@ static const struct hisi_qm_hw_ops qm_hw_ops_v2 = { static const struct hisi_qm_hw_ops qm_hw_ops_v3 = { .get_vft = qm_get_vft_v2, .qm_db = qm_db_v2, - .get_irq_num = qm_get_irq_num_v3, .hw_error_init = qm_hw_error_init_v3, .hw_error_uninit = qm_hw_error_uninit_v3, .hw_error_handle = qm_hw_error_handle_v2, @@ -3803,7 +3778,7 @@ void hisi_qm_uninit(struct hisi_qm *qm) hisi_qm_set_state(qm, QM_NOT_READY); up_write(&qm->qps_lock); - qm_irq_unregister(qm); + qm_irqs_unregister(qm); hisi_qm_pci_uninit(qm); if (qm->use_sva) { uacce_remove(qm->uacce); @@ -5658,51 +5633,6 @@ static irqreturn_t qm_abnormal_irq(int irq, void *data) return IRQ_HANDLED; } -static int qm_irq_register(struct hisi_qm *qm) -{ - struct pci_dev *pdev = qm->pdev; - int ret; - - ret = request_irq(pci_irq_vector(pdev, QM_EQ_EVENT_IRQ_VECTOR), - qm_irq, 0, qm->dev_name, qm); - if (ret) - return ret; - - if (qm->ver > QM_HW_V1) { - ret = request_threaded_irq(pci_irq_vector(pdev, - QM_AEQ_EVENT_IRQ_VECTOR), - qm_aeq_irq, qm_aeq_thread, - 0, qm->dev_name, qm); - if (ret) - goto err_aeq_irq; - - if (qm->fun_type == QM_HW_PF) { - ret = request_irq(pci_irq_vector(pdev, - QM_ABNORMAL_EVENT_IRQ_VECTOR), - qm_abnormal_irq, 0, qm->dev_name, qm); - if (ret) - goto err_abonormal_irq; - } - } - - if (qm->ver > QM_HW_V2) { - ret = request_irq(pci_irq_vector(pdev, QM_CMD_EVENT_IRQ_VECTOR), - qm_mb_cmd_irq, 0, qm->dev_name, qm); - if (ret) - goto err_mb_cmd_irq; - } - - return 0; - -err_mb_cmd_irq: - if (qm->fun_type == QM_HW_PF) - free_irq(pci_irq_vector(pdev, QM_ABNORMAL_EVENT_IRQ_VECTOR), qm); -err_abonormal_irq: - free_irq(pci_irq_vector(pdev, QM_AEQ_EVENT_IRQ_VECTOR), qm); -err_aeq_irq: - free_irq(pci_irq_vector(pdev, QM_EQ_EVENT_IRQ_VECTOR), qm); - return ret; -} /** * hisi_qm_dev_shutdown() - Shutdown device. 
@@ -5983,6 +5913,176 @@ void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list) } EXPORT_SYMBOL_GPL(hisi_qm_alg_unregister); +static void qm_unregister_abnormal_irq(struct hisi_qm *qm) +{ + struct pci_dev *pdev = qm->pdev; + u32 irq_vector, val; + + if (qm->fun_type == QM_HW_VF) + return; + + val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_ABN_IRQ_TYPE_CAP, qm->cap_ver); + if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_ABN_IRQ_TYPE_MASK)) + return; + + irq_vector = val & QM_IRQ_VECTOR_MASK; + free_irq(pci_irq_vector(pdev, irq_vector), qm); +} + +static int qm_register_abnormal_irq(struct hisi_qm *qm) +{ + struct pci_dev *pdev = qm->pdev; + u32 irq_vector, val; + int ret; + + if (qm->fun_type == QM_HW_VF) + return 0; + + val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_ABN_IRQ_TYPE_CAP, qm->cap_ver); + if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_ABN_IRQ_TYPE_MASK)) + return 0; + + irq_vector = val & QM_IRQ_VECTOR_MASK; + ret = request_irq(pci_irq_vector(pdev, irq_vector), qm_abnormal_irq, 0, qm->dev_name, qm); + if (ret) + dev_err(&qm->pdev->dev, "failed to request abnormal irq, ret = %d", ret); + + return ret; +} + +static void qm_unregister_mb_cmd_irq(struct hisi_qm *qm) +{ + struct pci_dev *pdev = qm->pdev; + u32 irq_vector, val; + + val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_PF2VF_IRQ_TYPE_CAP, qm->cap_ver); + if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK)) + return; + + irq_vector = val & QM_IRQ_VECTOR_MASK; + free_irq(pci_irq_vector(pdev, irq_vector), qm); +} + +static int qm_register_mb_cmd_irq(struct hisi_qm *qm) +{ + struct pci_dev *pdev = qm->pdev; + u32 irq_vector, val; + int ret; + + val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_PF2VF_IRQ_TYPE_CAP, qm->cap_ver); + if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK)) + return 0; + + irq_vector = val & QM_IRQ_VECTOR_MASK; + ret = request_irq(pci_irq_vector(pdev, irq_vector), qm_mb_cmd_irq, 0, qm->dev_name, qm); + if (ret) + dev_err(&pdev->dev, "failed to request function communication irq, ret = %d", ret); + + return ret; +} + +static void qm_unregister_aeq_irq(struct hisi_qm *qm) +{ + struct pci_dev *pdev = qm->pdev; + u32 irq_vector, val; + + val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_AEQ_IRQ_TYPE_CAP, qm->cap_ver); + if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK)) + return; + + irq_vector = val & QM_IRQ_VECTOR_MASK; + free_irq(pci_irq_vector(pdev, irq_vector), qm); +} + +static int qm_register_aeq_irq(struct hisi_qm *qm) +{ + struct pci_dev *pdev = qm->pdev; + u32 irq_vector, val; + int ret; + + val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_AEQ_IRQ_TYPE_CAP, qm->cap_ver); + if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK)) + return 0; + + irq_vector = val & QM_IRQ_VECTOR_MASK; + ret = request_threaded_irq(pci_irq_vector(pdev, irq_vector), qm_aeq_irq, + qm_aeq_thread, 0, qm->dev_name, qm); + if (ret) + dev_err(&pdev->dev, "failed to request aeq irq, ret = %d", ret); + + return ret; +} + +static void qm_unregister_eq_irq(struct hisi_qm *qm) +{ + struct pci_dev *pdev = qm->pdev; + u32 irq_vector, val; + + val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_EQ_IRQ_TYPE_CAP, qm->cap_ver); + if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK)) + return; + + irq_vector = val & QM_IRQ_VECTOR_MASK; + free_irq(pci_irq_vector(pdev, irq_vector), qm); +} + +static int qm_register_eq_irq(struct hisi_qm *qm) +{ + struct pci_dev *pdev = qm->pdev; + u32 irq_vector, val; + int ret; + + val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_EQ_IRQ_TYPE_CAP, qm->cap_ver); + if (!((val >> QM_IRQ_TYPE_SHIFT) &
QM_IRQ_TYPE_MASK)) + return 0; + + irq_vector = val & QM_IRQ_VECTOR_MASK; + ret = request_irq(pci_irq_vector(pdev, irq_vector), qm_irq, 0, qm->dev_name, qm); + if (ret) + dev_err(&pdev->dev, "failed to request eq irq, ret = %d", ret); + + return ret; +} + +static void qm_irqs_unregister(struct hisi_qm *qm) +{ + qm_unregister_mb_cmd_irq(qm); + qm_unregister_abnormal_irq(qm); + qm_unregister_aeq_irq(qm); + qm_unregister_eq_irq(qm); +} + +static int qm_irqs_register(struct hisi_qm *qm) +{ + int ret; + + ret = qm_register_eq_irq(qm); + if (ret) + return ret; + + ret = qm_register_aeq_irq(qm); + if (ret) + goto free_eq_irq; + + ret = qm_register_abnormal_irq(qm); + if (ret) + goto free_aeq_irq; + + ret = qm_register_mb_cmd_irq(qm); + if (ret) + goto free_abnormal_irq; + + return 0; + +free_abnormal_irq: + qm_unregister_abnormal_irq(qm); +free_aeq_irq: + qm_unregister_aeq_irq(qm); +free_eq_irq: + qm_unregister_eq_irq(qm); + return ret; +} + static int qm_get_qp_num(struct hisi_qm *qm) { bool is_db_isolation; @@ -6117,11 +6217,7 @@ static int hisi_qm_pci_init(struct hisi_qm *qm) goto err_get_pci_res; pci_set_master(pdev); - if (!qm->ops->get_irq_num) { - ret = -EOPNOTSUPP; - goto err_get_pci_res; - } - num_vec = qm->ops->get_irq_num(qm); + num_vec = qm_get_irq_num(qm); ret = pci_alloc_irq_vectors(pdev, num_vec, num_vec, PCI_IRQ_MSI); if (ret < 0) { dev_err(dev, "Failed to enable MSI vectors!\n"); @@ -6294,7 +6390,7 @@ int hisi_qm_init(struct hisi_qm *qm) if (ret) return ret; - ret = qm_irq_register(qm); + ret = qm_irqs_register(qm); if (ret) goto err_pci_init; @@ -6336,7 +6432,7 @@ int hisi_qm_init(struct hisi_qm *qm) qm->uacce = NULL; } err_irq_register: - qm_irq_unregister(qm); + qm_irqs_unregister(qm); err_pci_init: hisi_qm_pci_uninit(qm); return ret; From patchwork Fri Sep 9 09:47:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weili Qian X-Patchwork-Id: 12971426 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A579DC6FA8B for ; Fri, 9 Sep 2022 09:50:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231717AbiIIJuA (ORCPT ); Fri, 9 Sep 2022 05:50:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38436 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231800AbiIIJtz (ORCPT ); Fri, 9 Sep 2022 05:49:55 -0400 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 502C233429; Fri, 9 Sep 2022 02:49:51 -0700 (PDT) Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.55]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4MPB2G4w2yzlVkj; Fri, 9 Sep 2022 17:45:58 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:49 +0800 Received: from localhost.localdomain (10.69.192.56) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:49 +0800 From: Weili Qian To: CC: , , , , Zhiqi Song , Weili Qian Subject: 
[PATCH 06/10] crypto: hisilicon/hpre - support hpre capability Date: Fri, 9 Sep 2022 17:47:00 +0800 Message-ID: <20220909094704.32099-7-qianweili@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com> References: <20220909094704.32099-1-qianweili@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Zhiqi Song Read some hpre device configuration information from the capability registers instead of from fixed macros. Signed-off-by: Zhiqi Song Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/hpre/hpre.h | 8 +- drivers/crypto/hisilicon/hpre/hpre_crypto.c | 134 ++++++++++++++++---- drivers/crypto/hisilicon/hpre/hpre_main.c | 53 +++++++- 3 files changed, 157 insertions(+), 38 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h index 9a0558ed82f9..9f0b94c8e03d 100644 --- a/drivers/crypto/hisilicon/hpre/hpre.h +++ b/drivers/crypto/hisilicon/hpre/hpre.h @@ -22,7 +22,8 @@ enum { HPRE_CLUSTER0, HPRE_CLUSTER1, HPRE_CLUSTER2, - HPRE_CLUSTER3 + HPRE_CLUSTER3, + HPRE_CLUSTERS_NUM_MAX }; enum hpre_ctrl_dbgfs_file { @@ -42,9 +43,6 @@ enum hpre_dfx_dbgfs_file { HPRE_DFX_FILE_NUM }; -#define HPRE_CLUSTERS_NUM_V2 (HPRE_CLUSTER3 + 1) -#define HPRE_CLUSTERS_NUM_V3 1 -#define HPRE_CLUSTERS_NUM_MAX HPRE_CLUSTERS_NUM_V2 #define HPRE_DEBUGFS_FILE_NUM (HPRE_DEBUG_FILE_NUM + HPRE_CLUSTERS_NUM_MAX - 1) struct hpre_debugfs_file { @@ -105,5 +103,5 @@ struct hpre_sqe { struct hisi_qp *hpre_create_qp(u8 type); int hpre_algs_register(struct hisi_qm *qm); void hpre_algs_unregister(struct hisi_qm *qm); - +bool hpre_check_alg_support(struct hisi_qm *qm, u32 alg); #endif diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index 2aa91a4f77da..ad1475feb313 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -51,6 +51,12 @@ struct hpre_ctx; #define HPRE_ECC_HW256_KSZ_B 32 #define HPRE_ECC_HW384_KSZ_B 48 +/* capability register mask */ +#define HPRE_DRV_RSA_MASK_CAP BIT(0) +#define HPRE_DRV_DH_MASK_CAP BIT(1) +#define HPRE_DRV_ECDH_MASK_CAP BIT(2) +#define HPRE_DRV_X25519_MASK_CAP BIT(5) + typedef void (*hpre_cb)(struct hpre_ctx *ctx, void *sqe); struct hpre_rsa_ctx { @@ -2070,22 +2076,75 @@ static struct kpp_alg curve25519_alg = { }, }; +static int hpre_register_rsa(struct hisi_qm *qm) +{ + int ret; + + if (!hpre_check_alg_support(qm, HPRE_DRV_RSA_MASK_CAP)) + return 0; -static int hpre_register_ecdh(void) + rsa.base.cra_flags = 0; + ret = crypto_register_akcipher(&rsa); + if (ret) + dev_err(&qm->pdev->dev, "failed to register rsa (%d)!\n", ret); + + return ret; +} + +static void hpre_unregister_rsa(struct hisi_qm *qm) +{ + if (!hpre_check_alg_support(qm, HPRE_DRV_RSA_MASK_CAP)) + return; + + crypto_unregister_akcipher(&rsa); +} + +static int hpre_register_dh(struct hisi_qm *qm) { int ret; - ret = crypto_register_kpp(&ecdh_nist_p192); + if (!hpre_check_alg_support(qm, HPRE_DRV_DH_MASK_CAP)) + return 0; + + ret = crypto_register_kpp(&dh); if (ret) + dev_err(&qm->pdev->dev, "failed to register dh (%d)!\n", ret); + + return ret; +} + +static void hpre_unregister_dh(struct hisi_qm *qm) +{ + if (!hpre_check_alg_support(qm, HPRE_DRV_DH_MASK_CAP)) + return; + + crypto_unregister_kpp(&dh); +} + +static int
hpre_register_ecdh(struct hisi_qm *qm) +{ + int ret; + + if (!hpre_check_alg_support(qm, HPRE_DRV_ECDH_MASK_CAP)) + return 0; + + ret = crypto_register_kpp(&ecdh_nist_p192); + if (ret) { + dev_err(&qm->pdev->dev, "failed to register ecdh_nist_p192 (%d)!\n", ret); return ret; + } ret = crypto_register_kpp(&ecdh_nist_p256); - if (ret) + if (ret) { + dev_err(&qm->pdev->dev, "failed to register ecdh_nist_p256 (%d)!\n", ret); goto unregister_ecdh_p192; + } ret = crypto_register_kpp(&ecdh_nist_p384); - if (ret) + if (ret) { + dev_err(&qm->pdev->dev, "failed to register ecdh_nist_p384 (%d)!\n", ret); goto unregister_ecdh_p256; + } return 0; @@ -2096,52 +2155,73 @@ static int hpre_register_ecdh(void) return ret; } -static void hpre_unregister_ecdh(void) +static void hpre_unregister_ecdh(struct hisi_qm *qm) { + if (!hpre_check_alg_support(qm, HPRE_DRV_ECDH_MASK_CAP)) + return; + crypto_unregister_kpp(&ecdh_nist_p384); crypto_unregister_kpp(&ecdh_nist_p256); crypto_unregister_kpp(&ecdh_nist_p192); } +static int hpre_register_x25519(struct hisi_qm *qm) +{ + int ret; + + if (!hpre_check_alg_support(qm, HPRE_DRV_X25519_MASK_CAP)) + return 0; + + ret = crypto_register_kpp(&curve25519_alg); + if (ret) + dev_err(&qm->pdev->dev, "failed to register x25519 (%d)!\n", ret); + + return ret; +} + +static void hpre_unregister_x25519(struct hisi_qm *qm) +{ + if (!hpre_check_alg_support(qm, HPRE_DRV_X25519_MASK_CAP)) + return; + + crypto_unregister_kpp(&curve25519_alg); +} + int hpre_algs_register(struct hisi_qm *qm) { int ret; - rsa.base.cra_flags = 0; - ret = crypto_register_akcipher(&rsa); + ret = hpre_register_rsa(qm); if (ret) return ret; - ret = crypto_register_kpp(&dh); + ret = hpre_register_dh(qm); if (ret) goto unreg_rsa; - if (qm->ver >= QM_HW_V3) { - ret = hpre_register_ecdh(); - if (ret) - goto unreg_dh; - ret = crypto_register_kpp(&curve25519_alg); - if (ret) - goto unreg_ecdh; - } - return 0; + ret = hpre_register_ecdh(qm); + if (ret) + goto unreg_dh; + + ret = hpre_register_x25519(qm); + if (ret) + goto unreg_ecdh; + + return ret; unreg_ecdh: - hpre_unregister_ecdh(); + hpre_unregister_ecdh(qm); unreg_dh: - crypto_unregister_kpp(&dh); + hpre_unregister_dh(qm); unreg_rsa: - crypto_unregister_akcipher(&rsa); + hpre_unregister_rsa(qm); return ret; } void hpre_algs_unregister(struct hisi_qm *qm) { - if (qm->ver >= QM_HW_V3) { - crypto_unregister_kpp(&curve25519_alg); - hpre_unregister_ecdh(); - } - - crypto_unregister_kpp(&dh); - crypto_unregister_akcipher(&rsa); + hpre_unregister_x25519(qm); + hpre_unregister_ecdh(qm); + hpre_unregister_dh(qm); + hpre_unregister_rsa(qm); } diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c index 36177c437c56..407cdd9d8413 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_main.c +++ b/drivers/crypto/hisilicon/hpre/hpre_main.c @@ -77,8 +77,6 @@ #define HPRE_QM_AXI_CFG_MASK GENMASK(15, 0) #define HPRE_QM_VFG_AX_MASK GENMASK(7, 0) #define HPRE_BD_USR_MASK GENMASK(1, 0) -#define HPRE_CLUSTER_CORE_MASK_V2 GENMASK(3, 0) -#define HPRE_CLUSTER_CORE_MASK_V3 GENMASK(7, 0) #define HPRE_PREFETCH_CFG 0x301130 #define HPRE_SVA_PREFTCH_DFX 0x30115C #define HPRE_PREFETCH_ENABLE (~(BIT(0) | BIT(30))) @@ -154,6 +152,23 @@ enum hpre_cap_type { HPRE_RESET_MASK_CAP, HPRE_OOO_SHUTDOWN_MASK_CAP, HPRE_CE_MASK_CAP, + HPRE_CLUSTER_NUM_CAP, + HPRE_CORE_TYPE_NUM_CAP, + HPRE_CORE_NUM_CAP, + HPRE_CLUSTER_CORE_NUM_CAP, + HPRE_CORE_ENABLE_BITMAP_CAP, + HPRE_DRV_ALG_BITMAP_CAP, + HPRE_DEV_ALG_BITMAP_CAP, + HPRE_CORE1_ALG_BITMAP_CAP, + 
HPRE_CORE2_ALG_BITMAP_CAP, + HPRE_CORE3_ALG_BITMAP_CAP, + HPRE_CORE4_ALG_BITMAP_CAP, + HPRE_CORE5_ALG_BITMAP_CAP, + HPRE_CORE6_ALG_BITMAP_CAP, + HPRE_CORE7_ALG_BITMAP_CAP, + HPRE_CORE8_ALG_BITMAP_CAP, + HPRE_CORE9_ALG_BITMAP_CAP, + HPRE_CORE10_ALG_BITMAP_CAP }; static const struct hisi_qm_cap_info hpre_basic_info[] = { @@ -165,6 +180,23 @@ static const struct hisi_qm_cap_info hpre_basic_info[] = { {HPRE_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x3FFFFE, 0xBFFFFE}, {HPRE_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x22, 0xBFFFFE}, {HPRE_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x1, 0x1}, + {HPRE_CLUSTER_NUM_CAP, 0x313c, 20, GENMASK(3, 0), 0x0, 0x4, 0x1}, + {HPRE_CORE_TYPE_NUM_CAP, 0x313c, 16, GENMASK(3, 0), 0x0, 0x2, 0x2}, + {HPRE_CORE_NUM_CAP, 0x313c, 8, GENMASK(7, 0), 0x0, 0x8, 0xA}, + {HPRE_CLUSTER_CORE_NUM_CAP, 0x313c, 0, GENMASK(7, 0), 0x0, 0x2, 0xA}, + {HPRE_CORE_ENABLE_BITMAP_CAP, 0x3140, 0, GENMASK(31, 0), 0x0, 0xF, 0x3FF}, + {HPRE_DRV_ALG_BITMAP_CAP, 0x3144, 0, GENMASK(31, 0), 0x0, 0x03, 0x27}, + {HPRE_DEV_ALG_BITMAP_CAP, 0x3148, 0, GENMASK(31, 0), 0x0, 0x03, 0x7F}, + {HPRE_CORE1_ALG_BITMAP_CAP, 0x314c, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F}, + {HPRE_CORE2_ALG_BITMAP_CAP, 0x3150, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F}, + {HPRE_CORE3_ALG_BITMAP_CAP, 0x3154, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F}, + {HPRE_CORE4_ALG_BITMAP_CAP, 0x3158, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F}, + {HPRE_CORE5_ALG_BITMAP_CAP, 0x315c, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F}, + {HPRE_CORE6_ALG_BITMAP_CAP, 0x3160, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F}, + {HPRE_CORE7_ALG_BITMAP_CAP, 0x3164, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F}, + {HPRE_CORE8_ALG_BITMAP_CAP, 0x3168, 0, GENMASK(31, 0), 0x0, 0x7F, 0x7F}, + {HPRE_CORE9_ALG_BITMAP_CAP, 0x316c, 0, GENMASK(31, 0), 0x0, 0x10, 0x10}, + {HPRE_CORE10_ALG_BITMAP_CAP, 0x3170, 0, GENMASK(31, 0), 0x0, 0x10, 0x10} }; static const struct hpre_hw_error hpre_hw_errors[] = { @@ -282,6 +314,17 @@ static struct dfx_diff_registers hpre_diff_regs[] = { }, }; +bool hpre_check_alg_support(struct hisi_qm *qm, u32 alg) +{ + u32 cap_val; + + cap_val = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_DRV_ALG_BITMAP_CAP, qm->cap_ver); + if (alg & cap_val) + return true; + + return false; +} + static int hpre_diff_regs_show(struct seq_file *s, void *unused) { struct hisi_qm *qm = s->private; @@ -350,14 +393,12 @@ MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)"); static inline int hpre_cluster_num(struct hisi_qm *qm) { - return (qm->ver >= QM_HW_V3) ? HPRE_CLUSTERS_NUM_V3 : - HPRE_CLUSTERS_NUM_V2; + return hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CLUSTER_NUM_CAP, qm->cap_ver); } static inline int hpre_cluster_core_mask(struct hisi_qm *qm) { - return (qm->ver >= QM_HW_V3) ? 
- HPRE_CLUSTER_CORE_MASK_V3 : HPRE_CLUSTER_CORE_MASK_V2; + return hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CORE_ENABLE_BITMAP_CAP, qm->cap_ver); } struct hisi_qp *hpre_create_qp(u8 type) From patchwork Fri Sep 9 09:47:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weili Qian X-Patchwork-Id: 12971427 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D275FC6FA8D for ; Fri, 9 Sep 2022 09:50:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231653AbiIIJuB (ORCPT ); Fri, 9 Sep 2022 05:50:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38142 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231749AbiIIJtz (ORCPT ); Fri, 9 Sep 2022 05:49:55 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A127E33A00; Fri, 9 Sep 2022 02:49:52 -0700 (PDT) Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.54]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4MPB2Y11TnzmVLR; Fri, 9 Sep 2022 17:46:13 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:50 +0800 Received: from localhost.localdomain (10.69.192.56) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:50 +0800 From: Weili Qian To: CC: , , , , Zhiqi Song , Weili Qian Subject: [PATCH 07/10] crypto: hisilicon/hpre - optimize registration of ecdh Date: Fri, 9 Sep 2022 17:47:01 +0800 Message-ID: <20220909094704.32099-8-qianweili@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com> References: <20220909094704.32099-1-qianweili@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Zhiqi Song Use a table to store the different ecdh curve configurations, making the registration of ecdh clearer and future expansion more convenient.
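For illustration only (not part of this series): with the table in place, supporting a further curve becomes a data-only change. A hypothetical entry such as the one below (the p521 names do not exist in the driver and would need a matching tfm init helper) would simply be appended to ecdh_curves[], and the existing ARRAY_SIZE()-based registration and unwind loops would pick it up without any new control flow:

	/* Hypothetical additional ecdh_curves[] entry -- the p521 names are
	 * illustrative and not implemented anywhere in the driver. */
	{
		.set_secret = hpre_ecdh_set_secret,
		.generate_public_key = hpre_ecdh_compute_value,
		.compute_shared_secret = hpre_ecdh_compute_value,
		.max_size = hpre_ecdh_max_size,
		.init = hpre_ecdh_nist_p521_init_tfm,	/* hypothetical helper */
		.exit = hpre_ecdh_exit_tfm,
		.reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ,
		.base = {
			.cra_ctxsize = sizeof(struct hpre_ctx),
			.cra_priority = HPRE_CRYPTO_ALG_PRI,
			.cra_name = "ecdh-nist-p521",
			.cra_driver_name = "hpre-ecdh-nist-p521",
			.cra_module = THIS_MODULE,
		},
	},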
Signed-off-by: Zhiqi Song Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/hpre/hpre_crypto.c | 136 +++++++++----------- 1 file changed, 63 insertions(+), 73 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index ad1475feb313..ac7fabf65865 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -2008,55 +2008,53 @@ static struct kpp_alg dh = { }, }; -static struct kpp_alg ecdh_nist_p192 = { - .set_secret = hpre_ecdh_set_secret, - .generate_public_key = hpre_ecdh_compute_value, - .compute_shared_secret = hpre_ecdh_compute_value, - .max_size = hpre_ecdh_max_size, - .init = hpre_ecdh_nist_p192_init_tfm, - .exit = hpre_ecdh_exit_tfm, - .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, - .base = { - .cra_ctxsize = sizeof(struct hpre_ctx), - .cra_priority = HPRE_CRYPTO_ALG_PRI, - .cra_name = "ecdh-nist-p192", - .cra_driver_name = "hpre-ecdh-nist-p192", - .cra_module = THIS_MODULE, - }, -}; - -static struct kpp_alg ecdh_nist_p256 = { - .set_secret = hpre_ecdh_set_secret, - .generate_public_key = hpre_ecdh_compute_value, - .compute_shared_secret = hpre_ecdh_compute_value, - .max_size = hpre_ecdh_max_size, - .init = hpre_ecdh_nist_p256_init_tfm, - .exit = hpre_ecdh_exit_tfm, - .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, - .base = { - .cra_ctxsize = sizeof(struct hpre_ctx), - .cra_priority = HPRE_CRYPTO_ALG_PRI, - .cra_name = "ecdh-nist-p256", - .cra_driver_name = "hpre-ecdh-nist-p256", - .cra_module = THIS_MODULE, - }, -}; - -static struct kpp_alg ecdh_nist_p384 = { - .set_secret = hpre_ecdh_set_secret, - .generate_public_key = hpre_ecdh_compute_value, - .compute_shared_secret = hpre_ecdh_compute_value, - .max_size = hpre_ecdh_max_size, - .init = hpre_ecdh_nist_p384_init_tfm, - .exit = hpre_ecdh_exit_tfm, - .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, - .base = { - .cra_ctxsize = sizeof(struct hpre_ctx), - .cra_priority = HPRE_CRYPTO_ALG_PRI, - .cra_name = "ecdh-nist-p384", - .cra_driver_name = "hpre-ecdh-nist-p384", - .cra_module = THIS_MODULE, - }, +static struct kpp_alg ecdh_curves[] = { + { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p192_init_tfm, + .exit = hpre_ecdh_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p192", + .cra_driver_name = "hpre-ecdh-nist-p192", + .cra_module = THIS_MODULE, + }, + }, { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p256_init_tfm, + .exit = hpre_ecdh_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p256", + .cra_driver_name = "hpre-ecdh-nist-p256", + .cra_module = THIS_MODULE, + }, + }, { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p384_init_tfm, + .exit = hpre_ecdh_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + 
.cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p384", + .cra_driver_name = "hpre-ecdh-nist-p384", + .cra_module = THIS_MODULE, + }, + } }; static struct kpp_alg curve25519_alg = { @@ -2123,46 +2121,38 @@ static void hpre_unregister_dh(struct hisi_qm *qm) static int hpre_register_ecdh(struct hisi_qm *qm) { - int ret; + int ret, i; if (!hpre_check_alg_support(qm, HPRE_DRV_ECDH_MASK_CAP)) return 0; - ret = crypto_register_kpp(&ecdh_nist_p192); - if (ret) { - dev_err(&qm->pdev->dev, "failed to register ecdh_nist_p192 (%d)!\n", ret); - return ret; - } - - ret = crypto_register_kpp(&ecdh_nist_p256); - if (ret) { - dev_err(&qm->pdev->dev, "failed to register ecdh_nist_p256 (%d)!\n", ret); - goto unregister_ecdh_p192; - } - - ret = crypto_register_kpp(&ecdh_nist_p384); - if (ret) { - dev_err(&qm->pdev->dev, "failed to register ecdh_nist_p384 (%d)!\n", ret); - goto unregister_ecdh_p256; + for (i = 0; i < ARRAY_SIZE(ecdh_curves); i++) { + ret = crypto_register_kpp(&ecdh_curves[i]); + if (ret) { + dev_err(&qm->pdev->dev, "failed to register %s (%d)!\n", + ecdh_curves[i].base.cra_name, ret); + goto unreg_kpp; + } } return 0; -unregister_ecdh_p256: - crypto_unregister_kpp(&ecdh_nist_p256); -unregister_ecdh_p192: - crypto_unregister_kpp(&ecdh_nist_p192); +unreg_kpp: + for (--i; i >= 0; --i) + crypto_unregister_kpp(&ecdh_curves[i]); + return ret; } static void hpre_unregister_ecdh(struct hisi_qm *qm) { + int i; + if (!hpre_check_alg_support(qm, HPRE_DRV_ECDH_MASK_CAP)) return; - crypto_unregister_kpp(&ecdh_nist_p384); - crypto_unregister_kpp(&ecdh_nist_p256); - crypto_unregister_kpp(&ecdh_nist_p192); + for (i = ARRAY_SIZE(ecdh_curves) - 1; i >= 0; --i) + crypto_unregister_kpp(&ecdh_curves[i]); } static int hpre_register_x25519(struct hisi_qm *qm) From patchwork Fri Sep 9 09:47:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weili Qian X-Patchwork-Id: 12971432 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B7E9AC6FA86 for ; Fri, 9 Sep 2022 09:50:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231720AbiIIJu1 (ORCPT ); Fri, 9 Sep 2022 05:50:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39302 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232002AbiIIJuE (ORCPT ); Fri, 9 Sep 2022 05:50:04 -0400 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B7A684D268; Fri, 9 Sep 2022 02:49:59 -0700 (PDT) Received: from dggemv703-chm.china.huawei.com (unknown [172.30.72.54]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4MPB1d6KzmzZcnL; Fri, 9 Sep 2022 17:45:25 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by dggemv703-chm.china.huawei.com (10.3.19.46) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:51 +0800 Received: from localhost.localdomain (10.69.192.56) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:51 +0800 From: Weili Qian To: CC: , 
, , , Weili Qian Subject: [PATCH 08/10] crypto: hisilicon/zip - support zip capability Date: Fri, 9 Sep 2022 17:47:02 +0800 Message-ID: <20220909094704.32099-9-qianweili@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com> References: <20220909094704.32099-1-qianweili@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add function 'hisi_zip_alg_support' to get device configuration information from the capability registers, instead of determining whether to register an algorithm based on the hardware platform's version. Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/zip/zip.h | 1 + drivers/crypto/hisilicon/zip/zip_crypto.c | 67 +++++++++++--- drivers/crypto/hisilicon/zip/zip_main.c | 102 +++++++++++++++------- 3 files changed, 128 insertions(+), 42 deletions(-) diff --git a/drivers/crypto/hisilicon/zip/zip.h b/drivers/crypto/hisilicon/zip/zip.h index f289656e9ac0..f2e6da3240ae 100644 --- a/drivers/crypto/hisilicon/zip/zip.h +++ b/drivers/crypto/hisilicon/zip/zip.h @@ -84,4 +84,5 @@ struct hisi_zip_sqe { int zip_create_qps(struct hisi_qp **qps, int qp_num, int node); int hisi_zip_register_to_crypto(struct hisi_qm *qm); void hisi_zip_unregister_from_crypto(struct hisi_qm *qm); +bool hisi_zip_alg_support(struct hisi_qm *qm, u32 alg); #endif diff --git a/drivers/crypto/hisilicon/zip/zip_crypto.c b/drivers/crypto/hisilicon/zip/zip_crypto.c index a7f6884c3ab3..6608971d10cd 100644 --- a/drivers/crypto/hisilicon/zip/zip_crypto.c +++ b/drivers/crypto/hisilicon/zip/zip_crypto.c @@ -39,6 +39,9 @@ #define HZIP_ALG_PRIORITY 300 #define HZIP_SGL_SGE_NR 10 +#define HZIP_ALG_ZLIB GENMASK(1, 0) +#define HZIP_ALG_GZIP GENMASK(3, 2) + static const u8 zlib_head[HZIP_ZLIB_HEAD_SIZE] = {0x78, 0x9c}; static const u8 gzip_head[HZIP_GZIP_HEAD_SIZE] = { 0x1f, 0x8b, 0x08, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x03 }; @@ -756,6 +759,28 @@ static struct acomp_alg hisi_zip_acomp_zlib = { } }; +static int hisi_zip_register_zlib(struct hisi_qm *qm) +{ + int ret; + + if (!hisi_zip_alg_support(qm, HZIP_ALG_ZLIB)) + return 0; + + ret = crypto_register_acomp(&hisi_zip_acomp_zlib); + if (ret) + dev_err(&qm->pdev->dev, "failed to register to zlib (%d)!\n", ret); + + return ret; +} + +static void hisi_zip_unregister_zlib(struct hisi_qm *qm) +{ + if (!hisi_zip_alg_support(qm, HZIP_ALG_ZLIB)) + return; + + crypto_unregister_acomp(&hisi_zip_acomp_zlib); +} + static struct acomp_alg hisi_zip_acomp_gzip = { .init = hisi_zip_acomp_init, .exit = hisi_zip_acomp_exit, @@ -770,27 +795,45 @@ static struct acomp_alg hisi_zip_acomp_gzip = { } }; -int hisi_zip_register_to_crypto(struct hisi_qm *qm) +static int hisi_zip_register_gzip(struct hisi_qm *qm) { int ret; - ret = crypto_register_acomp(&hisi_zip_acomp_zlib); - if (ret) { - pr_err("failed to register to zlib (%d)!\n", ret); - return ret; - } + if (!hisi_zip_alg_support(qm, HZIP_ALG_GZIP)) + return 0; ret = crypto_register_acomp(&hisi_zip_acomp_gzip); - if (ret) { - pr_err("failed to register to gzip (%d)!\n", ret); - crypto_unregister_acomp(&hisi_zip_acomp_zlib); - } + if (ret) + dev_err(&qm->pdev->dev, "failed to register to gzip (%d)!\n", ret); return ret; } -void hisi_zip_unregister_from_crypto(struct hisi_qm *qm) +static void hisi_zip_unregister_gzip(struct hisi_qm *qm) { + if (!hisi_zip_alg_support(qm, HZIP_ALG_GZIP)) +
return; + crypto_unregister_acomp(&hisi_zip_acomp_gzip); - crypto_unregister_acomp(&hisi_zip_acomp_zlib); +} + +int hisi_zip_register_to_crypto(struct hisi_qm *qm) +{ + int ret = 0; + + ret = hisi_zip_register_zlib(qm); + if (ret) + return ret; + + ret = hisi_zip_register_gzip(qm); + if (ret) + hisi_zip_unregister_zlib(qm); + + return ret; +} + +void hisi_zip_unregister_from_crypto(struct hisi_qm *qm) +{ + hisi_zip_unregister_zlib(qm); + hisi_zip_unregister_gzip(qm); } diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c index 6ed10ec05a73..a8aedddac67a 100644 --- a/drivers/crypto/hisilicon/zip/zip_main.c +++ b/drivers/crypto/hisilicon/zip/zip_main.c @@ -20,18 +20,6 @@ #define HZIP_QUEUE_NUM_V1 4096 #define HZIP_CLOCK_GATE_CTRL 0x301004 -#define COMP0_ENABLE BIT(0) -#define COMP1_ENABLE BIT(1) -#define DECOMP0_ENABLE BIT(2) -#define DECOMP1_ENABLE BIT(3) -#define DECOMP2_ENABLE BIT(4) -#define DECOMP3_ENABLE BIT(5) -#define DECOMP4_ENABLE BIT(6) -#define DECOMP5_ENABLE BIT(7) -#define HZIP_ALL_COMP_DECOMP_EN (COMP0_ENABLE | COMP1_ENABLE | \ - DECOMP0_ENABLE | DECOMP1_ENABLE | \ - DECOMP2_ENABLE | DECOMP3_ENABLE | \ - DECOMP4_ENABLE | DECOMP5_ENABLE) #define HZIP_DECOMP_CHECK_ENABLE BIT(16) #define HZIP_FSM_MAX_CNT 0x301008 @@ -76,10 +64,6 @@ #define HZIP_SRAM_ECC_ERR_NUM_SHIFT 16 #define HZIP_SRAM_ECC_ERR_ADDR_SHIFT 24 #define HZIP_CORE_INT_MASK_ALL GENMASK(12, 0) -#define HZIP_COMP_CORE_NUM 2 -#define HZIP_DECOMP_CORE_NUM 6 -#define HZIP_CORE_NUM (HZIP_COMP_CORE_NUM + \ - HZIP_DECOMP_CORE_NUM) #define HZIP_SQE_SIZE 128 #define HZIP_PF_DEF_Q_NUM 64 #define HZIP_PF_DEF_Q_BASE 0 @@ -194,6 +178,21 @@ enum zip_cap_type { ZIP_RESET_MASK_CAP, ZIP_OOO_SHUTDOWN_MASK_CAP, ZIP_CE_MASK_CAP, + ZIP_CLUSTER_NUM_CAP, + ZIP_CORE_TYPE_NUM_CAP, + ZIP_CORE_NUM_CAP, + ZIP_CLUSTER_COMP_NUM_CAP, + ZIP_CLUSTER_DECOMP_NUM_CAP, + ZIP_DECOMP_ENABLE_BITMAP, + ZIP_COMP_ENABLE_BITMAP, + ZIP_DRV_ALG_BITMAP, + ZIP_DEV_ALG_BITMAP, + ZIP_CORE1_ALG_BITMAP, + ZIP_CORE2_ALG_BITMAP, + ZIP_CORE3_ALG_BITMAP, + ZIP_CORE4_ALG_BITMAP, + ZIP_CORE5_ALG_BITMAP, + ZIP_CAP_MAX }; static struct hisi_qm_cap_info zip_basic_cap_info[] = { @@ -205,6 +204,21 @@ static struct hisi_qm_cap_info zip_basic_cap_info[] = { {ZIP_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x7FE, 0x7FE}, {ZIP_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x2, 0x7FE}, {ZIP_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x1, 0x1}, + {ZIP_CLUSTER_NUM_CAP, 0x313C, 28, GENMASK(3, 0), 0x1, 0x1, 0x1}, + {ZIP_CORE_TYPE_NUM_CAP, 0x313C, 24, GENMASK(3, 0), 0x2, 0x2, 0x2}, + {ZIP_CORE_NUM_CAP, 0x313C, 16, GENMASK(7, 0), 0x8, 0x8, 0x5}, + {ZIP_CLUSTER_COMP_NUM_CAP, 0x313C, 8, GENMASK(7, 0), 0x2, 0x2, 0x2}, + {ZIP_CLUSTER_DECOMP_NUM_CAP, 0x313C, 0, GENMASK(7, 0), 0x6, 0x6, 0x3}, + {ZIP_DECOMP_ENABLE_BITMAP, 0x3140, 16, GENMASK(15, 0), 0xFC, 0xFC, 0x1C}, + {ZIP_COMP_ENABLE_BITMAP, 0x3140, 0, GENMASK(15, 0), 0x3, 0x3, 0x3}, + {ZIP_DRV_ALG_BITMAP, 0x3144, 0, GENMASK(31, 0), 0xF, 0xF, 0xF}, + {ZIP_DEV_ALG_BITMAP, 0x3148, 0, GENMASK(31, 0), 0xF, 0xF, 0xFF}, + {ZIP_CORE1_ALG_BITMAP, 0x314C, 0, GENMASK(31, 0), 0x5, 0x5, 0xD5}, + {ZIP_CORE2_ALG_BITMAP, 0x3150, 0, GENMASK(31, 0), 0x5, 0x5, 0xD5}, + {ZIP_CORE3_ALG_BITMAP, 0x3154, 0, GENMASK(31, 0), 0xA, 0xA, 0x2A}, + {ZIP_CORE4_ALG_BITMAP, 0x3158, 0, GENMASK(31, 0), 0xA, 0xA, 0x2A}, + {ZIP_CORE5_ALG_BITMAP, 0x315C, 0, GENMASK(31, 0), 0xA, 0xA, 0x2A}, + {ZIP_CAP_MAX, 0x317c, 0, GENMASK(0, 0), 0x0, 0x0, 0x0} }; enum { @@ -363,6 +377,17 @@ int zip_create_qps(struct hisi_qp **qps, int 
qp_num, int node) return hisi_qm_alloc_qps_node(&zip_devices, qp_num, 0, node, qps); } +bool hisi_zip_alg_support(struct hisi_qm *qm, u32 alg) +{ + u32 cap_val; + + cap_val = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_DRV_ALG_BITMAP, qm->cap_ver); + if ((alg & cap_val) == alg) + return true; + + return false; +} + static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm) { u32 val; @@ -421,6 +446,7 @@ static void hisi_zip_enable_clock_gate(struct hisi_qm *qm) static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm) { void __iomem *base = qm->io_base; + u32 dcomp_bm, comp_bm; /* qm user domain */ writel(AXUSER_BASE, base + QM_ARUSER_M_CFG_1); @@ -458,8 +484,11 @@ static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm) } /* let's open all compression/decompression cores */ - writel(HZIP_DECOMP_CHECK_ENABLE | HZIP_ALL_COMP_DECOMP_EN, - base + HZIP_CLOCK_GATE_CTRL); + dcomp_bm = hisi_qm_get_hw_info(qm, zip_basic_cap_info, + ZIP_DECOMP_ENABLE_BITMAP, qm->cap_ver); + comp_bm = hisi_qm_get_hw_info(qm, zip_basic_cap_info, + ZIP_COMP_ENABLE_BITMAP, qm->cap_ver); + writel(HZIP_DECOMP_CHECK_ENABLE | dcomp_bm | comp_bm, base + HZIP_CLOCK_GATE_CTRL); /* enable sqc,cqc writeback */ writel(SQC_CACHE_ENABLE | CQC_CACHE_ENABLE | SQC_CACHE_WB_ENABLE | @@ -678,18 +707,23 @@ DEFINE_SHOW_ATTRIBUTE(hisi_zip_regs); static int hisi_zip_core_debug_init(struct hisi_qm *qm) { + u32 zip_core_num, zip_comp_core_num; struct device *dev = &qm->pdev->dev; struct debugfs_regset32 *regset; struct dentry *tmp_d; char buf[HZIP_BUF_SIZE]; int i; - for (i = 0; i < HZIP_CORE_NUM; i++) { - if (i < HZIP_COMP_CORE_NUM) + zip_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CORE_NUM_CAP, qm->cap_ver); + zip_comp_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CLUSTER_COMP_NUM_CAP, + qm->cap_ver); + + for (i = 0; i < zip_core_num; i++) { + if (i < zip_comp_core_num) scnprintf(buf, sizeof(buf), "comp_core%d", i); else scnprintf(buf, sizeof(buf), "decomp_core%d", - i - HZIP_COMP_CORE_NUM); + i - zip_comp_core_num); regset = devm_kzalloc(dev, sizeof(*regset), GFP_KERNEL); if (!regset) @@ -702,7 +736,7 @@ static int hisi_zip_core_debug_init(struct hisi_qm *qm) tmp_d = debugfs_create_dir(buf, qm->debug.debug_root); debugfs_create_file("regs", 0444, tmp_d, regset, - &hisi_zip_regs_fops); + &hisi_zip_regs_fops); } return 0; @@ -822,10 +856,13 @@ static int hisi_zip_show_last_regs_init(struct hisi_qm *qm) int com_dfx_regs_num = ARRAY_SIZE(hzip_com_dfx_regs); struct qm_debug *debug = &qm->debug; void __iomem *io_base; + u32 zip_core_num; int i, j, idx; - debug->last_words = kcalloc(core_dfx_regs_num * HZIP_CORE_NUM + - com_dfx_regs_num, sizeof(unsigned int), GFP_KERNEL); + zip_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CORE_NUM_CAP, qm->cap_ver); + + debug->last_words = kcalloc(core_dfx_regs_num * zip_core_num + com_dfx_regs_num, + sizeof(unsigned int), GFP_KERNEL); if (!debug->last_words) return -ENOMEM; @@ -834,7 +871,7 @@ static int hisi_zip_show_last_regs_init(struct hisi_qm *qm) debug->last_words[i] = readl_relaxed(io_base); } - for (i = 0; i < HZIP_CORE_NUM; i++) { + for (i = 0; i < zip_core_num; i++) { io_base = qm->io_base + core_offsets[i]; for (j = 0; j < core_dfx_regs_num; j++) { idx = com_dfx_regs_num + i * core_dfx_regs_num + j; @@ -861,6 +898,7 @@ static void hisi_zip_show_last_dfx_regs(struct hisi_qm *qm) { int core_dfx_regs_num = ARRAY_SIZE(hzip_dump_dfx_regs); int com_dfx_regs_num = ARRAY_SIZE(hzip_com_dfx_regs); + u32 zip_core_num, zip_comp_core_num; struct 
qm_debug *debug = &qm->debug; char buf[HZIP_BUF_SIZE]; void __iomem *base; @@ -874,15 +912,18 @@ static void hisi_zip_show_last_dfx_regs(struct hisi_qm *qm) val = readl_relaxed(qm->io_base + hzip_com_dfx_regs[i].offset); if (debug->last_words[i] != val) pci_info(qm->pdev, "com_dfx: %s \t= 0x%08x => 0x%08x\n", - hzip_com_dfx_regs[i].name, debug->last_words[i], val); + hzip_com_dfx_regs[i].name, debug->last_words[i], val); } - for (i = 0; i < HZIP_CORE_NUM; i++) { - if (i < HZIP_COMP_CORE_NUM) + zip_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CORE_NUM_CAP, qm->cap_ver); + zip_comp_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CLUSTER_COMP_NUM_CAP, + qm->cap_ver); + for (i = 0; i < zip_core_num; i++) { + if (i < zip_comp_core_num) scnprintf(buf, sizeof(buf), "Comp_core-%d", i); else scnprintf(buf, sizeof(buf), "Decomp_core-%d", - i - HZIP_COMP_CORE_NUM); + i - zip_comp_core_num); base = qm->io_base + core_offsets[i]; pci_info(qm->pdev, "==>%s:\n", buf); @@ -892,7 +933,8 @@ static void hisi_zip_show_last_dfx_regs(struct hisi_qm *qm) val = readl_relaxed(base + hzip_dump_dfx_regs[j].offset); if (debug->last_words[idx] != val) pci_info(qm->pdev, "%s \t= 0x%08x => 0x%08x\n", - hzip_dump_dfx_regs[j].name, debug->last_words[idx], val); + hzip_dump_dfx_regs[j].name, + debug->last_words[idx], val); } } } From patchwork Fri Sep 9 09:47:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weili Qian X-Patchwork-Id: 12971428 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59ED3ECAAA1 for ; Fri, 9 Sep 2022 09:50:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231924AbiIIJuD (ORCPT ); Fri, 9 Sep 2022 05:50:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38834 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230210AbiIIJt7 (ORCPT ); Fri, 9 Sep 2022 05:49:59 -0400 Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 45E0295A4; Fri, 9 Sep 2022 02:49:55 -0700 (PDT) Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.55]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4MPB2L13swz14QVv; Fri, 9 Sep 2022 17:46:02 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:53 +0800 Received: from localhost.localdomain (10.69.192.56) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:52 +0800 From: Weili Qian To: CC: , , , , Wenkai Lin , Weili Qian Subject: [PATCH 09/10] crypto: hisilicon/sec - get algorithm bitmap from registers Date: Fri, 9 Sep 2022 17:47:03 +0800 Message-ID: <20220909094704.32099-10-qianweili@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com> References: <20220909094704.32099-1-qianweili@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To 
kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Wenkai Lin Add function 'sec_get_alg_bitmap' to get the hardware algorithm bitmap before registering algorithms to crypto, instead of determining whether to register an algorithm based on the hardware platform's version. Signed-off-by: Wenkai Lin Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/sec2/sec.h | 18 ++ drivers/crypto/hisilicon/sec2/sec_crypto.c | 306 +++++++++++++-------- drivers/crypto/hisilicon/sec2/sec_main.c | 33 ++- 3 files changed, 236 insertions(+), 121 deletions(-) diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h index 895ba9b47554..3e57fc04b377 100644 --- a/drivers/crypto/hisilicon/sec2/sec.h +++ b/drivers/crypto/hisilicon/sec2/sec.h @@ -201,10 +201,28 @@ enum sec_cap_type { SEC_RESET_MASK_CAP, SEC_OOO_SHUTDOWN_MASK_CAP, SEC_CE_MASK_CAP, + SEC_CLUSTER_NUM_CAP, + SEC_CORE_TYPE_NUM_CAP, + SEC_CORE_NUM_CAP, + SEC_CORES_PER_CLUSTER_NUM_CAP, + SEC_CORE_ENABLE_BITMAP, + SEC_DRV_ALG_BITMAP_LOW, + SEC_DRV_ALG_BITMAP_HIGH, + SEC_DEV_ALG_BITMAP_LOW, + SEC_DEV_ALG_BITMAP_HIGH, + SEC_CORE1_ALG_BITMAP_LOW, + SEC_CORE1_ALG_BITMAP_HIGH, + SEC_CORE2_ALG_BITMAP_LOW, + SEC_CORE2_ALG_BITMAP_HIGH, + SEC_CORE3_ALG_BITMAP_LOW, + SEC_CORE3_ALG_BITMAP_HIGH, + SEC_CORE4_ALG_BITMAP_LOW, + SEC_CORE4_ALG_BITMAP_HIGH, }; void sec_destroy_qps(struct hisi_qp **qps, int qp_num); struct hisi_qp **sec_create_qps(void); int sec_register_to_crypto(struct hisi_qm *qm); void sec_unregister_from_crypto(struct hisi_qm *qm); +u64 sec_get_alg_bitmap(struct hisi_qm *qm, u32 high, u32 low); #endif diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c index db2165e9fcbf..1be4f496d3c4 100644 --- a/drivers/crypto/hisilicon/sec2/sec_crypto.c +++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c @@ -104,6 +104,16 @@ #define IV_CTR_INIT 0x1 #define IV_BYTE_OFFSET 0x8 +struct sec_skcipher { + u64 alg_msk; + struct skcipher_alg alg; +}; + +struct sec_aead { + u64 alg_msk; + struct aead_alg alg; +}; + /* Get an en/de-cipher queue cyclically to balance load over queues of TFM */ static inline int sec_alloc_queue_id(struct sec_ctx *ctx, struct sec_req *req) { @@ -2160,67 +2170,80 @@ static int sec_skcipher_decrypt(struct skcipher_request *sk_req) .min_keysize = sec_min_key_size,\ .max_keysize = sec_max_key_size,\ .ivsize = iv_size,\ -}, +} #define SEC_SKCIPHER_ALG(name, key_func, min_key_size, \ max_key_size, blk_size, iv_size) \ SEC_SKCIPHER_GEN_ALG(name, key_func, min_key_size, max_key_size, \ sec_skcipher_ctx_init, sec_skcipher_ctx_exit, blk_size, iv_size) -static struct skcipher_alg sec_skciphers[] = { - SEC_SKCIPHER_ALG("ecb(aes)", sec_setkey_aes_ecb, - AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, - AES_BLOCK_SIZE, 0) - - SEC_SKCIPHER_ALG("cbc(aes)", sec_setkey_aes_cbc, - AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, - AES_BLOCK_SIZE, AES_BLOCK_SIZE) - - SEC_SKCIPHER_ALG("xts(aes)", sec_setkey_aes_xts, - SEC_XTS_MIN_KEY_SIZE, SEC_XTS_MAX_KEY_SIZE, - AES_BLOCK_SIZE, AES_BLOCK_SIZE) - - SEC_SKCIPHER_ALG("ecb(des3_ede)", sec_setkey_3des_ecb, - SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE, - DES3_EDE_BLOCK_SIZE, 0) - - SEC_SKCIPHER_ALG("cbc(des3_ede)", sec_setkey_3des_cbc, - SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE, - DES3_EDE_BLOCK_SIZE, DES3_EDE_BLOCK_SIZE) - - SEC_SKCIPHER_ALG("xts(sm4)", sec_setkey_sm4_xts, - SEC_XTS_MIN_KEY_SIZE, SEC_XTS_MIN_KEY_SIZE, - AES_BLOCK_SIZE, AES_BLOCK_SIZE) - - SEC_SKCIPHER_ALG("cbc(sm4)",
sec_setkey_sm4_cbc, - AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, - AES_BLOCK_SIZE, AES_BLOCK_SIZE) -}; - -static struct skcipher_alg sec_skciphers_v3[] = { - SEC_SKCIPHER_ALG("ofb(aes)", sec_setkey_aes_ofb, - AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, - SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) - - SEC_SKCIPHER_ALG("cfb(aes)", sec_setkey_aes_cfb, - AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, - SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) - - SEC_SKCIPHER_ALG("ctr(aes)", sec_setkey_aes_ctr, - AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, - SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) - - SEC_SKCIPHER_ALG("ofb(sm4)", sec_setkey_sm4_ofb, - AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, - SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) - - SEC_SKCIPHER_ALG("cfb(sm4)", sec_setkey_sm4_cfb, - AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, - SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) - - SEC_SKCIPHER_ALG("ctr(sm4)", sec_setkey_sm4_ctr, - AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, - SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE) +static struct sec_skcipher sec_skciphers[] = { + { + .alg_msk = BIT(0), + .alg = SEC_SKCIPHER_ALG("ecb(aes)", sec_setkey_aes_ecb, AES_MIN_KEY_SIZE, + AES_MAX_KEY_SIZE, AES_BLOCK_SIZE, 0), + }, + { + .alg_msk = BIT(1), + .alg = SEC_SKCIPHER_ALG("cbc(aes)", sec_setkey_aes_cbc, AES_MIN_KEY_SIZE, + AES_MAX_KEY_SIZE, AES_BLOCK_SIZE, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(2), + .alg = SEC_SKCIPHER_ALG("ctr(aes)", sec_setkey_aes_ctr, AES_MIN_KEY_SIZE, + AES_MAX_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(3), + .alg = SEC_SKCIPHER_ALG("xts(aes)", sec_setkey_aes_xts, SEC_XTS_MIN_KEY_SIZE, + SEC_XTS_MAX_KEY_SIZE, AES_BLOCK_SIZE, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(4), + .alg = SEC_SKCIPHER_ALG("ofb(aes)", sec_setkey_aes_ofb, AES_MIN_KEY_SIZE, + AES_MAX_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(5), + .alg = SEC_SKCIPHER_ALG("cfb(aes)", sec_setkey_aes_cfb, AES_MIN_KEY_SIZE, + AES_MAX_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(12), + .alg = SEC_SKCIPHER_ALG("cbc(sm4)", sec_setkey_sm4_cbc, AES_MIN_KEY_SIZE, + AES_MIN_KEY_SIZE, AES_BLOCK_SIZE, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(13), + .alg = SEC_SKCIPHER_ALG("ctr(sm4)", sec_setkey_sm4_ctr, AES_MIN_KEY_SIZE, + AES_MIN_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(14), + .alg = SEC_SKCIPHER_ALG("xts(sm4)", sec_setkey_sm4_xts, SEC_XTS_MIN_KEY_SIZE, + SEC_XTS_MIN_KEY_SIZE, AES_BLOCK_SIZE, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(15), + .alg = SEC_SKCIPHER_ALG("ofb(sm4)", sec_setkey_sm4_ofb, AES_MIN_KEY_SIZE, + AES_MIN_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(16), + .alg = SEC_SKCIPHER_ALG("cfb(sm4)", sec_setkey_sm4_cfb, AES_MIN_KEY_SIZE, + AES_MIN_KEY_SIZE, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(23), + .alg = SEC_SKCIPHER_ALG("ecb(des3_ede)", sec_setkey_3des_ecb, SEC_DES3_3KEY_SIZE, + SEC_DES3_3KEY_SIZE, DES3_EDE_BLOCK_SIZE, 0), + }, + { + .alg_msk = BIT(24), + .alg = SEC_SKCIPHER_ALG("cbc(des3_ede)", sec_setkey_3des_cbc, SEC_DES3_3KEY_SIZE, + SEC_DES3_3KEY_SIZE, DES3_EDE_BLOCK_SIZE, + DES3_EDE_BLOCK_SIZE), + }, }; static int aead_iv_demension_check(struct aead_request *aead_req) @@ -2414,90 +2437,135 @@ static int sec_aead_decrypt(struct aead_request *a_req) .maxauthsize = max_authsize,\ } -static struct aead_alg sec_aeads[] = { - SEC_AEAD_ALG("authenc(hmac(sha1),cbc(aes))", - sec_setkey_aes_cbc_sha1, sec_aead_sha1_ctx_init, - sec_aead_ctx_exit, AES_BLOCK_SIZE, - AES_BLOCK_SIZE, SHA1_DIGEST_SIZE), +static struct sec_aead sec_aeads[] = { + { + .alg_msk = BIT(6), + .alg = 
SEC_AEAD_ALG("ccm(aes)", sec_setkey_aes_ccm, sec_aead_xcm_ctx_init, + sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE, + AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(7), + .alg = SEC_AEAD_ALG("gcm(aes)", sec_setkey_aes_gcm, sec_aead_xcm_ctx_init, + sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, SEC_AIV_SIZE, + AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(17), + .alg = SEC_AEAD_ALG("ccm(sm4)", sec_setkey_sm4_ccm, sec_aead_xcm_ctx_init, + sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, AES_BLOCK_SIZE, + AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(18), + .alg = SEC_AEAD_ALG("gcm(sm4)", sec_setkey_sm4_gcm, sec_aead_xcm_ctx_init, + sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, SEC_AIV_SIZE, + AES_BLOCK_SIZE), + }, + { + .alg_msk = BIT(43), + .alg = SEC_AEAD_ALG("authenc(hmac(sha1),cbc(aes))", sec_setkey_aes_cbc_sha1, + sec_aead_sha1_ctx_init, sec_aead_ctx_exit, AES_BLOCK_SIZE, + AES_BLOCK_SIZE, SHA1_DIGEST_SIZE), + }, + { + .alg_msk = BIT(44), + .alg = SEC_AEAD_ALG("authenc(hmac(sha256),cbc(aes))", sec_setkey_aes_cbc_sha256, + sec_aead_sha256_ctx_init, sec_aead_ctx_exit, AES_BLOCK_SIZE, + AES_BLOCK_SIZE, SHA256_DIGEST_SIZE), + }, + { + .alg_msk = BIT(45), + .alg = SEC_AEAD_ALG("authenc(hmac(sha512),cbc(aes))", sec_setkey_aes_cbc_sha512, + sec_aead_sha512_ctx_init, sec_aead_ctx_exit, AES_BLOCK_SIZE, + AES_BLOCK_SIZE, SHA512_DIGEST_SIZE), + }, +}; + +static void sec_unregister_skcipher(u64 alg_mask, int end) +{ + int i; - SEC_AEAD_ALG("authenc(hmac(sha256),cbc(aes))", - sec_setkey_aes_cbc_sha256, sec_aead_sha256_ctx_init, - sec_aead_ctx_exit, AES_BLOCK_SIZE, - AES_BLOCK_SIZE, SHA256_DIGEST_SIZE), + for (i = 0; i < end; i++) + if (sec_skciphers[i].alg_msk & alg_mask) + crypto_unregister_skcipher(&sec_skciphers[i].alg); +} - SEC_AEAD_ALG("authenc(hmac(sha512),cbc(aes))", - sec_setkey_aes_cbc_sha512, sec_aead_sha512_ctx_init, - sec_aead_ctx_exit, AES_BLOCK_SIZE, - AES_BLOCK_SIZE, SHA512_DIGEST_SIZE), +static int sec_register_skcipher(u64 alg_mask) +{ + int i, ret, count; - SEC_AEAD_ALG("ccm(aes)", sec_setkey_aes_ccm, sec_aead_xcm_ctx_init, - sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, - AES_BLOCK_SIZE, AES_BLOCK_SIZE), + count = ARRAY_SIZE(sec_skciphers); - SEC_AEAD_ALG("gcm(aes)", sec_setkey_aes_gcm, sec_aead_xcm_ctx_init, - sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, - SEC_AIV_SIZE, AES_BLOCK_SIZE) -}; + for (i = 0; i < count; i++) { + if (!(sec_skciphers[i].alg_msk & alg_mask)) + continue; -static struct aead_alg sec_aeads_v3[] = { - SEC_AEAD_ALG("ccm(sm4)", sec_setkey_sm4_ccm, sec_aead_xcm_ctx_init, - sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, - AES_BLOCK_SIZE, AES_BLOCK_SIZE), + ret = crypto_register_skcipher(&sec_skciphers[i].alg); + if (ret) + goto err; + } - SEC_AEAD_ALG("gcm(sm4)", sec_setkey_sm4_gcm, sec_aead_xcm_ctx_init, - sec_aead_xcm_ctx_exit, SEC_MIN_BLOCK_SZ, - SEC_AIV_SIZE, AES_BLOCK_SIZE) -}; + return 0; + +err: + sec_unregister_skcipher(alg_mask, i); + + return ret; +} + +static void sec_unregister_aead(u64 alg_mask, int end) +{ + int i; + + for (i = 0; i < end; i++) + if (sec_aeads[i].alg_msk & alg_mask) + crypto_unregister_aead(&sec_aeads[i].alg); +} + +static int sec_register_aead(u64 alg_mask) +{ + int i, ret, count; + + count = ARRAY_SIZE(sec_aeads); + + for (i = 0; i < count; i++) { + if (!(sec_aeads[i].alg_msk & alg_mask)) + continue; + + ret = crypto_register_aead(&sec_aeads[i].alg); + if (ret) + goto err; + } + + return 0; + +err: + sec_unregister_aead(alg_mask, i); + + return ret; +} int sec_register_to_crypto(struct hisi_qm *qm) { + u64 alg_mask = sec_get_alg_bitmap(qm, 
SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW); int ret; - /* To avoid repeat register */ - ret = crypto_register_skciphers(sec_skciphers, - ARRAY_SIZE(sec_skciphers)); + ret = sec_register_skcipher(alg_mask); if (ret) return ret; - if (qm->ver > QM_HW_V2) { - ret = crypto_register_skciphers(sec_skciphers_v3, - ARRAY_SIZE(sec_skciphers_v3)); - if (ret) - goto reg_skcipher_fail; - } - - ret = crypto_register_aeads(sec_aeads, ARRAY_SIZE(sec_aeads)); + ret = sec_register_aead(alg_mask); if (ret) - goto reg_aead_fail; - if (qm->ver > QM_HW_V2) { - ret = crypto_register_aeads(sec_aeads_v3, ARRAY_SIZE(sec_aeads_v3)); - if (ret) - goto reg_aead_v3_fail; - } - return ret; + sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers)); -reg_aead_v3_fail: - crypto_unregister_aeads(sec_aeads, ARRAY_SIZE(sec_aeads)); -reg_aead_fail: - if (qm->ver > QM_HW_V2) - crypto_unregister_skciphers(sec_skciphers_v3, - ARRAY_SIZE(sec_skciphers_v3)); -reg_skcipher_fail: - crypto_unregister_skciphers(sec_skciphers, - ARRAY_SIZE(sec_skciphers)); return ret; } void sec_unregister_from_crypto(struct hisi_qm *qm) { - if (qm->ver > QM_HW_V2) - crypto_unregister_aeads(sec_aeads_v3, - ARRAY_SIZE(sec_aeads_v3)); - crypto_unregister_aeads(sec_aeads, ARRAY_SIZE(sec_aeads)); + u64 alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW); - if (qm->ver > QM_HW_V2) - crypto_unregister_skciphers(sec_skciphers_v3, - ARRAY_SIZE(sec_skciphers_v3)); - crypto_unregister_skciphers(sec_skciphers, - ARRAY_SIZE(sec_skciphers)); + sec_unregister_aead(alg_mask, ARRAY_SIZE(sec_aeads)); + sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers)); } diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c index e814f94dd0d4..84233a489c74 100644 --- a/drivers/crypto/hisilicon/sec2/sec_main.c +++ b/drivers/crypto/hisilicon/sec2/sec_main.c @@ -41,7 +41,6 @@ #define SEC_ECC_NUM 16 #define SEC_ECC_MASH 0xFF #define SEC_CORE_INT_DISABLE 0x0 -#define SEC_SAA_ENABLE 0x17f #define SEC_RAS_CE_REG 0x301050 #define SEC_RAS_FE_REG 0x301054 @@ -114,6 +113,8 @@ #define SEC_DFX_COMMON1_LEN 0x45 #define SEC_DFX_COMMON2_LEN 0xBA +#define SEC_ALG_BITMAP_SHIFT 32 + struct sec_hw_error { u32 int_msk; const char *msg; @@ -141,6 +142,23 @@ static const struct hisi_qm_cap_info sec_basic_info[] = { {SEC_RESET_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x177, 0x177}, {SEC_OOO_SHUTDOWN_MASK_CAP, 0x3134, 0, GENMASK(31, 0), 0x0, 0x4, 0x177}, {SEC_CE_MASK_CAP, 0x3138, 0, GENMASK(31, 0), 0x0, 0x88, 0xC088}, + {SEC_CLUSTER_NUM_CAP, 0x313c, 20, GENMASK(3, 0), 0x1, 0x1, 0x1}, + {SEC_CORE_TYPE_NUM_CAP, 0x313c, 16, GENMASK(3, 0), 0x1, 0x1, 0x1}, + {SEC_CORE_NUM_CAP, 0x313c, 8, GENMASK(7, 0), 0x4, 0x4, 0x4}, + {SEC_CORES_PER_CLUSTER_NUM_CAP, 0x313c, 0, GENMASK(7, 0), 0x4, 0x4, 0x4}, + {SEC_CORE_ENABLE_BITMAP, 0x3140, 32, GENMASK(31, 0), 0x17F, 0x17F, 0xF}, + {SEC_DRV_ALG_BITMAP_LOW, 0x3144, 0, GENMASK(31, 0), 0x18050CB, 0x18050CB, 0x187F0FF}, + {SEC_DRV_ALG_BITMAP_HIGH, 0x3148, 0, GENMASK(31, 0), 0x395C, 0x395C, 0x395C}, + {SEC_DEV_ALG_BITMAP_LOW, 0x314c, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF}, + {SEC_DEV_ALG_BITMAP_HIGH, 0x3150, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF}, + {SEC_CORE1_ALG_BITMAP_LOW, 0x3154, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF}, + {SEC_CORE1_ALG_BITMAP_HIGH, 0x3158, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF}, + {SEC_CORE2_ALG_BITMAP_LOW, 0x315c, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF}, + {SEC_CORE2_ALG_BITMAP_HIGH, 0x3160, 0, GENMASK(31, 0), 
0x3FFF, 0x3FFF, 0x3FFF}, + {SEC_CORE3_ALG_BITMAP_LOW, 0x3164, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF}, + {SEC_CORE3_ALG_BITMAP_HIGH, 0x3168, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF}, + {SEC_CORE4_ALG_BITMAP_LOW, 0x316c, 0, GENMASK(31, 0), 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF}, + {SEC_CORE4_ALG_BITMAP_HIGH, 0x3170, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF}, }; static const struct sec_hw_error sec_hw_errors[] = { @@ -345,6 +363,16 @@ struct hisi_qp **sec_create_qps(void) return NULL; } +u64 sec_get_alg_bitmap(struct hisi_qm *qm, u32 high, u32 low) +{ + u32 cap_val_h, cap_val_l; + + cap_val_h = hisi_qm_get_hw_info(qm, sec_basic_info, high, qm->cap_ver); + cap_val_l = hisi_qm_get_hw_info(qm, sec_basic_info, low, qm->cap_ver); + + return ((u64)cap_val_h << SEC_ALG_BITMAP_SHIFT) | (u64)cap_val_l; +} + static const struct kernel_param_ops sec_uacce_mode_ops = { .set = uacce_mode_set, .get = param_get_int, @@ -512,7 +540,8 @@ static int sec_engine_init(struct hisi_qm *qm) writel(SEC_SINGLE_PORT_MAX_TRANS, qm->io_base + AM_CFG_SINGLE_PORT_MAX_TRANS); - writel(SEC_SAA_ENABLE, qm->io_base + SEC_SAA_EN_REG); + reg = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_CORE_ENABLE_BITMAP, qm->cap_ver); + writel(reg, qm->io_base + SEC_SAA_EN_REG); if (qm->ver < QM_HW_V3) { /* HW V2 enable sm4 extra mode, as ctr/ecb */ From patchwork Fri Sep 9 09:47:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weili Qian X-Patchwork-Id: 12971430 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8FE6FECAAD3 for ; Fri, 9 Sep 2022 09:50:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232131AbiIIJuY (ORCPT ); Fri, 9 Sep 2022 05:50:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38286 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230489AbiIIJuA (ORCPT ); Fri, 9 Sep 2022 05:50:00 -0400 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A8DEA45040; Fri, 9 Sep 2022 02:49:56 -0700 (PDT) Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.56]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4MPB3n6CHVznV5r; Fri, 9 Sep 2022 17:47:17 +0800 (CST) Received: from kwepemm600009.china.huawei.com (7.193.23.164) by dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:53 +0800 Received: from localhost.localdomain (10.69.192.56) by kwepemm600009.china.huawei.com (7.193.23.164) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Fri, 9 Sep 2022 17:49:53 +0800 From: Weili Qian To: CC: , , , , Zhiqi Song , Wenkai Lin , Weili Qian Subject: [PATCH 10/10] crypto: hisilicon - support get algs by the capability register Date: Fri, 9 Sep 2022 17:47:04 +0800 Message-ID: <20220909094704.32099-11-qianweili@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20220909094704.32099-1-qianweili@huawei.com> References: <20220909094704.32099-1-qianweili@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.69.192.56] X-ClientProxiedBy: dggems702-chm.china.huawei.com 
(10.3.19.179) To kwepemm600009.china.huawei.com (7.193.23.164) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Zhiqi Song The algorithms supported by the qm device can change dynamically according to the value of the capability register. Add an xxx_set_qm_algs() function to obtain the algorithms that the hardware device supports from the capability register and set them in the user-mode attribute files. Signed-off-by: Zhiqi Song Signed-off-by: Wenkai Lin Signed-off-by: Weili Qian --- drivers/crypto/hisilicon/hpre/hpre_crypto.c | 10 +-- drivers/crypto/hisilicon/hpre/hpre_main.c | 83 +++++++++++++++++++-- drivers/crypto/hisilicon/qm.c | 1 - drivers/crypto/hisilicon/sec2/sec_main.c | 71 +++++++++++++++++- drivers/crypto/hisilicon/zip/zip_main.c | 75 +++++++++++++++++-- 5 files changed, 222 insertions(+), 18 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index ac7fabf65865..ef02dadd6217 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -51,11 +51,11 @@ struct hpre_ctx; #define HPRE_ECC_HW256_KSZ_B 32 #define HPRE_ECC_HW384_KSZ_B 48 -/* capability register mask */ -#define HPRE_DRV_RSA_MASK_CAP BIT(0) -#define HPRE_DRV_DH_MASK_CAP BIT(1) -#define HPRE_DRV_ECDH_MASK_CAP BIT(2) -#define HPRE_DRV_X25519_MASK_CAP BIT(5) +/* capability register mask of driver */ +#define HPRE_DRV_RSA_MASK_CAP BIT(0) +#define HPRE_DRV_DH_MASK_CAP BIT(1) +#define HPRE_DRV_ECDH_MASK_CAP BIT(2) +#define HPRE_DRV_X25519_MASK_CAP BIT(5) typedef void (*hpre_cb)(struct hpre_ctx *ctx, void *sqe); diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c index 407cdd9d8413..471e5ca720f5 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_main.c +++ b/drivers/crypto/hisilicon/hpre/hpre_main.c @@ -118,6 +118,8 @@ #define HPRE_DFX_COMMON2_LEN 0xE #define HPRE_DFX_CORE_LEN 0x43 +#define HPRE_DEV_ALG_MAX_LEN 256 + static const char hpre_name[] = "hisi_hpre"; static struct dentry *hpre_debugfs_root; static const struct pci_device_id hpre_dev_ids[] = { @@ -133,6 +135,38 @@ struct hpre_hw_error { const char *msg; }; +struct hpre_dev_alg { + u32 alg_msk; + const char *alg; +}; + +static const struct hpre_dev_alg hpre_dev_algs[] = { + { + .alg_msk = BIT(0), + .alg = "rsa\n" + }, { + .alg_msk = BIT(1), + .alg = "dh\n" + }, { + .alg_msk = BIT(2), + .alg = "ecdh\n" + }, { + .alg_msk = BIT(3), + .alg = "ecdsa\n" + }, { + .alg_msk = BIT(4), + .alg = "sm2\n" + }, { + .alg_msk = BIT(5), + .alg = "x25519\n" + }, { + .alg_msk = BIT(6), + .alg = "x448\n" + }, { + /* sentinel */ + } +}; + static struct hisi_qm_list hpre_devices = { .register_to_crypto = hpre_algs_register, .unregister_from_crypto = hpre_algs_unregister, @@ -325,6 +359,35 @@ bool hpre_check_alg_support(struct hisi_qm *qm, u32 alg) return false; } +static int hpre_set_qm_algs(struct hisi_qm *qm) +{ + struct device *dev = &qm->pdev->dev; + char *algs, *ptr; + u32 alg_msk; + int i; + + if (!qm->use_sva) + return 0; + + algs = devm_kzalloc(dev, HPRE_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL); + if (!algs) + return -ENOMEM; + + alg_msk = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_DEV_ALG_BITMAP_CAP, qm->cap_ver); + + for (i = 0; i < ARRAY_SIZE(hpre_dev_algs); i++) + if (alg_msk & hpre_dev_algs[i].alg_msk) + strcat(algs, hpre_dev_algs[i].alg); + + ptr = strrchr(algs, '\n'); + if (ptr) + *ptr = '\0'; + + qm->uacce->algs = algs; + + return 0; +} + static int hpre_diff_regs_show(struct seq_file
*s, void *unused) { struct hisi_qm *qm = s->private; @@ -1073,15 +1136,13 @@ static void hpre_debugfs_exit(struct hisi_qm *qm) static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev) { + int ret; + if (pdev->revision == QM_HW_V1) { pci_warn(pdev, "HPRE version 1 is not supported!\n"); return -EINVAL; } - if (pdev->revision >= QM_HW_V3) - qm->algs = "rsa\ndh\necdh\nx25519\nx448\necdsa\nsm2"; - else - qm->algs = "rsa\ndh"; qm->mode = uacce_mode; qm->pdev = pdev; qm->ver = pdev->revision; @@ -1097,7 +1158,19 @@ static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev) qm->qm_list = &hpre_devices; } - return hisi_qm_init(qm); + ret = hisi_qm_init(qm); + if (ret) { + pci_err(pdev, "Failed to init hpre qm configures!\n"); + return ret; + } + + ret = hpre_set_qm_algs(qm); + if (ret) { + pci_err(pdev, "Failed to set hpre algs!\n"); + hisi_qm_uninit(qm); + } + + return ret; } static int hpre_show_last_regs_init(struct hisi_qm *qm) diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index e4a506d02d23..30fdf0673f00 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -3484,7 +3484,6 @@ static int qm_alloc_uacce(struct hisi_qm *qm) uacce->is_vf = pdev->is_virtfn; uacce->priv = qm; - uacce->algs = qm->algs; if (qm->ver == QM_HW_V1) uacce->api_ver = HISI_QM_API_VER_BASE; diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c index 84233a489c74..3705412bac5f 100644 --- a/drivers/crypto/hisilicon/sec2/sec_main.c +++ b/drivers/crypto/hisilicon/sec2/sec_main.c @@ -115,6 +115,14 @@ #define SEC_ALG_BITMAP_SHIFT 32 +#define SEC_CIPHER_BITMAP (GENMASK_ULL(5, 0) | GENMASK_ULL(16, 12) | \ + GENMASK(24, 21)) +#define SEC_DIGEST_BITMAP (GENMASK_ULL(11, 8) | GENMASK_ULL(20, 19) | \ + GENMASK_ULL(42, 25)) +#define SEC_AEAD_BITMAP (GENMASK_ULL(7, 6) | GENMASK_ULL(18, 17) | \ + GENMASK_ULL(45, 43)) +#define SEC_DEV_ALG_MAX_LEN 256 + struct sec_hw_error { u32 int_msk; const char *msg; @@ -125,6 +133,11 @@ struct sec_dfx_item { u32 offset; }; +struct sec_dev_alg { + u64 alg_msk; + const char *algs; +}; + static const char sec_name[] = "hisi_sec2"; static struct dentry *sec_debugfs_root; @@ -161,6 +174,18 @@ static const struct hisi_qm_cap_info sec_basic_info[] = { {SEC_CORE4_ALG_BITMAP_HIGH, 0x3170, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF}, }; +static const struct sec_dev_alg sec_dev_algs[] = { { + .alg_msk = SEC_CIPHER_BITMAP, + .algs = "cipher\n", + }, { + .alg_msk = SEC_DIGEST_BITMAP, + .algs = "digest\n", + }, { + .alg_msk = SEC_AEAD_BITMAP, + .algs = "aead\n", + }, +}; + static const struct sec_hw_error sec_hw_errors[] = { { .int_msk = BIT(0), @@ -1052,11 +1077,41 @@ static int sec_pf_probe_init(struct sec_dev *sec) return ret; } +static int sec_set_qm_algs(struct hisi_qm *qm) +{ + struct device *dev = &qm->pdev->dev; + char *algs, *ptr; + u64 alg_mask; + int i; + + if (!qm->use_sva) + return 0; + + algs = devm_kzalloc(dev, SEC_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL); + if (!algs) + return -ENOMEM; + + alg_mask = sec_get_alg_bitmap(qm, SEC_DEV_ALG_BITMAP_HIGH, SEC_DEV_ALG_BITMAP_LOW); + + for (i = 0; i < ARRAY_SIZE(sec_dev_algs); i++) + if (alg_mask & sec_dev_algs[i].alg_msk) + strcat(algs, sec_dev_algs[i].algs); + + ptr = strrchr(algs, '\n'); + if (ptr) + *ptr = '\0'; + + qm->uacce->algs = algs; + + return 0; +} + static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev) { + int ret; + qm->pdev = pdev; qm->ver = pdev->revision; - qm->algs = "cipher\ndigest\naead"; qm->mode = uacce_mode; 
qm->sqe_size = SEC_SQE_SIZE; qm->dev_name = sec_name; @@ -1079,7 +1134,19 @@ static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev) qm->qp_num = SEC_QUEUE_NUM_V1 - SEC_PF_DEF_Q_NUM; } - return hisi_qm_init(qm); + ret = hisi_qm_init(qm); + if (ret) { + pci_err(qm->pdev, "Failed to init sec qm configures!\n"); + return ret; + } + + ret = sec_set_qm_algs(qm); + if (ret) { + pci_err(qm->pdev, "Failed to set sec algs!\n"); + hisi_qm_uninit(qm); + } + + return ret; } static void sec_qm_uninit(struct hisi_qm *qm) diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c index a8aedddac67a..c863435e8c75 100644 --- a/drivers/crypto/hisilicon/zip/zip_main.c +++ b/drivers/crypto/hisilicon/zip/zip_main.c @@ -74,6 +74,12 @@ #define HZIP_AXI_SHUTDOWN_ENABLE BIT(14) #define HZIP_WR_PORT BIT(11) +#define HZIP_DEV_ALG_MAX_LEN 256 +#define HZIP_ALG_ZLIB_BIT GENMASK(1, 0) +#define HZIP_ALG_GZIP_BIT GENMASK(3, 2) +#define HZIP_ALG_DEFLATE_BIT GENMASK(5, 4) +#define HZIP_ALG_LZ77_BIT GENMASK(7, 6) + #define HZIP_BUF_SIZE 22 #define HZIP_SQE_MASK_OFFSET 64 #define HZIP_SQE_MASK_LEN 48 @@ -114,6 +120,26 @@ struct zip_dfx_item { u32 offset; }; +struct zip_dev_alg { + u32 alg_msk; + const char *algs; +}; + +static const struct zip_dev_alg zip_dev_algs[] = { { + .alg_msk = HZIP_ALG_ZLIB_BIT, + .algs = "zlib\n", + }, { + .alg_msk = HZIP_ALG_GZIP_BIT, + .algs = "gzip\n", + }, { + .alg_msk = HZIP_ALG_DEFLATE_BIT, + .algs = "deflate\n", + }, { + .alg_msk = HZIP_ALG_LZ77_BIT, + .algs = "lz77_zstd\n", + }, +}; + static struct hisi_qm_list zip_devices = { .register_to_crypto = hisi_zip_register_to_crypto, .unregister_from_crypto = hisi_zip_unregister_from_crypto, @@ -388,6 +414,35 @@ bool hisi_zip_alg_support(struct hisi_qm *qm, u32 alg) return false; } +static int hisi_zip_set_qm_algs(struct hisi_qm *qm) +{ + struct device *dev = &qm->pdev->dev; + char *algs, *ptr; + u32 alg_mask; + int i; + + if (!qm->use_sva) + return 0; + + algs = devm_kzalloc(dev, HZIP_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL); + if (!algs) + return -ENOMEM; + + alg_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_DEV_ALG_BITMAP, qm->cap_ver); + + for (i = 0; i < ARRAY_SIZE(zip_dev_algs); i++) + if (alg_mask & zip_dev_algs[i].alg_msk) + strcat(algs, zip_dev_algs[i].algs); + + ptr = strrchr(algs, '\n'); + if (ptr) + *ptr = '\0'; + + qm->uacce->algs = algs; + + return 0; +} + static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm) { u32 val; @@ -1071,12 +1126,10 @@ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip) static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev) { + int ret; + qm->pdev = pdev; qm->ver = pdev->revision; - if (pdev->revision >= QM_HW_V3) - qm->algs = "zlib\ngzip\ndeflate\nlz77_zstd"; - else - qm->algs = "zlib\ngzip"; qm->mode = uacce_mode; qm->sqe_size = HZIP_SQE_SIZE; qm->dev_name = hisi_zip_name; @@ -1100,7 +1153,19 @@ static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev) qm->qp_num = HZIP_QUEUE_NUM_V1 - HZIP_PF_DEF_Q_NUM; } - return hisi_qm_init(qm); + ret = hisi_qm_init(qm); + if (ret) { + pci_err(qm->pdev, "Failed to init zip qm configures!\n"); + return ret; + } + + ret = hisi_zip_set_qm_algs(qm); + if (ret) { + pci_err(qm->pdev, "Failed to set zip algs!\n"); + hisi_qm_uninit(qm); + } + + return ret; } static void hisi_zip_qm_uninit(struct hisi_qm *qm)