From patchwork Tue Mar 26 11:31:22 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10870909
From: Lijun Ou <oulijun@huawei.com>
Subject: [PATCH V9 for-next] RDMA/hns: Dump detailed driver-specific CQ
Date: Tue, 26 Mar 2019 19:31:22 +0800
Message-ID: <1553599882-98624-1-git-send-email-oulijun@huawei.com>
X-Mailer: git-send-email 1.9.1
X-Mailing-List: linux-rdma@vger.kernel.org

This patch adds resource tracking support for hip08, taking the dump of
CQ context state used for debugging as the first example. Tracking
support for more hns driver resources will be added in the future.

The output looks as follows:

$ ./rdma res show cq dev hnseth0 -d
dev hnseth0 cqe 1023 users 2 poll-ctx WORKQUEUE pid 0 comm [ib_core]
drv_state 2 drv_ceqn 0 drv_cqn 0 drv_hopnum 1 drv_pi 0 drv_ci 0
drv_coalesce 0 drv_period 0 drv_cnt 0

Signed-off-by: Lijun Ou <oulijun@huawei.com>
---
v8->v9:
1. Resolve the merge conflict caused by the recently applied series
   "[PATCH V2 for-next 0/8] Hns updates".
v7->v8:
1. Remove the prefix from all cq context fields in the driver, because
   the prefix is already added by rdmatool.
2. Resolve the merge conflict caused by other applied patches.
v6->v7:
1. Add a prefix to all cq context fields.
2. Remove unnecessary commit information.
v5->v6:
1. Resolve the merge conflict.
2. Add a prefix to some common cq context fields.
v4->v5:
1. Rename the cqe and maxcnt fields in the driver output.
v3->v4:
1. Print values at the granularity of parameterable fields.
v2->v3:
1. Modify the output message.
v1->v2:
1. Fix the warning about the SPDX tag.
2. Add the rdmatool output to the commit message.
3. Modify the function pointer used to register res.
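
For reviewers unfamiliar with the restrack driver-attribute plumbing: the
drv_* pairs in the sample output above are name/value attributes that the
driver nests under RDMA_NLDEV_ATTR_DRIVER via rdma_nl_put_driver_u32(), and
rdmatool prepends the "drv_" prefix when printing them (which is why the
driver-side names carry no prefix, per the v7->v8 note). A minimal
illustrative sketch; the wrapper function below is hypothetical and not part
of this patch, only the helper and attribute names are the in-kernel ones:

#include <linux/skbuff.h>
#include <rdma/restrack.h>

/* Illustrative only: emit one driver-specific u32 ("state") so that
 * rdmatool reports it as "drv_state <value>". The caller is expected to
 * have opened the RDMA_NLDEV_ATTR_DRIVER nest with nla_nest_start().
 */
static int example_fill_drv_state(struct sk_buff *msg, u32 state)
{
	if (rdma_nl_put_driver_u32(msg, "state", state))
		return -EMSGSIZE;

	return 0;
}
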
---
 drivers/infiniband/hw/hns/Makefile             |   4 +-
 drivers/infiniband/hw/hns/hns_roce_cmd.h       |   1 +
 drivers/infiniband/hw/hns/hns_roce_device.h    |   8 ++
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c     |  11 ++-
 drivers/infiniband/hw/hns/hns_roce_hw_v2.h     |   8 ++
 drivers/infiniband/hw/hns/hns_roce_hw_v2_dfx.c |  36 +++++++
 drivers/infiniband/hw/hns/hns_roce_restrack.c  | 124 +++++++++++++++++++++++++
 7 files changed, 187 insertions(+), 5 deletions(-)
 create mode 100644 drivers/infiniband/hw/hns/hns_roce_hw_v2_dfx.c
 create mode 100644 drivers/infiniband/hw/hns/hns_roce_restrack.c

diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile
index e2a7f14..eee5205 100644
--- a/drivers/infiniband/hw/hns/Makefile
+++ b/drivers/infiniband/hw/hns/Makefile
@@ -7,8 +7,8 @@ ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3
 obj-$(CONFIG_INFINIBAND_HNS) += hns-roce.o
 hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
 	hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
-	hns_roce_cq.o hns_roce_alloc.o hns_roce_db.o hns_roce_srq.o
+	hns_roce_cq.o hns_roce_alloc.o hns_roce_db.o hns_roce_srq.o hns_roce_restrack.o
 obj-$(CONFIG_INFINIBAND_HNS_HIP06) += hns-roce-hw-v1.o
 hns-roce-hw-v1-objs := hns_roce_hw_v1.o
 obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns-roce-hw-v2.o
-hns-roce-hw-v2-objs := hns_roce_hw_v2.o
+hns-roce-hw-v2-objs := hns_roce_hw_v2.o hns_roce_hw_v2_dfx.o
diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.h b/drivers/infiniband/hw/hns/hns_roce_cmd.h
index 059fd1d..2b6ac64 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cmd.h
+++ b/drivers/infiniband/hw/hns/hns_roce_cmd.h
@@ -53,6 +53,7 @@ enum {
 	HNS_ROCE_CMD_QUERY_QPC		= 0x42,
 
 	HNS_ROCE_CMD_MODIFY_CQC	= 0x52,
+	HNS_ROCE_CMD_QUERY_CQC		= 0x53,
 	/* CQC BT commands */
 	HNS_ROCE_CMD_WRITE_CQC_BT0	= 0x10,
 	HNS_ROCE_CMD_WRITE_CQC_BT1	= 0x11,
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 9ee86da..d7872ef 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -869,6 +869,11 @@ struct hns_roce_work {
 	int sub_type;
 };
 
+struct hns_roce_dfx_hw {
+	int (*query_cqc_info)(struct hns_roce_dev *hr_dev, u32 cqn,
+			      int *buffer);
+};
+
 struct hns_roce_hw {
 	int (*reset)(struct hns_roce_dev *hr_dev, bool enable);
 	int (*cmq_init)(struct hns_roce_dev *hr_dev);
@@ -985,6 +990,7 @@ struct hns_roce_dev {
 	const struct hns_roce_hw *hw;
 	void *priv;
 	struct workqueue_struct *irq_workq;
+	const struct hns_roce_dfx_hw *dfx;
 };
 
 static inline struct hns_roce_dev *to_hr_dev(struct ib_device *ib_dev)
@@ -1202,4 +1208,6 @@ int hns_roce_alloc_db(struct hns_roce_dev *hr_dev, struct hns_roce_db *db,
 int hns_roce_init(struct hns_roce_dev *hr_dev);
 void hns_roce_exit(struct hns_roce_dev *hr_dev);
 
+int hns_roce_fill_res_entry(struct sk_buff *msg,
+			    struct rdma_restrack_entry *res);
 #endif /* _HNS_ROCE_DEVICE_H */
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 14e8945..69a2723 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -953,9 +953,9 @@ static void hns_roce_v2_cmq_exit(struct hns_roce_dev *hr_dev)
 	hns_roce_free_cmq_desc(hr_dev, &priv->cmq.crq);
 }
 
-static void hns_roce_cmq_setup_basic_desc(struct hns_roce_cmq_desc *desc,
-					   enum hns_roce_opcode_type opcode,
-					   bool is_read)
+void hns_roce_cmq_setup_basic_desc(struct hns_roce_cmq_desc *desc,
+				   enum hns_roce_opcode_type opcode,
+				   bool is_read)
 {
 	memset((void *)desc, 0, sizeof(struct hns_roce_cmq_desc));
 	desc->opcode = cpu_to_le16(opcode);
@@ -6065,6 +6065,10 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
 	return ret;
 }
 
+static const struct hns_roce_dfx_hw hns_roce_dfx_hw_v2 = {
+	.query_cqc_info = hns_roce_v2_query_cqc_info,
+};
+
 static const struct ib_device_ops hns_roce_v2_dev_ops = {
 	.destroy_qp = hns_roce_v2_destroy_qp,
 	.modify_cq = hns_roce_v2_modify_cq,
@@ -6137,6 +6141,7 @@ static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
 	int i;
 
 	hr_dev->hw = &hns_roce_hw_v2;
+	hr_dev->dfx = &hns_roce_dfx_hw_v2;
 	hr_dev->sdb_offset = ROCEE_DB_SQ_L_0_REG;
 	hr_dev->odb_offset = hr_dev->sdb_offset;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index 1136763..20cf2961e 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -1799,6 +1799,14 @@ struct hns_roce_sccc_clr_done {
 	__le32 rsv[5];
 };
 
+int hns_roce_v2_query_cqc_info(struct hns_roce_dev *hr_dev, u32 cqn,
+			       int *buffer);
+void hns_roce_cmq_setup_basic_desc(struct hns_roce_cmq_desc *desc,
+				   enum hns_roce_opcode_type opcode,
+				   bool is_read);
+int hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+		      struct hns_roce_cmq_desc *desc, int num);
+
 static inline void hns_roce_write64(struct hns_roce_dev *hr_dev, __le32 val[2],
 				    void __iomem *dest)
 {
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2_dfx.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2_dfx.c
new file mode 100644
index 0000000..76cdbc9
--- /dev/null
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2_dfx.c
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+// Copyright (c) 2019 Hisilicon Limited.
+
+#include "hnae3.h"
+#include "hns_roce_device.h"
+#include "hns_roce_cmd.h"
+#include "hns_roce_hw_v2.h"
+
+int hns_roce_v2_query_cqc_info(struct hns_roce_dev *hr_dev, u32 cqn,
+			       int *buffer)
+{
+	struct hns_roce_v2_cq_context *cq_context;
+	struct hns_roce_cmd_mailbox *mailbox;
+	int ret;
+
+	mailbox = hns_roce_alloc_cmd_mailbox(hr_dev);
+	if (IS_ERR(mailbox))
+		return PTR_ERR(mailbox);
+
+	cq_context = mailbox->buf;
+	ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma,
+				cqn, 0,
+				HNS_ROCE_CMD_QUERY_CQC,
+				HNS_ROCE_CMD_TIMEOUT_MSECS);
+	if (ret) {
+		dev_err(hr_dev->dev, "QUERY cqc cmd process error\n");
+		goto err_mailbox;
+	}
+
+	memcpy(buffer, cq_context, sizeof(*cq_context));
+
+err_mailbox:
+	hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+
+	return ret;
+}
diff --git a/drivers/infiniband/hw/hns/hns_roce_restrack.c b/drivers/infiniband/hw/hns/hns_roce_restrack.c
new file mode 100644
index 0000000..246203f
--- /dev/null
+++ b/drivers/infiniband/hw/hns/hns_roce_restrack.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+// Copyright (c) 2019 Hisilicon Limited.
+
+#include <rdma/rdma_cm.h>
+#include <rdma/restrack.h>
+#include <uapi/rdma/rdma_netlink.h>
+#include "hnae3.h"
+#include "hns_roce_common.h"
+#include "hns_roce_device.h"
+#include "hns_roce_hw_v2.h"
+
+static int hns_roce_fill_cq(struct sk_buff *msg,
+			    struct hns_roce_v2_cq_context *context)
+{
+	if (rdma_nl_put_driver_u32(msg, "state",
+				   roce_get_field(context->byte_4_pg_ceqn,
+						  V2_CQC_BYTE_4_ARM_ST_M,
+						  V2_CQC_BYTE_4_ARM_ST_S)))
+		goto err;
+
+	if (rdma_nl_put_driver_u32(msg, "ceqn",
+				   roce_get_field(context->byte_4_pg_ceqn,
+						  V2_CQC_BYTE_4_CEQN_M,
+						  V2_CQC_BYTE_4_CEQN_S)))
+		goto err;
+
+	if (rdma_nl_put_driver_u32(msg, "cqn",
+				   roce_get_field(context->byte_8_cqn,
+						  V2_CQC_BYTE_8_CQN_M,
+						  V2_CQC_BYTE_8_CQN_S)))
+		goto err;
+
+	if (rdma_nl_put_driver_u32(msg, "hopnum",
+				   roce_get_field(context->byte_16_hop_addr,
+						  V2_CQC_BYTE_16_CQE_HOP_NUM_M,
+						  V2_CQC_BYTE_16_CQE_HOP_NUM_S)))
+		goto err;
+
+	if (rdma_nl_put_driver_u32(msg, "pi",
+				   roce_get_field(context->byte_28_cq_pi,
+						  V2_CQC_BYTE_28_CQ_PRODUCER_IDX_M,
+						  V2_CQC_BYTE_28_CQ_PRODUCER_IDX_S)))
+		goto err;
+
+	if (rdma_nl_put_driver_u32(msg, "ci",
+				   roce_get_field(context->byte_32_cq_ci,
+						  V2_CQC_BYTE_32_CQ_CONSUMER_IDX_M,
+						  V2_CQC_BYTE_32_CQ_CONSUMER_IDX_S)))
+		goto err;
+
+	if (rdma_nl_put_driver_u32(msg, "coalesce",
+				   roce_get_field(
+					context->byte_56_cqe_period_maxcnt,
+					V2_CQC_BYTE_56_CQ_MAX_CNT_M,
+					V2_CQC_BYTE_56_CQ_MAX_CNT_S)))
+		goto err;
+
+	if (rdma_nl_put_driver_u32(msg, "period",
+				   roce_get_field(
+					context->byte_56_cqe_period_maxcnt,
+					V2_CQC_BYTE_56_CQ_PERIOD_M,
+					V2_CQC_BYTE_56_CQ_PERIOD_S)))
+		goto err;
+
+	if (rdma_nl_put_driver_u32(msg, "cnt",
+				   roce_get_field(context->byte_52_cqe_cnt,
+						  V2_CQC_BYTE_52_CQE_CNT_M,
+						  V2_CQC_BYTE_52_CQE_CNT_S)))
+		goto err;
+
+	return 0;
+
+err:
+	return -EMSGSIZE;
+}
+
+static int hns_roce_fill_res_cq_entry(struct sk_buff *msg,
+				      struct rdma_restrack_entry *res)
+{
+	struct ib_cq *ib_cq = container_of(res, struct ib_cq, res);
+	struct hns_roce_dev *hr_dev = to_hr_dev(ib_cq->device);
+	struct hns_roce_cq *hr_cq = to_hr_cq(ib_cq);
+	struct hns_roce_v2_cq_context *context;
+	struct nlattr *table_attr;
+	int ret;
+
+	if (!hr_dev->dfx->query_cqc_info)
+		return -EINVAL;
+
+	context = kzalloc(sizeof(struct hns_roce_v2_cq_context), GFP_KERNEL);
+	if (!context)
+		return -ENOMEM;
+
+	ret = hr_dev->dfx->query_cqc_info(hr_dev, hr_cq->cqn, (int *)context);
+	if (ret)
+		goto err;
+
+	table_attr = nla_nest_start(msg, RDMA_NLDEV_ATTR_DRIVER);
+	if (!table_attr)
+		goto err;
+
+	if (hns_roce_fill_cq(msg, context))
+		goto err_cancel_table;
+
+	nla_nest_end(msg, table_attr);
+	kfree(context);
+
+	return 0;
+
+err_cancel_table:
+	nla_nest_cancel(msg, table_attr);
+err:
+	kfree(context);
+	return -EMSGSIZE;
+}
+
+int hns_roce_fill_res_entry(struct sk_buff *msg,
+			    struct rdma_restrack_entry *res)
+{
+	if (res->type == RDMA_RESTRACK_CQ)
+		return hns_roce_fill_res_cq_entry(msg, res);
+
+	return 0;
+}
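
Note for context, not part of the diff above: hns_roce_fill_res_entry() only
becomes reachable once it is hooked into the IB core's restrack dump path.
In kernels of this vintage that is done through the .fill_res_entry member of
struct ib_device_ops, registered with ib_set_device_ops(); the corresponding
hns_roce_main.c hunk is not included in this excerpt. A minimal sketch of what
such wiring typically looks like (the ops-struct name below is hypothetical):

/* Illustrative sketch only; the hns driver's actual ops struct lives in
 * hns_roce_main.c and may carry other callbacks as well.
 */
static const struct ib_device_ops hns_roce_dev_restrack_ops = {
	.fill_res_entry = hns_roce_fill_res_entry,
};

/* ...and during device registration, e.g. in hns_roce_register_device(): */
ib_set_device_ops(&hr_dev->ib_dev, &hns_roce_dev_restrack_ops);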