From patchwork Mon Jun 21 07:06:14 2021
X-Patchwork-Id: 12333915
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Lior Nahmanson, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
    netdev@vger.kernel.org, Saeed Mahameed
Subject: [PATCH mlx5-next v1 1/3] net/mlx5: Add DCS caps & fields support
Date: Mon, 21 Jun 2021 10:06:14 +0300
Message-Id: <55e1d69bef1fbfa5cf195c0bfcbe35c8019de35e.1624258894.git.leonro@nvidia.com>

From: Lior Nahmanson

These fields will be needed when adding support for DCS offload:

max_dci_stream_channels - maximum DCI stream channels supported per DCI.
max_dci_errored_streams - maximum DCI error stream channels supported
per DCI before the DCI moves to the error state.
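[Editor's note: for orientation, the sketch below shows how a driver-side
consumer could read the new capability bits once this patch is applied.
MLX5_CAP_GEN() is the existing mlx5 capability accessor; the
report_dcs_caps() helper, its pr_debug() output, and its call site are
illustrative assumptions, not part of this series.]

#include <linux/mlx5/driver.h>

/*
 * Illustrative sketch only: read the DCS capability bits added below
 * through the standard MLX5_CAP_GEN() accessor. A value of 0 in
 * log_max_dci_stream_channels means the device offers no DCS support.
 */
static void report_dcs_caps(struct mlx5_core_dev *mdev)
{
	u8 log_channels = MLX5_CAP_GEN(mdev, log_max_dci_stream_channels);
	u8 log_errored = MLX5_CAP_GEN(mdev, log_max_dci_errored_streams);

	if (!log_channels)
		return;

	/* The caps are log2 values, so 1 << log is the per-DCI limit. */
	pr_debug("DCS: up to %u stream channels, %u errored streams per DCI\n",
		 1 << log_channels, 1 << log_errored);
}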
Signed-off-by: Lior Nahmanson
Signed-off-by: Leon Romanovsky
---
 include/linux/mlx5/mlx5_ifc.h | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 6d16eed6850e..48b2529451eb 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1655,7 +1655,13 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8         max_geneve_tlv_option_data_len[0x5];
 	u8         reserved_at_570[0x10];
 
-	u8         reserved_at_580[0x33];
+	u8         reserved_at_580[0xb];
+	u8         log_max_dci_stream_channels[0x5];
+	u8         reserved_at_590[0x3];
+	u8         log_max_dci_errored_streams[0x5];
+	u8         reserved_at_598[0x8];
+
+	u8         reserved_at_5a0[0x13];
 	u8         log_max_dek[0x5];
 	u8         reserved_at_5b8[0x4];
 	u8         mini_cqe_resp_stride_index[0x1];
@@ -3013,10 +3019,12 @@ struct mlx5_ifc_qpc_bits {
 	u8         reserved_at_3c0[0x8];
 	u8         next_send_psn[0x18];
 
-	u8         reserved_at_3e0[0x8];
+	u8         reserved_at_3e0[0x3];
+	u8         log_num_dci_stream_channels[0x5];
 	u8         cqn_snd[0x18];
 
-	u8         reserved_at_400[0x8];
+	u8         reserved_at_400[0x3];
+	u8         log_num_dci_errored_streams[0x5];
 	u8         deth_sqpn[0x18];
 
 	u8         reserved_at_420[0x20];
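[Editor's note on the first hunk above: mlx5_ifc reserved-field names
encode the bit offset where each gap starts, so the new fields plus the
remaining gaps must cover the old reserved_at_580[0x33] region exactly,
with log_max_dek still starting at bit 0x5b3. The standalone sketch
below checks that arithmetic; it is plain C with no kernel
dependencies, and the enum names are invented for the check.]

#include <assert.h>

/* Bit widths of the fields that replace reserved_at_580[0x33]. */
enum {
	RSVD_580 = 0xb,		/* reserved_at_580 */
	CHANNELS = 0x5,		/* log_max_dci_stream_channels, ends at 0x590 */
	RSVD_590 = 0x3,		/* reserved_at_590 */
	ERRORED  = 0x5,		/* log_max_dci_errored_streams */
	RSVD_598 = 0x8,		/* reserved_at_598, pads to 0x5a0 */
	RSVD_5A0 = 0x13,	/* reserved_at_5a0, runs up to 0x5b3 */
};

int main(void)
{
	/* The replacement fields must sum to the old 0x33-bit gap... */
	assert(RSVD_580 + CHANNELS + RSVD_590 + ERRORED + RSVD_598 +
	       RSVD_5A0 == 0x33);
	/* ...so log_max_dek still starts at bit 0x580 + 0x33 = 0x5b3. */
	assert(0x580 + 0x33 == 0x5b3);
	return 0;
}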
From patchwork Mon Jun 21 07:06:15 2021
X-Patchwork-Id: 12333919
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Lior Nahmanson, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
    Meir Lichtinger, netdev@vger.kernel.org, Saeed Mahameed
Subject: [PATCH rdma-next v1 2/3] RDMA/mlx5: Separate DCI QP creation logic
Date: Mon, 21 Jun 2021 10:06:15 +0300

From: Lior Nahmanson

This patch isolates the DCI QP creation logic into a separate function,
so that new features can be added to DCI QPs with less complexity and
without interfering with the other QP types. The code was copied from
create_user_qp(), taking only the DCI-relevant bits.

Reviewed-by: Meir Lichtinger
Signed-off-by: Lior Nahmanson
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/qp.c | 157 ++++++++++++++++++++++++++++++++
 1 file changed, 157 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 7a5f1eba60e3..65a380543f5a 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1974,6 +1974,160 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	return 0;
 }
 
+static int create_dci(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+		      struct mlx5_ib_qp *qp,
+		      struct mlx5_create_qp_params *params)
+{
+	struct ib_qp_init_attr *init_attr = params->attr;
+	struct mlx5_ib_create_qp *ucmd = params->ucmd;
+	u32 out[MLX5_ST_SZ_DW(create_qp_out)] = {};
+	struct ib_udata *udata = params->udata;
+	u32 uidx = params->uidx;
+	struct mlx5_ib_resources *devr = &dev->devr;
+	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
+	struct mlx5_core_dev *mdev = dev->mdev;
+	struct mlx5_ib_cq *send_cq;
+	struct mlx5_ib_cq *recv_cq;
+	unsigned long flags;
+	struct mlx5_ib_qp_base *base;
+	int ts_format;
+	int mlx5_st;
+	void *qpc;
+	u32 *in;
+	int err;
+
+	spin_lock_init(&qp->sq.lock);
+	spin_lock_init(&qp->rq.lock);
+
+	mlx5_st = to_mlx5_st(qp->type);
+	if (mlx5_st < 0)
+		return -EINVAL;
+
+	if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
+		qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
+
+	base = &qp->trans_qp.base;
+
+	qp->has_rq = qp_has_rq(init_attr);
+	err = set_rq_size(dev, &init_attr->cap, qp->has_rq, qp, ucmd);
+	if (err) {
+		mlx5_ib_dbg(dev, "err %d\n", err);
+		return err;
+	}
+
+	if (ucmd->rq_wqe_shift != qp->rq.wqe_shift ||
+	    ucmd->rq_wqe_count != qp->rq.wqe_cnt)
+		return -EINVAL;
+
+	if (ucmd->sq_wqe_count > (1 << MLX5_CAP_GEN(mdev, log_max_qp_sz)))
+		return -EINVAL;
+
+	ts_format = get_qp_ts_format(dev, to_mcq(init_attr->send_cq),
+				     to_mcq(init_attr->recv_cq));
+
+	if (ts_format < 0)
+		return ts_format;
+
+	err = _create_user_qp(dev, pd, qp, udata, init_attr, &in, &params->resp,
+			      &inlen, base, ucmd);
+	if (err)
+		return err;
+
+	if (MLX5_CAP_GEN(mdev, ece_support))
+		MLX5_SET(create_qp_in, in, ece, ucmd->ece_options);
+	qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
+
+	MLX5_SET(qpc, qpc, st, mlx5_st);
+	MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
+	MLX5_SET(qpc, qpc, pd, to_mpd(pd)->pdn);
+
+	if (qp->flags_en & MLX5_QP_FLAG_SIGNATURE)
+		MLX5_SET(qpc, qpc, wq_signature, 1);
+
+	if (qp->flags & IB_QP_CREATE_CROSS_CHANNEL)
+		MLX5_SET(qpc, qpc, cd_master, 1);
+	if (qp->flags & IB_QP_CREATE_MANAGED_SEND)
+		MLX5_SET(qpc, qpc, cd_slave_send, 1);
+	if (qp->flags_en & MLX5_QP_FLAG_SCATTER_CQE)
+		configure_requester_scat_cqe(dev, qp, init_attr, qpc);
+
+	if (qp->rq.wqe_cnt) {
+		MLX5_SET(qpc, qpc, log_rq_stride, qp->rq.wqe_shift - 4);
+		MLX5_SET(qpc, qpc, log_rq_size, ilog2(qp->rq.wqe_cnt));
+	}
+
+	MLX5_SET(qpc, qpc, ts_format, ts_format);
+	MLX5_SET(qpc, qpc, rq_type, get_rx_type(qp, init_attr));
+
+	MLX5_SET(qpc, qpc, log_sq_size, ilog2(qp->sq.wqe_cnt));
+
+	/* Set default resources */
+	if (init_attr->srq) {
+		MLX5_SET(qpc, qpc, xrcd, devr->xrcdn0);
+		MLX5_SET(qpc, qpc, srqn_rmpn_xrqn,
+			 to_msrq(init_attr->srq)->msrq.srqn);
+	} else {
+		MLX5_SET(qpc, qpc, xrcd, devr->xrcdn1);
+		MLX5_SET(qpc, qpc, srqn_rmpn_xrqn,
+			 to_msrq(devr->s1)->msrq.srqn);
+	}
+
+	if (init_attr->send_cq)
+		MLX5_SET(qpc, qpc, cqn_snd,
+			 to_mcq(init_attr->send_cq)->mcq.cqn);
+
+	if (init_attr->recv_cq)
+		MLX5_SET(qpc, qpc, cqn_rcv,
+			 to_mcq(init_attr->recv_cq)->mcq.cqn);
+
+	MLX5_SET64(qpc, qpc, dbr_addr, qp->db.dma);
+
+	/* 0xffffff means we ask to work with cqe version 0 */
+	if (MLX5_CAP_GEN(mdev, cqe_version) == MLX5_CQE_VERSION_V1)
+		MLX5_SET(qpc, qpc, user_index, uidx);
+
+	if (qp->flags & IB_QP_CREATE_PCI_WRITE_END_PADDING) {
+		MLX5_SET(qpc, qpc, end_padding_mode,
+			 MLX5_WQ_END_PAD_MODE_ALIGN);
+		/* Special case to clean flag */
+		qp->flags &= ~IB_QP_CREATE_PCI_WRITE_END_PADDING;
+	}
+
+	err = mlx5_qpc_create_qp(dev, &base->mqp, in, inlen, out);
+
+	kvfree(in);
+	if (err)
+		goto err_create;
+
+	base->container_mibqp = qp;
+	base->mqp.event = mlx5_ib_qp_event;
+	if (MLX5_CAP_GEN(mdev, ece_support))
+		params->resp.ece_options = MLX5_GET(create_qp_out, out, ece);
+
+	get_cqs(qp->type, init_attr->send_cq, init_attr->recv_cq,
+		&send_cq, &recv_cq);
+	spin_lock_irqsave(&dev->reset_flow_resource_lock, flags);
+	mlx5_ib_lock_cqs(send_cq, recv_cq);
+	/* Maintain device to QPs access, needed for further handling via reset
+	 * flow
+	 */
+	list_add_tail(&qp->qps_list, &dev->qp_list);
+	/* Maintain CQ to QPs access, needed for further handling via reset flow
+	 */
+	if (send_cq)
+		list_add_tail(&qp->cq_send_list, &send_cq->list_send_qp);
+	if (recv_cq)
+		list_add_tail(&qp->cq_recv_list, &recv_cq->list_recv_qp);
+	mlx5_ib_unlock_cqs(send_cq, recv_cq);
+	spin_unlock_irqrestore(&dev->reset_flow_resource_lock, flags);
+
+	return 0;
+
+err_create:
+	destroy_qp(dev, qp, base, udata);
+	return err;
+}
+
 static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			  struct mlx5_ib_qp *qp,
 			  struct mlx5_create_qp_params *params)
@@ -2840,6 +2994,9 @@ static int create_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	case MLX5_IB_QPT_DCT:
 		err = create_dct(dev, pd, qp, params);
 		break;
+	case MLX5_IB_QPT_DCI:
+		err = create_dci(dev, pd, qp, params);
+		break;
 	case IB_QPT_XRC_TGT:
 		err = create_xrc_tgt_qp(dev, qp, params);
 		break;
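[Editor's note: with this patch, create_qp() has one dedicated
constructor per QP type, and DCI joins DCT and XRC_TGT. The toy model
below shows the dispatch shape the refactor aims for; it is standalone
C, not driver code, and every name in it is invented for illustration.]

#include <errno.h>
#include <stdio.h>

enum qp_type { QPT_DCI, QPT_DCT, QPT_XRC_TGT };

/* One constructor per type keeps type-specific logic isolated. */
static int create_dci_stub(void)     { puts("DCI path"); return 0; }
static int create_dct_stub(void)     { puts("DCT path"); return 0; }
static int create_xrc_tgt_stub(void) { puts("XRC_TGT path"); return 0; }

static int create_qp_stub(enum qp_type type)
{
	switch (type) {
	case QPT_DCI:
		return create_dci_stub();
	case QPT_DCT:
		return create_dct_stub();
	case QPT_XRC_TGT:
		return create_xrc_tgt_stub();
	default:
		return -EINVAL;
	}
}

int main(void)
{
	/* New DCI features now only touch the DCI branch. */
	return create_qp_stub(QPT_DCI);
}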
From patchwork Mon Jun 21 07:06:16 2021
X-Patchwork-Id: 12333917
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Lior Nahmanson, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
    Meir Lichtinger, netdev@vger.kernel.org, Saeed Mahameed
Subject: [PATCH rdma-next v1 3/3] RDMA/mlx5: Add DCS offload support
Date: Mon, 21 Jun 2021 10:06:16 +0300
Message-Id: <491c2c2afdb5b07de7f03eab3f93cf0704549dbc.1624258894.git.leonro@nvidia.com>

From: Lior Nahmanson

DCS is an offload of the software load balancing of DC initiator work
requests. A single DCI can be connected to only one target at a time
and can't start a new connection until the previous work request
completes. This limitation causes delays when the initiator process
needs to transfer data to multiple targets at the same time. The
software workaround is a process that handles the work requests and
spreads them across many DCIs according to their destinations.

This feature offloads that process, reducing the CPU load and improving
performance.

Reviewed-by: Meir Lichtinger
Signed-off-by: Lior Nahmanson
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/main.c | 10 ++++++++++
 drivers/infiniband/hw/mlx5/qp.c   | 11 +++++++++++
 include/uapi/rdma/mlx5-abi.h      | 17 +++++++++++++++--
 3 files changed, 36 insertions(+), 2 deletions(-)
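[Editor's note: in terms of the uapi changes below, the contract is
that user space first reads dci_streams_caps from the extended
query-device response, then sets MLX5_QP_FLAG_DCI_STREAM together with
the requested log2 sizes in the create-QP command. The sketch below is
a guess at a provider-side helper (in practice rdma-core's mlx5
provider fills struct mlx5_ib_create_qp); the helper name and the
clamping policy are assumptions, not part of this series.]

#include <rdma/mlx5-abi.h>

/*
 * Hypothetical provider-side helper: request DCI stream channels,
 * clamped to the limits reported by the extended query_device response.
 */
static void request_dci_streams(struct mlx5_ib_create_qp *cmd,
				const struct mlx5_ib_dci_streams_caps *caps,
				__u8 log_concurrent, __u8 log_errored)
{
	if (!caps->max_log_num_concurent)
		return; /* no DCS support; leave the flag clear */

	cmd->flags |= MLX5_QP_FLAG_TYPE_DCI | MLX5_QP_FLAG_DCI_STREAM;
	cmd->dci_streams.log_num_concurent =
		log_concurrent < caps->max_log_num_concurent ?
		log_concurrent : caps->max_log_num_concurent;
	cmd->dci_streams.log_num_errored =
		log_errored < caps->max_log_num_errored ?
		log_errored : caps->max_log_num_errored;
}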
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index b145e477e3ba..c12517b63a8d 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1174,6 +1174,16 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
 			MLX5_IB_TUNNELED_OFFLOADS_MPLS_UDP;
 	}
 
+	if (offsetofend(typeof(resp), dci_streams_caps) <= uhw_outlen) {
+		resp.response_length += sizeof(resp.dci_streams_caps);
+
+		resp.dci_streams_caps.max_log_num_concurent =
+			MLX5_CAP_GEN(mdev, log_max_dci_stream_channels);
+
+		resp.dci_streams_caps.max_log_num_errored =
+			MLX5_CAP_GEN(mdev, log_max_dci_errored_streams);
+	}
+
 	if (uhw_outlen) {
 		err = ib_copy_to_udata(uhw, &resp, resp.response_length);

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 65a380543f5a..7b545eac37a3 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2056,6 +2056,13 @@ static int create_dci(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		MLX5_SET(qpc, qpc, log_rq_size, ilog2(qp->rq.wqe_cnt));
 	}
 
+	if (qp->flags_en & MLX5_QP_FLAG_DCI_STREAM) {
+		MLX5_SET(qpc, qpc, log_num_dci_stream_channels,
+			 ucmd->dci_streams.log_num_concurent);
+		MLX5_SET(qpc, qpc, log_num_dci_errored_streams,
+			 ucmd->dci_streams.log_num_errored);
+	}
+
 	MLX5_SET(qpc, qpc, ts_format, ts_format);
 	MLX5_SET(qpc, qpc, rq_type, get_rx_type(qp, init_attr));
@@ -2799,6 +2806,10 @@ static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_TYPE_DCI, true, qp);
 	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_TYPE_DCT, true, qp);
 
+	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_DCI_STREAM,
+			    MLX5_CAP_GEN(mdev, log_max_dci_stream_channels) &&
+			    MLX5_CAP_GEN(mdev, log_max_dci_errored_streams),
+			    qp);
 	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_SIGNATURE, true, qp);
 	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_SCATTER_CQE,

diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index 995faf8f44bd..6f54ab3d99e5 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -50,6 +50,7 @@ enum {
 	MLX5_QP_FLAG_ALLOW_SCATTER_CQE	= 1 << 8,
 	MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE	= 1 << 9,
 	MLX5_QP_FLAG_UAR_PAGE_INDEX = 1 << 10,
+	MLX5_QP_FLAG_DCI_STREAM	= 1 << 11,
 };
 
 enum {
@@ -237,6 +238,11 @@ struct mlx5_ib_striding_rq_caps {
 	__u32 reserved;
 };
 
+struct mlx5_ib_dci_streams_caps {
+	__u8 max_log_num_concurent;
+	__u8 max_log_num_errored;
+};
+
 enum mlx5_ib_query_dev_resp_flags {
 	/* Support 128B CQE compression */
 	MLX5_IB_QUERY_DEV_RESP_FLAGS_CQE_128B_COMP = 1 << 0,
@@ -265,7 +271,8 @@ struct mlx5_ib_query_device_resp {
 	struct mlx5_ib_sw_parsing_caps sw_parsing_caps;
 	struct mlx5_ib_striding_rq_caps striding_rq_caps;
 	__u32	tunnel_offloads_caps; /* enum mlx5_ib_tunnel_offloads */
-	__u32	reserved;
+	struct mlx5_ib_dci_streams_caps dci_streams_caps;
+	__u16 reserved;
 };
 
 enum mlx5_ib_create_cq_flags {
@@ -311,6 +318,11 @@ struct mlx5_ib_create_srq_resp {
 	__u32	reserved;
 };
 
+struct mlx5_ib_create_qp_dci_streams {
+	__u8 log_num_concurent;
+	__u8 log_num_errored;
+};
+
 struct mlx5_ib_create_qp {
 	__aligned_u64 buf_addr;
 	__aligned_u64 db_addr;
@@ -325,7 +337,8 @@ struct mlx5_ib_create_qp {
 		__aligned_u64 access_key;
 	};
 	__u32 ece_options;
-	__u32 reserved;
+	struct mlx5_ib_create_qp_dci_streams dci_streams;
+	__u16 reserved;
 };
 
 /* RX Hash function flags */
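[Editor's note: the main.c hunk guards the new response member with
offsetofend(), the established pattern for growing a response struct
without breaking old callers that pass a shorter output buffer; this is
what lets dci_streams_caps take over the old reserved field. The
standalone demo below illustrates the mechanism; the struct and values
are invented.]

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define offsetofend(TYPE, MEMBER) \
	(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

struct resp {
	unsigned int tunnel_offloads_caps; /* old field */
	unsigned short dci_streams_caps;   /* new field, appended */
};

/* Only fill the new member if the caller's buffer can hold it. */
static size_t fill_resp(struct resp *r, size_t outlen)
{
	size_t response_length = offsetofend(struct resp, tunnel_offloads_caps);

	memset(r, 0, sizeof(*r));
	r->tunnel_offloads_caps = 0x1;

	if (offsetofend(struct resp, dci_streams_caps) <= outlen) {
		r->dci_streams_caps = 0x2;
		response_length += sizeof(r->dci_streams_caps);
	}
	return response_length; /* bytes actually valid to copy out */
}

int main(void)
{
	struct resp r;

	/* An old caller passes a short buffer: the new field is skipped. */
	printf("old caller: %zu bytes\n", fill_resp(&r, sizeof(unsigned int)));
	/* A new caller passes the full struct and gets the new field too. */
	printf("new caller: %zu bytes\n", fill_resp(&r, sizeof(struct resp)));
	return 0;
}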