From patchwork Wed Aug 30 09:23:14 2017
X-Patchwork-Submitter: "Wei Hu (Xavier)"
X-Patchwork-Id: 9929067
From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
Subject: [PATCH for-next 16/20] RDMA/hns: Add support for processing send wr and receive wr
Date: Wed, 30 Aug 2017 17:23:14 +0800
Message-ID: <1504084998-64397-17-git-send-email-xavier.huwei@huawei.com>
In-Reply-To: <1504084998-64397-1-git-send-email-xavier.huwei@huawei.com>
References: <1504084998-64397-1-git-send-email-xavier.huwei@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org

This patch implements posting of send work requests and receive work
requests for the hip08 RoCE driver, i.e. the post send and post recv
verbs.
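For reference, a minimal consumer-side sketch (not part of this patch)
of how these verbs are reached through the standard ib_post_send() and
ib_post_recv() entry points; the qp, dma_addr and mr variables are
assumed to have been set up elsewhere:

	struct ib_sge sge = {
		.addr   = dma_addr,	/* DMA address of a registered buffer */
		.length = 64,
		.lkey   = mr->lkey,
	};
	struct ib_recv_wr rwr = {}, *bad_rwr;
	struct ib_send_wr swr = {}, *bad_swr;
	int ret;

	rwr.wr_id   = 1;
	rwr.sg_list = &sge;
	rwr.num_sge = 1;
	/* dispatched to hns_roce_v2_post_recv() via the hns_roce_hw hook */
	ret = ib_post_recv(qp, &rwr, &bad_rwr);

	swr.wr_id      = 2;
	swr.sg_list    = &sge;
	swr.num_sge    = 1;
	swr.opcode     = IB_WR_SEND;
	swr.send_flags = IB_SEND_SIGNALED;
	/* dispatched to hns_roce_v2_post_send() via the hns_roce_hw hook */
	ret = ib_post_send(qp, &swr, &bad_swr);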
Signed-off-by: Lijun Ou
Signed-off-by: Shaobo Xu
Signed-off-by: Wei Hu (Xavier)
---
 drivers/infiniband/hw/hns/hns_roce_cq.c     |   4 +-
 drivers/infiniband/hw/hns/hns_roce_device.h |   2 +
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c  |   4 +-
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 325 ++++++++++++++++++++++++++++
 drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |  72 ++++++
 drivers/infiniband/hw/hns/hns_roce_qp.c     |   5 +-
 6 files changed, 406 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index fa17dbc..88cdf6f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -351,8 +351,8 @@ struct ib_cq *hns_roce_ib_create_cq(struct ib_device *ib_dev,
 		}
 
 		uar = &hr_dev->priv_uar;
-		hr_cq->cq_db_l = hr_dev->reg_base + ROCEE_DB_OTHERS_L_0_REG +
-				 0x1000 * uar->index;
+		hr_cq->cq_db_l = hr_dev->reg_base + hr_dev->odb_offset +
+				 DB_REG_OFFSET * uar->index;
 	}
 
 	/* Allocate cq index, fill cq_context */
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 22f2079..b45dba5 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -643,6 +643,8 @@ struct hns_roce_dev {
 	int			cmd_mod;
 	int			loop_idc;
+	u32			sdb_offset;
+	u32			odb_offset;
 	dma_addr_t		tptr_dma_addr; /*only for hw v1*/
 	u32			tptr_size; /*only for hw v1*/
 	const struct hns_roce_hw *hw;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index 8ed0538..426f55a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -3221,7 +3221,7 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
 	if (ibqp->uobject) {
 		hr_qp->rq.db_reg_l = hr_dev->reg_base +
-				     ROCEE_DB_OTHERS_L_0_REG +
+				     hr_dev->odb_offset +
 				     DB_REG_OFFSET * hr_dev->priv_uar.index;
 	}
 
@@ -4084,6 +4084,8 @@ static int hns_roce_get_cfg(struct hns_roce_dev *hr_dev)
 	/* cmd issue mode: 0 is poll, 1 is event */
 	hr_dev->cmd_mod = 1;
 	hr_dev->loop_idc = 0;
+	hr_dev->sdb_offset = ROCEE_DB_SQ_L_0_REG;
+	hr_dev->odb_offset = ROCEE_DB_OTHERS_L_0_REG;
 
 	/* read the interrupt names from the DT or ACPI */
 	ret = device_property_read_string_array(dev, "interrupt-names",
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index f2425ee..b614177 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -43,6 +43,327 @@
 #include "hns_roce_hem.h"
 #include "hns_roce_hw_v2.h"
 
+static void set_data_seg_v2(struct hns_roce_v2_wqe_data_seg *dseg,
+			    struct ib_sge *sg)
+{
+	dseg->lkey = cpu_to_le32(sg->lkey);
+	dseg->addr = cpu_to_le64(sg->addr);
+	dseg->len = cpu_to_le32(sg->length);
+}
+
+static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+				 struct ib_send_wr **bad_wr)
+{
+	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+	struct hns_roce_v2_rc_send_wqe *rc_sq_wqe;
+	struct hns_roce_qp *qp = to_hr_qp(ibqp);
+	struct hns_roce_v2_wqe_data_seg *dseg;
+	struct device *dev = hr_dev->dev;
+	struct hns_roce_v2_db sq_db;
+	unsigned int sge_ind = 0;
+	unsigned int wqe_sz = 0;
+	unsigned long flags;
+	unsigned int ind;
+	void *wqe = NULL;
+	int ret = 0;
+	int nreq;
+	int i;
+
+	if (unlikely(ibqp->qp_type != IB_QPT_RC)) {
+		dev_err(dev, "Not supported QP(0x%x)type!\n", ibqp->qp_type);
+		*bad_wr = NULL;
+		return -EOPNOTSUPP;
+	}
+
+	if (unlikely(qp->state != IB_QPS_RTS &&
+		     qp->state != IB_QPS_SQD)) {
+		dev_err(dev, "Post WQE fail, QP state %d err!\n", qp->state);
+		*bad_wr = wr;
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(&qp->sq.lock, flags);
+	ind = qp->sq_next_wqe;
+	sge_ind = qp->next_sge;
+
+	for (nreq = 0; wr; ++nreq, wr = wr->next) {
+		if (hns_roce_wq_overflow(&qp->sq, nreq, qp->ibqp.send_cq)) {
+			ret = -ENOMEM;
+			*bad_wr = wr;
+			goto out;
+		}
+
+		if (unlikely(wr->num_sge > qp->sq.max_gs)) {
+			dev_err(dev, "num_sge=%d > qp->sq.max_gs=%d\n",
+				wr->num_sge, qp->sq.max_gs);
+			ret = -EINVAL;
+			*bad_wr = wr;
+			goto out;
+		}
+
+		wqe = get_send_wqe(qp, ind & (qp->sq.wqe_cnt - 1));
+		qp->sq.wrid[(qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1)] =
+								      wr->wr_id;
+
+		rc_sq_wqe = wqe;
+		memset(rc_sq_wqe, 0, sizeof(*rc_sq_wqe));
+		for (i = 0; i < wr->num_sge; i++)
+			rc_sq_wqe->msg_len += wr->sg_list[i].length;
+
+		rc_sq_wqe->inv_key_immtdata = send_ieth(wr);
+
+		switch (wr->opcode) {
+		case IB_WR_RDMA_READ:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_RDMA_READ);
+			rc_sq_wqe->rkey = cpu_to_le32(rdma_wr(wr)->rkey);
+			rc_sq_wqe->va = cpu_to_le64(rdma_wr(wr)->remote_addr);
+			break;
+		case IB_WR_RDMA_WRITE:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_RDMA_WRITE);
+			rc_sq_wqe->rkey = cpu_to_le32(rdma_wr(wr)->rkey);
+			rc_sq_wqe->va = cpu_to_le64(rdma_wr(wr)->remote_addr);
+			break;
+		case IB_WR_RDMA_WRITE_WITH_IMM:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_RDMA_WRITE_WITH_IMM);
+			rc_sq_wqe->rkey = cpu_to_le32(rdma_wr(wr)->rkey);
+			rc_sq_wqe->va = cpu_to_le64(rdma_wr(wr)->remote_addr);
+			break;
+		case IB_WR_SEND:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_SEND);
+			break;
+		case IB_WR_SEND_WITH_INV:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_SEND_WITH_INV);
+			break;
+		case IB_WR_SEND_WITH_IMM:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_SEND_WITH_IMM);
+			break;
+		case IB_WR_LOCAL_INV:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_LOCAL_INV);
+			break;
+		case IB_WR_ATOMIC_CMP_AND_SWP:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP);
+			break;
+		case IB_WR_ATOMIC_FETCH_AND_ADD:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD);
+			break;
+		case IB_WR_MASKED_ATOMIC_CMP_AND_SWP:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_ATOM_MSK_CMP_AND_SWAP);
+			break;
+		case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_ATOM_MSK_FETCH_AND_ADD);
+			break;
+		default:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_M,
+				       V2_RC_SEND_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_V2_WQE_OP_MASK);
+			break;
+		}
+
+		roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_CQE_S, 1);
+
+		wqe += sizeof(struct hns_roce_v2_rc_send_wqe);
+		dseg = wqe;
+		if (wr->send_flags &
+		    IB_SEND_INLINE && wr->num_sge) {
+			if (rc_sq_wqe->msg_len > hr_dev->caps.max_sq_inline) {
+				ret = -EINVAL;
+				*bad_wr = wr;
+				dev_err(dev, "inline len(1-%d)=%d, illegal\n",
+					hr_dev->caps.max_sq_inline,
+					rc_sq_wqe->msg_len);
+				goto out;
+			}
+
+			for (i = 0; i < wr->num_sge; i++) {
+				memcpy(wqe, ((void *)wr->sg_list[i].addr),
+				       wr->sg_list[i].length);
+				wqe += wr->sg_list[i].length;
+				wqe_sz += wr->sg_list[i].length;
+			}
+
+			roce_set_bit(rc_sq_wqe->byte_4,
+				     V2_RC_SEND_WQE_BYTE_4_INLINE_S, 1);
+		} else {
+			if (wr->num_sge <= 2) {
+				for (i = 0; i < wr->num_sge; i++)
+					set_data_seg_v2(dseg + i,
+							wr->sg_list + i);
+			} else {
+				roce_set_field(rc_sq_wqe->byte_20,
+					V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
+					V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
+					sge_ind & (qp->sge.sge_cnt - 1));
+
+				for (i = 0; i < 2; i++)
+					set_data_seg_v2(dseg + i,
+							wr->sg_list + i);
+
+				dseg = get_send_extend_sge(qp,
+					   sge_ind & (qp->sge.sge_cnt - 1));
+
+				for (i = 0; i < wr->num_sge - 2; i++) {
+					set_data_seg_v2(dseg + i,
+							wr->sg_list + 2 + i);
+					sge_ind++;
+				}
+			}
+
+			roce_set_field(rc_sq_wqe->byte_16,
+				       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
+				       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S,
+				       wr->num_sge);
+			wqe_sz += wr->num_sge *
+				  sizeof(struct hns_roce_v2_wqe_data_seg);
+		}
+		ind++;
+	}
+
+out:
+	if (likely(nreq)) {
+		qp->sq.head += nreq;
+		/* Memory barrier */
+		wmb();
+
+		sq_db.byte_4 = 0;
+		sq_db.parameter = 0;
+
+		roce_set_field(sq_db.byte_4, V2_DB_BYTE_4_TAG_M,
+			       V2_DB_BYTE_4_TAG_S, qp->doorbell_qpn);
+		roce_set_field(sq_db.byte_4, V2_DB_BYTE_4_CMD_M,
+			       V2_DB_BYTE_4_CMD_S, HNS_ROCE_V2_SQ_DB);
+		roce_set_field(sq_db.parameter, V2_DB_PARAMETER_CONS_IDX_M,
+			       V2_DB_PARAMETER_CONS_IDX_S,
+			       qp->sq.head & ((qp->sq.wqe_cnt << 1) - 1));
+		roce_set_field(sq_db.parameter, V2_DB_PARAMETER_SL_M,
+			       V2_DB_PARAMETER_SL_S, qp->sl);
+
+		hns_roce_write64_k((__be32 *)&sq_db, qp->sq.db_reg_l);
+
+		qp->sq_next_wqe = ind;
+		qp->next_sge = sge_ind;
+	}
+
+	spin_unlock_irqrestore(&qp->sq.lock, flags);
+
+	return ret;
+}
+
+static int hns_roce_v2_post_recv(struct ib_qp *ibqp, struct ib_recv_wr *wr,
+				 struct ib_recv_wr **bad_wr)
+{
+	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+	struct hns_roce_v2_wqe_data_seg *dseg;
+	struct device *dev = hr_dev->dev;
+	struct hns_roce_v2_db rq_db;
+	unsigned long flags;
+	void *wqe = NULL;
+	int ret = 0;
+	int nreq;
+	int ind;
+	int i;
+
+	spin_lock_irqsave(&hr_qp->rq.lock, flags);
+	ind = hr_qp->rq.head & (hr_qp->rq.wqe_cnt - 1);
+
+	if (hr_qp->state == IB_QPS_RESET || hr_qp->state == IB_QPS_ERR) {
+		spin_unlock_irqrestore(&hr_qp->rq.lock, flags);
+		*bad_wr = wr;
+		return -EINVAL;
+	}
+
+	for (nreq = 0; wr; ++nreq, wr = wr->next) {
+		if (hns_roce_wq_overflow(&hr_qp->rq, nreq,
+					 hr_qp->ibqp.recv_cq)) {
+			ret = -ENOMEM;
+			*bad_wr = wr;
+			goto out;
+		}
+
+		if (unlikely(wr->num_sge > hr_qp->rq.max_gs)) {
+			dev_err(dev, "rq:num_sge=%d > qp->rq.max_gs=%d\n",
+				wr->num_sge, hr_qp->rq.max_gs);
+			ret = -EINVAL;
+			*bad_wr = wr;
+			goto out;
+		}
+
+		wqe = get_recv_wqe(hr_qp, ind);
+		dseg = (struct hns_roce_v2_wqe_data_seg *)wqe;
+		for (i = 0; i < wr->num_sge; i++) {
+			if (!wr->sg_list[i].length)
+				continue;
+			set_data_seg_v2(dseg, wr->sg_list + i);
+			dseg++;
+		}
+
+		if (i < hr_qp->rq.max_gs) {
+			/* mark the first unused slot with an invalid lkey */
+			dseg->lkey = cpu_to_le32(HNS_ROCE_INVALID_LKEY);
+			dseg->addr = 0;
+		}
+
+		hr_qp->rq.wrid[ind] = wr->wr_id;
+
+		ind = (ind + 1) & (hr_qp->rq.wqe_cnt - 1);
+	}
+
+out:
+	if (likely(nreq)) {
+		hr_qp->rq.head += nreq;
+		/* Memory barrier */
+		wmb();
+
+		rq_db.byte_4 = 0;
+		rq_db.parameter = 0;
+
+		roce_set_field(rq_db.byte_4, V2_DB_BYTE_4_TAG_M,
+			       V2_DB_BYTE_4_TAG_S, hr_qp->qpn);
+		roce_set_field(rq_db.byte_4, V2_DB_BYTE_4_CMD_M,
+			       V2_DB_BYTE_4_CMD_S, HNS_ROCE_V2_RQ_DB);
+		roce_set_field(rq_db.parameter, V2_DB_PARAMETER_CONS_IDX_M,
+			       V2_DB_PARAMETER_CONS_IDX_S, hr_qp->rq.head);
+
+		hns_roce_write64_k((__be32 *)&rq_db, hr_qp->rq.db_reg_l);
+	}
+	spin_unlock_irqrestore(&hr_qp->rq.lock, flags);
+
+	return ret;
+}
+
 static int hns_roce_cmq_space(struct hns_roce_v2_cmq_ring *ring)
 {
 	int ntu = ring->next_to_use;
@@ -2588,6 +2909,8 @@ static int hns_roce_v2_destroy_qp(struct ib_qp *ibqp)
 	.modify_qp = hns_roce_v2_modify_qp,
 	.query_qp = hns_roce_v2_query_qp,
 	.destroy_qp = hns_roce_v2_destroy_qp,
+	.post_send = hns_roce_v2_post_send,
+	.post_recv = hns_roce_v2_post_recv,
 	.req_notify_cq = hns_roce_v2_req_notify_cq,
 	.poll_cq = hns_roce_v2_poll_cq,
 };
@@ -2612,6 +2935,8 @@ static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
 	}
 
 	hr_dev->hw = &hns_roce_hw_v2;
+	hr_dev->sdb_offset = ROCEE_DB_SQ_L_0_REG;
+	hr_dev->odb_offset = hr_dev->sdb_offset;
 
 	/* Get info from NIC driver. */
 	hr_dev->reg_base = handle->rinfo.roce_io_base;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index 0360df0..6172146 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -70,6 +70,7 @@
 #define HNS_ROCE_V2_CQE_ENTRY_SIZE		32
 #define HNS_ROCE_V2_PAGE_SIZE_SUPPORTED		0xFFFFF000
 #define HNS_ROCE_V2_MAX_INNER_MTPT_NUM		2
+#define HNS_ROCE_INVALID_LKEY			0x100
 #define HNS_ROCE_CMQ_TX_TIMEOUT			200
 #define HNS_ROCE_CONTEXT_HOP_NUM		1
 
@@ -111,6 +112,23 @@
 #define HNS_ROCE_V2_CQE_QPN_MASK		0x3ffff
 
 enum {
+	HNS_ROCE_V2_WQE_OP_SEND				= 0x0,
+	HNS_ROCE_V2_WQE_OP_SEND_WITH_INV		= 0x1,
+	HNS_ROCE_V2_WQE_OP_SEND_WITH_IMM		= 0x2,
+	HNS_ROCE_V2_WQE_OP_RDMA_WRITE			= 0x3,
+	HNS_ROCE_V2_WQE_OP_RDMA_WRITE_WITH_IMM		= 0x4,
+	HNS_ROCE_V2_WQE_OP_RDMA_READ			= 0x5,
+	HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP		= 0x6,
+	HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD		= 0x7,
+	HNS_ROCE_V2_WQE_OP_ATOM_MSK_CMP_AND_SWAP	= 0x8,
+	HNS_ROCE_V2_WQE_OP_ATOM_MSK_FETCH_AND_ADD	= 0x9,
+	HNS_ROCE_V2_WQE_OP_FAST_REG_PMR			= 0xa,
+	HNS_ROCE_V2_WQE_OP_LOCAL_INV			= 0xb,
+	HNS_ROCE_V2_WQE_OP_BIND_MW_TYPE			= 0xc,
+	HNS_ROCE_V2_WQE_OP_MASK				= 0x1f,
+};
+
+enum {
 	HNS_ROCE_SQ_OPCODE_SEND = 0x0,
 	HNS_ROCE_SQ_OPCODE_SEND_WITH_INV = 0x1,
 	HNS_ROCE_SQ_OPCODE_SEND_WITH_IMM = 0x2,
@@ -135,6 +153,9 @@ enum {
 };
 
 enum {
+	HNS_ROCE_V2_SQ_DB = 0x0,
+	HNS_ROCE_V2_RQ_DB = 0x1,
+	HNS_ROCE_V2_SRQ_DB = 0x2,
 	HNS_ROCE_V2_CQ_DB_PTR = 0x3,
 	HNS_ROCE_V2_CQ_DB_NTR = 0x4,
 };
@@ -775,6 +796,12 @@ struct hns_roce_v2_cqe {
 #define	V2_DB_BYTE_4_CMD_S 24
 #define V2_DB_BYTE_4_CMD_M GENMASK(27, 24)
 
+#define V2_DB_PARAMETER_CONS_IDX_S 0
+#define V2_DB_PARAMETER_CONS_IDX_M GENMASK(15, 0)
+
+#define V2_DB_PARAMETER_SL_S 16
+#define V2_DB_PARAMETER_SL_M GENMASK(18, 16)
+
 struct hns_roce_v2_cq_db {
 	u32	byte_4;
 	u32	parameter;
@@ -794,6 +821,51 @@ struct hns_roce_v2_cq_db {
 
 #define	V2_CQ_DB_PARAMETER_NOTIFY_S 24
 
+struct hns_roce_v2_rc_send_wqe {
+	u32		byte_4;
+	u32		msg_len;
+	u32		inv_key_immtdata;
+	u32		byte_16;
+	u32		byte_20;
+	u32		rkey;
+	u64		va;
+};
+
+#define	V2_RC_SEND_WQE_BYTE_4_OPCODE_S 0
+#define V2_RC_SEND_WQE_BYTE_4_OPCODE_M GENMASK(4, 0)
+
+#define V2_RC_SEND_WQE_BYTE_4_OWNER_S 7
+
+#define V2_RC_SEND_WQE_BYTE_4_CQE_S 8
+
+#define V2_RC_SEND_WQE_BYTE_4_FENCE_S 9
+
+#define V2_RC_SEND_WQE_BYTE_4_SO_S 10
+
+#define V2_RC_SEND_WQE_BYTE_4_SE_S 11
+
+#define V2_RC_SEND_WQE_BYTE_4_INLINE_S 12
+
+#define V2_RC_SEND_WQE_BYTE_16_XRC_SRQN_S 0
+#define V2_RC_SEND_WQE_BYTE_16_XRC_SRQN_M GENMASK(23, 0)
+
+#define V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S 24
+#define V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M GENMASK(31, 24)
+
+#define V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S 0
+#define V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M GENMASK(23, 0)
+
+struct hns_roce_v2_wqe_data_seg {
+	__le32	len;
+	__le32	lkey;
+	__le64	addr;
+};
+
+struct hns_roce_v2_db {
+	u32	byte_4;
+	u32	parameter;
+};
+
 struct hns_roce_query_version {
 	__le16 rocee_vendor_id;
 	__le16 rocee_hw_version;
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index c04aa81..e6d1115 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -549,10 +549,9 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 	}
 
 	/* QP doorbell register address */
-	hr_qp->sq.db_reg_l = hr_dev->reg_base + ROCEE_DB_SQ_L_0_REG +
+	hr_qp->sq.db_reg_l = hr_dev->reg_base + hr_dev->sdb_offset +
 			     DB_REG_OFFSET * hr_dev->priv_uar.index;
-	hr_qp->rq.db_reg_l = hr_dev->reg_base +
-			     ROCEE_DB_OTHERS_L_0_REG +
+	hr_qp->rq.db_reg_l = hr_dev->reg_base + hr_dev->odb_offset +
 			     DB_REG_OFFSET * hr_dev->priv_uar.index;
 
 	/* Allocate QP buf */
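
A self-contained sketch (not part of the patch; plain userspace C) of
how the two 32-bit doorbell words are composed before
hns_roce_write64_k() in hns_roce_v2_post_send(). Field positions follow
the V2_DB_BYTE_4_* and V2_DB_PARAMETER_* definitions added above; the
qpn/sl/head values are invented for illustration:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t qpn = 0x42, sl = 1, head = 5, wqe_cnt = 64;
		uint32_t byte_4 = 0, parameter = 0;

		byte_4 |= qpn & 0xffffff;	/* TAG field, assumed bits 23:0 */
		byte_4 |= 0x0u << 24;		/* CMD = HNS_ROCE_V2_SQ_DB (0x0), bits 27:24 */
		/* producer index wraps over twice the ring size */
		parameter |= (head & ((wqe_cnt << 1) - 1)) & 0xffff; /* CONS_IDX, bits 15:0 */
		parameter |= (sl & 0x7) << 16;	/* SL, bits 18:16 */

		printf("byte_4=0x%08x parameter=0x%08x\n", byte_4, parameter);
		return 0;
	}

Masking the producer index with (wqe_cnt << 1) - 1 rather than
wqe_cnt - 1 keeps one extra wrap bit in the index, which presumably
lets the hardware distinguish a full ring from an empty one.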