From patchwork Thu Feb 9 22:23:49 2017
X-Patchwork-Submitter: Joe Perches
X-Patchwork-Id: 9565603
From: Joe Perches
To: Steve Wise
Cc: Doug Ledford, Sean Hefty, Hal Rosenstock,
	linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/4] cxgb3: Convert PDBG to pr_debug
Date: Thu, 9 Feb 2017 14:23:49 -0800
Message-Id: <41a74ceecd35cf2f38e2a1a1a394c7718547deb6.1486678686.git.joe@perches.com>
X-Mailer: git-send-email 2.10.0.rc2.1.g053435c
In-Reply-To:
References:
X-Mailing-List: linux-rdma@vger.kernel.org

Using the normal mechanism, not an indirected one, is clearer.
Miscellanea: o Coalesce formats o Realign arguments Signed-off-by: Joe Perches Reviewed-by: Steve Wise --- drivers/infiniband/hw/cxgb3/cxio_dbg.c | 35 ++--- drivers/infiniband/hw/cxgb3/cxio_hal.c | 174 ++++++++++++------------ drivers/infiniband/hw/cxgb3/cxio_hal.h | 2 - drivers/infiniband/hw/cxgb3/cxio_resource.c | 20 +-- drivers/infiniband/hw/cxgb3/iwch.c | 6 +- drivers/infiniband/hw/cxgb3/iwch_cm.c | 203 ++++++++++++++-------------- drivers/infiniband/hw/cxgb3/iwch_cm.h | 18 +-- drivers/infiniband/hw/cxgb3/iwch_cq.c | 14 +- drivers/infiniband/hw/cxgb3/iwch_ev.c | 15 +- drivers/infiniband/hw/cxgb3/iwch_mem.c | 2 +- drivers/infiniband/hw/cxgb3/iwch_provider.c | 101 +++++++------- drivers/infiniband/hw/cxgb3/iwch_provider.h | 9 +- drivers/infiniband/hw/cxgb3/iwch_qp.c | 60 ++++---- 13 files changed, 329 insertions(+), 330 deletions(-) diff --git a/drivers/infiniband/hw/cxgb3/cxio_dbg.c b/drivers/infiniband/hw/cxgb3/cxio_dbg.c index 445e89e5e7cf..97dbe728520a 100644 --- a/drivers/infiniband/hw/cxgb3/cxio_dbg.c +++ b/drivers/infiniband/hw/cxgb3/cxio_dbg.c @@ -51,17 +51,18 @@ void cxio_dump_tpt(struct cxio_rdev *rdev, u32 stag) m->mem_id = MEM_PMRX; m->addr = (stag>>8) * 32 + rdev->rnic_info.tpt_base; m->len = size; - PDBG("%s TPT addr 0x%x len %d\n", __func__, m->addr, m->len); + pr_debug("%s TPT addr 0x%x len %d\n", __func__, m->addr, m->len); rc = rdev->t3cdev_p->ctl(rdev->t3cdev_p, RDMA_GET_MEM, m); if (rc) { - PDBG("%s toectl returned error %d\n", __func__, rc); + pr_debug("%s toectl returned error %d\n", __func__, rc); kfree(m); return; } data = (u64 *)m->buf; while (size > 0) { - PDBG("TPT %08x: %016llx\n", m->addr, (unsigned long long) *data); + pr_debug("TPT %08x: %016llx\n", + m->addr, (unsigned long long)*data); size -= 8; data++; m->addr += 8; @@ -87,18 +88,19 @@ void cxio_dump_pbl(struct cxio_rdev *rdev, u32 pbl_addr, uint len, u8 shift) m->mem_id = MEM_PMRX; m->addr = pbl_addr; m->len = size; - PDBG("%s PBL addr 0x%x len %d depth %d\n", - __func__, m->addr, m->len, npages); + pr_debug("%s PBL addr 0x%x len %d depth %d\n", + __func__, m->addr, m->len, npages); rc = rdev->t3cdev_p->ctl(rdev->t3cdev_p, RDMA_GET_MEM, m); if (rc) { - PDBG("%s toectl returned error %d\n", __func__, rc); + pr_debug("%s toectl returned error %d\n", __func__, rc); kfree(m); return; } data = (u64 *)m->buf; while (size > 0) { - PDBG("PBL %08x: %016llx\n", m->addr, (unsigned long long) *data); + pr_debug("PBL %08x: %016llx\n", + m->addr, (unsigned long long)*data); size -= 8; data++; m->addr += 8; @@ -114,8 +116,8 @@ void cxio_dump_wqe(union t3_wr *wqe) if (size == 0) size = 8; while (size > 0) { - PDBG("WQE %p: %016llx\n", data, - (unsigned long long) be64_to_cpu(*data)); + pr_debug("WQE %p: %016llx\n", + data, (unsigned long long)be64_to_cpu(*data)); size--; data++; } @@ -127,8 +129,8 @@ void cxio_dump_wce(struct t3_cqe *wce) int size = sizeof(*wce); while (size > 0) { - PDBG("WCE %p: %016llx\n", data, - (unsigned long long) be64_to_cpu(*data)); + pr_debug("WCE %p: %016llx\n", + data, (unsigned long long)be64_to_cpu(*data)); size -= 8; data++; } @@ -148,17 +150,18 @@ void cxio_dump_rqt(struct cxio_rdev *rdev, u32 hwtid, int nents) m->mem_id = MEM_PMRX; m->addr = ((hwtid)<<10) + rdev->rnic_info.rqt_base; m->len = size; - PDBG("%s RQT addr 0x%x len %d\n", __func__, m->addr, m->len); + pr_debug("%s RQT addr 0x%x len %d\n", __func__, m->addr, m->len); rc = rdev->t3cdev_p->ctl(rdev->t3cdev_p, RDMA_GET_MEM, m); if (rc) { - PDBG("%s toectl returned error %d\n", __func__, rc); + pr_debug("%s toectl returned error 
%d\n", __func__, rc); kfree(m); return; } data = (u64 *)m->buf; while (size > 0) { - PDBG("RQT %08x: %016llx\n", m->addr, (unsigned long long) *data); + pr_debug("RQT %08x: %016llx\n", + m->addr, (unsigned long long)*data); size -= 8; data++; m->addr += 8; @@ -180,10 +183,10 @@ void cxio_dump_tcb(struct cxio_rdev *rdev, u32 hwtid) m->mem_id = MEM_CM; m->addr = hwtid * size; m->len = size; - PDBG("%s TCB %d len %d\n", __func__, m->addr, m->len); + pr_debug("%s TCB %d len %d\n", __func__, m->addr, m->len); rc = rdev->t3cdev_p->ctl(rdev->t3cdev_p, RDMA_GET_MEM, m); if (rc) { - PDBG("%s toectl returned error %d\n", __func__, rc); + pr_debug("%s toectl returned error %d\n", __func__, rc); kfree(m); return; } diff --git a/drivers/infiniband/hw/cxgb3/cxio_hal.c b/drivers/infiniband/hw/cxgb3/cxio_hal.c index 1ee33db2e98a..558d6a03375d 100644 --- a/drivers/infiniband/hw/cxgb3/cxio_hal.c +++ b/drivers/infiniband/hw/cxgb3/cxio_hal.c @@ -139,7 +139,7 @@ static int cxio_hal_clear_qp_ctx(struct cxio_rdev *rdev_p, u32 qpid) struct t3_modify_qp_wr *wqe; struct sk_buff *skb = alloc_skb(sizeof(*wqe), GFP_KERNEL); if (!skb) { - PDBG("%s alloc_skb failed\n", __func__); + pr_debug("%s alloc_skb failed\n", __func__); return -ENOMEM; } wqe = (struct t3_modify_qp_wr *) skb_put(skb, sizeof(*wqe)); @@ -229,7 +229,7 @@ static u32 get_qpid(struct cxio_rdev *rdev_p, struct cxio_ucontext *uctx) } out: mutex_unlock(&uctx->lock); - PDBG("%s qpid 0x%x\n", __func__, qpid); + pr_debug("%s qpid 0x%x\n", __func__, qpid); return qpid; } @@ -241,7 +241,7 @@ static void put_qpid(struct cxio_rdev *rdev_p, u32 qpid, entry = kmalloc(sizeof *entry, GFP_KERNEL); if (!entry) return; - PDBG("%s qpid 0x%x\n", __func__, qpid); + pr_debug("%s qpid 0x%x\n", __func__, qpid); entry->qpid = qpid; mutex_lock(&uctx->lock); list_add_tail(&entry->entry, &uctx->qpids); @@ -305,8 +305,8 @@ int cxio_create_qp(struct cxio_rdev *rdev_p, u32 kernel_domain, wq->udb = (u64)rdev_p->rnic_info.udbell_physbase + (wq->qpid << rdev_p->qpshift); wq->rdev = rdev_p; - PDBG("%s qpid 0x%x doorbell 0x%p udb 0x%llx\n", __func__, - wq->qpid, wq->doorbell, (unsigned long long) wq->udb); + pr_debug("%s qpid 0x%x doorbell 0x%p udb 0x%llx\n", + __func__, wq->qpid, wq->doorbell, (unsigned long long)wq->udb); return 0; err4: kfree(wq->sq); @@ -350,8 +350,8 @@ static void insert_recv_cqe(struct t3_wq *wq, struct t3_cq *cq) { struct t3_cqe cqe; - PDBG("%s wq %p cq %p sw_rptr 0x%x sw_wptr 0x%x\n", __func__, - wq, cq, cq->sw_rptr, cq->sw_wptr); + pr_debug("%s wq %p cq %p sw_rptr 0x%x sw_wptr 0x%x\n", __func__, + wq, cq, cq->sw_rptr, cq->sw_wptr); memset(&cqe, 0, sizeof(cqe)); cqe.header = cpu_to_be32(V_CQE_STATUS(TPT_ERR_SWFLUSH) | V_CQE_OPCODE(T3_SEND) | @@ -369,11 +369,11 @@ int cxio_flush_rq(struct t3_wq *wq, struct t3_cq *cq, int count) u32 ptr; int flushed = 0; - PDBG("%s wq %p cq %p\n", __func__, wq, cq); + pr_debug("%s wq %p cq %p\n", __func__, wq, cq); /* flush RQ */ - PDBG("%s rq_rptr %u rq_wptr %u skip count %u\n", __func__, - wq->rq_rptr, wq->rq_wptr, count); + pr_debug("%s rq_rptr %u rq_wptr %u skip count %u\n", __func__, + wq->rq_rptr, wq->rq_wptr, count); ptr = wq->rq_rptr + count; while (ptr++ != wq->rq_wptr) { insert_recv_cqe(wq, cq); @@ -387,8 +387,8 @@ static void insert_sq_cqe(struct t3_wq *wq, struct t3_cq *cq, { struct t3_cqe cqe; - PDBG("%s wq %p cq %p sw_rptr 0x%x sw_wptr 0x%x\n", __func__, - wq, cq, cq->sw_rptr, cq->sw_wptr); + pr_debug("%s wq %p cq %p sw_rptr 0x%x sw_wptr 0x%x\n", __func__, + wq, cq, cq->sw_rptr, cq->sw_wptr); memset(&cqe, 0, 
sizeof(cqe)); cqe.header = cpu_to_be32(V_CQE_STATUS(TPT_ERR_SWFLUSH) | V_CQE_OPCODE(sqp->opcode) | @@ -428,11 +428,11 @@ void cxio_flush_hw_cq(struct t3_cq *cq) { struct t3_cqe *cqe, *swcqe; - PDBG("%s cq %p cqid 0x%x\n", __func__, cq, cq->cqid); + pr_debug("%s cq %p cqid 0x%x\n", __func__, cq, cq->cqid); cqe = cxio_next_hw_cqe(cq); while (cqe) { - PDBG("%s flushing hwcq rptr 0x%x to swcq wptr 0x%x\n", - __func__, cq->rptr, cq->sw_wptr); + pr_debug("%s flushing hwcq rptr 0x%x to swcq wptr 0x%x\n", + __func__, cq->rptr, cq->sw_wptr); swcqe = cq->sw_queue + Q_PTR2IDX(cq->sw_wptr, cq->size_log2); *swcqe = *cqe; swcqe->header |= cpu_to_be32(V_CQE_SWCQE(1)); @@ -475,7 +475,7 @@ void cxio_count_scqes(struct t3_cq *cq, struct t3_wq *wq, int *count) (*count)++; ptr++; } - PDBG("%s cq %p count %d\n", __func__, cq, *count); + pr_debug("%s cq %p count %d\n", __func__, cq, *count); } void cxio_count_rcqes(struct t3_cq *cq, struct t3_wq *wq, int *count) @@ -484,7 +484,7 @@ void cxio_count_rcqes(struct t3_cq *cq, struct t3_wq *wq, int *count) u32 ptr; *count = 0; - PDBG("%s count zero %d\n", __func__, *count); + pr_debug("%s count zero %d\n", __func__, *count); ptr = cq->sw_rptr; while (!Q_EMPTY(ptr, cq->sw_wptr)) { cqe = cq->sw_queue + (Q_PTR2IDX(ptr, cq->size_log2)); @@ -493,7 +493,7 @@ void cxio_count_rcqes(struct t3_cq *cq, struct t3_wq *wq, int *count) (*count)++; ptr++; } - PDBG("%s cq %p count %d\n", __func__, cq, *count); + pr_debug("%s cq %p count %d\n", __func__, cq, *count); } static int cxio_hal_init_ctrl_cq(struct cxio_rdev *rdev_p) @@ -520,12 +520,12 @@ static int cxio_hal_init_ctrl_qp(struct cxio_rdev *rdev_p) skb = alloc_skb(sizeof(*wqe), GFP_KERNEL); if (!skb) { - PDBG("%s alloc_skb failed\n", __func__); + pr_debug("%s alloc_skb failed\n", __func__); return -ENOMEM; } err = cxio_hal_init_ctrl_cq(rdev_p); if (err) { - PDBG("%s err %d initializing ctrl_cq\n", __func__, err); + pr_debug("%s err %d initializing ctrl_cq\n", __func__, err); goto err; } rdev_p->ctrl_qp.workq = dma_alloc_coherent( @@ -535,7 +535,7 @@ static int cxio_hal_init_ctrl_qp(struct cxio_rdev *rdev_p) &(rdev_p->ctrl_qp.dma_addr), GFP_KERNEL); if (!rdev_p->ctrl_qp.workq) { - PDBG("%s dma_alloc_coherent failed\n", __func__); + pr_debug("%s dma_alloc_coherent failed\n", __func__); err = -ENOMEM; goto err; } @@ -570,9 +570,9 @@ static int cxio_hal_init_ctrl_qp(struct cxio_rdev *rdev_p) wqe->sge_cmd = cpu_to_be64(sge_cmd); wqe->ctx1 = cpu_to_be64(ctx1); wqe->ctx0 = cpu_to_be64(ctx0); - PDBG("CtrlQP dma_addr 0x%llx workq %p size %d\n", - (unsigned long long) rdev_p->ctrl_qp.dma_addr, - rdev_p->ctrl_qp.workq, 1 << T3_CTRL_QP_SIZE_LOG2); + pr_debug("CtrlQP dma_addr 0x%llx workq %p size %d\n", + (unsigned long long)rdev_p->ctrl_qp.dma_addr, + rdev_p->ctrl_qp.workq, 1 << T3_CTRL_QP_SIZE_LOG2); skb->priority = CPL_PRIORITY_CONTROL; return iwch_cxgb3_ofld_send(rdev_p->t3cdev_p, skb); err: @@ -604,26 +604,26 @@ static int cxio_hal_ctrl_qp_write_mem(struct cxio_rdev *rdev_p, u32 addr, u64 utx_cmd; addr &= 0x7FFFFFF; nr_wqe = len % 96 ? 
len / 96 + 1 : len / 96; /* 96B max per WQE */ - PDBG("%s wptr 0x%x rptr 0x%x len %d, nr_wqe %d data %p addr 0x%0x\n", - __func__, rdev_p->ctrl_qp.wptr, rdev_p->ctrl_qp.rptr, len, - nr_wqe, data, addr); + pr_debug("%s wptr 0x%x rptr 0x%x len %d, nr_wqe %d data %p addr 0x%0x\n", + __func__, rdev_p->ctrl_qp.wptr, rdev_p->ctrl_qp.rptr, len, + nr_wqe, data, addr); utx_len = 3; /* in 32B unit */ for (i = 0; i < nr_wqe; i++) { if (Q_FULL(rdev_p->ctrl_qp.rptr, rdev_p->ctrl_qp.wptr, T3_CTRL_QP_SIZE_LOG2)) { - PDBG("%s ctrl_qp full wtpr 0x%0x rptr 0x%0x, " - "wait for more space i %d\n", __func__, - rdev_p->ctrl_qp.wptr, rdev_p->ctrl_qp.rptr, i); + pr_debug("%s ctrl_qp full wtpr 0x%0x rptr 0x%0x, wait for more space i %d\n", + __func__, + rdev_p->ctrl_qp.wptr, rdev_p->ctrl_qp.rptr, i); if (wait_event_interruptible(rdev_p->ctrl_qp.waitq, !Q_FULL(rdev_p->ctrl_qp.rptr, rdev_p->ctrl_qp.wptr, T3_CTRL_QP_SIZE_LOG2))) { - PDBG("%s ctrl_qp workq interrupted\n", - __func__); + pr_debug("%s ctrl_qp workq interrupted\n", + __func__); return -ERESTARTSYS; } - PDBG("%s ctrl_qp wakeup, continue posting work request " - "i %d\n", __func__, i); + pr_debug("%s ctrl_qp wakeup, continue posting work request i %d\n", + __func__, i); } wqe = (__be64 *)(rdev_p->ctrl_qp.workq + (rdev_p->ctrl_qp.wptr % (1 << T3_CTRL_QP_SIZE_LOG2))); @@ -644,7 +644,7 @@ static int cxio_hal_ctrl_qp_write_mem(struct cxio_rdev *rdev_p, u32 addr, if ((i != 0) && (i % (((1 << T3_CTRL_QP_SIZE_LOG2)) >> 1) == 0)) { flag = T3_COMPLETION_FLAG; - PDBG("%s force completion at i %d\n", __func__, i); + pr_debug("%s force completion at i %d\n", __func__, i); } /* build the utx mem command */ @@ -716,8 +716,8 @@ static int __cxio_tpt_op(struct cxio_rdev *rdev_p, u32 reset_tpt_entry, return -ENOMEM; *stag = (stag_idx << 8) | ((*stag) & 0xFF); } - PDBG("%s stag_state 0x%0x type 0x%0x pdid 0x%0x, stag_idx 0x%x\n", - __func__, stag_state, type, pdid, stag_idx); + pr_debug("%s stag_state 0x%0x type 0x%0x pdid 0x%0x, stag_idx 0x%x\n", + __func__, stag_state, type, pdid, stag_idx); mutex_lock(&rdev_p->ctrl_qp.lock); @@ -766,9 +766,9 @@ int cxio_write_pbl(struct cxio_rdev *rdev_p, __be64 *pbl, u32 wptr; int err; - PDBG("%s *pdb_addr 0x%x, pbl_base 0x%x, pbl_size %d\n", - __func__, pbl_addr, rdev_p->rnic_info.pbl_base, - pbl_size); + pr_debug("%s *pdb_addr 0x%x, pbl_base 0x%x, pbl_size %d\n", + __func__, pbl_addr, rdev_p->rnic_info.pbl_base, + pbl_size); mutex_lock(&rdev_p->ctrl_qp.lock); err = cxio_hal_ctrl_qp_write_mem(rdev_p, pbl_addr >> 5, pbl_size << 3, @@ -836,7 +836,7 @@ int cxio_rdma_init(struct cxio_rdev *rdev_p, struct t3_rdma_init_attr *attr) struct sk_buff *skb = alloc_skb(sizeof(*wqe), GFP_ATOMIC); if (!skb) return -ENOMEM; - PDBG("%s rdev_p %p\n", __func__, rdev_p); + pr_debug("%s rdev_p %p\n", __func__, rdev_p); wqe = (struct t3_rdma_init_wr *) __skb_put(skb, sizeof(*wqe)); wqe->wrh.op_seop_flags = cpu_to_be32(V_FW_RIWR_OP(T3_WR_INIT)); wqe->wrh.gen_tid_len = cpu_to_be32(V_FW_RIWR_TID(attr->tid) | @@ -879,22 +879,20 @@ static int cxio_hal_ev_handler(struct t3cdev *t3cdev_p, struct sk_buff *skb) static int cnt; struct cxio_rdev *rdev_p = NULL; struct respQ_msg_t *rsp_msg = (struct respQ_msg_t *) skb->data; - PDBG("%d: %s cq_id 0x%x cq_ptr 0x%x genbit %0x overflow %0x an %0x" - " se %0x notify %0x cqbranch %0x creditth %0x\n", - cnt, __func__, RSPQ_CQID(rsp_msg), RSPQ_CQPTR(rsp_msg), - RSPQ_GENBIT(rsp_msg), RSPQ_OVERFLOW(rsp_msg), RSPQ_AN(rsp_msg), - RSPQ_SE(rsp_msg), RSPQ_NOTIFY(rsp_msg), RSPQ_CQBRANCH(rsp_msg), - RSPQ_CREDIT_THRESH(rsp_msg)); - 
PDBG("CQE: QPID 0x%0x genbit %0x type 0x%0x status 0x%0x opcode %d " - "len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n", - CQE_QPID(rsp_msg->cqe), CQE_GENBIT(rsp_msg->cqe), - CQE_TYPE(rsp_msg->cqe), CQE_STATUS(rsp_msg->cqe), - CQE_OPCODE(rsp_msg->cqe), CQE_LEN(rsp_msg->cqe), - CQE_WRID_HI(rsp_msg->cqe), CQE_WRID_LOW(rsp_msg->cqe)); + pr_debug("%d: %s cq_id 0x%x cq_ptr 0x%x genbit %0x overflow %0x an %0x se %0x notify %0x cqbranch %0x creditth %0x\n", + cnt, __func__, RSPQ_CQID(rsp_msg), RSPQ_CQPTR(rsp_msg), + RSPQ_GENBIT(rsp_msg), RSPQ_OVERFLOW(rsp_msg), RSPQ_AN(rsp_msg), + RSPQ_SE(rsp_msg), RSPQ_NOTIFY(rsp_msg), RSPQ_CQBRANCH(rsp_msg), + RSPQ_CREDIT_THRESH(rsp_msg)); + pr_debug("CQE: QPID 0x%0x genbit %0x type 0x%0x status 0x%0x opcode %d len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n", + CQE_QPID(rsp_msg->cqe), CQE_GENBIT(rsp_msg->cqe), + CQE_TYPE(rsp_msg->cqe), CQE_STATUS(rsp_msg->cqe), + CQE_OPCODE(rsp_msg->cqe), CQE_LEN(rsp_msg->cqe), + CQE_WRID_HI(rsp_msg->cqe), CQE_WRID_LOW(rsp_msg->cqe)); rdev_p = (struct cxio_rdev *)t3cdev_p->ulp; if (!rdev_p) { - PDBG("%s called by t3cdev %p with null ulp\n", __func__, - t3cdev_p); + pr_debug("%s called by t3cdev %p with null ulp\n", __func__, + t3cdev_p); return 0; } if (CQE_QPID(rsp_msg->cqe) == T3_CTRL_QP_ID) { @@ -933,13 +931,13 @@ int cxio_rdev_open(struct cxio_rdev *rdev_p) strncpy(rdev_p->dev_name, rdev_p->t3cdev_p->name, T3_MAX_DEV_NAME_LEN); } else { - PDBG("%s t3cdev_p or dev_name must be set\n", __func__); + pr_debug("%s t3cdev_p or dev_name must be set\n", __func__); return -EINVAL; } list_add_tail(&rdev_p->entry, &rdev_list); - PDBG("%s opening rnic dev %s\n", __func__, rdev_p->dev_name); + pr_debug("%s opening rnic dev %s\n", __func__, rdev_p->dev_name); memset(&rdev_p->ctrl_qp, 0, sizeof(rdev_p->ctrl_qp)); if (!rdev_p->t3cdev_p) rdev_p->t3cdev_p = dev2t3cdev(netdev_p); @@ -986,18 +984,16 @@ int cxio_rdev_open(struct cxio_rdev *rdev_p) PAGE_SHIFT)); rdev_p->qpnr = rdev_p->rnic_info.udbell_len >> PAGE_SHIFT; rdev_p->qpmask = (65536 >> ilog2(rdev_p->qpnr)) - 1; - PDBG("%s rnic %s info: tpt_base 0x%0x tpt_top 0x%0x num stags %d " - "pbl_base 0x%0x pbl_top 0x%0x rqt_base 0x%0x, rqt_top 0x%0x\n", - __func__, rdev_p->dev_name, rdev_p->rnic_info.tpt_base, - rdev_p->rnic_info.tpt_top, cxio_num_stags(rdev_p), - rdev_p->rnic_info.pbl_base, - rdev_p->rnic_info.pbl_top, rdev_p->rnic_info.rqt_base, - rdev_p->rnic_info.rqt_top); - PDBG("udbell_len 0x%0x udbell_physbase 0x%lx kdb_addr %p qpshift %lu " - "qpnr %d qpmask 0x%x\n", - rdev_p->rnic_info.udbell_len, - rdev_p->rnic_info.udbell_physbase, rdev_p->rnic_info.kdb_addr, - rdev_p->qpshift, rdev_p->qpnr, rdev_p->qpmask); + pr_debug("%s rnic %s info: tpt_base 0x%0x tpt_top 0x%0x num stags %d pbl_base 0x%0x pbl_top 0x%0x rqt_base 0x%0x, rqt_top 0x%0x\n", + __func__, rdev_p->dev_name, rdev_p->rnic_info.tpt_base, + rdev_p->rnic_info.tpt_top, cxio_num_stags(rdev_p), + rdev_p->rnic_info.pbl_base, + rdev_p->rnic_info.pbl_top, rdev_p->rnic_info.rqt_base, + rdev_p->rnic_info.rqt_top); + pr_debug("udbell_len 0x%0x udbell_physbase 0x%lx kdb_addr %p qpshift %lu qpnr %d qpmask 0x%x\n", + rdev_p->rnic_info.udbell_len, + rdev_p->rnic_info.udbell_physbase, rdev_p->rnic_info.kdb_addr, + rdev_p->qpshift, rdev_p->qpnr, rdev_p->qpmask); err = cxio_hal_init_ctrl_qp(rdev_p); if (err) { @@ -1083,9 +1079,9 @@ static void flush_completed_wrs(struct t3_wq *wq, struct t3_cq *cq) /* * Insert this completed cqe into the swcq. 
*/ - PDBG("%s moving cqe into swcq sq idx %ld cq idx %ld\n", - __func__, Q_PTR2IDX(ptr, wq->sq_size_log2), - Q_PTR2IDX(cq->sw_wptr, cq->size_log2)); + pr_debug("%s moving cqe into swcq sq idx %ld cq idx %ld\n", + __func__, Q_PTR2IDX(ptr, wq->sq_size_log2), + Q_PTR2IDX(cq->sw_wptr, cq->size_log2)); sqp->cqe.header |= htonl(V_CQE_SWCQE(1)); *(cq->sw_queue + Q_PTR2IDX(cq->sw_wptr, cq->size_log2)) = sqp->cqe; @@ -1151,12 +1147,11 @@ int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe, *credit = 0; hw_cqe = cxio_next_cqe(cq); - PDBG("%s CQE OOO %d qpid 0x%0x genbit %d type %d status 0x%0x" - " opcode 0x%0x len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n", - __func__, CQE_OOO(*hw_cqe), CQE_QPID(*hw_cqe), - CQE_GENBIT(*hw_cqe), CQE_TYPE(*hw_cqe), CQE_STATUS(*hw_cqe), - CQE_OPCODE(*hw_cqe), CQE_LEN(*hw_cqe), CQE_WRID_HI(*hw_cqe), - CQE_WRID_LOW(*hw_cqe)); + pr_debug("%s CQE OOO %d qpid 0x%0x genbit %d type %d status 0x%0x opcode 0x%0x len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n", + __func__, CQE_OOO(*hw_cqe), CQE_QPID(*hw_cqe), + CQE_GENBIT(*hw_cqe), CQE_TYPE(*hw_cqe), CQE_STATUS(*hw_cqe), + CQE_OPCODE(*hw_cqe), CQE_LEN(*hw_cqe), CQE_WRID_HI(*hw_cqe), + CQE_WRID_LOW(*hw_cqe)); /* * skip cqe's not affiliated with a QP. @@ -1275,9 +1270,10 @@ int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe, if (!SW_CQE(*hw_cqe) && (CQE_WRID_SQ_WPTR(*hw_cqe) != wq->sq_rptr)) { struct t3_swsq *sqp; - PDBG("%s out of order completion going in swsq at idx %ld\n", - __func__, - Q_PTR2IDX(CQE_WRID_SQ_WPTR(*hw_cqe), wq->sq_size_log2)); + pr_debug("%s out of order completion going in swsq at idx %ld\n", + __func__, + Q_PTR2IDX(CQE_WRID_SQ_WPTR(*hw_cqe), + wq->sq_size_log2)); sqp = wq->sq + Q_PTR2IDX(CQE_WRID_SQ_WPTR(*hw_cqe), wq->sq_size_log2); sqp->cqe = *hw_cqe; @@ -1295,13 +1291,13 @@ int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe, */ if (SQ_TYPE(*hw_cqe)) { wq->sq_rptr = CQE_WRID_SQ_WPTR(*hw_cqe); - PDBG("%s completing sq idx %ld\n", __func__, - Q_PTR2IDX(wq->sq_rptr, wq->sq_size_log2)); + pr_debug("%s completing sq idx %ld\n", __func__, + Q_PTR2IDX(wq->sq_rptr, wq->sq_size_log2)); *cookie = wq->sq[Q_PTR2IDX(wq->sq_rptr, wq->sq_size_log2)].wr_id; wq->sq_rptr++; } else { - PDBG("%s completing rq idx %ld\n", __func__, - Q_PTR2IDX(wq->rq_rptr, wq->rq_size_log2)); + pr_debug("%s completing rq idx %ld\n", __func__, + Q_PTR2IDX(wq->rq_rptr, wq->rq_size_log2)); *cookie = wq->rq[Q_PTR2IDX(wq->rq_rptr, wq->rq_size_log2)].wr_id; if (wq->rq[Q_PTR2IDX(wq->rq_rptr, wq->rq_size_log2)].pbl_addr) cxio_hal_pblpool_free(wq->rdev, @@ -1319,12 +1315,12 @@ int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe, skip_cqe: if (SW_CQE(*hw_cqe)) { - PDBG("%s cq %p cqid 0x%x skip sw cqe sw_rptr 0x%x\n", - __func__, cq, cq->cqid, cq->sw_rptr); + pr_debug("%s cq %p cqid 0x%x skip sw cqe sw_rptr 0x%x\n", + __func__, cq, cq->cqid, cq->sw_rptr); ++cq->sw_rptr; } else { - PDBG("%s cq %p cqid 0x%x skip hw cqe rptr 0x%x\n", - __func__, cq, cq->cqid, cq->rptr); + pr_debug("%s cq %p cqid 0x%x skip hw cqe rptr 0x%x\n", + __func__, cq, cq->cqid, cq->rptr); ++cq->rptr; /* diff --git a/drivers/infiniband/hw/cxgb3/cxio_hal.h b/drivers/infiniband/hw/cxgb3/cxio_hal.h index 115c0e3a5df5..7e70c5492262 100644 --- a/drivers/infiniband/hw/cxgb3/cxio_hal.h +++ b/drivers/infiniband/hw/cxgb3/cxio_hal.h @@ -202,8 +202,6 @@ int iwch_cxgb3_ofld_send(struct t3cdev *tdev, struct sk_buff *skb); #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt -#define PDBG(fmt, args...) 
pr_debug(fmt, ## args) - #ifdef DEBUG void cxio_dump_tpt(struct cxio_rdev *rev, u32 stag); void cxio_dump_pbl(struct cxio_rdev *rev, u32 pbl_addr, uint len, u8 shift); diff --git a/drivers/infiniband/hw/cxgb3/cxio_resource.c b/drivers/infiniband/hw/cxgb3/cxio_resource.c index a826ed165696..c6e7bc4420b6 100644 --- a/drivers/infiniband/hw/cxgb3/cxio_resource.c +++ b/drivers/infiniband/hw/cxgb3/cxio_resource.c @@ -209,13 +209,13 @@ u32 cxio_hal_get_qpid(struct cxio_hal_resource *rscp) { u32 qpid = cxio_hal_get_resource(&rscp->qpid_fifo, &rscp->qpid_fifo_lock); - PDBG("%s qpid 0x%x\n", __func__, qpid); + pr_debug("%s qpid 0x%x\n", __func__, qpid); return qpid; } void cxio_hal_put_qpid(struct cxio_hal_resource *rscp, u32 qpid) { - PDBG("%s qpid 0x%x\n", __func__, qpid); + pr_debug("%s qpid 0x%x\n", __func__, qpid); cxio_hal_put_resource(&rscp->qpid_fifo, &rscp->qpid_fifo_lock, qpid); } @@ -257,13 +257,13 @@ void cxio_hal_destroy_resource(struct cxio_hal_resource *rscp) u32 cxio_hal_pblpool_alloc(struct cxio_rdev *rdev_p, int size) { unsigned long addr = gen_pool_alloc(rdev_p->pbl_pool, size); - PDBG("%s addr 0x%x size %d\n", __func__, (u32)addr, size); + pr_debug("%s addr 0x%x size %d\n", __func__, (u32)addr, size); return (u32)addr; } void cxio_hal_pblpool_free(struct cxio_rdev *rdev_p, u32 addr, int size) { - PDBG("%s addr 0x%x size %d\n", __func__, addr, size); + pr_debug("%s addr 0x%x size %d\n", __func__, addr, size); gen_pool_free(rdev_p->pbl_pool, (unsigned long)addr, size); } @@ -282,8 +282,8 @@ int cxio_hal_pblpool_create(struct cxio_rdev *rdev_p) pbl_chunk = min(rdev_p->rnic_info.pbl_top - pbl_start + 1, pbl_chunk); if (gen_pool_add(rdev_p->pbl_pool, pbl_start, pbl_chunk, -1)) { - PDBG("%s failed to add PBL chunk (%x/%x)\n", - __func__, pbl_start, pbl_chunk); + pr_debug("%s failed to add PBL chunk (%x/%x)\n", + __func__, pbl_start, pbl_chunk); if (pbl_chunk <= 1024 << MIN_PBL_SHIFT) { pr_warn("%s: Failed to add all PBL chunks (%x/%x)\n", __func__, pbl_start, @@ -292,8 +292,8 @@ int cxio_hal_pblpool_create(struct cxio_rdev *rdev_p) } pbl_chunk >>= 1; } else { - PDBG("%s added PBL chunk (%x/%x)\n", - __func__, pbl_start, pbl_chunk); + pr_debug("%s added PBL chunk (%x/%x)\n", + __func__, pbl_start, pbl_chunk); pbl_start += pbl_chunk; } } @@ -316,13 +316,13 @@ void cxio_hal_pblpool_destroy(struct cxio_rdev *rdev_p) u32 cxio_hal_rqtpool_alloc(struct cxio_rdev *rdev_p, int size) { unsigned long addr = gen_pool_alloc(rdev_p->rqt_pool, size << 6); - PDBG("%s addr 0x%x size %d\n", __func__, (u32)addr, size << 6); + pr_debug("%s addr 0x%x size %d\n", __func__, (u32)addr, size << 6); return (u32)addr; } void cxio_hal_rqtpool_free(struct cxio_rdev *rdev_p, u32 addr, int size) { - PDBG("%s addr 0x%x size %d\n", __func__, addr, size << 6); + pr_debug("%s addr 0x%x size %d\n", __func__, addr, size << 6); gen_pool_free(rdev_p->rqt_pool, (unsigned long)addr, size << 6); } diff --git a/drivers/infiniband/hw/cxgb3/iwch.c b/drivers/infiniband/hw/cxgb3/iwch.c index ba55010ace5c..47b2ce2ef203 100644 --- a/drivers/infiniband/hw/cxgb3/iwch.c +++ b/drivers/infiniband/hw/cxgb3/iwch.c @@ -105,7 +105,7 @@ static void iwch_db_drop_task(struct work_struct *work) static void rnic_init(struct iwch_dev *rnicp) { - PDBG("%s iwch_dev %p\n", __func__, rnicp); + pr_debug("%s iwch_dev %p\n", __func__, rnicp); idr_init(&rnicp->cqidr); idr_init(&rnicp->qpidr); idr_init(&rnicp->mmidr); @@ -145,7 +145,7 @@ static void open_rnic_dev(struct t3cdev *tdev) { struct iwch_dev *rnicp; - PDBG("%s t3cdev %p\n", __func__, tdev); + 
pr_debug("%s t3cdev %p\n", __func__, tdev); pr_info_once("Chelsio T3 RDMA Driver - version %s\n", DRV_VERSION); rnicp = (struct iwch_dev *)ib_alloc_device(sizeof(*rnicp)); if (!rnicp) { @@ -181,7 +181,7 @@ static void open_rnic_dev(struct t3cdev *tdev) static void close_rnic_dev(struct t3cdev *tdev) { struct iwch_dev *dev, *tmp; - PDBG("%s t3cdev %p\n", __func__, tdev); + pr_debug("%s t3cdev %p\n", __func__, tdev); mutex_lock(&dev_mutex); list_for_each_entry_safe(dev, tmp, &dev_list, entry) { if (dev->rdev.t3cdev_p == tdev) { diff --git a/drivers/infiniband/hw/cxgb3/iwch_cm.c b/drivers/infiniband/hw/cxgb3/iwch_cm.c index 4461619329ad..b61630eba912 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_cm.c +++ b/drivers/infiniband/hw/cxgb3/iwch_cm.c @@ -112,9 +112,9 @@ static void connect_reply_upcall(struct iwch_ep *ep, int status); static void start_ep_timer(struct iwch_ep *ep) { - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); if (timer_pending(&ep->timer)) { - PDBG("%s stopped / restarted timer ep %p\n", __func__, ep); + pr_debug("%s stopped / restarted timer ep %p\n", __func__, ep); del_timer_sync(&ep->timer); } else get_ep(&ep->com); @@ -126,7 +126,7 @@ static void start_ep_timer(struct iwch_ep *ep) static void stop_ep_timer(struct iwch_ep *ep) { - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); if (!timer_pending(&ep->timer)) { WARN(1, "%s timer stopped when its not running! ep %p state %u\n", __func__, ep, ep->com.state); @@ -227,13 +227,13 @@ int iwch_resume_tid(struct iwch_ep *ep) static void set_emss(struct iwch_ep *ep, u16 opt) { - PDBG("%s ep %p opt %u\n", __func__, ep, opt); + pr_debug("%s ep %p opt %u\n", __func__, ep, opt); ep->emss = T3C_DATA(ep->com.tdev)->mtus[G_TCPOPT_MSS(opt)] - 40; if (G_TCPOPT_TSTAMP(opt)) ep->emss -= 12; if (ep->emss < 128) ep->emss = 128; - PDBG("emss=%d\n", ep->emss); + pr_debug("emss=%d\n", ep->emss); } static enum iwch_ep_state state_read(struct iwch_ep_common *epc) @@ -257,7 +257,7 @@ static void state_set(struct iwch_ep_common *epc, enum iwch_ep_state new) unsigned long flags; spin_lock_irqsave(&epc->lock, flags); - PDBG("%s - %s -> %s\n", __func__, states[epc->state], states[new]); + pr_debug("%s - %s -> %s\n", __func__, states[epc->state], states[new]); __state_set(epc, new); spin_unlock_irqrestore(&epc->lock, flags); return; @@ -273,7 +273,7 @@ static void *alloc_ep(int size, gfp_t gfp) spin_lock_init(&epc->lock); init_waitqueue_head(&epc->waitq); } - PDBG("%s alloc ep %p\n", __func__, epc); + pr_debug("%s alloc ep %p\n", __func__, epc); return epc; } @@ -282,7 +282,8 @@ void __free_ep(struct kref *kref) struct iwch_ep *ep; ep = container_of(container_of(kref, struct iwch_ep_common, kref), struct iwch_ep, com); - PDBG("%s ep %p state %s\n", __func__, ep, states[state_read(&ep->com)]); + pr_debug("%s ep %p state %s\n", + __func__, ep, states[state_read(&ep->com)]); if (test_bit(RELEASE_RESOURCES, &ep->com.flags)) { cxgb3_remove_tid(ep->com.tdev, (void *)ep, ep->hwtid); dst_release(ep->dst); @@ -293,7 +294,7 @@ void __free_ep(struct kref *kref) static void release_ep_resources(struct iwch_ep *ep) { - PDBG("%s ep %p tid %d\n", __func__, ep, ep->hwtid); + pr_debug("%s ep %p tid %d\n", __func__, ep, ep->hwtid); set_bit(RELEASE_RESOURCES, &ep->com.flags); put_ep(&ep->com); } @@ -358,7 +359,7 @@ static unsigned int find_best_mtu(const struct t3c_data *d, unsigned short mtu) static void arp_failure_discard(struct t3cdev *dev, struct sk_buff *skb) { - PDBG("%s t3cdev %p\n", __func__, dev); + pr_debug("%s 
t3cdev %p\n", __func__, dev); kfree_skb(skb); } @@ -379,7 +380,7 @@ static void abort_arp_failure(struct t3cdev *dev, struct sk_buff *skb) { struct cpl_abort_req *req = cplhdr(skb); - PDBG("%s t3cdev %p\n", __func__, dev); + pr_debug("%s t3cdev %p\n", __func__, dev); req->cmd = CPL_ABORT_NO_RST; iwch_cxgb3_ofld_send(dev, skb); } @@ -389,7 +390,7 @@ static int send_halfclose(struct iwch_ep *ep, gfp_t gfp) struct cpl_close_con_req *req; struct sk_buff *skb; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); skb = get_skb(NULL, sizeof(*req), gfp); if (!skb) { pr_err("%s - failed to alloc skb\n", __func__); @@ -408,7 +409,7 @@ static int send_abort(struct iwch_ep *ep, struct sk_buff *skb, gfp_t gfp) { struct cpl_abort_req *req; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); skb = get_skb(skb, sizeof(*req), gfp); if (!skb) { pr_err("%s - failed to alloc skb\n", __func__); @@ -433,7 +434,7 @@ static int send_connect(struct iwch_ep *ep) unsigned int mtu_idx; int wscale; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); skb = get_skb(NULL, sizeof(*req), GFP_KERNEL); if (!skb) { @@ -476,7 +477,7 @@ static void send_mpa_req(struct iwch_ep *ep, struct sk_buff *skb) struct mpa_message *mpa; int len; - PDBG("%s ep %p pd_len %d\n", __func__, ep, ep->plen); + pr_debug("%s ep %p pd_len %d\n", __func__, ep, ep->plen); BUG_ON(skb_cloned(skb)); @@ -536,7 +537,7 @@ static int send_mpa_reject(struct iwch_ep *ep, const void *pdata, u8 plen) struct mpa_message *mpa; struct sk_buff *skb; - PDBG("%s ep %p plen %d\n", __func__, ep, plen); + pr_debug("%s ep %p plen %d\n", __func__, ep, plen); mpalen = sizeof(*mpa) + plen; @@ -585,7 +586,7 @@ static int send_mpa_reply(struct iwch_ep *ep, const void *pdata, u8 plen) int len; struct sk_buff *skb; - PDBG("%s ep %p plen %d\n", __func__, ep, plen); + pr_debug("%s ep %p plen %d\n", __func__, ep, plen); mpalen = sizeof(*mpa) + plen; @@ -634,7 +635,7 @@ static int act_establish(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) struct cpl_act_establish *req = cplhdr(skb); unsigned int tid = GET_TID(req); - PDBG("%s ep %p tid %d\n", __func__, ep, tid); + pr_debug("%s ep %p tid %d\n", __func__, ep, tid); dst_confirm(ep->dst); @@ -658,7 +659,7 @@ static int act_establish(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) static void abort_connection(struct iwch_ep *ep, struct sk_buff *skb, gfp_t gfp) { - PDBG("%s ep %p\n", __FILE__, ep); + pr_debug("%s ep %p\n", __FILE__, ep); state_set(&ep->com, ABORTING); send_abort(ep, skb, gfp); } @@ -667,12 +668,12 @@ static void close_complete_upcall(struct iwch_ep *ep) { struct iw_cm_event event; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); memset(&event, 0, sizeof(event)); event.event = IW_CM_EVENT_CLOSE; if (ep->com.cm_id) { - PDBG("close complete delivered ep %p cm_id %p tid %d\n", - ep, ep->com.cm_id, ep->hwtid); + pr_debug("close complete delivered ep %p cm_id %p tid %d\n", + ep, ep->com.cm_id, ep->hwtid); ep->com.cm_id->event_handler(ep->com.cm_id, &event); ep->com.cm_id->rem_ref(ep->com.cm_id); ep->com.cm_id = NULL; @@ -684,12 +685,12 @@ static void peer_close_upcall(struct iwch_ep *ep) { struct iw_cm_event event; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); memset(&event, 0, sizeof(event)); event.event = IW_CM_EVENT_DISCONNECT; if (ep->com.cm_id) { - PDBG("peer close delivered ep %p cm_id %p tid %d\n", - ep, ep->com.cm_id, ep->hwtid); + pr_debug("peer close delivered ep %p cm_id %p tid 
%d\n", + ep, ep->com.cm_id, ep->hwtid); ep->com.cm_id->event_handler(ep->com.cm_id, &event); } } @@ -698,13 +699,13 @@ static void peer_abort_upcall(struct iwch_ep *ep) { struct iw_cm_event event; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); memset(&event, 0, sizeof(event)); event.event = IW_CM_EVENT_CLOSE; event.status = -ECONNRESET; if (ep->com.cm_id) { - PDBG("abort delivered ep %p cm_id %p tid %d\n", ep, - ep->com.cm_id, ep->hwtid); + pr_debug("abort delivered ep %p cm_id %p tid %d\n", ep, + ep->com.cm_id, ep->hwtid); ep->com.cm_id->event_handler(ep->com.cm_id, &event); ep->com.cm_id->rem_ref(ep->com.cm_id); ep->com.cm_id = NULL; @@ -716,7 +717,7 @@ static void connect_reply_upcall(struct iwch_ep *ep, int status) { struct iw_cm_event event; - PDBG("%s ep %p status %d\n", __func__, ep, status); + pr_debug("%s ep %p status %d\n", __func__, ep, status); memset(&event, 0, sizeof(event)); event.event = IW_CM_EVENT_CONNECT_REPLY; event.status = status; @@ -730,8 +731,8 @@ static void connect_reply_upcall(struct iwch_ep *ep, int status) event.private_data = ep->mpa_pkt + sizeof(struct mpa_message); } if (ep->com.cm_id) { - PDBG("%s ep %p tid %d status %d\n", __func__, ep, - ep->hwtid, status); + pr_debug("%s ep %p tid %d status %d\n", __func__, ep, + ep->hwtid, status); ep->com.cm_id->event_handler(ep->com.cm_id, &event); } if (status < 0) { @@ -745,7 +746,7 @@ static void connect_request_upcall(struct iwch_ep *ep) { struct iw_cm_event event; - PDBG("%s ep %p tid %d\n", __func__, ep, ep->hwtid); + pr_debug("%s ep %p tid %d\n", __func__, ep, ep->hwtid); memset(&event, 0, sizeof(event)); event.event = IW_CM_EVENT_CONNECT_REQUEST; memcpy(&event.local_addr, &ep->com.local_addr, @@ -774,7 +775,7 @@ static void established_upcall(struct iwch_ep *ep) { struct iw_cm_event event; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); memset(&event, 0, sizeof(event)); event.event = IW_CM_EVENT_ESTABLISHED; /* @@ -783,7 +784,7 @@ static void established_upcall(struct iwch_ep *ep) */ event.ird = event.ord = 8; if (ep->com.cm_id) { - PDBG("%s ep %p tid %d\n", __func__, ep, ep->hwtid); + pr_debug("%s ep %p tid %d\n", __func__, ep, ep->hwtid); ep->com.cm_id->event_handler(ep->com.cm_id, &event); } } @@ -793,7 +794,7 @@ static int update_rx_credits(struct iwch_ep *ep, u32 credits) struct cpl_rx_data_ack *req; struct sk_buff *skb; - PDBG("%s ep %p credits %u\n", __func__, ep, credits); + pr_debug("%s ep %p credits %u\n", __func__, ep, credits); skb = get_skb(NULL, sizeof(*req), GFP_KERNEL); if (!skb) { pr_err("update_rx_credits - cannot alloc skb!\n"); @@ -817,7 +818,7 @@ static void process_mpa_reply(struct iwch_ep *ep, struct sk_buff *skb) enum iwch_qp_attr_mask mask; int err; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); /* * Stop mpa timer. If it expired, then the state has @@ -904,10 +905,10 @@ static void process_mpa_reply(struct iwch_ep *ep, struct sk_buff *skb) ep->mpa_attr.recv_marker_enabled = markers_enabled; ep->mpa_attr.xmit_marker_enabled = mpa->flags & MPA_MARKERS ? 
1 : 0; ep->mpa_attr.version = mpa_rev; - PDBG("%s - crc_enabled=%d, recv_marker_enabled=%d, " - "xmit_marker_enabled=%d, version=%d\n", __func__, - ep->mpa_attr.crc_enabled, ep->mpa_attr.recv_marker_enabled, - ep->mpa_attr.xmit_marker_enabled, ep->mpa_attr.version); + pr_debug("%s - crc_enabled=%d, recv_marker_enabled=%d, xmit_marker_enabled=%d, version=%d\n", + __func__, + ep->mpa_attr.crc_enabled, ep->mpa_attr.recv_marker_enabled, + ep->mpa_attr.xmit_marker_enabled, ep->mpa_attr.version); attrs.mpa_attr = ep->mpa_attr; attrs.max_ird = ep->ird; @@ -942,7 +943,7 @@ static void process_mpa_request(struct iwch_ep *ep, struct sk_buff *skb) struct mpa_message *mpa; u16 plen; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); /* * Stop mpa timer. If it expired, then the state has @@ -962,7 +963,7 @@ static void process_mpa_request(struct iwch_ep *ep, struct sk_buff *skb) return; } - PDBG("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__); + pr_debug("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__); /* * Copy the new data into our accumulation buffer. @@ -977,7 +978,7 @@ static void process_mpa_request(struct iwch_ep *ep, struct sk_buff *skb) */ if (ep->mpa_pkt_len < sizeof(*mpa)) return; - PDBG("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__); + pr_debug("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__); mpa = (struct mpa_message *) ep->mpa_pkt; /* @@ -1027,10 +1028,10 @@ static void process_mpa_request(struct iwch_ep *ep, struct sk_buff *skb) ep->mpa_attr.recv_marker_enabled = markers_enabled; ep->mpa_attr.xmit_marker_enabled = mpa->flags & MPA_MARKERS ? 1 : 0; ep->mpa_attr.version = mpa_rev; - PDBG("%s - crc_enabled=%d, recv_marker_enabled=%d, " - "xmit_marker_enabled=%d, version=%d\n", __func__, - ep->mpa_attr.crc_enabled, ep->mpa_attr.recv_marker_enabled, - ep->mpa_attr.xmit_marker_enabled, ep->mpa_attr.version); + pr_debug("%s - crc_enabled=%d, recv_marker_enabled=%d, xmit_marker_enabled=%d, version=%d\n", + __func__, + ep->mpa_attr.crc_enabled, ep->mpa_attr.recv_marker_enabled, + ep->mpa_attr.xmit_marker_enabled, ep->mpa_attr.version); state_set(&ep->com, MPA_REQ_RCVD); @@ -1045,7 +1046,7 @@ static int rx_data(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) struct cpl_rx_data *hdr = cplhdr(skb); unsigned int dlen = ntohs(hdr->len); - PDBG("%s ep %p dlen %u\n", __func__, ep, dlen); + pr_debug("%s ep %p dlen %u\n", __func__, ep, dlen); skb_pull(skb, sizeof(*hdr)); skb_trim(skb, dlen); @@ -1092,11 +1093,11 @@ static int tx_ack(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) unsigned long flags; int post_zb = 0; - PDBG("%s ep %p credits %u\n", __func__, ep, credits); + pr_debug("%s ep %p credits %u\n", __func__, ep, credits); if (credits == 0) { - PDBG("%s 0 credit ack ep %p state %u\n", - __func__, ep, state_read(&ep->com)); + pr_debug("%s 0 credit ack ep %p state %u\n", + __func__, ep, state_read(&ep->com)); return CPL_RET_BUF_DONE; } @@ -1104,24 +1105,24 @@ static int tx_ack(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) BUG_ON(credits != 1); dst_confirm(ep->dst); if (!ep->mpa_skb) { - PDBG("%s rdma_init wr_ack ep %p state %u\n", - __func__, ep, ep->com.state); + pr_debug("%s rdma_init wr_ack ep %p state %u\n", + __func__, ep, ep->com.state); if (ep->mpa_attr.initiator) { - PDBG("%s initiator ep %p state %u\n", - __func__, ep, ep->com.state); + pr_debug("%s initiator ep %p state %u\n", + __func__, ep, ep->com.state); if (peer2peer && ep->com.state == FPDU_MODE) post_zb = 1; } else { - PDBG("%s responder ep %p state 
%u\n", - __func__, ep, ep->com.state); + pr_debug("%s responder ep %p state %u\n", + __func__, ep, ep->com.state); if (ep->com.state == MPA_REQ_RCVD) { ep->com.rpl_done = 1; wake_up(&ep->com.waitq); } } } else { - PDBG("%s lsm ack ep %p state %u freeing skb\n", - __func__, ep, ep->com.state); + pr_debug("%s lsm ack ep %p state %u freeing skb\n", + __func__, ep, ep->com.state); kfree_skb(ep->mpa_skb); ep->mpa_skb = NULL; } @@ -1137,7 +1138,7 @@ static int abort_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) unsigned long flags; int release = 0; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); BUG_ON(!ep); /* @@ -1180,8 +1181,8 @@ static int act_open_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) struct iwch_ep *ep = ctx; struct cpl_act_open_rpl *rpl = cplhdr(skb); - PDBG("%s ep %p status %u errno %d\n", __func__, ep, rpl->status, - status2errno(rpl->status)); + pr_debug("%s ep %p status %u errno %d\n", __func__, ep, rpl->status, + status2errno(rpl->status)); connect_reply_upcall(ep, status2errno(rpl->status)); state_set(&ep->com, DEAD); if (ep->com.tdev->type != T3A && act_open_has_tid(rpl->status)) @@ -1198,7 +1199,7 @@ static int listen_start(struct iwch_listen_ep *ep) struct sk_buff *skb; struct cpl_pass_open_req *req; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); skb = get_skb(NULL, sizeof(*req), GFP_KERNEL); if (!skb) { pr_err("t3c_listen_start failed to alloc skb!\n"); @@ -1226,8 +1227,8 @@ static int pass_open_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) struct iwch_listen_ep *ep = ctx; struct cpl_pass_open_rpl *rpl = cplhdr(skb); - PDBG("%s ep %p status %d error %d\n", __func__, ep, - rpl->status, status2errno(rpl->status)); + pr_debug("%s ep %p status %d error %d\n", __func__, ep, + rpl->status, status2errno(rpl->status)); ep->com.rpl_err = status2errno(rpl->status); ep->com.rpl_done = 1; wake_up(&ep->com.waitq); @@ -1240,7 +1241,7 @@ static int listen_stop(struct iwch_listen_ep *ep) struct sk_buff *skb; struct cpl_close_listserv_req *req; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); skb = get_skb(NULL, sizeof(*req), GFP_KERNEL); if (!skb) { pr_err("%s - failed to alloc skb\n", __func__); @@ -1260,7 +1261,7 @@ static int close_listsrv_rpl(struct t3cdev *tdev, struct sk_buff *skb, struct iwch_listen_ep *ep = ctx; struct cpl_close_listserv_rpl *rpl = cplhdr(skb); - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); ep->com.rpl_err = status2errno(rpl->status); ep->com.rpl_done = 1; wake_up(&ep->com.waitq); @@ -1274,7 +1275,7 @@ static void accept_cr(struct iwch_ep *ep, __be32 peer_ip, struct sk_buff *skb) u32 opt0h, opt0l, opt2; int wscale; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); BUG_ON(skb_cloned(skb)); skb_trim(skb, sizeof(*rpl)); skb_get(skb); @@ -1308,8 +1309,8 @@ static void accept_cr(struct iwch_ep *ep, __be32 peer_ip, struct sk_buff *skb) static void reject_cr(struct t3cdev *tdev, u32 hwtid, __be32 peer_ip, struct sk_buff *skb) { - PDBG("%s t3cdev %p tid %u peer_ip %x\n", __func__, tdev, hwtid, - peer_ip); + pr_debug("%s t3cdev %p tid %u peer_ip %x\n", __func__, tdev, hwtid, + peer_ip); BUG_ON(skb_cloned(skb)); skb_trim(skb, sizeof(struct cpl_tid_release)); skb_get(skb); @@ -1343,7 +1344,7 @@ static int pass_accept_req(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) struct rtable *rt; struct iff_mac tim; - PDBG("%s parent ep %p tid %u\n", __func__, parent_ep, hwtid); + pr_debug("%s parent ep %p 
tid %u\n", __func__, parent_ep, hwtid); if (state_read(&parent_ep->com) != LISTEN) { pr_err("%s - listening ep not in LISTEN\n", __func__); @@ -1414,7 +1415,7 @@ static int pass_establish(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) struct iwch_ep *ep = ctx; struct cpl_pass_establish *req = cplhdr(skb); - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); ep->snd_seq = ntohl(req->snd_isn); ep->rcv_seq = ntohl(req->rcv_isn); @@ -1435,7 +1436,7 @@ static int peer_close(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) int disconnect = 1; int release = 0; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); dst_confirm(ep->dst); spin_lock_irqsave(&ep->com.lock, flags); @@ -1458,14 +1459,14 @@ static int peer_close(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) __state_set(&ep->com, CLOSING); ep->com.rpl_done = 1; ep->com.rpl_err = -ECONNRESET; - PDBG("waking up ep %p\n", ep); + pr_debug("waking up ep %p\n", ep); wake_up(&ep->com.waitq); break; case MPA_REP_SENT: __state_set(&ep->com, CLOSING); ep->com.rpl_done = 1; ep->com.rpl_err = -ECONNRESET; - PDBG("waking up ep %p\n", ep); + pr_debug("waking up ep %p\n", ep); wake_up(&ep->com.waitq); break; case FPDU_MODE: @@ -1530,8 +1531,8 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) unsigned long flags; if (is_neg_adv_abort(req->status)) { - PDBG("%s neg_adv_abort ep %p tid %d\n", __func__, ep, - ep->hwtid); + pr_debug("%s neg_adv_abort ep %p tid %d\n", __func__, ep, + ep->hwtid); t3_l2t_send_event(ep->com.tdev, ep->l2t); return CPL_RET_BUF_DONE; } @@ -1545,7 +1546,7 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) } spin_lock_irqsave(&ep->com.lock, flags); - PDBG("%s ep %p state %u\n", __func__, ep, ep->com.state); + pr_debug("%s ep %p state %u\n", __func__, ep, ep->com.state); switch (ep->com.state) { case CONNECTING: break; @@ -1559,7 +1560,7 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) case MPA_REP_SENT: ep->com.rpl_done = 1; ep->com.rpl_err = -ECONNRESET; - PDBG("waking up ep %p\n", ep); + pr_debug("waking up ep %p\n", ep); wake_up(&ep->com.waitq); break; case MPA_REQ_RCVD: @@ -1572,7 +1573,7 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) */ ep->com.rpl_done = 1; ep->com.rpl_err = -ECONNRESET; - PDBG("waking up ep %p\n", ep); + pr_debug("waking up ep %p\n", ep); wake_up(&ep->com.waitq); break; case MORIBUND: @@ -1593,7 +1594,7 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) case ABORTING: break; case DEAD: - PDBG("%s PEER_ABORT IN DEAD STATE!!!!\n", __func__); + pr_debug("%s PEER_ABORT IN DEAD STATE!!!!\n", __func__); spin_unlock_irqrestore(&ep->com.lock, flags); return CPL_RET_BUF_DONE; default: @@ -1633,7 +1634,7 @@ static int close_con_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) unsigned long flags; int release = 0; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); BUG_ON(!ep); /* The cm_id may be null if we failed to connect */ @@ -1687,9 +1688,9 @@ static int terminate(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) if (state_read(&ep->com) != FPDU_MODE) return CPL_RET_BUF_DONE; - PDBG("%s ep %p\n", __func__, ep); + pr_debug("%s ep %p\n", __func__, ep); skb_pull(skb, sizeof(struct cpl_rdma_terminate)); - PDBG("%s saving %d bytes of term msg\n", __func__, skb->len); + pr_debug("%s saving %d bytes of term msg\n", __func__, skb->len); skb_copy_from_linear_data(skb, 
ep->com.qp->attr.terminate_buffer, skb->len); ep->com.qp->attr.terminate_msg_len = skb->len; @@ -1702,8 +1703,8 @@ static int ec_status(struct t3cdev *tdev, struct sk_buff *skb, void *ctx) struct cpl_rdma_ec_status *rep = cplhdr(skb); struct iwch_ep *ep = ctx; - PDBG("%s ep %p tid %u status %d\n", __func__, ep, ep->hwtid, - rep->status); + pr_debug("%s ep %p tid %u status %d\n", __func__, ep, ep->hwtid, + rep->status); if (rep->status) { struct iwch_qp_attributes attrs; @@ -1727,8 +1728,8 @@ static void ep_timeout(unsigned long arg) int abort = 1; spin_lock_irqsave(&ep->com.lock, flags); - PDBG("%s ep %p tid %u state %d\n", __func__, ep, ep->hwtid, - ep->com.state); + pr_debug("%s ep %p tid %u state %d\n", __func__, ep, ep->hwtid, + ep->com.state); switch (ep->com.state) { case MPA_REQ_SENT: __state_set(&ep->com, ABORTING); @@ -1762,7 +1763,7 @@ int iwch_reject_cr(struct iw_cm_id *cm_id, const void *pdata, u8 pdata_len) { int err; struct iwch_ep *ep = to_ep(cm_id); - PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid); + pr_debug("%s ep %p tid %u\n", __func__, ep, ep->hwtid); if (state_read(&ep->com) == DEAD) { put_ep(&ep->com); @@ -1788,7 +1789,7 @@ int iwch_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) struct iwch_dev *h = to_iwch_dev(cm_id->device); struct iwch_qp *qp = get_qhp(h, conn_param->qpn); - PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid); + pr_debug("%s ep %p tid %u\n", __func__, ep, ep->hwtid); if (state_read(&ep->com) == DEAD) { err = -ECONNRESET; goto err; @@ -1814,7 +1815,7 @@ int iwch_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) if (peer2peer && ep->ird == 0) ep->ird = 1; - PDBG("%s %d ird %d ord %d\n", __func__, __LINE__, ep->ird, ep->ord); + pr_debug("%s %d ird %d ord %d\n", __func__, __LINE__, ep->ird, ep->ord); /* bind QP to EP and move to RTS */ attrs.mpa_attr = ep->mpa_attr; @@ -1916,8 +1917,8 @@ int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) ep->com.cm_id = cm_id; ep->com.qp = get_qhp(h, conn_param->qpn); BUG_ON(!ep->com.qp); - PDBG("%s qpn 0x%x qp %p cm_id %p\n", __func__, conn_param->qpn, - ep->com.qp, cm_id); + pr_debug("%s qpn 0x%x qp %p cm_id %p\n", __func__, conn_param->qpn, + ep->com.qp, cm_id); /* * Allocate an active TID to initiate a TCP connection. 
@@ -1991,7 +1992,7 @@ int iwch_create_listen(struct iw_cm_id *cm_id, int backlog)
 		err = -ENOMEM;
 		goto fail1;
 	}
-	PDBG("%s ep %p\n", __func__, ep);
+	pr_debug("%s ep %p\n", __func__, ep);
 	ep->com.tdev = h->rdev.t3cdev_p;
 	cm_id->add_ref(cm_id);
 	ep->com.cm_id = cm_id;
@@ -2036,7 +2037,7 @@ int iwch_destroy_listen(struct iw_cm_id *cm_id)
 	int err;
 	struct iwch_listen_ep *ep = to_listen_ep(cm_id);
 
-	PDBG("%s ep %p\n", __func__, ep);
+	pr_debug("%s ep %p\n", __func__, ep);
 	might_sleep();
 	state_set(&ep->com, DEAD);
@@ -2065,8 +2066,8 @@ int iwch_ep_disconnect(struct iwch_ep *ep, int abrupt, gfp_t gfp)
 
 	spin_lock_irqsave(&ep->com.lock, flags);
 
-	PDBG("%s ep %p state %s, abrupt %d\n", __func__, ep,
-	     states[ep->com.state], abrupt);
+	pr_debug("%s ep %p state %s, abrupt %d\n", __func__, ep,
+		 states[ep->com.state], abrupt);
 
 	tdev = (struct t3cdev *)ep->com.tdev;
 	rdev = (struct cxio_rdev *)tdev->ulp;
@@ -2103,8 +2104,8 @@ int iwch_ep_disconnect(struct iwch_ep *ep, int abrupt, gfp_t gfp)
 	case MORIBUND:
 	case ABORTING:
 	case DEAD:
-		PDBG("%s ignoring disconnect ep %p state %u\n",
-		     __func__, ep, ep->com.state);
+		pr_debug("%s ignoring disconnect ep %p state %u\n",
+			 __func__, ep, ep->com.state);
 		break;
 	default:
 		BUG();
@@ -2133,8 +2134,8 @@ int iwch_ep_redirect(void *ctx, struct dst_entry *old, struct dst_entry *new,
 	if (ep->dst != old)
 		return 0;
 
-	PDBG("%s ep %p redirect to dst %p l2t %p\n", __func__, ep, new,
-	     l2t);
+	pr_debug("%s ep %p redirect to dst %p l2t %p\n", __func__, ep, new,
+		 l2t);
 	dst_hold(new);
 	l2t_release(ep->com.tdev, ep->l2t);
 	ep->l2t = l2t;
diff --git a/drivers/infiniband/hw/cxgb3/iwch_cm.h b/drivers/infiniband/hw/cxgb3/iwch_cm.h
index e66e75921797..cc7fe644d260 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_cm.h
+++ b/drivers/infiniband/hw/cxgb3/iwch_cm.h
@@ -53,17 +53,17 @@
 #define MPA_MARKERS		0x80
 #define MPA_FLAGS_MASK		0xE0
 
-#define put_ep(ep) { \
-	PDBG("put_ep (via %s:%u) ep %p refcnt %d\n", __func__, __LINE__, \
-	     ep, kref_read(&((ep)->kref))); \
-	WARN_ON(kref_read(&((ep)->kref)) < 1); \
-	kref_put(&((ep)->kref), __free_ep); \
+#define put_ep(ep) {						\
+	pr_debug("put_ep (via %s:%u) ep %p refcnt %d\n",	\
+		 __func__, __LINE__, ep, kref_read(&((ep)->kref))); \
+	WARN_ON(kref_read(&((ep)->kref)) < 1);			\
+	kref_put(&((ep)->kref), __free_ep);			\
 }
 
-#define get_ep(ep) { \
-	PDBG("get_ep (via %s:%u) ep %p, refcnt %d\n", __func__, __LINE__, \
-	     ep, kref_read(&((ep)->kref))); \
-	kref_get(&((ep)->kref)); \
+#define get_ep(ep) {						\
+	pr_debug("get_ep (via %s:%u) ep %p, refcnt %d\n",	\
+		 __func__, __LINE__, ep, kref_read(&((ep)->kref))); \
+	kref_get(&((ep)->kref));				\
 }
 
 struct mpa_message {
diff --git a/drivers/infiniband/hw/cxgb3/iwch_cq.c b/drivers/infiniband/hw/cxgb3/iwch_cq.c
index e97120378d63..dd5348e48806 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_cq.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_cq.c
@@ -67,8 +67,8 @@ static int iwch_poll_cq_one(struct iwch_dev *rhp, struct iwch_cq *chp,
 	ret = cxio_poll_cq(wq, &(chp->cq), &cqe, &cqe_flushed, &cookie,
 			   &credit);
 	if (t3a_device(chp->rhp) && credit) {
-		PDBG("%s updating %d cq credits on id %d\n", __func__,
-		     credit, chp->cq.cqid);
+		pr_debug("%s updating %d cq credits on id %d\n", __func__,
+			 credit, chp->cq.cqid);
 		cxio_hal_cq_op(&rhp->rdev, &chp->cq, CQ_CREDIT_UPDATE, credit);
 	}
@@ -83,11 +83,11 @@ static int iwch_poll_cq_one(struct iwch_dev *rhp, struct iwch_cq *chp,
 	wc->vendor_err = CQE_STATUS(cqe);
 	wc->wc_flags = 0;
 
-	PDBG("%s qpid 0x%x type %d opcode %d status 0x%x wrid hi 0x%x "
-	     "lo 0x%x cookie 0x%llx\n", __func__,
-	     CQE_QPID(cqe), CQE_TYPE(cqe),
-	     CQE_OPCODE(cqe), CQE_STATUS(cqe), CQE_WRID_HI(cqe),
-	     CQE_WRID_LOW(cqe), (unsigned long long) cookie);
+	pr_debug("%s qpid 0x%x type %d opcode %d status 0x%x wrid hi 0x%x lo 0x%x cookie 0x%llx\n",
+		 __func__,
+		 CQE_QPID(cqe), CQE_TYPE(cqe),
+		 CQE_OPCODE(cqe), CQE_STATUS(cqe), CQE_WRID_HI(cqe),
+		 CQE_WRID_LOW(cqe), (unsigned long long)cookie);
 
 	if (CQE_TYPE(cqe) == 0) {
 		if (!CQE_STATUS(cqe))
diff --git a/drivers/infiniband/hw/cxgb3/iwch_ev.c b/drivers/infiniband/hw/cxgb3/iwch_ev.c
index e8ecaeff2745..4a0c82a8fb60 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_ev.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_ev.c
@@ -61,9 +61,10 @@ static void post_qp_event(struct iwch_dev *rnicp, struct iwch_cq *chp,
 
 	if ((qhp->attr.state == IWCH_QP_STATE_ERROR) ||
 	    (qhp->attr.state == IWCH_QP_STATE_TERMINATE)) {
-		PDBG("%s AE received after RTS - "
-		     "qp state %d qpid 0x%x status 0x%x\n", __func__,
-		     qhp->attr.state, qhp->wq.qpid, CQE_STATUS(rsp_msg->cqe));
+		pr_debug("%s AE received after RTS - qp state %d qpid 0x%x status 0x%x\n",
+			 __func__,
+			 qhp->attr.state, qhp->wq.qpid,
+			 CQE_STATUS(rsp_msg->cqe));
 		spin_unlock(&rnicp->lock);
 		return;
 	}
@@ -136,12 +137,12 @@ void iwch_ev_dispatch(struct cxio_rdev *rdev_p, struct sk_buff *skb)
 	if ((CQE_OPCODE(rsp_msg->cqe) == T3_TERMINATE) &&
 	    (CQE_STATUS(rsp_msg->cqe) == 0)) {
 		if (SQ_TYPE(rsp_msg->cqe)) {
-			PDBG("%s QPID 0x%x ep %p disconnecting\n",
-			     __func__, qhp->wq.qpid, qhp->ep);
+			pr_debug("%s QPID 0x%x ep %p disconnecting\n",
+				 __func__, qhp->wq.qpid, qhp->ep);
 			iwch_ep_disconnect(qhp->ep, 0, GFP_ATOMIC);
 		} else {
-			PDBG("%s post REQ_ERR AE QPID 0x%x\n", __func__,
-			     qhp->wq.qpid);
+			pr_debug("%s post REQ_ERR AE QPID 0x%x\n", __func__,
+				 qhp->wq.qpid);
 			post_qp_event(rnicp, chp, rsp_msg,
 				      IB_EVENT_QP_REQ_ERR, 0);
 			iwch_ep_disconnect(qhp->ep, 0, GFP_ATOMIC);
diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c
index 1d04c872c9d5..12886b1b4b10 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_mem.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c
@@ -48,7 +48,7 @@ static int iwch_finish_mem_reg(struct iwch_mr *mhp, u32 stag)
 	mhp->attr.stag = stag;
 	mmid = stag >> 8;
 	mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
-	PDBG("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
+	pr_debug("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
 	return insert_handle(mhp->rhp, &mhp->rhp->mmidr, mhp, mmid);
 }
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index ce7408ae7e6c..2f00ecc34678 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -103,7 +103,7 @@ static int iwch_dealloc_ucontext(struct ib_ucontext *context)
 	struct iwch_ucontext *ucontext = to_iwch_ucontext(context);
 	struct iwch_mm_entry *mm, *tmp;
 
-	PDBG("%s context %p\n", __func__, context);
+	pr_debug("%s context %p\n", __func__, context);
 	list_for_each_entry_safe(mm, tmp, &ucontext->mmaps, entry)
 		kfree(mm);
 	cxio_release_ucontext(&rhp->rdev, &ucontext->uctx);
@@ -117,7 +117,7 @@ static struct ib_ucontext *iwch_alloc_ucontext(struct ib_device *ibdev,
 	struct iwch_ucontext *context;
 	struct iwch_dev *rhp = to_iwch_dev(ibdev);
 
-	PDBG("%s ibdev %p\n", __func__, ibdev);
+	pr_debug("%s ibdev %p\n", __func__, ibdev);
 	context = kzalloc(sizeof(*context), GFP_KERNEL);
 	if (!context)
 		return ERR_PTR(-ENOMEM);
@@ -131,7 +131,7 @@ static int iwch_destroy_cq(struct ib_cq *ib_cq)
 {
 	struct iwch_cq *chp;
 
-	PDBG("%s ib_cq %p\n", __func__, ib_cq);
+	pr_debug("%s ib_cq %p\n", __func__, ib_cq);
 	chp = to_iwch_cq(ib_cq);
 
 	remove_handle(chp->rhp, &chp->rhp->cqidr, chp->cq.cqid);
@@ -157,7 +157,7 @@ static struct ib_cq *iwch_create_cq(struct ib_device *ibdev,
 	static int warned;
 	size_t resplen;
 
-	PDBG("%s ib_dev %p entries %d\n", __func__, ibdev, entries);
+	pr_debug("%s ib_dev %p entries %d\n", __func__, ibdev, entries);
 	if (attr->flags)
 		return ERR_PTR(-EINVAL);
@@ -245,9 +245,9 @@ static struct ib_cq *iwch_create_cq(struct ib_device *ibdev,
 		}
 		insert_mmap(ucontext, mm);
 	}
-	PDBG("created cqid 0x%0x chp %p size 0x%0x, dma_addr 0x%0llx\n",
-	     chp->cq.cqid, chp, (1 << chp->cq.size_log2),
-	     (unsigned long long) chp->cq.dma_addr);
+	pr_debug("created cqid 0x%0x chp %p size 0x%0x, dma_addr 0x%0llx\n",
+		 chp->cq.cqid, chp, (1 << chp->cq.size_log2),
+		 (unsigned long long)chp->cq.dma_addr);
 	return &chp->ibcq;
 }
@@ -258,7 +258,7 @@ static int iwch_resize_cq(struct ib_cq *cq, int cqe, struct ib_udata *udata)
 	struct t3_cq oldcq, newcq;
 	int ret;
 
-	PDBG("%s ib_cq %p cqe %d\n", __func__, cq, cqe);
+	pr_debug("%s ib_cq %p cqe %d\n", __func__, cq, cqe);
 
 	/* We don't downsize... */
 	if (cqe <= cq->cqe)
@@ -340,7 +340,7 @@ static int iwch_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
 		chp->cq.rptr = rptr;
 	} else
 		spin_lock_irqsave(&chp->lock, flag);
-	PDBG("%s rptr 0x%x\n", __func__, chp->cq.rptr);
+	pr_debug("%s rptr 0x%x\n", __func__, chp->cq.rptr);
 	err = cxio_hal_cq_op(&rhp->rdev, &chp->cq, cq_op, 0);
 	spin_unlock_irqrestore(&chp->lock, flag);
 	if (err < 0)
@@ -360,8 +360,8 @@ static int iwch_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
 	struct iwch_ucontext *ucontext;
 	u64 addr;
 
-	PDBG("%s pgoff 0x%lx key 0x%x len %d\n", __func__, vma->vm_pgoff,
-	     key, len);
+	pr_debug("%s pgoff 0x%lx key 0x%x len %d\n", __func__, vma->vm_pgoff,
+		 key, len);
 
 	if (vma->vm_start & (PAGE_SIZE-1)) {
 		return -EINVAL;
@@ -413,7 +413,7 @@ static int iwch_deallocate_pd(struct ib_pd *pd)
 
 	php = to_iwch_pd(pd);
 	rhp = php->rhp;
-	PDBG("%s ibpd %p pdid 0x%x\n", __func__, pd, php->pdid);
+	pr_debug("%s ibpd %p pdid 0x%x\n", __func__, pd, php->pdid);
 	cxio_hal_put_pdid(rhp->rdev.rscp, php->pdid);
 	kfree(php);
 	return 0;
@@ -427,7 +427,7 @@ static struct ib_pd *iwch_allocate_pd(struct ib_device *ibdev,
 	u32 pdid;
 	struct iwch_dev *rhp;
 
-	PDBG("%s ibdev %p\n", __func__, ibdev);
+	pr_debug("%s ibdev %p\n", __func__, ibdev);
 	rhp = (struct iwch_dev *) ibdev;
 	pdid = cxio_hal_get_pdid(rhp->rdev.rscp);
 	if (!pdid)
@@ -445,7 +445,7 @@ static struct ib_pd *iwch_allocate_pd(struct ib_device *ibdev,
 			return ERR_PTR(-EFAULT);
 		}
 	}
-	PDBG("%s pdid 0x%0x ptr 0x%p\n", __func__, pdid, php);
+	pr_debug("%s pdid 0x%0x ptr 0x%p\n", __func__, pdid, php);
 	return &php->ibpd;
 }
@@ -455,7 +455,7 @@ static int iwch_dereg_mr(struct ib_mr *ib_mr)
 	struct iwch_mr *mhp;
 	u32 mmid;
 
-	PDBG("%s ib_mr %p\n", __func__, ib_mr);
+	pr_debug("%s ib_mr %p\n", __func__, ib_mr);
 
 	mhp = to_iwch_mr(ib_mr);
 	kfree(mhp->pages);
@@ -469,7 +469,7 @@ static int iwch_dereg_mr(struct ib_mr *ib_mr)
 		kfree((void *) (unsigned long) mhp->kva);
 	if (mhp->umem)
 		ib_umem_release(mhp->umem);
-	PDBG("%s mmid 0x%x ptr %p\n", __func__, mmid, mhp);
+	pr_debug("%s mmid 0x%x ptr %p\n", __func__, mmid, mhp);
 	kfree(mhp);
 	return 0;
 }
@@ -484,7 +484,7 @@ static struct ib_mr *iwch_get_dma_mr(struct ib_pd *pd, int acc)
 	__be64 *page_list;
 	int shift = 26, npages, ret, i;
 
-	PDBG("%s ib_pd %p\n", __func__, pd);
+	pr_debug("%s ib_pd %p\n", __func__, pd);
 
 	/*
 	 * T3 only supports 32 bits of size.
@@ -515,8 +515,8 @@ static struct ib_mr *iwch_get_dma_mr(struct ib_pd *pd, int acc)
 	for (i = 0; i < npages; i++)
 		page_list[i] = cpu_to_be64((u64)i << shift);
 
-	PDBG("%s mask 0x%llx shift %d len %lld pbl_size %d\n",
-	     __func__, mask, shift, total_size, npages);
+	pr_debug("%s mask 0x%llx shift %d len %lld pbl_size %d\n",
+		 __func__, mask, shift, total_size, npages);
 
 	ret = iwch_alloc_pbl(mhp, npages);
 	if (ret) {
@@ -564,7 +564,7 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	struct iwch_mr *mhp;
 	struct iwch_reg_user_mr_resp uresp;
 	struct scatterlist *sg;
-	PDBG("%s ib_pd %p\n", __func__, pd);
+	pr_debug("%s ib_pd %p\n", __func__, pd);
 
 	php = to_iwch_pd(pd);
 	rhp = php->rhp;
@@ -634,8 +634,8 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	if (udata && !t3a_device(rhp)) {
 		uresp.pbl_addr = (mhp->attr.pbl_addr -
 				 rhp->rdev.rnic_info.pbl_base) >> 3;
-		PDBG("%s user resp pbl_addr 0x%x\n", __func__,
-		     uresp.pbl_addr);
+		pr_debug("%s user resp pbl_addr 0x%x\n", __func__,
+			 uresp.pbl_addr);
 
 		if (ib_copy_to_udata(udata, &uresp, sizeof (uresp))) {
 			iwch_dereg_mr(&mhp->ibmr);
@@ -689,7 +689,7 @@ static struct ib_mw *iwch_alloc_mw(struct ib_pd *pd, enum ib_mw_type type,
 		kfree(mhp);
 		return ERR_PTR(-ENOMEM);
 	}
-	PDBG("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
+	pr_debug("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
 	return &(mhp->ibmw);
 }
@@ -704,7 +704,7 @@ static int iwch_dealloc_mw(struct ib_mw *mw)
 	mmid = (mw->rkey) >> 8;
 	cxio_deallocate_window(&rhp->rdev, mhp->attr.stag);
 	remove_handle(rhp, &rhp->mmidr, mmid);
-	PDBG("%s ib_mw %p mmid 0x%x ptr %p\n", __func__, mw, mmid, mhp);
+	pr_debug("%s ib_mw %p mmid 0x%x ptr %p\n", __func__, mw, mmid, mhp);
 	kfree(mhp);
 	return 0;
 }
@@ -754,7 +754,7 @@ static struct ib_mr *iwch_alloc_mr(struct ib_pd *pd,
 	if (insert_handle(rhp, &rhp->mmidr, mhp, mmid))
 		goto err3;
 
-	PDBG("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
+	pr_debug("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
 	return &(mhp->ibmr);
 err3:
 	cxio_dereg_mem(&rhp->rdev, stag, mhp->attr.pbl_size,
@@ -815,8 +815,8 @@ static int iwch_destroy_qp(struct ib_qp *ib_qp)
 	cxio_destroy_qp(&rhp->rdev, &qhp->wq,
 			ucontext ? &ucontext->uctx : &rhp->rdev.uctx);
 
-	PDBG("%s ib_qp %p qpid 0x%0x qhp %p\n", __func__,
-	     ib_qp, qhp->wq.qpid, qhp);
+	pr_debug("%s ib_qp %p qpid 0x%0x qhp %p\n", __func__,
+		 ib_qp, qhp->wq.qpid, qhp);
 	kfree(qhp);
 	return 0;
 }
@@ -834,7 +834,7 @@ static struct ib_qp *iwch_create_qp(struct ib_pd *pd,
 	int wqsize, sqsize, rqsize;
 	struct iwch_ucontext *ucontext;
 
-	PDBG("%s ib_pd %p\n", __func__, pd);
+	pr_debug("%s ib_pd %p\n", __func__, pd);
 	if (attrs->qp_type != IB_QPT_RC)
 		return ERR_PTR(-EINVAL);
 	php = to_iwch_pd(pd);
@@ -875,8 +875,8 @@ static struct ib_qp *iwch_create_qp(struct ib_pd *pd,
 	if (!ucontext && wqsize < (rqsize + (2 * sqsize)))
 		wqsize = roundup_pow_of_two(rqsize +
 				roundup_pow_of_two(attrs->cap.max_send_wr * 2));
-	PDBG("%s wqsize %d sqsize %d rqsize %d\n", __func__,
-	     wqsize, sqsize, rqsize);
+	pr_debug("%s wqsize %d sqsize %d rqsize %d\n", __func__,
+		 wqsize, sqsize, rqsize);
 	qhp = kzalloc(sizeof(*qhp), GFP_KERNEL);
 	if (!qhp)
 		return ERR_PTR(-ENOMEM);
@@ -971,11 +971,10 @@ static struct ib_qp *iwch_create_qp(struct ib_pd *pd,
 	}
 	qhp->ibqp.qp_num = qhp->wq.qpid;
 	init_timer(&(qhp->timer));
-	PDBG("%s sq_num_entries %d, rq_num_entries %d "
-	     "qpid 0x%0x qhp %p dma_addr 0x%llx size %d rq_addr 0x%x\n",
-	     __func__, qhp->attr.sq_num_entries, qhp->attr.rq_num_entries,
-	     qhp->wq.qpid, qhp, (unsigned long long) qhp->wq.dma_addr,
-	     1 << qhp->wq.size_log2, qhp->wq.rq_addr);
+	pr_debug("%s sq_num_entries %d, rq_num_entries %d qpid 0x%0x qhp %p dma_addr 0x%llx size %d rq_addr 0x%x\n",
+		 __func__, qhp->attr.sq_num_entries, qhp->attr.rq_num_entries,
+		 qhp->wq.qpid, qhp, (unsigned long long)qhp->wq.dma_addr,
+		 1 << qhp->wq.size_log2, qhp->wq.rq_addr);
 	return &qhp->ibqp;
 }
@@ -987,7 +986,7 @@ static int iwch_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 	enum iwch_qp_attr_mask mask = 0;
 	struct iwch_qp_attributes attrs;
 
-	PDBG("%s ib_qp %p\n", __func__, ibqp);
+	pr_debug("%s ib_qp %p\n", __func__, ibqp);
 
 	/* iwarp does not support the RTR state */
 	if ((attr_mask & IB_QP_STATE) && (attr->qp_state == IB_QPS_RTR))
@@ -1020,20 +1019,20 @@ static int iwch_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 
 void iwch_qp_add_ref(struct ib_qp *qp)
 {
-	PDBG("%s ib_qp %p\n", __func__, qp);
+	pr_debug("%s ib_qp %p\n", __func__, qp);
 	atomic_inc(&(to_iwch_qp(qp)->refcnt));
 }
 
 void iwch_qp_rem_ref(struct ib_qp *qp)
 {
-	PDBG("%s ib_qp %p\n", __func__, qp);
+	pr_debug("%s ib_qp %p\n", __func__, qp);
 	if (atomic_dec_and_test(&(to_iwch_qp(qp)->refcnt)))
 		wake_up(&(to_iwch_qp(qp)->wait));
 }
 
 static struct ib_qp *iwch_get_qp(struct ib_device *dev, int qpn)
 {
-	PDBG("%s ib_dev %p qpn 0x%x\n", __func__, dev, qpn);
+	pr_debug("%s ib_dev %p qpn 0x%x\n", __func__, dev, qpn);
 	return (struct ib_qp *)get_qhp(to_iwch_dev(dev), qpn);
 }
@@ -1041,7 +1040,7 @@ static struct ib_qp *iwch_get_qp(struct ib_device *dev, int qpn)
 static int iwch_query_pkey(struct ib_device *ibdev,
 			   u8 port, u16 index, u16 * pkey)
 {
-	PDBG("%s ibdev %p\n", __func__, ibdev);
+	pr_debug("%s ibdev %p\n", __func__, ibdev);
 	*pkey = 0;
 	return 0;
 }
@@ -1051,8 +1050,8 @@ static int iwch_query_gid(struct ib_device *ibdev, u8 port,
 {
 	struct iwch_dev *dev;
 
-	PDBG("%s ibdev %p, port %d, index %d, gid %p\n",
-	     __func__, ibdev, port, index, gid);
+	pr_debug("%s ibdev %p, port %d, index %d, gid %p\n",
+		 __func__, ibdev, port, index, gid);
 	dev = to_iwch_dev(ibdev);
 	BUG_ON(port == 0 || port > 2);
 	memset(&(gid->raw[0]), 0, sizeof(gid->raw));
@@ -1087,7 +1086,7 @@ static int iwch_query_device(struct ib_device *ibdev, struct ib_device_attr *pro
 
 	struct iwch_dev *dev;
 
-	PDBG("%s ibdev %p\n", __func__, ibdev);
+	pr_debug("%s ibdev %p\n", __func__, ibdev);
 
 	if (uhw->inlen || uhw->outlen)
 		return -EINVAL;
@@ -1125,7 +1124,7 @@ static int iwch_query_port(struct ib_device *ibdev,
 	struct net_device *netdev;
 	struct in_device *inetdev;
 
-	PDBG("%s ibdev %p\n", __func__, ibdev);
+	pr_debug("%s ibdev %p\n", __func__, ibdev);
 
 	dev = to_iwch_dev(ibdev);
 	netdev = dev->rdev.port_info.lldevs[port-1];
@@ -1168,7 +1167,7 @@ static ssize_t show_rev(struct device *dev, struct device_attribute *attr,
 {
 	struct iwch_dev *iwch_dev = container_of(dev, struct iwch_dev,
 						 ibdev.dev);
-	PDBG("%s dev 0x%p\n", __func__, dev);
+	pr_debug("%s dev 0x%p\n", __func__, dev);
 	return sprintf(buf, "%d\n", iwch_dev->rdev.t3cdev_p->type);
 }
@@ -1180,7 +1179,7 @@ static ssize_t show_hca(struct device *dev, struct device_attribute *attr,
 	struct ethtool_drvinfo info;
 	struct net_device *lldev = iwch_dev->rdev.t3cdev_p->lldev;
 
-	PDBG("%s dev 0x%p\n", __func__, dev);
+	pr_debug("%s dev 0x%p\n", __func__, dev);
 	lldev->ethtool_ops->get_drvinfo(lldev, &info);
 	return sprintf(buf, "%s\n", info.driver);
 }
@@ -1190,7 +1189,7 @@ static ssize_t show_board(struct device *dev, struct device_attribute *attr,
 {
 	struct iwch_dev *iwch_dev = container_of(dev, struct iwch_dev,
 						 ibdev.dev);
-	PDBG("%s dev 0x%p\n", __func__, dev);
+	pr_debug("%s dev 0x%p\n", __func__, dev);
 	return sprintf(buf, "%x.%x\n", iwch_dev->rdev.rnic_info.pdev->vendor,
 		       iwch_dev->rdev.rnic_info.pdev->device);
 }
@@ -1275,7 +1274,7 @@ static int iwch_get_mib(struct ib_device *ibdev, struct rdma_hw_stats *stats,
 	if (port != 0 || !stats)
 		return -ENOSYS;
 
-	PDBG("%s ibdev %p\n", __func__, ibdev);
+	pr_debug("%s ibdev %p\n", __func__, ibdev);
 	dev = to_iwch_dev(ibdev);
 	ret = dev->rdev.t3cdev_p->ctl(dev->rdev.t3cdev_p, RDMA_GET_MIB, &m);
 	if (ret)
@@ -1345,7 +1344,7 @@ static void get_dev_fw_ver_str(struct ib_device *ibdev, char *str,
 	struct ethtool_drvinfo info;
 	struct net_device *lldev = iwch_dev->rdev.t3cdev_p->lldev;
 
-	PDBG("%s dev 0x%p\n", __func__, iwch_dev);
+	pr_debug("%s dev 0x%p\n", __func__, iwch_dev);
 	lldev->ethtool_ops->get_drvinfo(lldev, &info);
 	snprintf(str, str_len, "%s", info.fw_version);
 }
@@ -1355,7 +1354,7 @@ int iwch_register_device(struct iwch_dev *dev)
 	int ret;
 	int i;
 
-	PDBG("%s iwch_dev %p\n", __func__, dev);
+	pr_debug("%s iwch_dev %p\n", __func__, dev);
 	strlcpy(dev->ibdev.name, "cxgb3_%d", IB_DEVICE_NAME_MAX);
 	memset(&dev->ibdev.node_guid, 0, sizeof(dev->ibdev.node_guid));
 	memcpy(&dev->ibdev.node_guid, dev->rdev.t3cdev_p->lldev->dev_addr, 6);
@@ -1466,7 +1465,7 @@ void iwch_unregister_device(struct iwch_dev *dev)
 {
 	int i;
 
-	PDBG("%s iwch_dev %p\n", __func__, dev);
+	pr_debug("%s iwch_dev %p\n", __func__, dev);
 	for (i = 0; i < ARRAY_SIZE(iwch_class_attributes); ++i)
 		device_remove_file(&dev->ibdev.dev,
 				   iwch_class_attributes[i]);
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.h b/drivers/infiniband/hw/cxgb3/iwch_provider.h
index 252c464a09f6..9e216edec4c0 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.h
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.h
@@ -217,8 +217,9 @@ static inline struct iwch_mm_entry *remove_mmap(struct iwch_ucontext *ucontext,
 		if (mm->key == key && mm->len == len) {
 			list_del_init(&mm->entry);
 			spin_unlock(&ucontext->mmap_lock);
-			PDBG("%s key 0x%x addr 0x%llx len %d\n", __func__,
-			     key, (unsigned long long) mm->addr, mm->len);
+			pr_debug("%s key 0x%x addr 0x%llx len %d\n",
+				 __func__, key,
+				 (unsigned long long)mm->addr, mm->len);
 			return mm;
 		}
 	}
@@ -230,8 +231,8 @@ static inline void insert_mmap(struct iwch_ucontext *ucontext,
 			       struct iwch_mm_entry *mm)
 {
 	spin_lock(&ucontext->mmap_lock);
-	PDBG("%s key 0x%x addr 0x%llx len %d\n", __func__,
-	     mm->key, (unsigned long long) mm->addr, mm->len);
+	pr_debug("%s key 0x%x addr 0x%llx len %d\n",
+		 __func__, mm->key, (unsigned long long)mm->addr, mm->len);
 	list_add_tail(&mm->entry, &ucontext->mmaps);
 	spin_unlock(&ucontext->mmap_lock);
 }
diff --git a/drivers/infiniband/hw/cxgb3/iwch_qp.c b/drivers/infiniband/hw/cxgb3/iwch_qp.c
index 405a96b1d215..ba6d5d281b03 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_qp.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_qp.c
@@ -208,30 +208,30 @@ static int iwch_sgl2pbl_map(struct iwch_dev *rhp, struct ib_sge *sg_list,
 		mhp = get_mhp(rhp, (sg_list[i].lkey) >> 8);
 		if (!mhp) {
-			PDBG("%s %d\n", __func__, __LINE__);
+			pr_debug("%s %d\n", __func__, __LINE__);
 			return -EIO;
 		}
 		if (!mhp->attr.state) {
-			PDBG("%s %d\n", __func__, __LINE__);
+			pr_debug("%s %d\n", __func__, __LINE__);
 			return -EIO;
 		}
 		if (mhp->attr.zbva) {
-			PDBG("%s %d\n", __func__, __LINE__);
+			pr_debug("%s %d\n", __func__, __LINE__);
 			return -EIO;
 		}
 
 		if (sg_list[i].addr < mhp->attr.va_fbo) {
-			PDBG("%s %d\n", __func__, __LINE__);
+			pr_debug("%s %d\n", __func__, __LINE__);
 			return -EINVAL;
 		}
 		if (sg_list[i].addr + ((u64) sg_list[i].length) <
 		    sg_list[i].addr) {
-			PDBG("%s %d\n", __func__, __LINE__);
+			pr_debug("%s %d\n", __func__, __LINE__);
 			return -EINVAL;
 		}
 		if (sg_list[i].addr + ((u64) sg_list[i].length) >
 		    mhp->attr.va_fbo + ((u64) mhp->attr.len)) {
-			PDBG("%s %d\n", __func__, __LINE__);
+			pr_debug("%s %d\n", __func__, __LINE__);
 			return -EINVAL;
 		}
 		offset = sg_list[i].addr - mhp->attr.va_fbo;
@@ -427,8 +427,8 @@ int iwch_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 			err = build_inv_stag(wqe, wr, &t3_wr_flit_cnt);
 			break;
 		default:
-			PDBG("%s post of type=%d TBD!\n", __func__,
-			     wr->opcode);
+			pr_debug("%s post of type=%d TBD!\n", __func__,
+				 wr->opcode);
 			err = -EINVAL;
 		}
 		if (err)
@@ -444,10 +444,10 @@ int iwch_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 			       Q_GENBIT(qhp->wq.wptr, qhp->wq.size_log2),
 			       0, t3_wr_flit_cnt,
 			       (wr_cnt == 1) ? T3_SOPEOP : T3_SOP);
-		PDBG("%s cookie 0x%llx wq idx 0x%x swsq idx %ld opcode %d\n",
-		     __func__, (unsigned long long) wr->wr_id, idx,
-		     Q_PTR2IDX(qhp->wq.sq_wptr, qhp->wq.sq_size_log2),
-		     sqp->opcode);
+		pr_debug("%s cookie 0x%llx wq idx 0x%x swsq idx %ld opcode %d\n",
+			 __func__, (unsigned long long)wr->wr_id, idx,
+			 Q_PTR2IDX(qhp->wq.sq_wptr, qhp->wq.sq_size_log2),
+			 sqp->opcode);
 		wr = wr->next;
 		num_wrs--;
 		qhp->wq.wptr += wr_cnt;
@@ -508,9 +508,9 @@ int iwch_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
 		build_fw_riwrh((void *) wqe, T3_WR_RCV, T3_COMPLETION_FLAG,
 			       Q_GENBIT(qhp->wq.wptr, qhp->wq.size_log2), 0,
 			       sizeof(struct t3_receive_wr) >> 3, T3_SOPEOP);
-		PDBG("%s cookie 0x%llx idx 0x%x rq_wptr 0x%x rw_rptr 0x%x "
-		     "wqe %p \n", __func__, (unsigned long long) wr->wr_id,
-		     idx, qhp->wq.rq_wptr, qhp->wq.rq_rptr, wqe);
+		pr_debug("%s cookie 0x%llx idx 0x%x rq_wptr 0x%x rw_rptr 0x%x wqe %p\n",
+			 __func__, (unsigned long long)wr->wr_id,
+			 idx, qhp->wq.rq_wptr, qhp->wq.rq_rptr, wqe);
 		++(qhp->wq.rq_wptr);
 		++(qhp->wq.wptr);
 		wr = wr->next;
@@ -664,7 +664,7 @@ int iwch_post_zb_read(struct iwch_ep *ep)
 	struct sk_buff *skb;
 	u8 flit_cnt = sizeof(struct t3_rdma_read_wr) >> 3;
 
-	PDBG("%s enter\n", __func__);
+	pr_debug("%s enter\n", __func__);
 	skb = alloc_skb(40, GFP_KERNEL);
 	if (!skb) {
 		pr_err("%s cannot send zb_read!!\n", __func__);
@@ -696,7 +696,7 @@ int iwch_post_terminate(struct iwch_qp *qhp, struct respQ_msg_t *rsp_msg)
 	struct terminate_message *term;
 	struct sk_buff *skb;
 
-	PDBG("%s %d\n", __func__, __LINE__);
+	pr_debug("%s %d\n", __func__, __LINE__);
 	skb = alloc_skb(40, GFP_ATOMIC);
 	if (!skb) {
 		pr_err("%s cannot send TERMINATE!\n", __func__);
@@ -729,7 +729,7 @@ static void __flush_qp(struct iwch_qp *qhp, struct iwch_cq *rchp,
 	int flushed;
 
 
-	PDBG("%s qhp %p rchp %p schp %p\n", __func__, qhp, rchp, schp);
+	pr_debug("%s qhp %p rchp %p schp %p\n", __func__, qhp, rchp, schp);
 	/* take a ref on the qhp since we must release the lock */
 	atomic_inc(&qhp->refcnt);
 	spin_unlock(&qhp->lock);
@@ -807,7 +807,7 @@ u16 iwch_rqes_posted(struct iwch_qp *qhp)
 		count++;
 		wqe++;
 	}
 
-	PDBG("%s qhp %p count %u\n", __func__, qhp, count);
+	pr_debug("%s qhp %p count %u\n", __func__, qhp, count);
 	return count;
 }
@@ -854,12 +854,12 @@ static int rdma_init(struct iwch_dev *rhp, struct iwch_qp *qhp,
 	} else
 		init_attr.rtr_type = 0;
 	init_attr.irs = qhp->ep->rcv_seq;
-	PDBG("%s init_attr.rq_addr 0x%x init_attr.rq_size = %d "
-	     "flags 0x%x qpcaps 0x%x\n", __func__,
-	     init_attr.rq_addr, init_attr.rq_size,
-	     init_attr.flags, init_attr.qpcaps);
+	pr_debug("%s init_attr.rq_addr 0x%x init_attr.rq_size = %d flags 0x%x qpcaps 0x%x\n",
+		 __func__,
+		 init_attr.rq_addr, init_attr.rq_size,
+		 init_attr.flags, init_attr.qpcaps);
 	ret = cxio_rdma_init(&rhp->rdev, &init_attr);
-	PDBG("%s ret %d\n", __func__, ret);
+	pr_debug("%s ret %d\n", __func__, ret);
 	return ret;
 }
@@ -877,9 +877,9 @@ int iwch_modify_qp(struct iwch_dev *rhp, struct iwch_qp *qhp,
 	int free = 0;
 	struct iwch_ep *ep = NULL;
 
-	PDBG("%s qhp %p qpid 0x%x ep %p state %d -> %d\n", __func__,
-	     qhp, qhp->wq.qpid, qhp->ep, qhp->attr.state,
-	     (mask & IWCH_QP_ATTR_NEXT_STATE) ? attrs->next_state : -1);
+	pr_debug("%s qhp %p qpid 0x%x ep %p state %d -> %d\n", __func__,
+		 qhp, qhp->wq.qpid, qhp->ep, qhp->attr.state,
+		 (mask & IWCH_QP_ATTR_NEXT_STATE) ? attrs->next_state : -1);
 
 	spin_lock_irqsave(&qhp->lock, flag);
@@ -1041,8 +1041,8 @@ int iwch_modify_qp(struct iwch_dev *rhp, struct iwch_qp *qhp,
 	}
 	goto out;
 err:
-	PDBG("%s disassociating ep %p qpid 0x%x\n", __func__, qhp->ep,
-	     qhp->wq.qpid);
+	pr_debug("%s disassociating ep %p qpid 0x%x\n", __func__, qhp->ep,
+		 qhp->wq.qpid);
 
 	/* disassociate the LLP connection */
 	qhp->attr.llp_stream_handle = NULL;
@@ -1076,6 +1076,6 @@ int iwch_modify_qp(struct iwch_dev *rhp, struct iwch_qp *qhp,
 	if (free)
 		put_ep(&ep->com);
 
-	PDBG("%s exit state %d\n", __func__, qhp->attr.state);
+	pr_debug("%s exit state %d\n", __func__, qhp->attr.state);
 	return ret;
 }
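
A practical note on the conversion (not part of the patch): unlike the driver-private
PDBG wrapper, pr_debug() is the generic kernel facility, so these call sites compile
away unless DEBUG is defined for the file, and with CONFIG_DYNAMIC_DEBUG they can be
toggled per line or per module at run time through
/sys/kernel/debug/dynamic_debug/control (for example a "module iw_cxgb3 +p" control
line, assuming the usual iw_cxgb3 module name).  A minimal C sketch of the call style
used throughout the series, with the enable mechanism noted in the comment, purely
for illustration:

	/*
	 * Illustrative sketch only -- not taken from this patch.
	 * pr_debug() needs no driver-local indirection: it is a no-op
	 * unless DEBUG is defined at build time, and with
	 * CONFIG_DYNAMIC_DEBUG individual call sites can be enabled via
	 * /sys/kernel/debug/dynamic_debug/control.
	 */
	#include <linux/printk.h>

	static inline void example_trace(void *ep)
	{
		/* same "%s <object> %p" convention the driver uses */
		pr_debug("%s ep %p\n", __func__, ep);
	}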