From patchwork Mon Jan 8 19:00:43 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10150357
X-Patchwork-Delegate: dledford@redhat.com
From: Bart Van Assche
To: Jason Gunthorpe
Cc: Doug Ledford, linux-rdma@vger.kernel.org, Bart Van Assche
Subject: [PATCH 04/12] IB/srpt: Rename a local variable, a member variable and a constant
Date: Mon, 8 Jan 2018 11:00:43 -0800
Message-Id: <20180108190051.32008-5-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180108190051.32008-1-bart.vanassche@wdc.com>
References: <20180108190051.32008-1-bart.vanassche@wdc.com>

Rename rsp_size into max_rsp_size and SRPT_RQ_SIZE into MAX_SRPT_RQ_SIZE.
The new names better reflect the role of this member variable and
constant. Since the prefix "srp_" is superfluous in the context of the
function that creates an RDMA channel, rename srp_sq_size into sq_size.
Signed-off-by: Bart Van Assche
---
 drivers/infiniband/ulp/srpt/ib_srpt.c | 26 +++++++++++++-------------
 drivers/infiniband/ulp/srpt/ib_srpt.h |  6 +++---
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 1bcf8b3b6095..4fb884c070af 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -325,7 +325,7 @@ static void srpt_get_ioc(struct srpt_port *sport, u32 slot,
         if (sdev->use_srq)
                 send_queue_depth = sdev->srq_size;
         else
-                send_queue_depth = min(SRPT_RQ_SIZE,
+                send_queue_depth = min(MAX_SRPT_RQ_SIZE,
                                        sdev->device->attrs.max_qp_wr);
 
         memset(iocp, 0, sizeof(*iocp));
@@ -1693,7 +1693,7 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
         struct srpt_port *sport = ch->sport;
         struct srpt_device *sdev = sport->sdev;
         const struct ib_device_attr *attrs = &sdev->device->attrs;
-        u32 srp_sq_size = sport->port_attrib.srp_sq_size;
+        int sq_size = sport->port_attrib.srp_sq_size;
         int i, ret;
 
         WARN_ON(ch->rq_size < 1);
@@ -1704,12 +1704,12 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
                 goto out;
 
 retry:
-        ch->cq = ib_alloc_cq(sdev->device, ch, ch->rq_size + srp_sq_size,
+        ch->cq = ib_alloc_cq(sdev->device, ch, ch->rq_size + sq_size,
                         0 /* XXX: spread CQs */, IB_POLL_WORKQUEUE);
         if (IS_ERR(ch->cq)) {
                 ret = PTR_ERR(ch->cq);
                 pr_err("failed to create CQ cqe= %d ret= %d\n",
-                       ch->rq_size + srp_sq_size, ret);
+                       ch->rq_size + sq_size, ret);
                 goto out;
         }
 
@@ -1727,8 +1727,8 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
          * both both, as RDMA contexts will also post completions for the
          * RDMA READ case.
          */
-        qp_init->cap.max_send_wr = min(srp_sq_size / 2, attrs->max_qp_wr + 0U);
-        qp_init->cap.max_rdma_ctxs = srp_sq_size / 2;
+        qp_init->cap.max_send_wr = min(sq_size / 2, attrs->max_qp_wr);
+        qp_init->cap.max_rdma_ctxs = sq_size / 2;
         qp_init->cap.max_send_sge = min(attrs->max_sge, SRPT_MAX_SG_PER_WQE);
         qp_init->port_num = ch->sport->port;
         if (sdev->use_srq) {
@@ -1742,8 +1742,8 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
         if (IS_ERR(ch->qp)) {
                 ret = PTR_ERR(ch->qp);
                 if (ret == -ENOMEM) {
-                        srp_sq_size /= 2;
-                        if (srp_sq_size >= MIN_SRPT_SQ_SIZE) {
+                        sq_size /= 2;
+                        if (sq_size >= MIN_SRPT_SQ_SIZE) {
                                 ib_destroy_cq(ch->cq);
                                 goto retry;
                         }
@@ -1950,7 +1950,7 @@ static void srpt_release_channel_work(struct work_struct *w)
 
         srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
                              ch->sport->sdev, ch->rq_size,
-                             ch->rsp_size, DMA_TO_DEVICE);
+                             ch->max_rsp_size, DMA_TO_DEVICE);
 
         srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_recv_ring,
                              sdev, ch->rq_size,
@@ -2098,16 +2098,16 @@ static int srpt_cm_req_recv(struct ib_cm_id *cm_id,
          * depth to avoid that the initiator driver has to report QUEUE_FULL
          * to the SCSI mid-layer.
          */
-        ch->rq_size = min(SRPT_RQ_SIZE, sdev->device->attrs.max_qp_wr);
+        ch->rq_size = min(MAX_SRPT_RQ_SIZE, sdev->device->attrs.max_qp_wr);
         spin_lock_init(&ch->spinlock);
         ch->state = CH_CONNECTING;
         INIT_LIST_HEAD(&ch->cmd_wait_list);
-        ch->rsp_size = ch->sport->port_attrib.srp_max_rsp_size;
+        ch->max_rsp_size = ch->sport->port_attrib.srp_max_rsp_size;
 
         ch->ioctx_ring = (struct srpt_send_ioctx **)
                 srpt_alloc_ioctx_ring(ch->sport->sdev, ch->rq_size,
                                       sizeof(*ch->ioctx_ring[0]),
-                                      ch->rsp_size, DMA_TO_DEVICE);
+                                      ch->max_rsp_size, DMA_TO_DEVICE);
         if (!ch->ioctx_ring)
                 goto free_ch;
 
@@ -2235,7 +2235,7 @@ static int srpt_cm_req_recv(struct ib_cm_id *cm_id,
 free_ring:
         srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
                              ch->sport->sdev, ch->rq_size,
-                             ch->rsp_size, DMA_TO_DEVICE);
+                             ch->max_rsp_size, DMA_TO_DEVICE);
 free_ch:
         kfree(ch);
 
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
index 0c94cd6a7e9d..8f90a7ca98e6 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
@@ -114,7 +114,7 @@ enum {
 
         MIN_SRPT_SQ_SIZE = 16,
         DEF_SRPT_SQ_SIZE = 4096,
-        SRPT_RQ_SIZE = 128,
+        MAX_SRPT_RQ_SIZE = 128,
         MIN_SRPT_SRQ_SIZE = 4,
         DEFAULT_SRPT_SRQ_SIZE = 4095,
         MAX_SRPT_SRQ_SIZE = 65535,
@@ -248,7 +248,7 @@ enum rdma_ch_state {
  * @zw_cqe:    Zero-length write CQE.
  * @kref:      kref for this channel.
  * @rq_size:   IB receive queue size.
- * @rsp_size:  IB response message size in bytes.
+ * @max_rsp_size: Maximum size of an RSP response message in bytes.
  * @sq_wr_avail: number of work requests available in the send queue.
  * @sport:     pointer to the information of the HCA port used by this
  *             channel.
@@ -280,7 +280,7 @@ struct srpt_rdma_ch {
         struct ib_cqe           zw_cqe;
         struct kref             kref;
         int                     rq_size;
-        u32                     rsp_size;
+        u32                     max_rsp_size;
         atomic_t                sq_wr_avail;
         struct srpt_port        *sport;
         u8                      i_port_id[16];
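
As background for reviewers: the sq_size local renamed above feeds the retry loop in
srpt_create_ch_ib(), which halves the requested send-queue size and tries again whenever
CQ/QP allocation fails with -ENOMEM, giving up only once the size drops below
MIN_SRPT_SQ_SIZE. A minimal standalone userspace sketch of that pattern follows; the
alloc_queues() helper and its 1024-entry limit are hypothetical stand-ins for
ib_alloc_cq()/ib_create_qp() and the device limits, not the driver's actual API.

#include <errno.h>
#include <stdio.h>

#define MIN_SQ_SIZE 16

/* Hypothetical allocator standing in for ib_alloc_cq()/ib_create_qp():
 * pretend anything larger than 1024 total entries exceeds the device limits. */
static int alloc_queues(int rq_size, int sq_size)
{
        return (rq_size + sq_size > 1024) ? -ENOMEM : 0;
}

/* Retry-with-halved-send-queue pattern, as in srpt_create_ch_ib(). */
static int create_channel(int rq_size, int sq_size)
{
        int ret;

        for (;;) {
                ret = alloc_queues(rq_size, sq_size);
                if (ret != -ENOMEM)
                        return ret;     /* success, or a non-recoverable error */
                sq_size /= 2;
                if (sq_size < MIN_SQ_SIZE)
                        return ret;     /* give up below the minimum queue size */
                printf("retrying with sq_size=%d\n", sq_size);
        }
}

int main(void)
{
        /* 128 matches MAX_SRPT_RQ_SIZE, 4096 matches DEF_SRPT_SQ_SIZE. */
        return create_channel(128, 4096) ? 1 : 0;
}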