From patchwork Wed Jan  3 21:39:22 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10143479
X-Patchwork-Delegate: dledford@redhat.com
From: Bart Van Assche
To: Jason Gunthorpe
Cc: Doug Ledford, linux-rdma@vger.kernel.org, Bart Van Assche
Subject: [PATCH 12/28] IB/srpt: Rename a local variable, a member variable and a constant
Date: Wed,  3 Jan 2018 13:39:22 -0800
Message-Id: <20180103213938.11664-13-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180103213938.11664-1-bart.vanassche@wdc.com>
References: <20180103213938.11664-1-bart.vanassche@wdc.com>
Sender: linux-rdma-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-rdma@vger.kernel.org

Rename the rsp_size member variable to max_rsp_size and the SRPT_RQ_SIZE
constant to MAX_SRPT_RQ_SIZE. The new names better reflect the role of this
member variable and constant. Since the "srp_" prefix is superfluous inside
the function that creates an RDMA channel, also rename the local variable
srp_sq_size to sq_size.
Signed-off-by: Bart Van Assche
---
 drivers/infiniband/ulp/srpt/ib_srpt.c | 26 +++++++++++++-------------
 drivers/infiniband/ulp/srpt/ib_srpt.h |  6 +++---
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 943fec0d0548..6cff7a9b4cbb 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -314,7 +314,7 @@ static void srpt_get_ioc(struct srpt_port *sport, u32 slot,
 	if (sdev->use_srq)
 		send_queue_depth = sdev->srq_size;
 	else
-		send_queue_depth = min(SRPT_RQ_SIZE,
+		send_queue_depth = min(MAX_SRPT_RQ_SIZE,
 				       sdev->device->attrs.max_qp_wr);
 
 	memset(iocp, 0, sizeof(*iocp));
@@ -1628,7 +1628,7 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 	struct srpt_port *sport = ch->sport;
 	struct srpt_device *sdev = sport->sdev;
 	const struct ib_device_attr *attrs = &sdev->device->attrs;
-	u32 srp_sq_size = sport->port_attrib.srp_sq_size;
+	int sq_size = sport->port_attrib.srp_sq_size;
 	int i, ret;
 
 	WARN_ON(ch->rq_size < 1);
@@ -1639,12 +1639,12 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 		goto out;
 
 retry:
-	ch->cq = ib_alloc_cq(sdev->device, ch, ch->rq_size + srp_sq_size,
+	ch->cq = ib_alloc_cq(sdev->device, ch, ch->rq_size + sq_size,
 			0 /* XXX: spread CQs */, IB_POLL_WORKQUEUE);
 	if (IS_ERR(ch->cq)) {
 		ret = PTR_ERR(ch->cq);
 		pr_err("failed to create CQ cqe= %d ret= %d\n",
-		       ch->rq_size + srp_sq_size, ret);
+		       ch->rq_size + sq_size, ret);
 		goto out;
 	}
 
@@ -1662,8 +1662,8 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 	 * both both, as RDMA contexts will also post completions for the
 	 * RDMA READ case.
 	 */
-	qp_init->cap.max_send_wr = min(srp_sq_size / 2, attrs->max_qp_wr + 0U);
-	qp_init->cap.max_rdma_ctxs = srp_sq_size / 2;
+	qp_init->cap.max_send_wr = min(sq_size / 2, attrs->max_qp_wr);
+	qp_init->cap.max_rdma_ctxs = sq_size / 2;
 	qp_init->cap.max_send_sge = min(attrs->max_sge, SRPT_MAX_SG_PER_WQE);
 	qp_init->port_num = ch->sport->port;
 	if (sdev->use_srq) {
@@ -1677,8 +1677,8 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 	if (IS_ERR(ch->qp)) {
 		ret = PTR_ERR(ch->qp);
 		if (ret == -ENOMEM) {
-			srp_sq_size /= 2;
-			if (srp_sq_size >= MIN_SRPT_SQ_SIZE) {
+			sq_size /= 2;
+			if (sq_size >= MIN_SRPT_SQ_SIZE) {
 				ib_destroy_cq(ch->cq);
 				goto retry;
 			}
@@ -1894,7 +1894,7 @@ static void srpt_release_channel_work(struct work_struct *w)
 
 	srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
 			     ch->sport->sdev, ch->rq_size,
-			     ch->rsp_size, DMA_TO_DEVICE);
+			     ch->max_rsp_size, DMA_TO_DEVICE);
 
 	srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_recv_ring,
 			     sdev, ch->rq_size,
@@ -2038,16 +2038,16 @@ static int srpt_cm_req_recv(struct ib_cm_id *cm_id,
 	 * depth to avoid that the initiator driver has to report QUEUE_FULL
 	 * to the SCSI mid-layer.
 	 */
-	ch->rq_size = min(SRPT_RQ_SIZE, sdev->device->attrs.max_qp_wr);
+	ch->rq_size = min(MAX_SRPT_RQ_SIZE, sdev->device->attrs.max_qp_wr);
 	spin_lock_init(&ch->spinlock);
 	ch->state = CH_CONNECTING;
 	INIT_LIST_HEAD(&ch->cmd_wait_list);
-	ch->rsp_size = ch->sport->port_attrib.srp_max_rsp_size;
+	ch->max_rsp_size = ch->sport->port_attrib.srp_max_rsp_size;
 
 	ch->ioctx_ring = (struct srpt_send_ioctx **)
 		srpt_alloc_ioctx_ring(ch->sport->sdev, ch->rq_size,
 				      sizeof(*ch->ioctx_ring[0]),
-				      ch->rsp_size, DMA_TO_DEVICE);
+				      ch->max_rsp_size, DMA_TO_DEVICE);
 	if (!ch->ioctx_ring)
 		goto free_ch;
 
@@ -2175,7 +2175,7 @@ static int srpt_cm_req_recv(struct ib_cm_id *cm_id,
 free_ring:
 	srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
 			     ch->sport->sdev, ch->rq_size,
-			     ch->rsp_size, DMA_TO_DEVICE);
+			     ch->max_rsp_size, DMA_TO_DEVICE);
 free_ch:
 	kfree(ch);
 
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
index 3cf917720b35..ac76825152f1 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
@@ -114,7 +114,7 @@ enum {
 
 	MIN_SRPT_SQ_SIZE = 16,
 	DEF_SRPT_SQ_SIZE = 4096,
-	SRPT_RQ_SIZE = 128,
+	MAX_SRPT_RQ_SIZE = 128,
 	MIN_SRPT_SRQ_SIZE = 4,
 	DEFAULT_SRPT_SRQ_SIZE = 4095,
 	MAX_SRPT_SRQ_SIZE = 65535,
@@ -249,7 +249,7 @@ enum rdma_ch_state {
  * @rcu: RCU head.
  * @kref: kref for this channel.
  * @rq_size: IB receive queue size.
- * @rsp_size: IB response message size in bytes.
+ * @max_rsp_size: Maximum size of an RSP response message in bytes.
 * @sq_wr_avail: number of work requests available in the send queue.
 * @sport: pointer to the information of the HCA port used by this
 *         channel.
@@ -281,7 +281,7 @@ struct srpt_rdma_ch {
 	struct rcu_head		rcu;
 	struct kref		kref;
 	int			rq_size;
-	u32			rsp_size;
+	u32			max_rsp_size;
 	atomic_t		sq_wr_avail;
 	struct srpt_port	*sport;
 	u8			i_port_id[16];
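
The sq_size local that this patch renames drives a halve-and-retry loop in
srpt_create_ch_ib(): when QP creation fails with -ENOMEM, the requested send
queue size is cut in half and the allocation is retried until the size drops
below MIN_SRPT_SQ_SIZE. The standalone userspace sketch below only illustrates
that pattern; alloc_qp() and its 512-WR failure threshold are made-up stand-ins
for ib_create_qp() and the device limits, not part of the driver.

/*
 * Illustrative sketch of the halve-and-retry pattern (not driver code).
 * In the real driver the CQ is destroyed and re-created with the smaller
 * size before each retry.
 */
#include <errno.h>
#include <stdio.h>

enum {
	MIN_SRPT_SQ_SIZE = 16,   /* same lower bound as in ib_srpt.h */
	DEF_SRPT_SQ_SIZE = 4096, /* same default as in ib_srpt.h */
};

/* Stub standing in for QP allocation: pretend anything above 512 WRs fails. */
static int alloc_qp(int sq_size)
{
	return sq_size > 512 ? -ENOMEM : 0;
}

static int create_channel(int sq_size)
{
	int ret;

retry:
	ret = alloc_qp(sq_size);
	if (ret == -ENOMEM) {
		/* Halve the send queue size and retry until the minimum. */
		sq_size /= 2;
		if (sq_size >= MIN_SRPT_SQ_SIZE)
			goto retry;
	}
	if (ret == 0)
		printf("created channel with sq_size = %d\n", sq_size);
	return ret;
}

int main(void)
{
	return create_channel(DEF_SRPT_SQ_SIZE) ? 1 : 0;
}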