From patchwork Mon Dec 17 21:20:39 2018
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 10734313
X-Patchwork-Delegate: dledford@redhat.com
From: Bart Van Assche
To: Jason Gunthorpe
Cc: Doug Ledford, linux-rdma@vger.kernel.org, Bart Van Assche,
    Sergey Gorenko, Max Gurtovoy, Laurence Oberman
Subject: [PATCH v2 08/15] RDMA/srp: Add support for immediate data
Date: Mon, 17 Dec 2018 13:20:39 -0800
Message-Id: <20181217212046.71017-9-bvanassche@acm.org>
In-Reply-To: <20181217212046.71017-1-bvanassche@acm.org>
References: <20181217212046.71017-1-bvanassche@acm.org>

Request permission to send immediate data during login. If the SRP target
grants this request, send the payload of write requests <= 8 KB as
immediate data.

Cc: Sergey Gorenko
Cc: Max Gurtovoy
Cc: Laurence Oberman
Signed-off-by: Bart Van Assche
---
 drivers/infiniband/ulp/srp/ib_srp.c | 91 ++++++++++++++++++++++++-----
 drivers/infiniband/ulp/srp/ib_srp.h | 12 ++++
 2 files changed, 89 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index de53bbf91c62..5f3ef4e6c3fd 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -132,6 +132,15 @@ MODULE_PARM_DESC(dev_loss_tmo,
 		 " if fast_io_fail_tmo has not been set. \"off\" means that"
 		 " this functionality is disabled.");
 
+static bool srp_use_imm_data = true;
+module_param_named(use_imm_data, srp_use_imm_data, bool, 0644);
+MODULE_PARM_DESC(use_imm_data,
+		 "Whether or not to request permission to use immediate data during SRP login.");
+
+static unsigned int srp_max_imm_data = 8 * 1024;
+module_param_named(max_imm_data, srp_max_imm_data, uint, 0644);
+MODULE_PARM_DESC(max_imm_data, "Maximum immediate data size.");
+
 static unsigned ch_count;
 module_param(ch_count, uint, 0444);
 MODULE_PARM_DESC(ch_count,
@@ -573,7 +582,7 @@ static int srp_create_ch_ib(struct srp_rdma_ch *ch)
 	init_attr->cap.max_send_wr = m * target->queue_size;
 	init_attr->cap.max_recv_wr = target->queue_size + 1;
 	init_attr->cap.max_recv_sge = 1;
-	init_attr->cap.max_send_sge = 1;
+	init_attr->cap.max_send_sge = SRP_MAX_SGE;
 	init_attr->sq_sig_type = IB_SIGNAL_REQ_WR;
 	init_attr->qp_type = IB_QPT_RC;
 	init_attr->send_cq = send_cq;
@@ -858,6 +867,10 @@ static int srp_send_req(struct srp_rdma_ch *ch, uint32_t max_iu_len,
 					      SRP_BUF_FORMAT_INDIRECT);
 	req->ib_req.req_flags = (multich ?
 				 SRP_MULTICHAN_MULTI : SRP_MULTICHAN_SINGLE);
+	if (srp_use_imm_data) {
+		req->ib_req.req_flags |= SRP_IMMED_REQUESTED;
+		req->ib_req.imm_data_offset = cpu_to_be16(SRP_IMM_DATA_OFFSET);
+	}
 
 	if (target->using_rdma_cm) {
 		req->rdma_param.flow_control = req->ib_param.flow_control;
@@ -874,6 +887,7 @@ static int srp_send_req(struct srp_rdma_ch *ch, uint32_t max_iu_len,
 		req->rdma_req.req_it_iu_len = req->ib_req.req_it_iu_len;
 		req->rdma_req.req_buf_fmt = req->ib_req.req_buf_fmt;
 		req->rdma_req.req_flags = req->ib_req.req_flags;
+		req->rdma_req.imm_data_offset = req->ib_req.imm_data_offset;
 
 		ipi = req->rdma_req.initiator_port_id;
 		tpi = req->rdma_req.target_port_id;
@@ -1347,12 +1361,16 @@ static void srp_terminate_io(struct srp_rport *rport)
 }
 
 /* Calculate maximum initiator to target information unit length. */
-static uint32_t srp_max_it_iu_len(int cmd_sg_cnt)
+static uint32_t srp_max_it_iu_len(int cmd_sg_cnt, bool use_imm_data)
 {
 	uint32_t max_iu_len = sizeof(struct srp_cmd) + SRP_MAX_ADD_CDB_LEN +
 		sizeof(struct srp_indirect_buf) +
 		cmd_sg_cnt * sizeof(struct srp_direct_buf);
 
+	if (use_imm_data)
+		max_iu_len = max(max_iu_len, SRP_IMM_DATA_OFFSET +
+				 srp_max_imm_data);
+
 	return max_iu_len;
 }
 
@@ -1369,7 +1387,8 @@ static int srp_rport_reconnect(struct srp_rport *rport)
 {
 	struct srp_target_port *target = rport->lld_data;
 	struct srp_rdma_ch *ch;
-	uint32_t max_iu_len = srp_max_it_iu_len(target->cmd_sg_cnt);
+	uint32_t max_iu_len = srp_max_it_iu_len(target->cmd_sg_cnt,
+						srp_use_imm_data);
 	int i, j, ret = 0;
 	bool multich = false;
 
@@ -1777,23 +1796,27 @@ static void srp_check_mapping(struct srp_map_state *state,
  * @req: SRP request
  *
  * Returns the length in bytes of the SRP_CMD IU or a negative value if
- * mapping failed.
+ * mapping failed. The size of any immediate data is not included in the
+ * return value.
  */
 static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 			struct srp_request *req)
 {
 	struct srp_target_port *target = ch->target;
-	struct scatterlist *scat;
+	struct scatterlist *scat, *sg;
 	struct srp_cmd *cmd = req->cmd->buf;
-	int len, nents, count, ret;
+	int i, len, nents, count, ret;
 	struct srp_device *dev;
 	struct ib_device *ibdev;
 	struct srp_map_state state;
 	struct srp_indirect_buf *indirect_hdr;
+	u64 data_len;
 	u32 idb_len, table_len;
 	__be32 idb_rkey;
 	u8 fmt;
 
+	req->cmd->num_sge = 1;
+
 	if (!scsi_sglist(scmnd) || scmnd->sc_data_direction == DMA_NONE)
 		return sizeof(struct srp_cmd) + cmd->add_cdb_len;
 
@@ -1807,6 +1830,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 
 	nents = scsi_sg_count(scmnd);
 	scat = scsi_sglist(scmnd);
+	data_len = scsi_bufflen(scmnd);
 
 	dev = target->srp_host->srp_dev;
 	ibdev = dev->dev;
@@ -1815,6 +1839,28 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 	if (unlikely(count == 0))
 		return -EIO;
 
+	if (ch->use_imm_data &&
+	    count <= SRP_MAX_IMM_SGE &&
+	    SRP_IMM_DATA_OFFSET + data_len <= ch->max_it_iu_len &&
+	    scmnd->sc_data_direction == DMA_TO_DEVICE) {
+		struct srp_imm_buf *buf;
+		struct ib_sge *sge = &req->cmd->sge[1];
+
+		fmt = SRP_DATA_DESC_IMM;
+		len = SRP_IMM_DATA_OFFSET;
+		req->nmdesc = 0;
+		buf = (void *)cmd->add_data + cmd->add_cdb_len;
+		buf->len = cpu_to_be32(data_len);
+		WARN_ON_ONCE((void *)(buf + 1) > (void *)cmd + len);
+		for_each_sg(scat, sg, count, i) {
+			sge[i].addr = ib_sg_dma_address(ibdev, sg);
+			sge[i].length = ib_sg_dma_len(ibdev, sg);
+			sge[i].lkey = target->lkey;
+		}
+		req->cmd->num_sge += count;
+		goto map_complete;
+	}
+
 	fmt = SRP_DATA_DESC_DIRECT;
 	len = sizeof(struct srp_cmd) + cmd->add_cdb_len +
 		sizeof(struct srp_direct_buf);
@@ -2018,22 +2064,30 @@ static void srp_send_done(struct ib_cq *cq, struct ib_wc *wc)
 	list_add(&iu->list, &ch->free_tx);
 }
 
+/**
+ * srp_post_send() - send an SRP information unit
+ * @ch: RDMA channel over which to send the information unit.
+ * @iu: Information unit to send.
+ * @len: Length of the information unit excluding immediate data.
+ */
 static int srp_post_send(struct srp_rdma_ch *ch, struct srp_iu *iu, int len)
 {
 	struct srp_target_port *target = ch->target;
-	struct ib_sge list;
 	struct ib_send_wr wr;
 
-	list.addr = iu->dma;
-	list.length = len;
-	list.lkey = target->lkey;
+	if (WARN_ON_ONCE(iu->num_sge > SRP_MAX_SGE))
+		return -EINVAL;
+
+	iu->sge[0].addr = iu->dma;
+	iu->sge[0].length = len;
+	iu->sge[0].lkey = target->lkey;
 
 	iu->cqe.done = srp_send_done;
 
 	wr.next = NULL;
 	wr.wr_cqe = &iu->cqe;
-	wr.sg_list = &list;
-	wr.num_sge = 1;
+	wr.sg_list = &iu->sge[0];
+	wr.num_sge = iu->num_sge;
 	wr.opcode = IB_WR_SEND;
 	wr.send_flags = IB_SEND_SIGNALED;
 
@@ -2146,6 +2200,7 @@ static int srp_response_common(struct srp_rdma_ch *ch, s32 req_delta,
 		return 1;
 	}
 
+	iu->num_sge = 1;
 	ib_dma_sync_single_for_cpu(dev, iu->dma, len, DMA_TO_DEVICE);
 	memcpy(iu->buf, rsp, len);
 	ib_dma_sync_single_for_device(dev, iu->dma, len, DMA_TO_DEVICE);
@@ -2500,10 +2555,16 @@ static void srp_cm_rep_handler(struct ib_cm_id *cm_id,
 	if (lrsp->opcode == SRP_LOGIN_RSP) {
 		ch->max_ti_iu_len = be32_to_cpu(lrsp->max_ti_iu_len);
 		ch->req_lim = be32_to_cpu(lrsp->req_lim_delta);
-		ch->max_it_iu_len = srp_max_it_iu_len(target->cmd_sg_cnt);
+		ch->use_imm_data = lrsp->rsp_flags & SRP_LOGIN_RSP_IMMED_SUPP;
+		ch->max_it_iu_len = srp_max_it_iu_len(target->cmd_sg_cnt,
+						      ch->use_imm_data);
 		WARN_ON_ONCE(ch->max_it_iu_len >
 			     be32_to_cpu(lrsp->max_it_iu_len));
 
+		if (ch->use_imm_data)
+			shost_printk(KERN_DEBUG, target->scsi_host,
+				     PFX "using immediate data\n");
+
 		/*
 		 * Reserve credits for task management so we don't
 		 * bounce requests back to the SCSI mid-layer.
@@ -2891,6 +2952,8 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
 		return -1;
 	}
 
+	iu->num_sge = 1;
+
 	ib_dma_sync_single_for_cpu(dev, iu->dma, sizeof *tsk_mgmt,
 				   DMA_TO_DEVICE);
 	tsk_mgmt = iu->buf;
@@ -3856,7 +3919,7 @@ static ssize_t srp_create_target(struct device *dev,
 	target->mr_per_cmd = mr_per_cmd;
 	target->indirect_size = target->sg_tablesize *
 				sizeof (struct srp_direct_buf);
-	max_iu_len = srp_max_it_iu_len(target->cmd_sg_cnt);
+	max_iu_len = srp_max_it_iu_len(target->cmd_sg_cnt, srp_use_imm_data);
 
 	INIT_WORK(&target->tl_err_work, srp_tl_err_work);
 	INIT_WORK(&target->remove_work, srp_remove_work);
diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
index 9a271ae6573b..b2861cd2087a 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.h
+++ b/drivers/infiniband/ulp/srp/ib_srp.h
@@ -69,6 +69,15 @@ enum {
 	SRP_MAX_PAGES_PER_MR	= 512,
 
 	SRP_MAX_ADD_CDB_LEN	= 16,
+
+	SRP_MAX_IMM_SGE		= 2,
+	SRP_MAX_SGE		= SRP_MAX_IMM_SGE + 1,
+	/*
+	 * Choose the immediate data offset such that a 32 byte CDB still fits.
+	 */
+	SRP_IMM_DATA_OFFSET	= sizeof(struct srp_cmd) +
+				  SRP_MAX_ADD_CDB_LEN +
+				  sizeof(struct srp_imm_buf),
 };
 
 enum srp_target_state {
@@ -152,6 +161,7 @@ struct srp_rdma_ch {
 	};
 	uint32_t		max_it_iu_len;
 	uint32_t		max_ti_iu_len;
+	bool			use_imm_data;
 
 	/* Everything above this point is used in the hot path of
 	 * command processing. Try to keep them packed into cachelines.
@@ -263,6 +273,8 @@ struct srp_iu {
 	void		       *buf;
 	size_t			size;
 	enum dma_data_direction direction;
+	u32			num_sge;
+	struct ib_sge		sge[SRP_MAX_SGE];
 	struct ib_cqe		cqe;
 };
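
For readers who want the sizing and eligibility rules in isolation, the
standalone C sketch below mirrors the arithmetic added by this patch in
srp_max_it_iu_len() and the test added in srp_map_data(). It is not part of
the patch: the *_SIZE constants are stand-ins for sizeof() of the SRP wire
structures in <scsi/srp.h>, and only the logic is taken from the diff above.

/*
 * Standalone sketch (not part of the patch): IU sizing and immediate-data
 * eligibility, mirroring srp_max_it_iu_len() and srp_map_data().
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SRP_CMD_SIZE		48	/* stand-in for sizeof(struct srp_cmd) */
#define SRP_IMM_BUF_SIZE	4	/* stand-in for sizeof(struct srp_imm_buf) */
#define SRP_INDIRECT_BUF_SIZE	20	/* stand-in for sizeof(struct srp_indirect_buf) */
#define SRP_DIRECT_BUF_SIZE	16	/* stand-in for sizeof(struct srp_direct_buf) */
#define SRP_MAX_ADD_CDB_LEN	16
#define SRP_MAX_IMM_SGE		2

/* Immediate data starts after the additional CDB bytes and the srp_imm_buf
 * header, so a 32-byte CDB still fits in front of it. */
#define SRP_IMM_DATA_OFFSET	(SRP_CMD_SIZE + SRP_MAX_ADD_CDB_LEN + SRP_IMM_BUF_SIZE)

static uint32_t srp_max_imm_data = 8 * 1024;	/* default of the max_imm_data parameter */

/* Mirrors srp_max_it_iu_len(): the IU must hold either the indirect
 * descriptor table or the largest immediate-data payload, whichever is bigger. */
static uint32_t max_it_iu_len(int cmd_sg_cnt, bool use_imm_data)
{
	uint32_t len = SRP_CMD_SIZE + SRP_MAX_ADD_CDB_LEN +
		       SRP_INDIRECT_BUF_SIZE +
		       cmd_sg_cnt * SRP_DIRECT_BUF_SIZE;

	if (use_imm_data && len < SRP_IMM_DATA_OFFSET + srp_max_imm_data)
		len = SRP_IMM_DATA_OFFSET + srp_max_imm_data;
	return len;
}

/* Mirrors the test in srp_map_data(): a command carries its payload as
 * immediate data only if the target granted it at login, the command is a
 * write, the payload fits in the IU, and it has at most SRP_MAX_IMM_SGE
 * scatter/gather entries. */
static bool send_as_immediate(bool imm_granted, bool is_write, int sg_count,
			      uint64_t data_len, uint32_t iu_len)
{
	return imm_granted && is_write &&
	       sg_count <= SRP_MAX_IMM_SGE &&
	       SRP_IMM_DATA_OFFSET + data_len <= iu_len;
}

int main(void)
{
	uint32_t iu = max_it_iu_len(255, true);

	printf("max initiator-to-target IU length: %u bytes\n", iu);
	printf("4 KB write, 1 SG entry:  immediate=%d\n",
	       send_as_immediate(true, true, 1, 4096, iu));
	printf("64 KB write, 1 SG entry: immediate=%d\n",
	       send_as_immediate(true, true, 1, 64 * 1024, iu));
	return 0;
}

With these stand-in sizes SRP_IMM_DATA_OFFSET works out to 68 bytes, so the
default 8 KB limit forces the initiator-to-target IU to be at least 68 + 8192
bytes, which is exactly why write payloads up to 8 KB can ride inside the
SRP_CMD IU.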
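
On the wire, the effect of the srp_post_send() change is that one SEND work
request carries a gather list: the IU header as the first SGE and the write
payload as up to SRP_MAX_IMM_SGE further SGEs, instead of the payload being
copied into the IU buffer. The following is a rough userspace analogue using
libibverbs, for illustration only; the qp, lkey and buffers are assumed to
have been set up elsewhere and nothing here is the driver's own code.

/* Userspace illustration of the gather-list idea behind the patched
 * srp_post_send(): header in sge[0], payload in the remaining SGEs. */
#include <infiniband/verbs.h>
#include <stdint.h>

#define MAX_IMM_SGE 2		/* mirrors SRP_MAX_IMM_SGE in the patch */

int post_cmd_with_imm_data(struct ibv_qp *qp, uint32_t lkey,
			   void *iu_hdr, uint32_t hdr_len,
			   const struct ibv_sge *payload, int payload_sge)
{
	struct ibv_sge sge[1 + MAX_IMM_SGE];
	struct ibv_send_wr wr = {0}, *bad_wr;
	int i;

	if (payload_sge > MAX_IMM_SGE)
		return -1;	/* too many SG entries for immediate data */

	/* sge[0]: the SRP_CMD IU header up to the immediate data offset. */
	sge[0].addr   = (uintptr_t)iu_hdr;
	sge[0].length = hdr_len;
	sge[0].lkey   = lkey;

	/* sge[1..]: the payload, sent in the same SEND as the header. */
	for (i = 0; i < payload_sge; i++)
		sge[1 + i] = payload[i];

	wr.sg_list    = sge;
	wr.num_sge    = 1 + payload_sge;
	wr.opcode     = IBV_WR_SEND;
	wr.send_flags = IBV_SEND_SIGNALED;

	return ibv_post_send(qp, &wr, &bad_wr);
}

In the kernel driver the same role is played by iu->sge[] and iu->num_sge,
which srp_map_data() fills in and srp_post_send() hands to ib_post_send().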