From patchwork Wed Aug  2 20:10:25 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Long Li
X-Patchwork-Id: 9877613
From: Long Li
To: Steve French, linux-cifs@vger.kernel.org,
	samba-technical@lists.samba.org, linux-kernel@vger.kernel.org
Cc: Long Li
Subject: [PATCH v1 14/37] [CIFS] SMBD: Post a SMBD data
transfer message with page payload
Date: Wed, 2 Aug 2017 13:10:25 -0700
Message-Id: <1501704648-20159-15-git-send-email-longli@exchange.microsoft.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1501704648-20159-1-git-send-email-longli@exchange.microsoft.com>
References: <1501704648-20159-1-git-send-email-longli@exchange.microsoft.com>
X-Mailing-List: linux-cifs@vger.kernel.org

From: Long Li

Add a function to send an SMBD data transfer message to the server, with
the payload in a page passed down from the upper layer.

Signed-off-by: Long Li
---
 fs/cifs/cifsrdma.c | 113 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 113 insertions(+)

diff --git a/fs/cifs/cifsrdma.c b/fs/cifs/cifsrdma.c
index aa3d1a5..b3ec109 100644
--- a/fs/cifs/cifsrdma.c
+++ b/fs/cifs/cifsrdma.c
@@ -66,6 +66,10 @@ static int cifs_rdma_post_recv(
 		struct cifs_rdma_info *info,
 		struct cifs_rdma_response *response);
 
+static int cifs_rdma_post_send_page(struct cifs_rdma_info *info,
+		struct page *page, unsigned long offset,
+		size_t size, int remaining_data_length);
+
 /*
  * Per RDMA transport connection parameters
  * as defined in [MS-SMBD] 3.1.1.1
@@ -558,6 +562,115 @@ static int cifs_rdma_post_send_negotiate_req(struct cifs_rdma_info *info)
 }
 
 /*
+ * Send a page
+ * page: the page to send
+ * offset: offset in the page to send
+ * size: length in the page to send
+ * remaining_data_length: remaining data to send after this payload
+ */
+static int cifs_rdma_post_send_page(struct cifs_rdma_info *info, struct page *page,
+		unsigned long offset,
+		size_t size, int remaining_data_length)
+{
+	struct cifs_rdma_request *request;
+	struct smbd_data_transfer *packet;
+	struct ib_send_wr send_wr, *send_wr_fail;
+	int rc = -ENOMEM;
+	int i;
+
+	request = mempool_alloc(info->request_mempool, GFP_KERNEL);
+	if (!request)
+		return rc;
+
+	request->info = info;
+
+	wait_event(info->wait_send_queue, atomic_read(&info->send_credits) > 0);
+	atomic_dec(&info->send_credits);
+
+	packet = (struct smbd_data_transfer *) request->packet;
+	packet->credits_requested = cpu_to_le16(info->send_credit_target);
+	packet->flags = cpu_to_le16(0);
+	packet->reserved = cpu_to_le16(0);
+
+	packet->data_offset = cpu_to_le32(24);
+	packet->data_length = cpu_to_le32(size);
+	packet->remaining_data_length = cpu_to_le32(remaining_data_length);
+	packet->padding = cpu_to_le32(0);
+
+	log_outgoing("credits_requested=%d credits_granted=%d data_offset=%d "
+		     "data_length=%d remaining_data_length=%d\n",
+		     le16_to_cpu(packet->credits_requested),
+		     le16_to_cpu(packet->credits_granted),
+		     le32_to_cpu(packet->data_offset),
+		     le32_to_cpu(packet->data_length),
+		     le32_to_cpu(packet->remaining_data_length));
+
+	request->sge = kzalloc(sizeof(struct ib_sge) * 2, GFP_KERNEL);
+	if (!request->sge)
+		goto allocate_sge_failed;
+	request->num_sge = 2;
+
+	/* this buffer is only sent, so map DMA_TO_DEVICE to match the sync */
+	request->sge[0].addr = ib_dma_map_single(info->id->device,
+						 (void *)packet,
+						 sizeof(*packet),
+						 DMA_TO_DEVICE);
+	if (ib_dma_mapping_error(info->id->device, request->sge[0].addr)) {
+		request->sge[0].addr = 0;
+		rc = -EIO;
+		goto dma_mapping_failed;
+	}
+	request->sge[0].length = sizeof(*packet);
+	request->sge[0].lkey = info->pd->local_dma_lkey;
+	ib_dma_sync_single_for_device(info->id->device, request->sge[0].addr,
+				      request->sge[0].length, DMA_TO_DEVICE);
+
+	request->sge[1].addr = ib_dma_map_page(info->id->device, page,
+					       offset, size, DMA_TO_DEVICE);
+	if (ib_dma_mapping_error(info->id->device, request->sge[1].addr)) {
+		request->sge[1].addr = 0;
+		rc = -EIO;
+		goto dma_mapping_failed;
+	}
+	request->sge[1].length = size;
+	request->sge[1].lkey =
+		info->pd->local_dma_lkey;
+	ib_dma_sync_single_for_device(info->id->device, request->sge[1].addr,
+				      request->sge[1].length, DMA_TO_DEVICE);
+
+	log_rdma_send("rdma_request sge[0] addr=%llu length=%u lkey=%u sge[1] "
+		      "addr=%llu length=%u lkey=%u\n",
+		      request->sge[0].addr, request->sge[0].length,
+		      request->sge[0].lkey, request->sge[1].addr,
+		      request->sge[1].length, request->sge[1].lkey);
+
+	request->cqe.done = send_done;
+
+	send_wr.next = NULL;
+	send_wr.wr_cqe = &request->cqe;
+	send_wr.sg_list = request->sge;
+	send_wr.num_sge = request->num_sge;
+	send_wr.opcode = IB_WR_SEND;
+	send_wr.send_flags = IB_SEND_SIGNALED;
+
+	rc = ib_post_send(info->id->qp, &send_wr, &send_wr_fail);
+	if (!rc)
+		return 0;
+
+	/* post send failed */
+	log_rdma_send("ib_post_send failed rc=%d\n", rc);
+
+dma_mapping_failed:
+	for (i = 0; i < 2; i++)
+		if (request->sge[i].addr)
+			ib_dma_unmap_single(info->id->device,
+					    request->sge[i].addr,
+					    request->sge[i].length,
+					    DMA_TO_DEVICE);
+	kfree(request->sge);
+
+allocate_sge_failed:
+	mempool_free(request, info->request_mempool);
+	return rc;
+}
+
+/*
 * Post a receive request to the transport
 * The remote peer can only send data when a receive is posted
 * The interaction is controlled by send/receive credit system