From patchwork Mon Oct 19 16:43:30 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 7438511
From: ira.weiny@intel.com
To: gregkh@linuxfoundation.org, devel@driverdev.osuosl.org
Cc: dledford@redhat.com, linux-rdma@vger.kernel.org,
 dennis.dalessandro@intel.com, mike.marciniszyn@intel.com,
 Niranjana Vishwanathapura, Ira Weiny
Subject: [PATCH 06/23] staging/rdma/hfi1: Add coalescing support for SDMA TX
 descriptors
Date: Mon, 19 Oct 2015 12:43:30 -0400
Message-Id: <1445273027-29634-7-git-send-email-ira.weiny@intel.com>
In-Reply-To: <1445273027-29634-1-git-send-email-ira.weiny@intel.com>
References: <1445273027-29634-1-git-send-email-ira.weiny@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Niranjana Vishwanathapura

When the number of scatter-gather elements in the request is more than
the number of per-packet descriptors supported by the hardware, allocate
and coalesce the extra scatter-gather elements into a single buffer. The
last descriptor is reserved and used for this coalesced buffer. Verbs
potentially need this support when transferring small data chunks
involving different memory regions.
Reviewed-by: Mike Marciniszyn
Reviewed-by: Mitko Haralanov
Signed-off-by: Niranjana Vishwanathapura
Signed-off-by: Ira Weiny
---
 drivers/staging/rdma/hfi1/sdma.c | 124 ++++++++++++++++++++++++++++++++++++---
 drivers/staging/rdma/hfi1/sdma.h |  74 +++++++++++++++--------
 2 files changed, 168 insertions(+), 30 deletions(-)

diff --git a/drivers/staging/rdma/hfi1/sdma.c b/drivers/staging/rdma/hfi1/sdma.c
index d57531796723..53b3e4d9518b 100644
--- a/drivers/staging/rdma/hfi1/sdma.c
+++ b/drivers/staging/rdma/hfi1/sdma.c
@@ -55,6 +55,7 @@
 #include
 #include
 #include
+#include
 #include "hfi.h"
 #include "common.h"
@@ -2706,27 +2707,134 @@ static void __sdma_process_event(struct sdma_engine *sde,
  * of descriptors in the sdma_txreq is exhausted.
  *
  * The code will bump the allocation up to the max
- * of MAX_DESC (64) descriptors. There doesn't seem
- * much point in an interim step.
+ * of MAX_DESC (64) descriptors. There doesn't seem
+ * much point in an interim step. The last descriptor
+ * is reserved for the coalesce buffer in order to support
+ * cases where the input packet has >MAX_DESC iovecs.
  *
  */
-int _extend_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+static int _extend_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
 {
	int i;

+	/* Handle last descriptor */
+	if (unlikely(tx->num_desc == (MAX_DESC - 1))) {
+		/* if tlen is 0, it is for padding, release last descriptor */
+		if (!tx->tlen) {
+			tx->desc_limit = MAX_DESC;
+		} else if (!tx->coalesce_buf) {
+			/* allocate coalesce buffer with space for padding */
+			tx->coalesce_buf = kmalloc(tx->tlen + sizeof(u32),
+						   GFP_ATOMIC);
+			if (!tx->coalesce_buf)
+				return -ENOMEM;
+
+			tx->coalesce_idx = 0;
+		}
+		return 0;
+	}
+
+	if (unlikely(tx->num_desc == MAX_DESC))
+		return -ENOMEM;
+
	tx->descp = kmalloc_array(
			MAX_DESC,
			sizeof(struct sdma_desc),
			GFP_ATOMIC);
	if (!tx->descp)
		return -ENOMEM;
-	tx->desc_limit = MAX_DESC;
+
+	/* reserve last descriptor for coalescing */
+	tx->desc_limit = MAX_DESC - 1;
	/* copy ones already built */
	for (i = 0; i < tx->num_desc; i++)
		tx->descp[i] = tx->descs[i];
	return 0;
 }

+/*
+ * ext_coal_sdma_tx_descs() - extend or coalesce sdma tx descriptors
+ *
+ * This is called once the initial nominal allocation of descriptors
+ * in the sdma_txreq is exhausted.
+ *
+ * This function calls _extend_sdma_tx_descs to extend or allocate
+ * the coalesce buffer. If there is an allocated coalesce buffer, it
+ * copies the input packet data into the coalesce buffer. It also adds
+ * the coalesce buffer descriptor once the whole packet is received.
+ *
+ * Return:
+ * <0 - error
+ * 0 - coalescing, don't populate descriptor
+ * 1 - continue with populating descriptor
+ */
+int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
+			   int type, void *kvaddr, struct page *page,
+			   unsigned long offset, u16 len)
+{
+	int pad_len, rval;
+	dma_addr_t addr;
+
+	rval = _extend_sdma_tx_descs(dd, tx);
+	if (rval) {
+		sdma_txclean(dd, tx);
+		return rval;
+	}
+
+	/* If coalesce buffer is allocated, copy data into it */
+	if (tx->coalesce_buf) {
+		if (type == SDMA_MAP_NONE) {
+			sdma_txclean(dd, tx);
+			return -EINVAL;
+		}
+
+		if (type == SDMA_MAP_PAGE) {
+			kvaddr = kmap(page);
+			kvaddr += offset;
+		} else if (WARN_ON(!kvaddr)) {
+			sdma_txclean(dd, tx);
+			return -EINVAL;
+		}
+
+		memcpy(tx->coalesce_buf + tx->coalesce_idx, kvaddr, len);
+		tx->coalesce_idx += len;
+		if (type == SDMA_MAP_PAGE)
+			kunmap(page);
+
+		/* If there is more data, return */
+		if (tx->tlen - tx->coalesce_idx)
+			return 0;
+
+		/* Whole packet is received; add any padding */
+		pad_len = tx->packet_len & (sizeof(u32) - 1);
+		if (pad_len) {
+			pad_len = sizeof(u32) - pad_len;
+			memset(tx->coalesce_buf + tx->coalesce_idx, 0, pad_len);
+			/* padding is taken care of for coalescing case */
+			tx->packet_len += pad_len;
+			tx->tlen += pad_len;
+		}
+
+		/* dma map the coalesce buffer */
+		addr = dma_map_single(&dd->pcidev->dev,
+				      tx->coalesce_buf,
+				      tx->tlen,
+				      DMA_TO_DEVICE);
+
+		if (unlikely(dma_mapping_error(&dd->pcidev->dev, addr))) {
+			sdma_txclean(dd, tx);
+			return -ENOSPC;
+		}
+
+		/* Add descriptor for coalesce buffer */
+		tx->desc_limit = MAX_DESC;
+		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, tx,
+					 addr, tx->tlen);
+	}
+
+	return 1;
+}
+
 /* Update sdes when the lmc changes */
 void sdma_update_lmc(struct hfi1_devdata *dd, u64 mask, u32 lid)
 {
@@ -2752,13 +2860,15 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
 {
	int rval = 0;

+	tx->num_desc++;
	if ((unlikely(tx->num_desc == tx->desc_limit))) {
		rval = _extend_sdma_tx_descs(dd, tx);
-		if (rval)
+		if (rval) {
+			sdma_txclean(dd, tx);
			return rval;
+		}
	}
-	/* finish the one just added */
-	tx->num_desc++;
+	/* finish the one just added */
	make_tx_sdma_desc(
		tx,
		SDMA_MAP_NONE,
diff --git a/drivers/staging/rdma/hfi1/sdma.h b/drivers/staging/rdma/hfi1/sdma.h
index 496086903891..52a7d04067e0 100644
--- a/drivers/staging/rdma/hfi1/sdma.h
+++ b/drivers/staging/rdma/hfi1/sdma.h
@@ -352,6 +352,8 @@ struct sdma_txreq {
	/* private: */
	void *coalesce_buf;
	/* private: */
+	u16 coalesce_idx;
+	/* private: */
	struct iowait *wait;
	/* private: */
	callback_t complete;
@@ -735,7 +737,9 @@ static inline void make_tx_sdma_desc(
 }

 /* helper to extend txreq */
-int _extend_sdma_tx_descs(struct hfi1_devdata *, struct sdma_txreq *);
+int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
+			   int type, void *kvaddr, struct page *page,
+			   unsigned long offset, u16 len);
 int _pad_sdma_tx_descs(struct hfi1_devdata *, struct sdma_txreq *);
 void sdma_txclean(struct hfi1_devdata *, struct sdma_txreq *);
@@ -762,11 +766,6 @@ static inline int _sdma_txadd_daddr(
 {
	int rval = 0;

-	if ((unlikely(tx->num_desc == tx->desc_limit))) {
-		rval = _extend_sdma_tx_descs(dd, tx);
-		if (rval)
-			return rval;
-	}
	make_tx_sdma_desc(
		tx,
		type,
@@ -798,9 +797,7 @@ static inline int _sdma_txadd_daddr(
  *
  * Return:
  * 0 - success, -ENOSPC - mapping fail, -ENOMEM - couldn't
- * extend descriptor array or couldn't allocate coalesce
- * buffer.
- *
+ * extend/coalesce descriptor array
  */
 static inline int sdma_txadd_page(
	struct hfi1_devdata *dd,
@@ -809,17 +806,28 @@ static inline int sdma_txadd_page(
	unsigned long offset,
	u16 len)
 {
-	dma_addr_t addr =
-		dma_map_page(
-			&dd->pcidev->dev,
-			page,
-			offset,
-			len,
-			DMA_TO_DEVICE);
+	dma_addr_t addr;
+	int rval;
+
+	if ((unlikely(tx->num_desc == tx->desc_limit))) {
+		rval = ext_coal_sdma_tx_descs(dd, tx, SDMA_MAP_PAGE,
+					      NULL, page, offset, len);
+		if (rval <= 0)
+			return rval;
+	}
+
+	addr = dma_map_page(
+		       &dd->pcidev->dev,
+		       page,
+		       offset,
+		       len,
+		       DMA_TO_DEVICE);
+
	if (unlikely(dma_mapping_error(&dd->pcidev->dev, addr))) {
		sdma_txclean(dd, tx);
		return -ENOSPC;
	}
+
	return _sdma_txadd_daddr(
			dd, SDMA_MAP_PAGE, tx, addr, len);
 }
@@ -846,6 +854,15 @@ static inline int sdma_txadd_daddr(
	dma_addr_t addr,
	u16 len)
 {
+	int rval;
+
+	if ((unlikely(tx->num_desc == tx->desc_limit))) {
+		rval = ext_coal_sdma_tx_descs(dd, tx, SDMA_MAP_NONE,
+					      NULL, NULL, 0, 0);
+		if (rval <= 0)
+			return rval;
+	}
+
	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, tx, addr, len);
 }
@@ -862,7 +879,7 @@ static inline int sdma_txadd_daddr(
  * The mapping/unmapping of the kvaddr and len is automatically handled.
  *
  * Return:
- * 0 - success, -ENOSPC - mapping fail, -ENOMEM - couldn't extend
+ * 0 - success, -ENOSPC - mapping fail, -ENOMEM - couldn't extend/coalesce
  * descriptor array
  */
 static inline int sdma_txadd_kvaddr(
	struct hfi1_devdata *dd,
	struct sdma_txreq *tx,
	void *kvaddr,
	u16 len)
 {
-	dma_addr_t addr =
-		dma_map_single(
-			&dd->pcidev->dev,
-			kvaddr,
-			len,
-			DMA_TO_DEVICE);
+	dma_addr_t addr;
+	int rval;
+
+	if ((unlikely(tx->num_desc == tx->desc_limit))) {
+		rval = ext_coal_sdma_tx_descs(dd, tx, SDMA_MAP_SINGLE,
+					      kvaddr, NULL, 0, len);
+		if (rval <= 0)
+			return rval;
+	}
+
+	addr = dma_map_single(
+		       &dd->pcidev->dev,
+		       kvaddr,
+		       len,
+		       DMA_TO_DEVICE);
+
	if (unlikely(dma_mapping_error(&dd->pcidev->dev, addr))) {
		sdma_txclean(dd, tx);
		return -ENOSPC;
	}
+
	return _sdma_txadd_daddr(
			dd, SDMA_MAP_SINGLE, tx, addr, len);
 }