From patchwork Wed Aug  2 18:41:08 2017
X-Patchwork-Submitter: Dave Jiang
X-Patchwork-Id: 9877389
Subject: [PATCH v2 2/5] dmaengine: ioatdma: dma_prep_memcpy_sg support
From: Dave Jiang
To: vinod.koul@intel.com, dan.j.williams@intel.com
Cc: dmaengine@vger.kernel.org, linux-nvdimm@lists.01.org
Date: Wed, 02 Aug 2017 11:41:08 -0700
Message-ID: <150169926805.59677.10006232109908411716.stgit@djiang5-desk3.ch.intel.com>
In-Reply-To: <150169902310.59677.18062301799811367806.stgit@djiang5-desk3.ch.intel.com>
References: <150169902310.59677.18062301799811367806.stgit@djiang5-desk3.ch.intel.com>
User-Agent: StGit/0.17.1-dirty
List-Id: "Linux-nvdimm developer list."

Add ioatdma support to copy from a physically contiguous buffer to a
provided scatterlist and vice versa. This is used to support
reading/writing persistent memory in the pmem driver.
Signed-off-by: Dave Jiang
---
 drivers/dma/ioat/dma.h    |    4 +++
 drivers/dma/ioat/init.c   |    1 +
 drivers/dma/ioat/prep.c   |   57 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/dmaengine.h |    5 ++++
 4 files changed, 67 insertions(+)

diff --git a/drivers/dma/ioat/dma.h b/drivers/dma/ioat/dma.h
index 56200ee..6c08b06 100644
--- a/drivers/dma/ioat/dma.h
+++ b/drivers/dma/ioat/dma.h
@@ -370,6 +370,10 @@ struct dma_async_tx_descriptor *
 ioat_dma_prep_memcpy_lock(struct dma_chan *c, dma_addr_t dma_dest,
			   dma_addr_t dma_src, size_t len, unsigned long flags);
 struct dma_async_tx_descriptor *
+ioat_dma_prep_memcpy_sg_lock(struct dma_chan *c,
+		struct scatterlist *sg, unsigned int sg_nents,
+		dma_addr_t dma_addr, bool to_sg, unsigned long flags);
+struct dma_async_tx_descriptor *
 ioat_prep_interrupt_lock(struct dma_chan *c, unsigned long flags);
 struct dma_async_tx_descriptor *
 ioat_prep_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
index e437112..f82d3bb 100644
--- a/drivers/dma/ioat/init.c
+++ b/drivers/dma/ioat/init.c
@@ -1091,6 +1091,7 @@ static int ioat3_dma_probe(struct ioatdma_device *ioat_dma, int dca)

	dma = &ioat_dma->dma_dev;
	dma->device_prep_dma_memcpy = ioat_dma_prep_memcpy_lock;
+	dma->device_prep_dma_memcpy_sg = ioat_dma_prep_memcpy_sg_lock;
	dma->device_issue_pending = ioat_issue_pending;
	dma->device_alloc_chan_resources = ioat_alloc_chan_resources;
	dma->device_free_chan_resources = ioat_free_chan_resources;
diff --git a/drivers/dma/ioat/prep.c b/drivers/dma/ioat/prep.c
index 243421a..d8219af 100644
--- a/drivers/dma/ioat/prep.c
+++ b/drivers/dma/ioat/prep.c
@@ -159,6 +159,63 @@ ioat_dma_prep_memcpy_lock(struct dma_chan *c, dma_addr_t dma_dest,
	return &desc->txd;
 }

+struct dma_async_tx_descriptor *
+ioat_dma_prep_memcpy_sg_lock(struct dma_chan *c,
+		struct scatterlist *sg, unsigned int sg_nents,
+		dma_addr_t dma_addr, bool to_sg, unsigned long flags)
+{
+	struct ioatdma_chan *ioat_chan = to_ioat_chan(c);
+	struct ioat_dma_descriptor *hw = NULL;
+	struct ioat_ring_ent *desc = NULL;
+	dma_addr_t dma_off = dma_addr;
+	int num_descs, idx, i;
+	struct scatterlist *s;
+	size_t total_len = 0, len;
+
+	if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
+		return NULL;
+
+	/*
+	 * The upper layer will guarantee that each entry does not exceed
+	 * xfercap.
+	 */
+	num_descs = sg_nents;
+
+	if (likely(num_descs) &&
+	    ioat_check_space_lock(ioat_chan, num_descs) == 0)
+		idx = ioat_chan->head;
+	else
+		return NULL;
+
+	for_each_sg(sg, s, sg_nents, i) {
+		desc = ioat_get_ring_ent(ioat_chan, idx + i);
+		hw = desc->hw;
+		len = sg_dma_len(s);
+		hw->size = len;
+		hw->ctl = 0;
+		if (to_sg) {
+			hw->src_addr = dma_off;
+			hw->dst_addr = sg_dma_address(s);
+		} else {
+			hw->src_addr = sg_dma_address(s);
+			hw->dst_addr = dma_off;
+		}
+		dma_off += len;
+		total_len += len;
+		dump_desc_dbg(ioat_chan, desc);
+	}
+
+	desc->txd.flags = flags;
+	desc->len = total_len;
+	hw->ctl_f.int_en = !!(flags & DMA_PREP_INTERRUPT);
+	hw->ctl_f.fence = !!(flags & DMA_PREP_FENCE);
+	hw->ctl_f.compl_write = 1;
+	dump_desc_dbg(ioat_chan, desc);
+	/* we leave the channel locked to ensure in order submission */
+
+	return &desc->txd;
+}
+
 static struct dma_async_tx_descriptor *
 __ioat_prep_xor_lock(struct dma_chan *c, enum sum_check_flags *result,
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 5336808..060f152 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -694,6 +694,7 @@ struct dma_filter {
 * @device_prep_dma_memset: prepares a memset operation
 * @device_prep_dma_memset_sg: prepares a memset operation over a scatter list
 * @device_prep_dma_interrupt: prepares an end of chain interrupt operation
+ * @device_prep_dma_memcpy_sg: prepares memcpy between scatterlist and buffer
 * @device_prep_slave_sg: prepares a slave dma operation
 * @device_prep_dma_cyclic: prepare a cyclic dma operation suitable for audio.
 *	The function takes a buffer of size buf_len. The callback function will
@@ -776,6 +777,10 @@ struct dma_device {
		struct scatterlist *dst_sg, unsigned int dst_nents,
		struct scatterlist *src_sg, unsigned int src_nents,
		unsigned long flags);
+	struct dma_async_tx_descriptor *(*device_prep_dma_memcpy_sg)(
+		struct dma_chan *chan,
+		struct scatterlist *dst_sg, unsigned int dst_nents,
+		dma_addr_t src, bool to_sg, unsigned long flags);
	struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
		struct dma_chan *chan, struct scatterlist *sgl,