From patchwork Thu Jul 23 18:39:51 2015
X-Patchwork-Submitter: Vinod Koul
X-Patchwork-Id: 6855571
From: Vinod Koul
To: dmaengine@vger.kernel.org
Cc: Robert Jarzmik, Lars-Peter Clausen, Jun Nie, Vinod Koul
Subject: [PATCH 2/3] dmaengine: Add DMA_CTRL_REUSE
Date: Fri, 24 Jul 2015 00:09:51 +0530
Message-Id: <1437676792-13465-2-git-send-email-vinod.koul@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1437676792-13465-1-git-send-email-vinod.koul@intel.com>
References: <1437676792-13465-1-git-send-email-vinod.koul@intel.com>

This adds a new descriptor flag that lets a client reuse a descriptor and
submit it multiple times, for example for a video buffer.
Add helper APIs for this as well.

Signed-off-by: Vinod Koul
---
 include/linux/dmaengine.h | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index e2f5eb419976..2adcd3c1ae48 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -183,6 +183,8 @@ struct dma_interleaved_template {
  * operation it continues the calculation with new sources
  * @DMA_PREP_FENCE - tell the driver that subsequent operations depend
  * on the result of this operation
+ * @DMA_CTRL_REUSE: Client can reuse the descriptor and submit again till
+ * cleared or freed
  */
 enum dma_ctrl_flags {
 	DMA_PREP_INTERRUPT = (1 << 0),
@@ -191,6 +193,7 @@ enum dma_ctrl_flags {
 	DMA_PREP_PQ_DISABLE_Q = (1 << 3),
 	DMA_PREP_CONTINUE = (1 << 4),
 	DMA_PREP_FENCE = (1 << 5),
+	DMA_CTRL_REUSE = (1 << 6),
 };
 
 /**
@@ -400,6 +403,8 @@ enum dma_residue_granularity {
  * @cmd_pause: true, if pause and thereby resume is supported
  * @cmd_terminate: true, if terminate cmd is supported
  * @residue_granularity: granularity of the reported transfer residue
+ * @descriptor_reuse: if a descriptor can be reused by client and
+ * resubmitted multiple times
  */
 struct dma_slave_caps {
 	u32 src_addr_widths;
@@ -408,6 +413,7 @@ struct dma_slave_caps {
 	bool cmd_pause;
 	bool cmd_terminate;
 	enum dma_residue_granularity residue_granularity;
+	bool descriptor_reuse;
 };
 
 static inline const char *dma_chan_name(struct dma_chan *chan)
@@ -467,6 +473,7 @@ struct dma_async_tx_descriptor {
 	dma_addr_t phys;
 	struct dma_chan *chan;
 	dma_cookie_t (*tx_submit)(struct dma_async_tx_descriptor *tx);
+	int (*desc_free)(struct dma_async_tx_descriptor *tx);
 	dma_async_tx_callback callback;
 	void *callback_param;
 	struct dmaengine_unmap_data *unmap;
@@ -833,6 +840,44 @@ static inline dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)
 	return desc->tx_submit(desc);
 }
 
+int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps);
+
+static inline int dmaengine_desc_set_reuse(struct dma_async_tx_descriptor *tx)
+{
+	struct dma_slave_caps caps;
+	int ret;
+
+	ret = dma_get_slave_caps(tx->chan, &caps);
+	if (ret)
+		return ret;
+
+	if (caps.descriptor_reuse) {
+		tx->flags |= DMA_CTRL_REUSE;
+		return 0;
+	} else {
+		return -EPERM;
+	}
+}
+
+static inline void dmaengine_desc_clear_reuse(struct dma_async_tx_descriptor *tx)
+{
+	tx->flags &= ~DMA_CTRL_REUSE;
+}
+
+static inline bool dmaengine_desc_test_reuse(struct dma_async_tx_descriptor *tx)
+{
+	return (tx->flags & DMA_CTRL_REUSE) == DMA_CTRL_REUSE;
+}
+
+static inline int dmaengine_desc_free(struct dma_async_tx_descriptor *desc)
+{
+	/* this is supported for reusable desc, so check that */
+	if (dmaengine_desc_test_reuse(desc))
+		return desc->desc_free(desc);
+
+	return -EPERM;
+}
+
 static inline bool dmaengine_check_align(u8 align, size_t off1,
 					 size_t off2, size_t len)
 {
 	size_t mask;
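
(Not part of the patch: a minimal sketch of how a client might drive these
helpers, prepare one descriptor, mark it reusable, submit it once per frame,
then free it explicitly. The function name xxx_stream_frames() and the
completion handling are illustrative assumptions, not part of this series.)

#include <linux/dmaengine.h>

/* Hypothetical client: streams the same buffer for every video frame. */
static int xxx_stream_frames(struct dma_chan *chan, dma_addr_t buf,
			     size_t len, unsigned int nframes)
{
	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;
	unsigned int i;
	int ret;

	desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
					   DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	/* fails if the channel does not advertise descriptor_reuse */
	ret = dmaengine_desc_set_reuse(desc);
	if (ret)
		return ret;

	for (i = 0; i < nframes; i++) {
		/* the same descriptor is submitted once per frame */
		cookie = dmaengine_submit(desc);
		if (dma_submit_error(cookie))
			break;
		dma_async_issue_pending(chan);
		/* ... wait for this frame's completion callback here ... */
	}

	/* a reusable descriptor is not auto-freed; release it explicitly */
	return dmaengine_desc_free(desc);
}

The point of dmaengine_desc_free() returning -EPERM for a non-reusable
descriptor is that one-shot descriptors keep their existing lifetime rules;
only descriptors the client has explicitly marked with DMA_CTRL_REUSE need,
and are allowed, an explicit free.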