From patchwork Thu Oct 16 10:17:34 2014
X-Patchwork-Submitter: Maxime Ripard
X-Patchwork-Id: 5090561
From: Maxime Ripard
To: dmaengine@vger.kernel.org, Vinod Koul
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Laurent Pinchart, Antoine Ténart, lars@metafoo.de, Russell King,
    Maxime Ripard, Dan Williams
Subject: [PATCH v2 35/53] dmaengine: sh: Split device_control
Date: Thu, 16 Oct 2014 12:17:34 +0200
Message-Id: <1413454672-27400-36-git-send-email-maxime.ripard@free-electrons.com>
X-Mailer: git-send-email 2.1.1
In-Reply-To: <1413454672-27400-1-git-send-email-maxime.ripard@free-electrons.com>
References: <1413454672-27400-1-git-send-email-maxime.ripard@free-electrons.com>
List-ID: <dmaengine.vger.kernel.org>

Split the device_control callback of the Super-H DMA driver to make use
of the newly introduced callbacks, which will eventually be used to
retrieve slave capabilities.
Signed-off-by: Maxime Ripard
Acked-by: Laurent Pinchart
---
 drivers/dma/sh/shdma-base.c | 72 ++++++++++++++++++++++-----------------------
 1 file changed, 35 insertions(+), 37 deletions(-)

diff --git a/drivers/dma/sh/shdma-base.c b/drivers/dma/sh/shdma-base.c
index 42d497416196..eda59515f704 100644
--- a/drivers/dma/sh/shdma-base.c
+++ b/drivers/dma/sh/shdma-base.c
@@ -727,8 +727,7 @@ static struct dma_async_tx_descriptor *shdma_prep_dma_cyclic(
 	return desc;
 }
 
-static int shdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			 unsigned long arg)
+static int shdma_terminate_all(struct dma_chan *chan)
 {
 	struct shdma_chan *schan = to_shdma_chan(chan);
 	struct shdma_dev *sdev = to_shdma_dev(chan->device);
@@ -737,43 +736,41 @@ static int shdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
 	unsigned long flags;
 	int ret;
 
-	switch (cmd) {
-	case DMA_TERMINATE_ALL:
-		spin_lock_irqsave(&schan->chan_lock, flags);
-		ops->halt_channel(schan);
+	spin_lock_irqsave(&schan->chan_lock, flags);
+	ops->halt_channel(schan);
 
-		if (ops->get_partial && !list_empty(&schan->ld_queue)) {
-			/* Record partial transfer */
-			struct shdma_desc *desc = list_first_entry(&schan->ld_queue,
-						struct shdma_desc, node);
-			desc->partial = ops->get_partial(schan, desc);
-		}
+	if (ops->get_partial && !list_empty(&schan->ld_queue)) {
+		/* Record partial transfer */
+		struct shdma_desc *desc = list_first_entry(&schan->ld_queue,
+					struct shdma_desc, node);
+		desc->partial = ops->get_partial(schan, desc);
+	}
 
-		spin_unlock_irqrestore(&schan->chan_lock, flags);
+	spin_unlock_irqrestore(&schan->chan_lock, flags);
 
-		shdma_chan_ld_cleanup(schan, true);
-		break;
-	case DMA_SLAVE_CONFIG:
-		/*
-		 * So far only .slave_id is used, but the slave drivers are
-		 * encouraged to also set a transfer direction and an address.
-		 */
-		if (!arg)
-			return -EINVAL;
-		/*
-		 * We could lock this, but you shouldn't be configuring the
-		 * channel, while using it...
-		 */
-		config = (struct dma_slave_config *)arg;
-		ret = shdma_setup_slave(schan, config->slave_id,
-					config->direction == DMA_DEV_TO_MEM ?
-					config->src_addr : config->dst_addr);
-		if (ret < 0)
-			return ret;
-		break;
-	default:
-		return -ENXIO;
-	}
+	shdma_chan_ld_cleanup(schan, true);
+
+	return 0;
+}
+
+static int shdma_config(struct dma_chan *chan,
+			struct dma_slave_config *config)
+{
+	struct shdma_chan *schan = to_shdma_chan(chan);
+
+	/*
+	 * So far only .slave_id is used, but the slave drivers are
+	 * encouraged to also set a transfer direction and an address.
+	 */
+	if (!config)
+		return -EINVAL;
+	/*
+	 * We could lock this, but you shouldn't be configuring the
+	 * channel, while using it...
+	 */
+	return shdma_setup_slave(schan, config->slave_id,
+				 config->direction == DMA_DEV_TO_MEM ?
+				 config->src_addr : config->dst_addr);
 
 	return 0;
 }
@@ -1000,7 +997,8 @@ int shdma_init(struct device *dev, struct shdma_dev *sdev,
 	/* Compulsory for DMA_SLAVE fields */
 	dma_dev->device_prep_slave_sg = shdma_prep_slave_sg;
 	dma_dev->device_prep_dma_cyclic = shdma_prep_dma_cyclic;
-	dma_dev->device_control = shdma_control;
+	dma_dev->device_config = shdma_config;
+	dma_dev->device_terminate_all = shdma_terminate_all;
 
 	dma_dev->dev = dev;
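[Editor's note] For context, once a driver is converted this way, the dmaengine
core can dispatch straight to the dedicated callbacks instead of multiplexing
through device_control with a command argument. A minimal sketch of what the
core-side helpers look like after the split, assuming the device_config and
device_terminate_all fields introduced earlier in this series; this is
simplified (the transitional fallback to device_control that existed during
the conversion is omitted):

	/* Sketch of the core-side wrappers, simplified from
	 * include/linux/dmaengine.h after this series.
	 */
	static inline int dmaengine_slave_config(struct dma_chan *chan,
						 struct dma_slave_config *config)
	{
		/* Call the driver's dedicated configuration callback. */
		if (chan->device->device_config)
			return chan->device->device_config(chan, config);

		return -ENOSYS;
	}

	static inline int dmaengine_terminate_all(struct dma_chan *chan)
	{
		/* Call the driver's dedicated termination callback. */
		if (chan->device->device_terminate_all)
			return chan->device->device_terminate_all(chan);

		return -ENOSYS;
	}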
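From a client driver's point of view nothing changes: channels are still
configured and torn down through the same dmaengine helpers. A usage sketch
follows; example_use(), the FIFO address and the slave_id value are
hypothetical placeholders, not taken from the patch:

	#include <linux/dmaengine.h>

	/* Hypothetical client: configure a channel, then tear it down. */
	static int example_use(struct dma_chan *chan)
	{
		struct dma_slave_config cfg = {
			.direction	= DMA_DEV_TO_MEM,
			.src_addr	= 0xfee00000,	/* hypothetical FIFO address */
			.src_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
			.slave_id	= 3,		/* hypothetical request line */
		};
		int ret;

		/* Lands in shdma_config() after this patch. */
		ret = dmaengine_slave_config(chan, &cfg);
		if (ret < 0)
			return ret;

		/* ... prep, submit and issue descriptors here ... */

		/* Lands in shdma_terminate_all(), which also records the
		 * partial transfer length via ops->get_partial().
		 */
		dmaengine_terminate_all(chan);

		return 0;
	}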