From patchwork Thu Mar 17 03:45:31 2016
X-Patchwork-Submitter: David Lechner
X-Patchwork-Id: 8607001
From: David Lechner
Subject: [PATCH v3 1/5] mmc: davinci_mmc: Use dma_request_chan() to request DMA channels
Date: Wed, 16 Mar 2016 22:45:31 -0500
Message-Id: <1458186341-6970-2-git-send-email-david@lechnology.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1458186341-6970-1-git-send-email-david@lechnology.com>
References: <1458186341-6970-1-git-send-email-david@lechnology.com>
Cc: Jaehoon Chung, Ulf Hansson, Russell King, David Lechner, Kevin Hilman,
 Krzysztof Kozlowski, Sekhar Nori, linux-kernel@vger.kernel.org,
 linux-mmc@vger.kernel.org, linux-arm-kernel@lists.infradead.org

From: Peter Ujfalusi

With the new dma_request_chan() the client driver no longer needs to look up
the DMA resource itself and no longer needs to pass a filter_fn. By switching
to the new API the driver can also support deferred probing when the DMA
channels are not yet available.

Signed-off-by: Peter Ujfalusi
---
v3 changes: this is a new patch in v3 (from Peter Ujfalusi); it replaces my
previous "mmc: davinci: don't use dma platform resources" patch.
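
For reviewers who have not used the new API yet, below is a minimal,
self-contained sketch of the dma_request_chan() error-handling pattern this
patch switches to. struct my_host and my_acquire_dma_channels() are made-up
names used only for illustration; the "tx"/"rx" channel names and the error
handling mirror the actual change in the diff further down.

#include <linux/dmaengine.h>
#include <linux/err.h>

struct my_host {			/* hypothetical driver state */
	struct device *dev;
	struct dma_chan *dma_tx;
	struct dma_chan *dma_rx;
};

static int my_acquire_dma_channels(struct my_host *host)
{
	/* Channel lookup comes from DT/ACPI data or a dma_slave_map table. */
	host->dma_tx = dma_request_chan(host->dev, "tx");
	if (IS_ERR(host->dma_tx))
		/* May be -EPROBE_DEFER if the DMA provider is not ready yet. */
		return PTR_ERR(host->dma_tx);

	host->dma_rx = dma_request_chan(host->dev, "rx");
	if (IS_ERR(host->dma_rx)) {
		/* Drop the tx channel acquired above before bailing out. */
		dma_release_channel(host->dma_tx);
		return PTR_ERR(host->dma_rx);
	}

	return 0;
}

The caller is then expected to propagate -EPROBE_DEFER and fall back to PIO
on any other error, which is what the probe() hunk at the end of the diff
does.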

 drivers/mmc/host/davinci_mmc.c | 53 +++++++++++-------------------------------
 1 file changed, 14 insertions(+), 39 deletions(-)

diff --git a/drivers/mmc/host/davinci_mmc.c b/drivers/mmc/host/davinci_mmc.c
index 693144e..eb3ca9e 100644
--- a/drivers/mmc/host/davinci_mmc.c
+++ b/drivers/mmc/host/davinci_mmc.c
@@ -32,12 +32,10 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
 
-#include
 #include
 
 /*
@@ -202,7 +200,6 @@ struct mmc_davinci_host {
 	u32 buffer_bytes_left;
 	u32 bytes_left;
 
-	u32 rxdma, txdma;
 	struct dma_chan *dma_tx;
 	struct dma_chan *dma_rx;
 	bool use_dma;
@@ -513,35 +510,20 @@ davinci_release_dma_channels(struct mmc_davinci_host *host)
 
 static int __init davinci_acquire_dma_channels(struct mmc_davinci_host *host)
 {
-	int r;
-	dma_cap_mask_t mask;
-
-	dma_cap_zero(mask);
-	dma_cap_set(DMA_SLAVE, mask);
-
-	host->dma_tx =
-		dma_request_slave_channel_compat(mask, edma_filter_fn,
-				&host->txdma, mmc_dev(host->mmc), "tx");
-	if (!host->dma_tx) {
+	host->dma_tx = dma_request_chan(mmc_dev(host->mmc), "tx");
+	if (IS_ERR(host->dma_tx)) {
 		dev_err(mmc_dev(host->mmc), "Can't get dma_tx channel\n");
-		return -ENODEV;
+		return PTR_ERR(host->dma_tx);
 	}
 
-	host->dma_rx =
-		dma_request_slave_channel_compat(mask, edma_filter_fn,
-				&host->rxdma, mmc_dev(host->mmc), "rx");
-	if (!host->dma_rx) {
+	host->dma_rx = dma_request_chan(mmc_dev(host->mmc), "rx");
+	if (IS_ERR(host->dma_rx)) {
 		dev_err(mmc_dev(host->mmc), "Can't get dma_rx channel\n");
-		r = -ENODEV;
-		goto free_master_write;
+		dma_release_channel(host->dma_tx);
+		return PTR_ERR(host->dma_rx);
 	}
 
 	return 0;
-
-free_master_write:
-	dma_release_channel(host->dma_tx);
-
-	return r;
 }
 
 /*----------------------------------------------------------------------*/
@@ -1253,18 +1235,6 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
 	host = mmc_priv(mmc);
 	host->mmc = mmc;	/* Important */
 
-	r = platform_get_resource(pdev, IORESOURCE_DMA, 0);
-	if (!r)
-		dev_warn(&pdev->dev, "RX DMA resource not specified\n");
-	else
-		host->rxdma = r->start;
-
-	r = platform_get_resource(pdev, IORESOURCE_DMA, 1);
-	if (!r)
-		dev_warn(&pdev->dev, "TX DMA resource not specified\n");
-	else
-		host->txdma = r->start;
-
 	host->mem_res = mem;
 	host->base = ioremap(mem->start, mem_size);
 	if (!host->base)
@@ -1291,8 +1261,13 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
 	host->mmc_irq = irq;
 	host->sdio_irq = platform_get_irq(pdev, 1);
 
-	if (host->use_dma && davinci_acquire_dma_channels(host) != 0)
-		host->use_dma = 0;
+	if (host->use_dma) {
+		ret = davinci_acquire_dma_channels(host);
+		if (ret == -EPROBE_DEFER)
+			goto out;
+		else if (ret)
+			host->use_dma = 0;
+	}
 
 	/* REVISIT: someday, support IRQ-driven card detection. */
 	mmc->caps |= MMC_CAP_NEEDS_POLL;