From patchwork Mon Dec 14 20:47:39 2015
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 7848461
From: Peter Ujfalusi
Subject: [PATCH V03 2/5] dmaengine: core: Move and merge the code paths using private_candidate
Date: Mon, 14 Dec 2015 22:47:39 +0200
Message-ID: <1450126062-8063-3-git-send-email-peter.ujfalusi@ti.com>
In-Reply-To: <1450126062-8063-1-git-send-email-peter.ujfalusi@ti.com>
References: <1450126062-8063-1-git-send-email-peter.ujfalusi@ti.com>
Cc: tony@atomide.com, nsekhar@ti.com, linux-kernel@vger.kernel.org,
    dmaengine@vger.kernel.org, linux-omap@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org

Channel matching with private_candidate() is used in two code paths; their
error checking differs slightly and they duplicate code.
Move the code into a new find_candidate() helper to provide consistent
behavior and to allow this mode of channel lookup to be reused later.

Signed-off-by: Peter Ujfalusi
Reviewed-by: Andy Shevchenko
Reviewed-by: Arnd Bergmann
---
 drivers/dma/dmaengine.c | 81 +++++++++++++++++++++++++------------------------
 1 file changed, 42 insertions(+), 39 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 6311e1fc80be..ea9d66982d40 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -546,6 +546,42 @@ static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
 	return NULL;
 }
 
+static struct dma_chan *find_candidate(struct dma_device *device,
+				       const dma_cap_mask_t *mask,
+				       dma_filter_fn fn, void *fn_param)
+{
+	struct dma_chan *chan = private_candidate(mask, device, fn, fn_param);
+	int err;
+
+	if (chan) {
+		/* Found a suitable channel, try to grab, prep, and return it.
+		 * We first set DMA_PRIVATE to disable balance_ref_count as this
+		 * channel will not be published in the general-purpose
+		 * allocator
+		 */
+		dma_cap_set(DMA_PRIVATE, device->cap_mask);
+		device->privatecnt++;
+		err = dma_chan_get(chan);
+
+		if (err) {
+			if (err == -ENODEV) {
+				pr_debug("%s: %s module removed\n", __func__,
+					 dma_chan_name(chan));
+				list_del_rcu(&device->global_node);
+			} else
+				pr_debug("%s: failed to get %s: (%d)\n",
+					 __func__, dma_chan_name(chan), err);
+
+			if (--device->privatecnt == 0)
+				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
+
+			chan = ERR_PTR(err);
+		}
+	}
+
+	return chan ? chan : ERR_PTR(-EPROBE_DEFER);
+}
+
 /**
  * dma_get_slave_channel - try to get specific channel exclusively
  * @chan: target channel
@@ -584,7 +620,6 @@ struct dma_chan *dma_get_any_slave_channel(struct dma_device *device)
 {
 	dma_cap_mask_t mask;
 	struct dma_chan *chan;
-	int err;
 
 	dma_cap_zero(mask);
 	dma_cap_set(DMA_SLAVE, mask);
@@ -592,23 +627,11 @@ struct dma_chan *dma_get_any_slave_channel(struct dma_device *device)
 	/* lock against __dma_request_channel */
 	mutex_lock(&dma_list_mutex);
 
-	chan = private_candidate(&mask, device, NULL, NULL);
-	if (chan) {
-		dma_cap_set(DMA_PRIVATE, device->cap_mask);
-		device->privatecnt++;
-		err = dma_chan_get(chan);
-		if (err) {
-			pr_debug("%s: failed to get %s: (%d)\n",
-				__func__, dma_chan_name(chan), err);
-			chan = NULL;
-			if (--device->privatecnt == 0)
-				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
-		}
-	}
+	chan = find_candidate(device, &mask, NULL, NULL);
 
 	mutex_unlock(&dma_list_mutex);
 
-	return chan;
+	return IS_ERR(chan) ? NULL : chan;
 }
 EXPORT_SYMBOL_GPL(dma_get_any_slave_channel);
 
@@ -625,35 +648,15 @@ struct dma_chan *__dma_request_channel(const dma_cap_mask_t *mask,
 {
 	struct dma_device *device, *_d;
 	struct dma_chan *chan = NULL;
-	int err;
 
 	/* Find a channel */
 	mutex_lock(&dma_list_mutex);
 	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
-		chan = private_candidate(mask, device, fn, fn_param);
-		if (chan) {
-			/* Found a suitable channel, try to grab, prep, and
-			 * return it.  We first set DMA_PRIVATE to disable
-			 * balance_ref_count as this channel will not be
-			 * published in the general-purpose allocator
-			 */
-			dma_cap_set(DMA_PRIVATE, device->cap_mask);
-			device->privatecnt++;
-			err = dma_chan_get(chan);
+		chan = find_candidate(device, mask, fn, fn_param);
+		if (!IS_ERR(chan))
+			break;
 
-			if (err == -ENODEV) {
-				pr_debug("%s: %s module removed\n",
-					 __func__, dma_chan_name(chan));
-				list_del_rcu(&device->global_node);
-			} else if (err)
-				pr_debug("%s: failed to get %s: (%d)\n",
-					 __func__, dma_chan_name(chan), err);
-			else
-				break;
-			if (--device->privatecnt == 0)
-				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
-			chan = NULL;
-		}
+		chan = NULL;
 	}
 	mutex_unlock(&dma_list_mutex);
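
For context (not part of the patch): below is a minimal sketch of the client-side
call path that ends up in __dma_request_channel(), and therefore in the new
find_candidate(), via the dma_request_channel() macro. The filter callback, the
my_* names, and the channel-id value are illustrative placeholders, assuming the
standard dmaengine request API of this era.

    /* Illustrative dmaengine client sketch (names are hypothetical).
     * dma_request_channel(mask, fn, param) expands to
     * __dma_request_channel(&mask, fn, param), i.e. the loop refactored above.
     */
    #include <linux/dmaengine.h>

    /* Hypothetical filter: accept only the channel id passed via param */
    static bool my_filter(struct dma_chan *chan, void *param)
    {
    	return chan->chan_id == *(int *)param;
    }

    static struct dma_chan *my_request_chan(void)
    {
    	dma_cap_mask_t mask;
    	static int wanted_id = 3;	/* illustrative value */

    	dma_cap_zero(mask);
    	dma_cap_set(DMA_SLAVE, mask);

    	/* Walks dma_device_list and calls find_candidate() per device */
    	return dma_request_channel(mask, my_filter, &wanted_id);
    }

With the refactoring, a failure to grab a channel on one device makes
find_candidate() return an ERR_PTR and the loop simply tries the next device,
while dma_get_any_slave_channel() maps any ERR_PTR back to NULL to keep its
existing return convention.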