From patchwork Thu Mar 13 09:18:33 2014
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 3823541
X-Patchwork-Delegate: vinod.koul@intel.com
From: Peter Ujfalusi
CC: Mark Brown, Liam Girdwood, Tony Lindgren, Jyri Sarha
Subject: [PATCH 11/18] dma: edma: Prefix debug prints where the text was identical in prep callbacks
Date: Thu, 13 Mar 2014 11:18:33 +0200
Message-ID: <1394702320-21743-12-git-send-email-peter.ujfalusi@ti.com>
In-Reply-To: <1394702320-21743-1-git-send-email-peter.ujfalusi@ti.com>
References: <1394702320-21743-1-git-send-email-peter.ujfalusi@ti.com>
X-Mailing-List: dmaengine@vger.kernel.org

The prep_slave_sg and prep_dma_cyclic callbacks share mostly the same
failure cases, and they printed identical error texts when those cases
were hit. Prefixing each message with the function name (__func__) makes
it clear which callback reported a given error when debugging.

At the same time, raise the log level of the descriptor allocation
failure from dev_dbg to dev_err: all other error paths in these callbacks
use dev_err, and this failure is just as fatal.
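For reference, a minimal sketch of the pattern the patch applies, using a
hypothetical callback name (foo_prep_example) rather than the edma
functions themselves: error text shared between callbacks is prefixed with
__func__, and a fatal failure is reported with dev_err instead of dev_dbg.

	/* Hypothetical prep callback illustrating the error-print pattern. */
	static struct dma_async_tx_descriptor *foo_prep_example(
			struct dma_chan *chan, enum dma_transfer_direction direction)
	{
		struct device *dev = chan->device->dev;

		if (direction != DMA_DEV_TO_MEM && direction != DMA_MEM_TO_DEV) {
			/* __func__ disambiguates text shared with other callbacks */
			dev_err(dev, "%s: Invalid direction\n", __func__);
			return NULL;
		}

		/* descriptor allocation and setup elided */
		return NULL;
	}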
Signed-off-by: Peter Ujfalusi
---
 drivers/dma/edma.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c
index e2aa42b8342f..07ac5f4eeb56 100644
--- a/drivers/dma/edma.c
+++ b/drivers/dma/edma.c
@@ -430,14 +430,14 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
 	}
 
 	if (dev_width == DMA_SLAVE_BUSWIDTH_UNDEFINED) {
-		dev_err(dev, "Undefined slave buswidth\n");
+		dev_err(dev, "%s: Undefined slave buswidth\n", __func__);
 		return NULL;
 	}
 
 	edesc = kzalloc(sizeof(*edesc) + sg_len *
 		sizeof(edesc->pset[0]), GFP_ATOMIC);
 	if (!edesc) {
-		dev_dbg(dev, "Failed to allocate a descriptor\n");
+		dev_err(dev, "%s: Failed to allocate a descriptor\n", __func__);
 		return NULL;
 	}
 
@@ -453,7 +453,8 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
 					EDMA_SLOT_ANY);
 			if (echan->slot[i] < 0) {
 				kfree(edesc);
-				dev_err(dev, "Failed to allocate slot\n");
+				dev_err(dev, "%s: Failed to allocate slot\n",
+					__func__);
 				return NULL;
 			}
 		}
@@ -522,7 +523,7 @@ static struct dma_async_tx_descriptor *edma_prep_dma_cyclic(
 	}
 
 	if (dev_width == DMA_SLAVE_BUSWIDTH_UNDEFINED) {
-		dev_err(dev, "Undefined slave buswidth\n");
+		dev_err(dev, "%s: Undefined slave buswidth\n", __func__);
 		return NULL;
 	}
 
@@ -547,7 +548,7 @@ static struct dma_async_tx_descriptor *edma_prep_dma_cyclic(
 	edesc = kzalloc(sizeof(*edesc) + nslots *
 		sizeof(edesc->pset[0]), GFP_ATOMIC);
 	if (!edesc) {
-		dev_dbg(dev, "Failed to allocate a descriptor\n");
+		dev_err(dev, "%s: Failed to allocate a descriptor\n", __func__);
 		return NULL;
 	}
 
@@ -565,7 +566,8 @@ static struct dma_async_tx_descriptor *edma_prep_dma_cyclic(
 					EDMA_SLOT_ANY);
 			if (echan->slot[i] < 0) {
 				kfree(edesc);
-				dev_err(dev, "Failed to allocate slot\n");
+				dev_err(dev, "%s: Failed to allocate slot\n",
+					__func__);
 				return NULL;
 			}
 		}