From patchwork Tue Oct 1 13:51:33 2013
X-Patchwork-Submitter: Daniel Mack
X-Patchwork-Id: 2970161
From: Daniel Mack
To: linux-omap@vger.kernel.org
Cc: mporter@ti.com, alsa-devel@alsa-project.org, nsekhar@ti.com,
 s.neumann@raumfeld.com, gururaja.hebbar@ti.com, Daniel Mack,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH] ARM: omap: edma: add suspend/resume hooks
Date: Tue, 1 Oct 2013 15:51:33 +0200
Message-Id: <1380635493-31040-2-git-send-email-zonque@gmail.com>
In-Reply-To: <1380635493-31040-1-git-send-email-zonque@gmail.com>
References: <1380635493-31040-1-git-send-email-zonque@gmail.com>

This patch makes the edma driver resume correctly after suspend. Tested
on an AM33xx platform with cyclic audio streams. The code was
shamelessly taken from an ancient BSP tree.
Signed-off-by: Daniel Mack
---
 arch/arm/common/edma.c | 133 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)

diff --git a/arch/arm/common/edma.c b/arch/arm/common/edma.c
index 2a72169..d787f14 100644
--- a/arch/arm/common/edma.c
+++ b/arch/arm/common/edma.c
@@ -258,6 +258,20 @@ struct edma {
 			void *data);
 		void *data;
 	} intr_data[EDMA_MAX_DMACH];
+
+	struct {
+		struct edmacc_param *prm_set;
+		unsigned int *ch_map;		/* 64 registers */
+		unsigned int *que_num;		/* 8 registers */
+		unsigned int sh_esr;
+		unsigned int sh_esrh;
+		unsigned int sh_eesr;
+		unsigned int sh_eesrh;
+		unsigned int sh_iesr;
+		unsigned int sh_iesrh;
+		unsigned int que_tc_map;
+		unsigned int que_pri;
+	} context;
 };

 static struct edma *edma_cc[EDMA_MAX_CC];
@@ -1655,6 +1669,16 @@ static int edma_probe(struct platform_device *pdev)
 			memcpy_toio(edmacc_regs_base[j] + PARM_OFFSET(i),
 					&dummy_paramset, PARM_SIZE);

+		/* resume context */
+		edma_cc[j]->context.prm_set =
+			kzalloc((sizeof(struct edmacc_param) *
+				edma_cc[j]->num_slots), GFP_KERNEL);
+		edma_cc[j]->context.ch_map =
+			kzalloc((sizeof(unsigned int) *
+				edma_cc[j]->num_channels), GFP_KERNEL);
+		edma_cc[j]->context.que_num =
+			kzalloc((sizeof(unsigned int) * 8), GFP_KERNEL);
+
 		/* Mark all channels as unused */
 		memset(edma_cc[j]->edma_unused, 0xff,
 			sizeof(edma_cc[j]->edma_unused));
@@ -1762,6 +1786,114 @@ static int edma_probe(struct platform_device *pdev)
 	return 0;
 }

+static int edma_pm_suspend(struct device *dev)
+{
+	int i, j;
+
+	pm_runtime_get_sync(dev);
+
+	for (i = 0; i < arch_num_cc; i++) {
+		struct edma *ecc = edma_cc[i];
+
+		/* backup channel data */
+		for (j = 0; j < ecc->num_channels; j++)
+			ecc->context.ch_map[j] =
+				edma_read_array(i, EDMA_DCHMAP, j);
+
+		/* backup DMA Queue Number */
+		for (j = 0; j < 8; j++)
+			ecc->context.que_num[j] =
+				edma_read_array(i, EDMA_DMAQNUM, j);
+
+		/* backup DMA shadow Event Set data */
+		ecc->context.sh_esr = edma_shadow0_read_array(i, SH_ESR, 0);
+		ecc->context.sh_esrh = edma_shadow0_read_array(i, SH_ESR, 1);
+
+		/* backup DMA Shadow Event Enable Set data */
+		ecc->context.sh_eesr =
+			edma_shadow0_read_array(i, SH_EER, 0);
+		ecc->context.sh_eesrh =
+			edma_shadow0_read_array(i, SH_EER, 1);
+
+		/* backup DMA Shadow Interrupt Enable Set data */
+		ecc->context.sh_iesr =
+			edma_shadow0_read_array(i, SH_IER, 0);
+		ecc->context.sh_iesrh =
+			edma_shadow0_read_array(i, SH_IER, 1);
+
+		ecc->context.que_tc_map = edma_read(i, EDMA_QUETCMAP);
+
+		/* backup DMA Queue Priority data */
+		ecc->context.que_pri = edma_read(i, EDMA_QUEPRI);
+
+		/* backup paramset */
+		for (j = 0; j < ecc->num_slots; j++)
+			memcpy_fromio(&ecc->context.prm_set[j],
+					edmacc_regs_base[i] + PARM_OFFSET(j),
+					PARM_SIZE);
+	}
+
+	pm_runtime_put_sync(dev);
+
+	return 0;
+}
+
+static int edma_pm_resume(struct device *dev)
+{
+	int i, j;
+
+	pm_runtime_get_sync(dev);
+
+	for (i = 0; i < arch_num_cc; i++) {
+		struct edma *ecc = edma_cc[i];
+
+		/* restore channel data */
+		for (j = 0; j < ecc->num_channels; j++) {
+			edma_write_array(i, EDMA_DCHMAP, j,
+					ecc->context.ch_map[j]);
+		}
+
+		/* restore DMA Queue Number */
+		for (j = 0; j < 8; j++) {
+			edma_write_array(i, EDMA_DMAQNUM, j,
+					ecc->context.que_num[j]);
+		}
+
+		/* restore DMA shadow Event Set data */
+		edma_shadow0_write_array(i, SH_ESR, 0, ecc->context.sh_esr);
+		edma_shadow0_write_array(i, SH_ESR, 1, ecc->context.sh_esrh);
+
+		/* restore DMA Shadow Event Enable Set data */
+		edma_shadow0_write_array(i, SH_EESR, 0,
+				ecc->context.sh_eesr);
+		edma_shadow0_write_array(i, SH_EESR, 1,
+				ecc->context.sh_eesrh);
+
+		/* restore DMA Shadow Interrupt Enable Set data */
+		edma_shadow0_write_array(i, SH_IESR, 0,
+				ecc->context.sh_iesr);
+		edma_shadow0_write_array(i, SH_IESR, 1,
+				ecc->context.sh_iesrh);
+
+		edma_write(i, EDMA_QUETCMAP, ecc->context.que_tc_map);
+
+		/* restore DMA Queue Priority data */
+		edma_write(i, EDMA_QUEPRI, ecc->context.que_pri);
+
+		/* restore paramset */
+		for (j = 0; j < ecc->num_slots; j++) {
+			memcpy_toio(edmacc_regs_base[i] + PARM_OFFSET(j),
+					&ecc->context.prm_set[j], PARM_SIZE);
+		}
+	}
+
+	pm_runtime_put_sync(dev);
+
+	return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(edma_pm_ops, edma_pm_suspend, edma_pm_resume);
+
 static const struct of_device_id edma_of_ids[] = {
 	{ .compatible = "ti,edma3", },
 	{}
@@ -1770,6 +1902,7 @@ static const struct of_device_id edma_of_ids[] = {
 static struct platform_driver edma_driver = {
 	.driver = {
 		.name = "edma",
+		.pm = &edma_pm_ops,
 		.of_match_table = edma_of_ids,
 	},
 	.probe = edma_probe,