From patchwork Tue Aug 6 17:49:32 2013
X-Patchwork-Submitter: Dave Gerlach
X-Patchwork-Id: 2839510
From: Dave Gerlach
Subject: [PATCHv3 5/9] ARM: OMAP2+: AM33XX: Add assembly code for PM operations
Date: Tue, 6 Aug 2013 12:49:32 -0500
Message-ID: <1375811376-49985-6-git-send-email-d-gerlach@ti.com>
In-Reply-To: <1375811376-49985-1-git-send-email-d-gerlach@ti.com>
References: <1375811376-49985-1-git-send-email-d-gerlach@ti.com>
Cc: Paul Walmsley, Kevin Hilman, Santosh Shilimkar, Vaibhav Bedia,
 Dave Gerlach
List-Id: linux-arm-kernel@lists.infradead.org

From: Vaibhav Bedia

In preparation for suspend-resume support for AM33XX, add the assembly
file with the code that is copied to internal memory (OCMC RAM) during
boot and runs from there. As part of low-power entry (DeepSleep0 mode
in the AM33XX TRM), the code running from OCMC RAM does the following:
 1. Stores the EMIF configuration
 2. Puts the external memory in self-refresh
 3. Disables the EMIF clock
 4.
    Executes WFI after writing to the MPU_CLKCTRL register.

If no interrupts have come in, WFI execution on the MPU gets registered
as an interrupt with the WKUP-M3. The WKUP-M3 takes care of disabling
some clocks which the MPU should not (L3, L4, OCMC RAM etc.) and takes
care of the clockdomain and powerdomain transitions that are part of
DeepSleep0 mode entry. In case a late interrupt comes in, the WFI ends
up as a NOP and the MPU continues execution from internal memory. The
'abort path' code undoes whatever was done as part of the low-power
entry and indicates a suspend failure by passing a non-zero value to
the cpu_resume routine. The 'resume path' code is similar to the
'abort path', with the key difference that the MMU is enabled in the
'abort path' but disabled in the 'resume path' due to the MPU getting
powered off.

Signed-off-by: Vaibhav Bedia
Signed-off-by: Dave Gerlach
Cc: Santosh Shilimkar
Cc: Kevin Hilman
Reviewed-by: Russ Dill
---
 arch/arm/mach-omap2/sleep33xx.S | 350 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 350 insertions(+)
 create mode 100644 arch/arm/mach-omap2/sleep33xx.S

diff --git a/arch/arm/mach-omap2/sleep33xx.S b/arch/arm/mach-omap2/sleep33xx.S
new file mode 100644
index 0000000..834c7d4
--- /dev/null
+++ b/arch/arm/mach-omap2/sleep33xx.S
@@ -0,0 +1,350 @@
+/*
+ * Low level suspend code for AM33XX SoCs
+ *
+ * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/
+ * Vaibhav Bedia
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include
+#include
+#include
+#include
+
+#include "cm33xx.h"
+#include "pm33xx.h"
+#include "prm33xx.h"
+
+	.text
+	.align 3
+
+/*
+ * This routine is executed from internal RAM and expects some
+ * parameters to be passed in r0 _strictly_ in the following order:
+ * 1) emif_addr_virt - ioremapped EMIF address
+ * 2) mem_type - 2 -> DDR2, 3 -> DDR3
+ * 3) dram_sync_word - uncached word in SDRAM
+ *
+ * The code loads these values, taking the r0 value as a reference
+ * to the array, into registers starting from r1, i.e. emif_addr_virt
+ * goes to r1, mem_type goes to r2 and so on. These are then saved
+ * into memory locations before proceeding with the sleep sequence,
+ * and hence registers r0, r1 etc. can still be used in the rest of
+ * the sleep code.
+ */
+
+ENTRY(am33xx_do_wfi)
+	stmfd	sp!, {r4 - r11, lr}	@ save registers on stack
+
+	ldm	r0, {r1-r3}		@ gather values passed
+
+	/* Save the values passed */
+	str	r1, emif_addr_virt
+	str	r2, mem_type
+	str	r3, dram_sync_word
+
+	/*
+	 * Flush all data from the L1 data cache before disabling
+	 * the SCTLR.C bit.
+	 */
+	ldr	r1, kernel_flush
+	blx	r1
+
+	/*
+	 * Clear the SCTLR.C bit to prevent further data cache
+	 * allocation. Clearing SCTLR.C makes all data accesses
+	 * strongly ordered; they no longer hit the cache.
+	 */
+	mrc	p15, 0, r0, c1, c0, 0
+	bic	r0, r0, #(1 << 2)	@ Disable the C bit
+	mcr	p15, 0, r0, c1, c0, 0
+	isb
+
+	/*
+	 * Invalidate the L1 data cache. Even though only an
+	 * invalidate is necessary, the exported flush API is used
+	 * here. Doing a clean on an already clean cache is almost
+	 * a NOP.
+	 */
+	ldr	r1, kernel_flush
+	blx	r1
+
+	ldr	r0, emif_addr_virt
+	/* Save EMIF configuration */
+	ldr	r1, [r0, #EMIF_SDRAM_CONFIG]
+	str	r1, emif_sdcfg_val
+	ldr	r1, [r0, #EMIF_SDRAM_REFRESH_CONTROL]
+	str	r1, emif_ref_ctrl_val
+	ldr	r1, [r0, #EMIF_SDRAM_TIMING_1]
+	str	r1, emif_timing1_val
+	ldr	r1, [r0, #EMIF_SDRAM_TIMING_2]
+	str	r1, emif_timing2_val
+	ldr	r1, [r0, #EMIF_SDRAM_TIMING_3]
+	str	r1, emif_timing3_val
+	ldr	r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
+	str	r1, emif_pmcr_val
+	ldr	r1, [r0, #EMIF_POWER_MANAGEMENT_CTRL_SHDW]
+	str	r1, emif_pmcr_shdw_val
+	ldr	r1, [r0, #EMIF_SDRAM_OUTPUT_IMPEDANCE_CALIBRATION_CONFIG]
+	str	r1, emif_zqcfg_val
+	ldr	r1, [r0, #EMIF_DDR_PHY_CTRL_1]
+	str	r1, emif_rd_lat_val
+
+	/* Put SDRAM in self-refresh */
+	ldr	r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
+	orr	r1, r1, #0xa0
+	str	r1, [r0, #EMIF_POWER_MANAGEMENT_CTRL_SHDW]
+	str	r1, [r0, #4]
+
+	ldr	r1, dram_sync_word	@ a dummy access to DDR as per spec
+	ldr	r2, [r1, #0]
+	ldr	r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
+	orr	r1, r1, #0x200
+	str	r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
+
+	mov	r1, #0x1000		@ Wait for the system to enter SR
+wait_sr:
+	subs	r1, r1, #1
+	bne	wait_sr
+
+	/* Disable EMIF */
+	ldr	r1, virt_emif_clkctrl
+	ldr	r2, [r1]
+	bic	r2, r2, #0x03
+	str	r2, [r1]
+
+	ldr	r1, virt_emif_clkctrl
+wait_emif_disable:
+	ldr	r2, [r1]
+	ldr	r3, module_disabled_val
+	cmp	r2, r3
+	bne	wait_emif_disable
+
+	/*
+	 * For the MPU WFI to be registered as an interrupt
+	 * to the WKUP_M3, MPU_CLKCTRL.MODULEMODE needs to be set
+	 * to DISABLED.
+	 */
+	ldr	r1, virt_mpu_clkctrl
+	ldr	r2, [r1]
+	bic	r2, r2, #0x03
+	str	r2, [r1]
+
+	/*
+	 * Execute an ISB instruction to ensure that all of the
+	 * CP15 register changes have been committed.
+	 */
+	isb
+
+	/*
+	 * Execute a barrier instruction to ensure that all cache,
+	 * TLB and branch predictor maintenance operations issued
+	 * have completed.
+	 */
+	dsb
+	dmb
+
+	/*
+	 * Execute a WFI instruction and wait until the
+	 * STANDBYWFI output is asserted to indicate that the
+	 * CPU is in the idle and low power state. The CPU can
+	 * speculatively prefetch instructions, so add NOPs after
+	 * the WFI. Thirteen NOPs as per the Cortex-A8 pipeline.
+	 */
+	wfi
+
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+	nop
+
+	/* We come here in case of an abort due to a late interrupt */
+
+	/* Set MPU_CLKCTRL.MODULEMODE back to ENABLE */
+	ldr	r1, virt_mpu_clkctrl
+	mov	r2, #0x02
+	str	r2, [r1]
+
+	/* Re-enable EMIF */
+	ldr	r1, virt_emif_clkctrl
+	mov	r2, #0x02
+	str	r2, [r1]
+wait_emif_enable:
+	ldr	r3, [r1]
+	cmp	r2, r3
+	bne	wait_emif_enable
+
+	/* Disable EMIF self-refresh */
+	ldr	r0, emif_addr_virt
+	ldr	r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
+	bic	r1, r1, #LP_MODE_MASK
+	str	r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
+	str	r1, [r0, #EMIF_POWER_MANAGEMENT_CTRL_SHDW]
+
+	/*
+	 * A write to the SDRAM CONFIG register triggers
+	 * an init sequence, hence it must be done
+	 * at the end for DDR2.
+	 */
+	ldr	r0, emif_addr_virt
+	add	r0, r0, #EMIF_SDRAM_CONFIG
+	ldr	r4, emif_sdcfg_val
+	str	r4, [r0]
+
+	/*
+	 * Set the SCTLR.C bit to allow data cache allocation
+	 */
+	mrc	p15, 0, r0, c1, c0, 0
+	orr	r0, r0, #(1 << 2)	@ Enable the C bit
+	mcr	p15, 0, r0, c1, c0, 0
+	isb
+
+	/* Kill some time for sanity to settle in */
+	mov	r0, #0x1000
+wait_abt:
+	subs	r0, r0, #1
+	bne	wait_abt
+
+	/* Let the suspend code know about the abort */
+	mov	r0, #1
+	ldmfd	sp!, {r4 - r11, pc}	@ restore regs and return
+ENDPROC(am33xx_do_wfi)
+
+	.align
+ENTRY(am33xx_resume_offset)
+	.word	. - am33xx_do_wfi
+
+ENTRY(am33xx_resume_from_deep_sleep)
+	/* Re-enable EMIF */
+	ldr	r0, phys_emif_clkctrl
+	mov	r1, #0x02
+	str	r1, [r0]
+wait_emif_enable1:
+	ldr	r2, [r0]
+	cmp	r1, r2
+	bne	wait_emif_enable1
+
+	/* Configure the EMIF timings */
+	ldr	r0, emif_phys_addr
+	ldr	r1, emif_rd_lat_val
+	str	r1, [r0, #EMIF_DDR_PHY_CTRL_1]
+	str	r1, [r0, #EMIF_DDR_PHY_CTRL_1_SHDW]
+	ldr	r1, emif_timing1_val
+	str	r1, [r0, #EMIF_SDRAM_TIMING_1]
+	str	r1, [r0, #EMIF_SDRAM_TIMING_1_SHDW]
+	ldr	r1, emif_timing2_val
+	str	r1, [r0, #EMIF_SDRAM_TIMING_2]
+	str	r1, [r0, #EMIF_SDRAM_TIMING_2_SHDW]
+	ldr	r1, emif_timing3_val
+	str	r1, [r0, #EMIF_SDRAM_TIMING_3]
+	str	r1, [r0, #EMIF_SDRAM_TIMING_3_SHDW]
+	ldr	r1, emif_ref_ctrl_val
+	str	r1, [r0, #EMIF_SDRAM_REFRESH_CONTROL]
+	str	r1, [r0, #EMIF_SDRAM_REFRESH_CTRL_SHDW]
+	ldr	r1, emif_pmcr_val
+	str	r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
+	ldr	r1, emif_pmcr_shdw_val
+	str	r1, [r0, #EMIF_POWER_MANAGEMENT_CTRL_SHDW]
+
+	/*
+	 * Output impedance calibration is needed only for DDR3,
+	 * but since the initial state of this will be disabled
+	 * for DDR2 there is no harm in restoring the old
+	 * configuration.
+	 */
+	ldr	r1, emif_zqcfg_val
+	str	r1, [r0, #EMIF_SDRAM_OUTPUT_IMPEDANCE_CALIBRATION_CONFIG]
+
+	/* Write to SDRAM_CONFIG only for DDR2 */
+	ldr	r2, mem_type
+	cmp	r2, #MEM_TYPE_DDR2
+	bne	resume_to_ddr
+
+	/*
+	 * A write to the SDRAM CONFIG register triggers
+	 * an init sequence, hence it must be done
+	 * at the end for DDR2.
+	 */
+	ldr	r1, emif_sdcfg_val
+	str	r1, [r0, #EMIF_SDRAM_CONFIG]
+
+resume_to_ddr:
+	/* Back from la-la-land. Kill some time for sanity to settle in */
+	mov	r0, #0x1000
+wait_resume:
+	subs	r0, r0, #1
+	bne	wait_resume
+
+	/* We are back. Branch to the common CPU resume routine */
+	mov	r0, #0
+	ldr	pc, resume_addr
+ENDPROC(am33xx_resume_from_deep_sleep)
+
+
+/*
+ * Local variables
+ */
+	.align
+resume_addr:
+	.word	cpu_resume - PAGE_OFFSET + 0x80000000
+kernel_flush:
+	.word	v7_flush_dcache_all
+ddr_start:
+	.word	PAGE_OFFSET
+emif_phys_addr:
+	.word	AM33XX_EMIF_BASE
+virt_mpu_clkctrl:
+	.word	AM33XX_CM_MPU_MPU_CLKCTRL
+virt_emif_clkctrl:
+	.word	AM33XX_CM_PER_EMIF_CLKCTRL
+phys_emif_clkctrl:
+	.word	(AM33XX_CM_BASE + AM33XX_CM_PER_MOD + \
+		 AM33XX_CM_PER_EMIF_CLKCTRL_OFFSET)
+module_disabled_val:
+	.word	0x30000
+
+/* DDR related defines */
+dram_sync_word:
+	.word	0xDEADBEEF
+mem_type:
+	.word	0xDEADBEEF
+emif_addr_virt:
+	.word	0xDEADBEEF
+emif_rd_lat_val:
+	.word	0xDEADBEEF
+emif_timing1_val:
+	.word	0xDEADBEEF
+emif_timing2_val:
+	.word	0xDEADBEEF
+emif_timing3_val:
+	.word	0xDEADBEEF
+emif_sdcfg_val:
+	.word	0xDEADBEEF
+emif_ref_ctrl_val:
+	.word	0xDEADBEEF
+emif_zqcfg_val:
+	.word	0xDEADBEEF
+emif_pmcr_val:
+	.word	0xDEADBEEF
+emif_pmcr_shdw_val:
+	.word	0xDEADBEEF
+
+	.align 3
+ENTRY(am33xx_do_wfi_sz)
+	.word	. - am33xx_do_wfi