
[v2,17/18] ARM: OMAP4+: CPUidle: Add OMAP5 idle driver support

Message ID 1364205910-32392-18-git-send-email-santosh.shilimkar@ti.com (mailing list archive)
State New, archived

Commit Message

Santosh Shilimkar March 25, 2013, 10:05 a.m. UTC
The OMAP5 idle driver can re-use the OMAP4 CPUidle driver implementation
thanks to the compatible MPUSS design.

Unlike OMAP4, on OMAP5 devices the MPUSS CSWR (Close Switch Retention)
power state can be achieved with the corresponding power states on the
CPU0 and CPU1 power domains. This mode was broken on OMAP4 devices
because of a hardware limitation. Also, there is no CPU low-power entry
order requirement (e.g. master CPU last) for the CSWR C-state, which is
icing on the cake. The code makes use of a voting scheme for the cluster
low-power state to support the MPUSS CSWR C-state.

Supported OMAP5 CPUidle C-states:
        C1 - CPU0 ON(WFI) + CPU1 ON(WFI) + MPUSS ON
        C2 - CPU0 CSWR + CPU1 CSWR + MPUSS CSWR
        C3 - CPU0 OFF + CPU1 OFF + MPUSS OSWR

Acked-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
 arch/arm/mach-omap2/Kconfig                        |    1 +
 arch/arm/mach-omap2/Makefile                       |    3 +-
 .../{cpuidle44xx.c => cpuidle_omap4plus.c}         |  110 +++++++++++++++++++-
 arch/arm/mach-omap2/omap-mpuss-lowpower.c          |    7 +-
 arch/arm/mach-omap2/pm_omap4plus.c                 |    2 +-
 5 files changed, 115 insertions(+), 8 deletions(-)
 rename arch/arm/mach-omap2/{cpuidle44xx.c => cpuidle_omap4plus.c} (69%)

Comments

Kevin Hilman April 3, 2013, 9:25 p.m. UTC | #1
Santosh Shilimkar <santosh.shilimkar@ti.com> writes:

> The OMAP5 idle driver can re-use the OMAP4 CPUidle driver implementation
> thanks to the compatible MPUSS design.
>
> Unlike OMAP4, on OMAP5 devices the MPUSS CSWR (Close Switch Retention)
> power state can be achieved with the corresponding power states on the
> CPU0 and CPU1 power domains. This mode was broken on OMAP4 devices
> because of a hardware limitation.

I'm a bit confused by the description here.

I gather from the code that this means that CPU0 and CPU1 can hit CSWR
independently, correct?

> Also, there is no CPU low-power entry order requirement (e.g. master
> CPU last) for the CSWR C-state, which is icing on the cake.

Even on secure devices?

> The code makes use of a voting scheme for the cluster low-power state
> to support the MPUSS CSWR C-state.

The voting scheme and associated locking should be better described
here, or commented in the code itself.

> Supported OMAP5 CPUidle C-states:
>         C1 - CPU0 ON(WFI) + CPU1 ON(WFI) + MPUSS ON
>         C2 - CPU0 CSWR + CPU1 CSWR + MPUSS CSWR
>         C3 - CPU0 OFF + CPU1 OFF + MPUSS OSWR
>
> Acked-by: Nishanth Menon <nm@ti.com>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>

[...]

> +static int omap_enter_idle_smp(struct cpuidle_device *dev,
> +			struct cpuidle_driver *drv,
> +			int index)
> +{
> +	struct idle_statedata *cx = state_ptr + index;
> +	int cpu_id = smp_processor_id();
> +	unsigned long flag;
> +
> +	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu_id);

I think the CPUidle core handles the broadcast notification now.

> +	raw_spin_lock_irqsave(&mpu_lock, flag);
> +	cx->mpu_state_vote++;

How about using an atomic_t and atomic_inc()/atomic_dec() instead, which
will avoid the need for a spinlock.
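
Something like the following, perhaps (untested sketch, assuming
mpu_state_vote becomes an atomic_t in struct idle_statedata):

	atomic_inc(&cx->mpu_state_vote);
	if (atomic_read(&cx->mpu_state_vote) == num_online_cpus()) {
		pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
		omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
	}

	omap4_enter_lowpower(dev->cpu, cx->cpu_state);

	if (atomic_read(&cx->mpu_state_vote) == num_online_cpus())
		omap_set_pwrdm_state(mpu_pd, PWRDM_POWER_ON);
	atomic_dec(&cx->mpu_state_vote);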

Even with that, it still seems potentially racy.  Do you need a memory
barrier here to be sure that any changes from another CPU are visible
here, and vice versa?

> +	if (cx->mpu_state_vote == num_online_cpus()) {
> +		pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
> +		omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
> +	}
> +	raw_spin_unlock_irqrestore(&mpu_lock, flag);
> +
> +	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
> +
> +	raw_spin_lock_irqsave(&mpu_lock, flag);
> +	if (cx->mpu_state_vote == num_online_cpus())
> +		omap_set_pwrdm_state(mpu_pd, PWRDM_POWER_ON);
> +	cx->mpu_state_vote--;
> +	raw_spin_unlock_irqrestore(&mpu_lock, flag);
> +
> +	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu_id);
> +
> +	return index;
> +}

Kevin
Santosh Shilimkar April 4, 2013, 2:16 p.m. UTC | #2
On Thursday 04 April 2013 02:55 AM, Kevin Hilman wrote:
> Santosh Shilimkar <santosh.shilimkar@ti.com> writes:
> 
>> The OMAP5 idle driver can re-use the OMAP4 CPUidle driver implementation
>> thanks to the compatible MPUSS design.
>>
>> Unlike OMAP4, on OMAP5 devices the MPUSS CSWR (Close Switch Retention)
>> power state can be achieved with the corresponding power states on the
>> CPU0 and CPU1 power domains. This mode was broken on OMAP4 devices
>> because of a hardware limitation.
> 
> I'm a bit confused by the description here.
> 
> I gather from the code that this means that CPU0 and CPU1 can hit CSWR
> independently, correct?
> 
They can be programmed independently without any ordering (like
CPU0 last, etc.), but the actual transition to the low-power CSWR
state happens together. Until then, the first CPU to hit WFI stays
in WFI, waiting for the other CPU to hit WFI, and then both make
the transition together. This is managed completely in hardware.


>> Also, there is no CPU low-power entry order requirement (e.g. master
>> CPU last) for the CSWR C-state, which is icing on the cake.
> 
> Even on secure devices?
> 
Yes, on secure devices too. Actually, since we don't lose context,
secure entry/exit doesn't come into the picture.

>> The code makes use of a voting scheme for the cluster low-power state
>> to support the MPUSS CSWR C-state.
> 
> The voting scheme and associated locking should be better described
> here, or commented in the code itself.
> 
You are right. It deserves some description.

>> Supported OMAP5 CPUidle C-states:
>>         C1 - CPU0 ON(WFI) + CPU1 ON(WFI) + MPUSS ON
>>         C2 - CPU0 CSWR + CPU1 CSWR + MPUSS CSWR
>>         C3 - CPU0 OFF + CPU1 OFF + MPUSS OSWR
>>
>> Acked-by: Nishanth Menon <nm@ti.com>
>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> 
> [...]
> 
>> +static int omap_enter_idle_smp(struct cpuidle_device *dev,
>> +			struct cpuidle_driver *drv,
>> +			int index)
>> +{
>> +	struct idle_statedata *cx = state_ptr + index;
>> +	int cpu_id = smp_processor_id();
>> +	unsigned long flag;
>> +
>> +	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu_id);
> 
> I think the CPUidle core handles the broadcast notification now.
> 
Not in mainline yet. Those patches came after mine, and since I didn't
want an unnecessary merge dependency, I avoided it. It's trivial to
drop it from here once idle-next is merged.
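
Once that series lands, the two clockevents_notify() calls just get
dropped and, if I understand the queued code correctly, the affected
C-states simply grow a CPUIDLE_FLAG_TIMER_STOP flag, roughly:

	{
		/* C2 - CPU0 CSWR + CPU1 CSWR + MPU CSWR */
		.exit_latency = 16 + 16,
		.target_residency = 40,
		.flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TIMER_STOP,
		.enter = omap_enter_idle_smp,
		.name = "C2",
		.desc = "CPUx CSWR, MPUSS CSWR",
	},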

>> +	raw_spin_lock_irqsave(&mpu_lock, flag);
>> +	cx->mpu_state_vote++;
> 
> How about using an atomic_t and atomic_inc()/atomic_dec() instead, which
> will avoid the need for a spinlock.
> 
The spinlock is not just for the vote variable. I had atomic ops in the
first version I gave to the product team, but they found a race
condition where the MPU power state was getting overwritten by the
other CPU.

> Even with that, it still seems potentially racy.  Do you need a memory
> barrier here to be sure that any changes from another CPU are visible
> here, and vice versa?
> 
With locks, you don't need barriers since the updated copy is guaranteed.
Can you please elaborate on the potential race? I have given it pretty
hard thought and didn't see any race that can be exposed with the locks
in place.

Regards,
Santosh
Kevin Hilman April 4, 2013, 5:55 p.m. UTC | #3
Santosh Shilimkar <santosh.shilimkar@ti.com> writes:

> On Thursday 04 April 2013 02:55 AM, Kevin Hilman wrote:
>> Santosh Shilimkar <santosh.shilimkar@ti.com> writes:
>> 
>>> The OMAP5 idle driver can re-use the OMAP4 CPUidle driver implementation
>>> thanks to the compatible MPUSS design.
>>>
>>> Unlike OMAP4, on OMAP5 devices the MPUSS CSWR (Close Switch Retention)
>>> power state can be achieved with the corresponding power states on the
>>> CPU0 and CPU1 power domains. This mode was broken on OMAP4 devices
>>> because of a hardware limitation.
>> 
>> I'm a bit confused by the description here.
>> 
>> I gather from the code that this means that CPU0 and CPU1 can hit CSWR
>> independently, correct?
>> 
> They can be programmed independently without any ordering (like
> CPU0 last, etc.), but the actual transition to the low-power CSWR
> state happens together. Until then, the first CPU to hit WFI stays
> in WFI, waiting for the other CPU to hit WFI, and then both make
> the transition together. This is managed completely in hardware.

OK, elaborating this in the changelog would be helpful.  Use the "will I
understand this changelog in a year" rule to see if the changelog is
detailed enough.  Or better, "will Kevin understand this changelog in a
year."  (hint: the answer is usually no.)  ;)

>>> Also, there is no CPU low-power entry order requirement (e.g. master
>>> CPU last) for the CSWR C-state, which is icing on the cake.
>> 
>> Even on secure devices?
>> 
> Yes, on secure devices too. Actually, since we don't lose context,
> secure entry/exit doesn't come into the picture.
>
>>> The code makes use of a voting scheme for the cluster low-power state
>>> to support the MPUSS CSWR C-state.
>> 
>> The voting scheme and associated locking should be better described
>> here, or commented in the code itself.
>> 
> You are right. It deserves some description.
>
>>> Supported OMAP5 CPUidle C-states:
>>>         C1 - CPU0 ON(WFI) + CPU1 ON(WFI) + MPUSS ON
>>>         C2 - CPU0 CSWR + CPU1 CSWR + MPUSS CSWR
>>>         C3 - CPU0 OFF + CPU1 OFF + MPUSS OSWR
>>>
>>> Acked-by: Nishanth Menon <nm@ti.com>
>>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>> 
>> [...]
>> 
>>> +static int omap_enter_idle_smp(struct cpuidle_device *dev,
>>> +			struct cpuidle_driver *drv,
>>> +			int index)
>>> +{
>>> +	struct idle_statedata *cx = state_ptr + index;
>>> +	int cpu_id = smp_processor_id();
>>> +	unsigned long flag;
>>> +
>>> +	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu_id);
>> 
>> I think the CPUidle core handles the broadcast notification now.
>> 
> Not in mainline yet. Those patches came after mine, and since I didn't
> want an unnecessary merge dependency, I avoided it. It's trivial to
> drop it from here once idle-next is merged.

OK.

I believe that stuff is already queued, no?  Maybe have this as a
separate add-on patch that can be used for your local testing, but does
not go upstream.

I would only include this if you're sure the other series is not going
upstream.

>>> +	raw_spin_lock_irqsave(&mpu_lock, flag);
>>> +	cx->mpu_state_vote++;
>> 
>> How about using an atomic_t and atomic_inc()/atomic_dec() instead, which
>> will avoid the need for a spinlock.
>> 
> The spinlock is not just for the vote variable. I had atomic ops in the
> first version I gave to the product team, but they found a race
> condition where the MPU power state was getting overwritten by the
> other CPU.
>
>> Even with that, it still seems potentially racy.  Do you need a memory
>> barrier here to be sure that any changes from another CPU are visible
>> here, and vice versa?
>> 
> With locks, you don't need barriers since the updated copy is guaranteed.

It's guaranteed because the spinlock implementation uses barriers. 

> Can you please elaborate on the potential race? I have given it pretty
> hard thought and didn't see any race that can be exposed with the locks
> in place.

I was referring to using atomic ops.  With atomic ops, you'd need an
explicit barrier (which you're getting inside the spinlock
implementation).

That being said, I have not thought through all the corner cases as you
have, so I'll defer to your choice (as long as it's well documented.)
If you decide to stick with spinlocks, be sure to describe all of what
the spinlock is protecting, and why.
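
Even a short comment above the lock definition would help, something
like (wording just a suggestion):

	/*
	 * mpu_lock protects both mpu_state_vote and the MPU power
	 * domain programming done under it: the vote check and the
	 * pwrdm_*() calls must not interleave between CPUs, otherwise
	 * a CPU exiting idle can overwrite the state the last CPU to
	 * enter just programmed.
	 */
	static DEFINE_RAW_SPINLOCK(mpu_lock);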

Thanks,

Kevin
Santosh Shilimkar April 5, 2013, 9:41 a.m. UTC | #4
On Thursday 04 April 2013 11:25 PM, Kevin Hilman wrote:
> Santosh Shilimkar <santosh.shilimkar@ti.com> writes:
> 
>> On Thursday 04 April 2013 02:55 AM, Kevin Hilman wrote:
>>> Santosh Shilimkar <santosh.shilimkar@ti.com> writes:
>>>
>>>> The OMAP5 idle driver can re-use the OMAP4 CPUidle driver implementation
>>>> thanks to the compatible MPUSS design.
>>>>
>>>> Unlike OMAP4, on OMAP5 devices the MPUSS CSWR (Close Switch Retention)
>>>> power state can be achieved with the corresponding power states on the
>>>> CPU0 and CPU1 power domains. This mode was broken on OMAP4 devices
>>>> because of a hardware limitation.
>>>
>>> I'm a bit confused by the description here.
>>>
>>> I gather from the code that this means that CPU0 and CPU1 can hit CSWR
>>> independently, correct?
>>>
>> They can be programmed independently without any ordering (like
>> CPU0 last, etc.), but the actual transition to the low-power CSWR
>> state happens together. Until then, the first CPU to hit WFI stays
>> in WFI, waiting for the other CPU to hit WFI, and then both make
>> the transition together. This is managed completely in hardware.
> 
> OK, elaborating this in the changelog would be helpful.  Use the "will I
> understand this changelog in a year" rule to see if the changelog is
> detailed enough.  Or better, "will Kevin understand this changelog in a
> year."  (hint: the answer is usually no.)  ;)
> 
:-) I added the above description to the changelog.

>>>> Also, there is no CPU low-power entry order requirement (e.g. master
>>>> CPU last) for the CSWR C-state, which is icing on the cake.
>>>
>>> Even on secure devices?
>>>
>> Yes, on secure devices too. Actually, since we don't lose context,
>> secure entry/exit doesn't come into the picture.
>>
>>>> The code makes use of a voting scheme for the cluster low-power state
>>>> to support the MPUSS CSWR C-state.
>>>
>>> The voting scheme and associated locking should be better described
>>> here, or commented in the code itself.
>>>
>> You are right. It deserves some description.
>>
>>>> Supported OMAP5 CPUidle C-states:
>>>>         C1 - CPU0 ON(WFI) + CPU1 ON(WFI) + MPUSS ON
>>>>         C2 - CPU0 CSWR + CPU1 CSWR + MPUSS CSWR
>>>>         C3 - CPU0 OFF + CPU1 OFF + MPUSS OSWR
>>>>
>>>> Acked-by: Nishanth Menon <nm@ti.com>
>>>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>>>
>>> [...]
>>>
>>>> +static int omap_enter_idle_smp(struct cpuidle_device *dev,
>>>> +			struct cpuidle_driver *drv,
>>>> +			int index)
>>>> +{
>>>> +	struct idle_statedata *cx = state_ptr + index;
>>>> +	int cpu_id = smp_processor_id();
>>>> +	unsigned long flag;
>>>> +
>>>> +	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu_id);
>>>
>>> I think the CPUidle core handles the broadcast notification now.
>>>
>> Not in mainline yet. Those patches came after mine, and since I didn't
>> want an unnecessary merge dependency, I avoided it. It's trivial to
>> drop it from here once idle-next is merged.
> 
> OK.
> 
> I believe that stuff is already queued, no?  Maybe have this as a
> separate add-on patch that can be used for your local testing, but does
> not go upstream.
> 
Will do.

> I would only include this if you're sure the other series is not going
> upstream.
>
It might go upstream, so I will manually apply the patch or pull a
branch if one is available.
 
>>>> +	raw_spin_lock_irqsave(&mpu_lock, flag);
>>>> +	cx->mpu_state_vote++;
>>>
>>> How about using an atomic_t and atomic_inc()/atomic_dec() instead, which
>>> will avoid the need for a spinlock.
>>>
>> The spinlock is not just for the vote variable. I had atomic ops in the
>> first version I gave to the product team, but they found a race
>> condition where the MPU power state was getting overwritten by the
>> other CPU.
>>
>>> Even with that, it still seems potentially racy.  Do you need a memory
>>> barrier here to be sure that any changes from another CPU are visible
>>> here, and vice versa?
>>>
>> With locks, you don't need barriers since the updated copy is guaranteed.
> 
> It's guaranteed because the spinlock implementation uses barriers. 
> 
Yeah.

>> Can you please elaborate on the potential race? I have given it pretty
>> hard thought and didn't see any race that can be exposed with the locks
>> in place.
> 
> I was referring to using atomic ops.  With atomic ops, you'd need an
> explicit barrier (which you're getting inside the spinlock
> implementation).
> 
> That being said, I have not thought through all the corner cases as you
> have, so I'll defer to your choice (as long as it's well documented.)
> If you decide to stick with spinlocks, be sure to describe all of what
> the spinlock is protecting, and why.
> 
I have updated the changelog as well as the code to describe the lock
usage.

Thanks for the review.

Regards,
Santosh


Patch

diff --git a/arch/arm/mach-omap2/Kconfig b/arch/arm/mach-omap2/Kconfig
index b9c0ed3..838ea67 100644
--- a/arch/arm/mach-omap2/Kconfig
+++ b/arch/arm/mach-omap2/Kconfig
@@ -105,6 +105,7 @@  config SOC_OMAP5
 	select HAVE_SMP
 	select COMMON_CLK
 	select HAVE_ARM_ARCH_TIMER
+	select ARCH_NEEDS_CPU_IDLE_COUPLED
 
 comment "OMAP Core Type"
 	depends on ARCH_OMAP2
diff --git a/arch/arm/mach-omap2/Makefile b/arch/arm/mach-omap2/Makefile
index d91ae0f..ddaaa6c 100644
--- a/arch/arm/mach-omap2/Makefile
+++ b/arch/arm/mach-omap2/Makefile
@@ -102,7 +102,8 @@  endif
 
 ifeq ($(CONFIG_CPU_IDLE),y)
 obj-$(CONFIG_ARCH_OMAP3)                += cpuidle34xx.o
-obj-$(CONFIG_ARCH_OMAP4)                += cpuidle44xx.o
+obj-$(CONFIG_ARCH_OMAP4)		+= cpuidle_omap4plus.o
+obj-$(CONFIG_SOC_OMAP5)			+= cpuidle_omap4plus.o
 endif
 
 # PRCM
diff --git a/arch/arm/mach-omap2/cpuidle44xx.c b/arch/arm/mach-omap2/cpuidle_omap4plus.c
similarity index 69%
rename from arch/arm/mach-omap2/cpuidle44xx.c
rename to arch/arm/mach-omap2/cpuidle_omap4plus.c
index e6218d3..defb830 100644
--- a/arch/arm/mach-omap2/cpuidle44xx.c
+++ b/arch/arm/mach-omap2/cpuidle_omap4plus.c
@@ -22,12 +22,14 @@ 
 #include "pm.h"
 #include "prm.h"
 #include "clockdomain.h"
+#include "soc.h"
 
 /* Machine specific information */
 struct idle_statedata {
 	u32 cpu_state;
 	u32 mpu_logic_state;
 	u32 mpu_state;
+	u32 mpu_state_vote;
 };
 
 static struct idle_statedata omap4_idle_data[] = {
@@ -48,17 +50,36 @@  static struct idle_statedata omap4_idle_data[] = {
 	},
 };
 
+static struct idle_statedata omap5_idle_data[] = {
+	{
+		.cpu_state = PWRDM_POWER_ON,
+		.mpu_state = PWRDM_POWER_ON,
+		.mpu_logic_state = PWRDM_POWER_RET,
+	},
+	{
+		.cpu_state = PWRDM_POWER_RET,
+		.mpu_state = PWRDM_POWER_RET,
+		.mpu_logic_state = PWRDM_POWER_RET,
+	},
+	{
+		.cpu_state = PWRDM_POWER_OFF,
+		.mpu_state = PWRDM_POWER_RET,
+		.mpu_logic_state = PWRDM_POWER_OFF,
+	},
+};
+
 static struct powerdomain *mpu_pd, *cpu_pd[NR_CPUS];
 static struct clockdomain *cpu_clkdm[NR_CPUS];
 
 static atomic_t abort_barrier;
 static bool cpu_done[NR_CPUS];
-static struct idle_statedata *state_ptr = &omap4_idle_data[0];
+static struct idle_statedata *state_ptr;
+static DEFINE_RAW_SPINLOCK(mpu_lock);
 
 /* Private functions */
 
 /**
- * omap_enter_idle_[simple/coupled] - OMAP4PLUS cpuidle entry functions
+ * omap_enter_idle_[simple/smp/coupled] - OMAP4PLUS cpuidle entry functions
  * @dev: cpuidle device
  * @drv: cpuidle driver
  * @index: the index of state to be entered
@@ -131,6 +152,8 @@  static int omap_enter_idle_coupled(struct cpuidle_device *dev,
 
 	/* Wakeup CPU1 only if it is not offlined */
 	if (dev->cpu == 0 && cpumask_test_cpu(1, cpu_online_mask)) {
+		/* Restore MPU PD state post idle */
+		omap_set_pwrdm_state(mpu_pd, PWRDM_POWER_ON);
 		clkdm_wakeup(cpu_clkdm[1]);
 		omap_set_pwrdm_state(cpu_pd[1], PWRDM_POWER_ON);
 		clkdm_allow_idle(cpu_clkdm[1]);
@@ -159,6 +182,37 @@  fail:
 	return index;
 }
 
+static int omap_enter_idle_smp(struct cpuidle_device *dev,
+			struct cpuidle_driver *drv,
+			int index)
+{
+	struct idle_statedata *cx = state_ptr + index;
+	int cpu_id = smp_processor_id();
+	unsigned long flag;
+
+	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu_id);
+
+	raw_spin_lock_irqsave(&mpu_lock, flag);
+	cx->mpu_state_vote++;
+	if (cx->mpu_state_vote == num_online_cpus()) {
+		pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
+		omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
+	}
+	raw_spin_unlock_irqrestore(&mpu_lock, flag);
+
+	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
+
+	raw_spin_lock_irqsave(&mpu_lock, flag);
+	if (cx->mpu_state_vote == num_online_cpus())
+		omap_set_pwrdm_state(mpu_pd, PWRDM_POWER_ON);
+	cx->mpu_state_vote--;
+	raw_spin_unlock_irqrestore(&mpu_lock, flag);
+
+	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu_id);
+
+	return index;
+}
+
 /*
  * For each cpu, setup the broadcast timer because local timers
  * stops for the states above C1.
@@ -208,6 +262,43 @@  static struct cpuidle_driver omap4_idle_driver = {
 	.safe_state_index = 0,
 };
 
+static struct cpuidle_driver omap5_idle_driver = {
+	.name				= "omap5_idle",
+	.owner				= THIS_MODULE,
+	.en_core_tk_irqen		= 1,
+	.states = {
+		{
+			/* C1 - CPU0 ON + CPU1 ON + MPU ON */
+			.exit_latency = 2 + 2,
+			.target_residency = 5,
+			.flags = CPUIDLE_FLAG_TIME_VALID,
+			.enter = omap_enter_idle_simple,
+			.name = "C1",
+			.desc = "CPUx ON, MPUSS ON"
+		},
+		{
+			/* C2 - CPU0 CSWR + CPU1 CSWR + MPU CSWR */
+			.exit_latency = 16 + 16,
+			.target_residency = 40,
+			.flags = CPUIDLE_FLAG_TIME_VALID,
+			.enter = omap_enter_idle_smp,
+			.name = "C2",
+			.desc = "CPUx CSWR, MPUSS CSWR",
+		},
+		{
+			/* C3 - CPU0 OFF + CPU1 OFF + MPU OSWR */
+			.exit_latency = 460 + 518,
+			.target_residency = 1100,
+			.flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED,
+			.enter = omap_enter_idle_coupled,
+			.name = "C3",
+			.desc = "CPUx OFF, MPUSS OSWR",
+		},
+	},
+	.state_count = ARRAY_SIZE(omap5_idle_data),
+	.safe_state_index = 0,
+};
+
 /* Public functions */
 
 /**
@@ -219,7 +310,8 @@  static struct cpuidle_driver omap4_idle_driver = {
 int __init omap4_idle_init(void)
 {
 	struct cpuidle_device *dev;
-	unsigned int cpu_id = 0;
+	static struct cpuidle_driver *drv;
+	unsigned cpu_id = 0;
 
 	mpu_pd = pwrdm_lookup("mpu_pwrdm");
 	cpu_pd[0] = pwrdm_lookup("cpu0_pwrdm");
@@ -235,7 +327,15 @@  int __init omap4_idle_init(void)
 	/* Configure the broadcast timer on each cpu */
 	on_each_cpu(omap_setup_broadcast_timer, NULL, 1);
 
-	if (cpuidle_register_driver(&omap4_idle_driver)) {
+	if (cpu_is_omap44xx()) {
+		drv = &omap4_idle_driver;
+		state_ptr = &omap4_idle_data[0];
+	} else if (soc_is_omap54xx()) {
+		drv = &omap5_idle_driver;
+		state_ptr = &omap5_idle_data[0];
+	}
+
+	if (cpuidle_register_driver(drv)) {
 		pr_err("%s: CPUidle driver register failed\n", __func__);
 		return -EIO;
 	}
@@ -248,7 +348,7 @@  int __init omap4_idle_init(void)
 #endif
 		if (cpuidle_register_device(dev)) {
 			pr_err("%s: CPUidle register failed\n", __func__);
-			cpuidle_unregister_driver(&omap4_idle_driver);
+			cpuidle_unregister_driver(drv);
 			return -EIO;
 		}
 	}
diff --git a/arch/arm/mach-omap2/omap-mpuss-lowpower.c b/arch/arm/mach-omap2/omap-mpuss-lowpower.c
index 6e917d7..8f6a0be 100644
--- a/arch/arm/mach-omap2/omap-mpuss-lowpower.c
+++ b/arch/arm/mach-omap2/omap-mpuss-lowpower.c
@@ -100,7 +100,7 @@  extern int omap5_finish_suspend(unsigned long cpu_state);
 static DEFINE_PER_CPU(struct omap4_cpu_pm_info, omap4_pm_info);
 static struct powerdomain *mpuss_pd;
 static void __iomem *sar_base;
-static u32 cpu_context_offset;
+static u32 cpu_context_offset, cpu_cswr_supported;
 
 static int default_finish_suspend(unsigned long cpu_state)
 {
@@ -248,6 +248,10 @@  int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state)
 		save_state = 1;
 		break;
 	case PWRDM_POWER_RET:
+		if (cpu_cswr_supported) {
+			save_state = 0;
+			break;
+		}
 	default:
 		/*
 		 * CPUx CSWR is invalid hardware state. Also CPUx OSWR
@@ -441,6 +445,7 @@  int __init omap4_mpuss_init(void)
 		omap_pm_ops.resume = cpu_resume;
 		cpu_context_offset = OMAP54XX_RM_CPU0_CPU0_CONTEXT_OFFSET;
 		enable_mercury_retention_mode();
+		cpu_cswr_supported = 1;
 	}
 
 	/* Over-write the OMAP4 hook to take care of ROM BUG */
diff --git a/arch/arm/mach-omap2/pm_omap4plus.c b/arch/arm/mach-omap2/pm_omap4plus.c
index 2dadb4e..85408ce 100644
--- a/arch/arm/mach-omap2/pm_omap4plus.c
+++ b/arch/arm/mach-omap2/pm_omap4plus.c
@@ -275,7 +275,7 @@  int __init omap4_pm_init(void)
 	/* Overwrite the default cpu_do_idle() */
 	arm_pm_idle = omap_default_idle;
 
-	if (cpu_is_omap44xx())
+	if (cpu_is_omap44xx() || soc_is_omap54xx())
 		omap4_idle_init();
 
 err2: