
[2/2] arm64: Save and restore OSDLR_EL1

Message ID 20190408171719.44711-3-jean-philippe.brucker@arm.com (mailing list archive)
State New, archived
Series arm64: Clear OS Double Lock Register

Commit Message

Jean-Philippe Brucker April 8, 2019, 5:17 p.m. UTC
When the CPU comes out of suspend, the firmware may have modified the OS
Double Lock Register. Save it in an unused slot of cpu_suspend_ctx, and
restore it on resume.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 arch/arm64/mm/proc.S | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)
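
As a rough sketch of the approach (illustration only, not part of the patch;
register numbers and the #16 offset follow the diff below, and the slot is the
one cpu_do_suspend previously padded with xzr):

	/* cpu_do_suspend: stash OSDLR_EL1 in the previously unused slot */
	mrs	x5, osdlr_el1
	stp	x4, x5, [x0, #16]	// contextidr_el1 / osdlr_el1 pair

	/* cpu_do_resume: x5 is reloaded from the same offset by the existing
	   ldp sequence, then written back */
	msr	osdlr_el1, x5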

Comments

Will Deacon April 9, 2019, 10:37 a.m. UTC | #1
Hi Jean-Philippe,

On Mon, Apr 08, 2019 at 06:17:19PM +0100, Jean-Philippe Brucker wrote:
> When the CPU comes out of suspend, the firmware may have modified the OS
> Double Lock Register. Save it in an unused slot of cpu_suspend_ctx, and
> restore it on resume.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  arch/arm64/mm/proc.S | 34 ++++++++++++++++++----------------
>  1 file changed, 18 insertions(+), 16 deletions(-)

[...]

> @@ -105,8 +106,8 @@ ENTRY(cpu_do_resume)
>  	msr	cpacr_el1, x6
>  
>  	/* Don't change t0sz here, mask those bits when restoring */
> -	mrs	x5, tcr_el1
> -	bfi	x8, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
> +	mrs	x7, tcr_el1
> +	bfi	x8, x7, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
>  
>  	msr	tcr_el1, x8
>  	msr	vbar_el1, x9
> @@ -132,6 +133,7 @@ alternative_endif
>  	 */
>  	ubfx	x11, x11, #1, #1
>  	msr	oslar_el1, x11
> +	msr	osdlr_el1, x5

Just one oddity, but I notice you restore these in the opposite order from
the order in which you initialise them on the boot path. Is that ok?

Also, did you test a suspend/resume cycle with this change?

Cheers,

Will
Jean-Philippe Brucker April 9, 2019, 11:29 a.m. UTC | #2
On 09/04/2019 11:37, Will Deacon wrote:
> Hi Jean-Philippe,
> 
> On Mon, Apr 08, 2019 at 06:17:19PM +0100, Jean-Philippe Brucker wrote:
>> When the CPU comes out of suspend, the firmware may have modified the OS
>> Double Lock Register. Save it in an unused slot of cpu_suspend_ctx, and
>> restore it on resume.
>>
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> ---
>>  arch/arm64/mm/proc.S | 34 ++++++++++++++++++----------------
>>  1 file changed, 18 insertions(+), 16 deletions(-)
> 
> [...]
> 
>> @@ -105,8 +106,8 @@ ENTRY(cpu_do_resume)
>>  	msr	cpacr_el1, x6
>>  
>>  	/* Don't change t0sz here, mask those bits when restoring */
>> -	mrs	x5, tcr_el1
>> -	bfi	x8, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
>> +	mrs	x7, tcr_el1
>> +	bfi	x8, x7, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
>>  
>>  	msr	tcr_el1, x8
>>  	msr	vbar_el1, x9
>> @@ -132,6 +133,7 @@ alternative_endif
>>  	 */
>>  	ubfx	x11, x11, #1, #1
>>  	msr	oslar_el1, x11
>> +	msr	osdlr_el1, x5
> 
> Just one oddity, but I notice you restore these in the opposite order from
> the order in which you initialise them on the boot path. Is that ok?

I haven't found any documentation about the unlock order. Given that OS
Lock is locked before OS Double Lock, I think restoring OSDLR before
OSLAR makes more sense, so I can change the order in this patch for
consistency.
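
For instance, the tail of the restore sequence could then become (same
register assignments as in this patch, sketch only):

	ubfx	x11, x11, #1, #1
	msr	osdlr_el1, x5		// restore OS Double Lock first
	msr	oslar_el1, x11		// then OS Lock, reversing the lock order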

> Also, did you test a suspend/resume cycle with this change?

Yes, but only on Juno, because the suspend/resume code doesn't seem to be
used on TX2.

Thanks,
Jean

Patch

diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index aa0817c9c4c3..3a621f83e938 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -65,24 +65,25 @@  ENTRY(cpu_do_suspend)
 	mrs	x2, tpidr_el0
 	mrs	x3, tpidrro_el0
 	mrs	x4, contextidr_el1
-	mrs	x5, cpacr_el1
-	mrs	x6, tcr_el1
-	mrs	x7, vbar_el1
-	mrs	x8, mdscr_el1
-	mrs	x9, oslsr_el1
-	mrs	x10, sctlr_el1
+	mrs	x5, osdlr_el1
+	mrs	x6, cpacr_el1
+	mrs	x7, tcr_el1
+	mrs	x8, vbar_el1
+	mrs	x9, mdscr_el1
+	mrs	x10, oslsr_el1
+	mrs	x11, sctlr_el1
 alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
-	mrs	x11, tpidr_el1
+	mrs	x12, tpidr_el1
 alternative_else
-	mrs	x11, tpidr_el2
+	mrs	x12, tpidr_el2
 alternative_endif
-	mrs	x12, sp_el0
+	mrs	x13, sp_el0
 	stp	x2, x3, [x0]
-	stp	x4, xzr, [x0, #16]
-	stp	x5, x6, [x0, #32]
-	stp	x7, x8, [x0, #48]
-	stp	x9, x10, [x0, #64]
-	stp	x11, x12, [x0, #80]
+	stp	x4, x5, [x0, #16]
+	stp	x6, x7, [x0, #32]
+	stp	x8, x9, [x0, #48]
+	stp	x10, x11, [x0, #64]
+	stp	x12, x13, [x0, #80]
 	ret
 ENDPROC(cpu_do_suspend)
 
@@ -105,8 +106,8 @@  ENTRY(cpu_do_resume)
 	msr	cpacr_el1, x6
 
 	/* Don't change t0sz here, mask those bits when restoring */
-	mrs	x5, tcr_el1
-	bfi	x8, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
+	mrs	x7, tcr_el1
+	bfi	x8, x7, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
 
 	msr	tcr_el1, x8
 	msr	vbar_el1, x9
@@ -132,6 +133,7 @@  alternative_endif
 	 */
 	ubfx	x11, x11, #1, #1
 	msr	oslar_el1, x11
+	msr	osdlr_el1, x5
 	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
 
 alternative_if ARM64_HAS_RAS_EXTN