[v2] KVM: PPC: Book3S PR: Enable MSR_DR for switch_mmu_context()

Message ID 20220510111809.15987-1-graf@amazon.com (mailing list archive)
State New, archived
Series [v2] KVM: PPC: Book3S PR: Enable MSR_DR for switch_mmu_context()

Commit Message

Alexander Graf May 10, 2022, 11:18 a.m. UTC
Commit 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C")
moved the switch_mmu_context() to C. While in principle a good idea, it
meant that the function now uses the stack. The stack is not accessible
from real mode though.

So to keep calling the function, let's turn on MSR_DR while we call it.
That way, all pointer references to the stack are handled virtually.

In addition, make sure to save/restore r12 in an SPRG, as it may get
clobbered by the C function.
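
For reference, the assembly change below does roughly the equivalent of
the following C sketch (conceptual only: the wrapper name is made up,
NULL stands in for the don't-care <inv> arguments, and the r12 rescue
itself stays in assembly via SET_SCRATCH0/GET_SCRATCH0):

	/*
	 * Conceptual sketch, not actual kernel code. Assumes the usual
	 * arch/powerpc helpers mfmsr()/mtmsr(), MSR_DR, current->mm and
	 * switch_mmu_context().
	 */
	static void call_switch_mmu_context_with_dr(void)
	{
		unsigned long msr = mfmsr();

		mtmsr(msr | MSR_DR);	/* enable data translation */
		/* stack accesses inside the callee are now translated */
		switch_mmu_context(NULL, current->mm, NULL);
		mtmsr(msr & ~MSR_DR);	/* back to real mode */
	}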

Reported-by: Matt Evans <matt@ozlabs.org>
Fixes: 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C")
Signed-off-by: Alexander Graf <graf@amazon.com>
Cc: stable@vger.kernel.org # v5.14+

---

v1 -> v2:

  - Save and restore R12, so that we don't touch volatile registers
    while calling into C.
---
 arch/powerpc/kvm/book3s_32_sr.S | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

Comments

Christophe Leroy May 10, 2022, 11:31 a.m. UTC | #1
On 10/05/2022 at 13:18, Alexander Graf wrote:
> Commit 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C")
> moved the switch_mmu_context() to C. While in principle a good idea, it
> meant that the function now uses the stack. The stack is not accessible
> from real mode though.
> 
> So to keep calling the function, let's turn on MSR_DR while we call it.
> That way, all pointer references to the stack are handled virtually.

Is the system ready to handle a DSI in case the stack is not mapped?

> 
> In addition, make sure to save/restore r12 in an SPRG, as it may get
> clobbered by the C function.
> 
> Reported-by: Matt Evans <matt@ozlabs.org>
> Fixes: 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C")

Oops, sorry for that. I didn't realise that there were callers of
switch_mmu_context() other than switch_mm_irqs_off().

Christophe

> Signed-off-by: Alexander Graf <graf@amazon.com>
> Cc: stable@vger.kernel.org # v5.14+
> 
> ---
> 
> v1 -> v2:
> 
>    - Save and restore R12, so that we don't touch volatile registers
>      while calling into C.
> ---
>   arch/powerpc/kvm/book3s_32_sr.S | 26 +++++++++++++++++++++-----
>   1 file changed, 21 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_32_sr.S b/arch/powerpc/kvm/book3s_32_sr.S
> index e3ab9df6cf19..1ce13e3ab072 100644
> --- a/arch/powerpc/kvm/book3s_32_sr.S
> +++ b/arch/powerpc/kvm/book3s_32_sr.S
> @@ -122,11 +122,27 @@
>   
>   	/* 0x0 - 0xb */
>   
> -	/* 'current->mm' needs to be in r4 */
> -	tophys(r4, r2)
> -	lwz	r4, MM(r4)
> -	tophys(r4, r4)
> -	/* This only clobbers r0, r3, r4 and r5 */
> +	/* switch_mmu_context() clobbers r12, rescue it */
> +	SET_SCRATCH0(r12)
> +
> +	/* switch_mmu_context() needs paging, let's enable it */
> +	mfmsr   r9
> +	ori     r11, r9, MSR_DR
> +	mtmsr   r11
> +	sync
> +
> +	/* Calling switch_mmu_context(<inv>, current->mm, <inv>); */
> +	lwz	r4, MM(r2)
>   	bl	switch_mmu_context
>   
> +	/* Disable paging again */
> +	mfmsr   r9
> +	li      r6, MSR_DR
> +	andc    r9, r9, r6
> +	mtmsr	r9
> +	sync
> +
> +	/* restore r12 */
> +	GET_SCRATCH0(r12)
> +
>   .endm

Alexander Graf May 10, 2022, 12:33 p.m. UTC | #2
On 10.05.22 13:31, Christophe Leroy wrote:
> On 10/05/2022 at 13:18, Alexander Graf wrote:
>> Commit 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C")
>> moved the switch_mmu_context() to C. While in principle a good idea, it
>> meant that the function now uses the stack. The stack is not accessible
>> from real mode though.
>>
>> So to keep calling the function, let's turn on MSR_DR while we call it.
>> That way, all pointer references to the stack are handled virtually.
> Is the system ready to handle a DSI in case the stack is not mapped?


A DSI is itself another interrupt, which would in turn clobber the
SPRG we're saving r12 in. Guess I was trying to be too smart :). I'll
use Matt's original suggestion and just put it on the stack.
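
To put the same reasoning in C terms: a scratch SPRG behaves like one
global slot that the interrupt entry path also writes, while a stack
slot is private to this call. Hypothetical sketch, not kernel code:

	static unsigned long scratch_sprg0;	/* shared slot, also used by interrupt entry */

	static void save_in_sprg(unsigned long r12_val)
	{
		scratch_sprg0 = r12_val;
		/* a DSI taken here re-enters via code that rewrites
		 * scratch_sprg0, so the saved value is lost */
	}

	static unsigned long save_on_stack(unsigned long r12_val)
	{
		unsigned long saved = r12_val;	/* private stack slot */
		/* a DSI saves/restores its own state; 'saved' survives */
		return saved;
	}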


>> In addition, make sure to save/restore r12 in an SPRG, as it may get
>> clobbered by the C function.
>>
>> Reported-by: Matt Evans <matt@ozlabs.org>
>> Fixes: 863771a28e27 ("powerpc/32s: Convert switch_mmu_context() to C")
> Oops, sorry for that. I didn't realise that there were callers of
> switch_mmu_context() other than switch_mm_irqs_off().


No worries, the compiled C version looks a lot nicer than the previous
asm one - and it was a good way to identify whether there are still
users of KVM on Book3S 32-bit out there :)


Alex


Patch

diff --git a/arch/powerpc/kvm/book3s_32_sr.S b/arch/powerpc/kvm/book3s_32_sr.S
index e3ab9df6cf19..1ce13e3ab072 100644
--- a/arch/powerpc/kvm/book3s_32_sr.S
+++ b/arch/powerpc/kvm/book3s_32_sr.S
@@ -122,11 +122,27 @@ 
 
 	/* 0x0 - 0xb */
 
-	/* 'current->mm' needs to be in r4 */
-	tophys(r4, r2)
-	lwz	r4, MM(r4)
-	tophys(r4, r4)
-	/* This only clobbers r0, r3, r4 and r5 */
+	/* switch_mmu_context() clobbers r12, rescue it */
+	SET_SCRATCH0(r12)
+
+	/* switch_mmu_context() needs paging, let's enable it */
+	mfmsr   r9
+	ori     r11, r9, MSR_DR
+	mtmsr   r11
+	sync
+
+	/* Calling switch_mmu_context(<inv>, current->mm, <inv>); */
+	lwz	r4, MM(r2)
 	bl	switch_mmu_context
 
+	/* Disable paging again */
+	mfmsr   r9
+	li      r6, MSR_DR
+	andc    r9, r9, r6
+	mtmsr	r9
+	sync
+
+	/* restore r12 */
+	GET_SCRATCH0(r12)
+
 .endm