Message ID | 20190717165602.114502-1-james.morse@arm.com
---|---
State | New, archived
Series | arm64: entry: SP Alignment Fault doesn't write to FAR_EL1
Hi James,

On Wed, Jul 17, 2019 at 05:56:02PM +0100, James Morse wrote:
> Comparing the arm-arm's pseudocode for AArch64.PCAlignmentFault() with
> AArch64.SPAlignmentFault() shows that SP faults don't copy the faulty-SP
> to FAR_EL1, but this is where we read from, and the address we provide
> to user-space with the BUS_ADRALN signal.
>
> This value will be UNKNOWN due to the previous ERET to user-space.
> If the last value is preserved, on systems with KASLR or KPTI this will
> be the user-space link-register left in FAR_EL1 by tramp_exit().
>
> Fix this to retrieve the original sp_el0 value, and pass this to
> do_sp_pc_fault().
>
> Fixes: 60ffc30d5652 ("arm64: Exception handling")
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
>  arch/arm64/kernel/entry.S | 25 ++++++++++++++++++++-----
>  1 file changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 165da78815c5..023e533c537e 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -743,9 +743,9 @@ el0_sync:
> 	ccmp	x24, #ESR_ELx_EC_WFx, #4, ne
> 	b.eq	el0_sys
> 	cmp	x24, #ESR_ELx_EC_SP_ALIGN	// stack alignment exception
> -	b.eq	el0_sp_pc
> +	b.eq	el0_sp
> 	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
> -	b.eq	el0_sp_pc
> +	b.eq	el0_pc
> 	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
> 	b.eq	el0_undef
> 	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
> @@ -769,7 +769,7 @@ el0_sync_compat:
> 	cmp	x24, #ESR_ELx_EC_FP_EXC32	// FP/ASIMD exception
> 	b.eq	el0_fpsimd_exc
> 	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
> -	b.eq	el0_sp_pc
> +	b.eq	el0_pc
> 	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
> 	b.eq	el0_undef
> 	cmp	x24, #ESR_ELx_EC_CP15_32	// CP15 MRC/MCR trap
> @@ -869,9 +869,24 @@ el0_fpsimd_exc:
> 	mov	x1, sp
> 	bl	do_fpsimd_exc
> 	b	ret_to_user
> -el0_sp_pc:
> +el0_sp:
> 	/*
> -	 * Stack or PC alignment exception handling
> +	 * Stack alignment exception handling
> +	 */
> +	gic_prio_kentry_setup tmp=x0
> +	enable_da_f
> +#ifdef CONFIG_TRACE_IRQFLAGS
> +	bl	trace_hardirqs_off
> +#endif
> +	ct_user_exit
> +	ldr	x0, [sp, #S_SP]
> +	mov	x1, x25
> +	mov	x2, sp
> +	bl	do_sp_pc_abort
> +	b	ret_to_user
> +el0_pc:
> +	/*
> +	 * PC alignment exception handling

Given that this really isn't a fast path, I think it's preferable to avoid
duplicating the entry code and instead just have something like:

@@ -858,11 +858,15 @@ el0_fpsimd_exc:
 	mov	x1, sp
 	bl	do_fpsimd_exc
 	b	ret_to_user
+el0_sp:
+	ldr	x26, [sp, #S_SP]
+	b	el0_sp_pc
+el0_pc:
+	mrs	x26, far_el1
 el0_sp_pc:
 	/*
 	 * Stack or PC alignment exception handling
 	 */
-	mrs	x26, far_el1
 	gic_prio_kentry_setup tmp=x0
 	enable_da_f
 #ifdef CONFIG_TRACE_IRQFLAGS

I also think we should do the same thing for the EL1 case, even though
the address is currently ignored by the C handler.

What do you reckon?

Will
Hi Will,

On 22/07/2019 11:34, Will Deacon wrote:
> Hi James,
>
> On Wed, Jul 17, 2019 at 05:56:02PM +0100, James Morse wrote:
>> Comparing the arm-arm's pseudocode for AArch64.PCAlignmentFault() with
>> AArch64.SPAlignmentFault() shows that SP faults don't copy the faulty-SP
>> to FAR_EL1, but this is where we read from, and the address we provide
>> to user-space with the BUS_ADRALN signal.
>>
>> This value will be UNKNOWN due to the previous ERET to user-space.
>> If the last value is preserved, on systems with KASLR or KPTI this will
>> be the user-space link-register left in FAR_EL1 by tramp_exit().
>>
>> Fix this to retrieve the original sp_el0 value, and pass this to
>> do_sp_pc_fault().
>>
>> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
>> index 165da78815c5..023e533c537e 100644
>> --- a/arch/arm64/kernel/entry.S
>> +++ b/arch/arm64/kernel/entry.S
>> @@ -869,9 +869,24 @@ el0_fpsimd_exc:
>> 	mov	x1, sp
>> 	bl	do_fpsimd_exc
>> 	b	ret_to_user
>> -el0_sp_pc:
>> +el0_sp:
>> 	/*
>> -	 * Stack or PC alignment exception handling
>> +	 * Stack alignment exception handling
>> +	 */
>> +	gic_prio_kentry_setup tmp=x0
>> +	enable_da_f
>> +#ifdef CONFIG_TRACE_IRQFLAGS
>> +	bl	trace_hardirqs_off
>> +#endif
>> +	ct_user_exit
>> +	ldr	x0, [sp, #S_SP]
>> +	mov	x1, x25
>> +	mov	x2, sp
>> +	bl	do_sp_pc_abort
>> +	b	ret_to_user
>> +el0_pc:
>> +	/*
>> +	 * PC alignment exception handling
>
> Given that this really isn't a fast path, I think it's preferable to avoid
> duplicating the entry code and instead just have something like:
>
> @@ -858,11 +858,15 @@ el0_fpsimd_exc:
>  	mov	x1, sp
>  	bl	do_fpsimd_exc
>  	b	ret_to_user
> +el0_sp:
> +	ldr	x26, [sp, #S_SP]
> +	b	el0_sp_pc
> +el0_pc:
> +	mrs	x26, far_el1
>  el0_sp_pc:
>  	/*
>  	 * Stack or PC alignment exception handling
>  	 */
> -	mrs	x26, far_el1
>  	gic_prio_kentry_setup tmp=x0
>  	enable_da_f
>  #ifdef CONFIG_TRACE_IRQFLAGS

I'll do it this way.

I came to this from Mark's series that moves this to C, where it ends up
being duplicated, so I didn't think very long about it.

> I also think we should do the same thing for the EL1 case, even though
> the address is currently ignored by the C handler.
> What do you reckon?

I wrote that patch, but then found we don't fix misaligned EL1 stacks, so
that code never runs. We end up panic()ing on the overflow stack.

There was a series from Julien Thierry[0] that fixed this, but it looks
like the conclusion was gcc would never generate code that misaligns the
stack.

If you like I can fix the existing el1_sp_pc, just because it's broken. We
don't want to re-discover this bug in the future.

Thanks,

James

[0] https://www.spinics.net/lists/arm-kernel/msg686173.html
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 165da78815c5..023e533c537e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -743,9 +743,9 @@ el0_sync:
 	ccmp	x24, #ESR_ELx_EC_WFx, #4, ne
 	b.eq	el0_sys
 	cmp	x24, #ESR_ELx_EC_SP_ALIGN	// stack alignment exception
-	b.eq	el0_sp_pc
+	b.eq	el0_sp
 	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
-	b.eq	el0_sp_pc
+	b.eq	el0_pc
 	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
 	b.eq	el0_undef
 	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
@@ -769,7 +769,7 @@ el0_sync_compat:
 	cmp	x24, #ESR_ELx_EC_FP_EXC32	// FP/ASIMD exception
 	b.eq	el0_fpsimd_exc
 	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
-	b.eq	el0_sp_pc
+	b.eq	el0_pc
 	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
 	b.eq	el0_undef
 	cmp	x24, #ESR_ELx_EC_CP15_32	// CP15 MRC/MCR trap
@@ -869,9 +869,24 @@ el0_fpsimd_exc:
 	mov	x1, sp
 	bl	do_fpsimd_exc
 	b	ret_to_user
-el0_sp_pc:
+el0_sp:
 	/*
-	 * Stack or PC alignment exception handling
+	 * Stack alignment exception handling
+	 */
+	gic_prio_kentry_setup tmp=x0
+	enable_da_f
+#ifdef CONFIG_TRACE_IRQFLAGS
+	bl	trace_hardirqs_off
+#endif
+	ct_user_exit
+	ldr	x0, [sp, #S_SP]
+	mov	x1, x25
+	mov	x2, sp
+	bl	do_sp_pc_abort
+	b	ret_to_user
+el0_pc:
+	/*
+	 * PC alignment exception handling
 	 */
 	mrs	x26, far_el1
 	gic_prio_kentry_setup tmp=x0
Comparing the arm-arm's pseudocode for AArch64.PCAlignmentFault() with
AArch64.SPAlignmentFault() shows that SP faults don't copy the faulty-SP
to FAR_EL1, but this is where we read from, and the address we provide
to user-space with the BUS_ADRALN signal.

This value will be UNKNOWN due to the previous ERET to user-space.
If the last value is preserved, on systems with KASLR or KPTI this will
be the user-space link-register left in FAR_EL1 by tramp_exit().

Fix this to retrieve the original sp_el0 value, and pass this to
do_sp_pc_abort().

Fixes: 60ffc30d5652 ("arm64: Exception handling")
Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/arm64/kernel/entry.S | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)