Message ID | 20221128094939.801232-1-ardb@kernel.org (mailing list archive)
---|---
State | New, archived
Series | arm64: efi: Make runtime service wrapper more robust
On Mon, Nov 28, 2022 at 10:49:39AM +0100, Ard Biesheuvel wrote:
> Prevent abuse of the runtime service wrapper code by avoiding restoring
> the shadow call stack pointer from the ordinary stack, or the stack
> pointer itself from a GPR. Also, given that the exception recovery
> routine is never called in an ordinary way, it doesn't need BTI landing
> pads so it can be SYM_CODE rather than SYM_FUNC.

Does this mean x18 is now being spilled to the stack? (Do we already
spill it in other places?)
On Fri, 2 Dec 2022 at 00:45, Kees Cook <keescook@chromium.org> wrote:
>
> On Mon, Nov 28, 2022 at 10:49:39AM +0100, Ard Biesheuvel wrote:
> > Prevent abuse of the runtime service wrapper code by avoiding restoring
> > the shadow call stack pointer from the ordinary stack, or the stack
> > pointer itself from a GPR. Also, given that the exception recovery
> > routine is never called in an ordinary way, it doesn't need BTI landing
> > pads so it can be SYM_CODE rather than SYM_FUNC.
>
> Does this mean x18 is now being spilled to the stack? (Do we already
> spill it in other places?)
>

I've found a better way of addressing this, by moving this code out of
the kernel .text mapping entirely, and only mapping it executable in
the EFI page tables (which are only active while a runtime service
call is in progress, and only on a single CPU running with preemption
disabled)

https://git.kernel.org/pub/scm/linux/kernel/git/efi/efi.git/commit/?id=47f68266d6ad94860c6cd9d2145cb91350b47e43
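[Editorial note] The approach in the commit Ard links keeps the wrapper out of the ordinary kernel .text mapping and makes it executable only via the EFI page tables, i.e. only while a runtime service call is actually in progress. The following is a minimal user-space sketch of that "executable only for the duration of the call" idea, not the kernel implementation: it assumes an aarch64 Linux user space, hand-encodes "mov w0, #42; ret", and uses mprotect() as a stand-in for switching to and from the EFI page tables.

/* Build on aarch64 Linux: cc -O2 -o exec-window exec-window.c */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* mov w0, #42 ; ret -- two hand-encoded AArch64 instructions */
	const uint32_t insns[] = { 0x52800540, 0xd65f03c0 };
	long page = sysconf(_SC_PAGESIZE);
	uint8_t *region;
	int (*fn)(void);
	int ret;

	/* Stage the code through a writable, non-executable mapping. */
	region = mmap(NULL, page, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED)
		return 1;
	memcpy(region, insns, sizeof(insns));
	__builtin___clear_cache((char *)region, (char *)region + sizeof(insns));

	/* Keep the region inaccessible except while the call is in progress. */
	mprotect(region, page, PROT_NONE);

	mprotect(region, page, PROT_READ | PROT_EXEC);	/* "map in" */
	fn = (int (*)(void))region;
	ret = fn();
	mprotect(region, page, PROT_NONE);		/* "map out" */

	printf("call returned %d\n", ret);
	return 0;
}

Outside the two mprotect() calls the region is PROT_NONE, so any attempt to branch into it from another context faults; that narrow execution window is the property the kernel change is after.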
On Fri, 2 Dec 2022 at 00:47, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> On Fri, 2 Dec 2022 at 00:45, Kees Cook <keescook@chromium.org> wrote:
> >
> > On Mon, Nov 28, 2022 at 10:49:39AM +0100, Ard Biesheuvel wrote:
> > > Prevent abuse of the runtime service wrapper code by avoiding restoring
> > > the shadow call stack pointer from the ordinary stack, or the stack
> > > pointer itself from a GPR. Also, given that the exception recovery
> > > routine is never called in an ordinary way, it doesn't need BTI landing
> > > pads so it can be SYM_CODE rather than SYM_FUNC.
> >
> > Does this mean x18 is now being spilled to the stack? (Do we already
> > spill it in other places?)
> >
>
> I've found a better way of addressing this, by moving this code out of
> the kernel .text mapping entirely, and only mapping it executable in
> the EFI page tables (which are only active while a runtime service
> call is in progress, and only on a single CPU running with preemption
> disabled)
>
> https://git.kernel.org/pub/scm/linux/kernel/git/efi/efi.git/commit/?id=47f68266d6ad94860c6cd9d2145cb91350b47e43

And to answer your question: yes, x18 is currently spilled to the stack
in both of those routines. I've reverted the patch that added the
second one (which was only added this cycle). The other one needs a fix
going to -stable, so I'll backport the patch I quoted above once it
hits Linus' tree.
diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S
index 67babd5f04c27c7a..afd3e81e1b627b87 100644
--- a/arch/arm64/kernel/efi-rt-wrapper.S
+++ b/arch/arm64/kernel/efi-rt-wrapper.S
@@ -28,7 +28,7 @@ SYM_FUNC_START(__efi_rt_asm_wrapper)
 	stp	x27, x28, [sp, #96]
 
 	adr_this_cpu	x8, __efi_rt_asm_recover_sp, x9
-	str	x29, [x8]
+	stp	x29, x18, [x8]
 
 	/*
 	 * We are lucky enough that no EFI runtime services take more than
@@ -56,15 +56,17 @@ SYM_FUNC_START(__efi_rt_asm_wrapper)
 	 * called with preemption disabled and a separate shadow stack is used
 	 * for interrupts.
 	 */
-	mov	x18, x2
+#ifdef CONFIG_SHADOW_CALL_STACK
+	ldr_this_cpu	x18, __efi_rt_asm_recover_sp + 8, x9
+#endif
+
 	b	efi_handle_corrupted_x18	// tail call
 SYM_FUNC_END(__efi_rt_asm_wrapper)
 
-SYM_FUNC_START(__efi_rt_asm_recover)
-	ldr_this_cpu	x8, __efi_rt_asm_recover_sp, x9
-	mov	sp, x8
+SYM_CODE_START(__efi_rt_asm_recover)
+	mov	sp, x30
 
-	ldp	x0, x18, [sp, #16]
+	ldr	x0, [sp, #16]
 	ldp	x19, x20, [sp, #32]
 	ldp	x21, x22, [sp, #48]
 	ldp	x23, x24, [sp, #64]
@@ -73,4 +75,4 @@ SYM_FUNC_START(__efi_rt_asm_recover)
 	ldp	x29, x30, [sp], #112
 
 	b	efi_handle_runtime_exception
-SYM_FUNC_END(__efi_rt_asm_recover)
+SYM_CODE_END(__efi_rt_asm_recover)
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 8d36e66a6e64cdaa..db7bdce1c7da578b 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -130,7 +130,7 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
 	return s;
 }
 
-asmlinkage DEFINE_PER_CPU(u64, __efi_rt_asm_recover_sp);
+asmlinkage DEFINE_PER_CPU(u64[2], __efi_rt_asm_recover_sp);
 
 asmlinkage efi_status_t __efi_rt_asm_recover(void);
 
@@ -151,6 +151,10 @@ bool efi_runtime_fixup_exception(struct pt_regs *regs, const char *msg)
 	add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
 	dump_stack();
 
+	regs->regs[30] = __this_cpu_read(__efi_rt_asm_recover_sp[0]);
+#ifdef CONFIG_SHADOW_CALL_STACK
+	regs->regs[18] = __this_cpu_read(__efi_rt_asm_recover_sp[1]);
+#endif
 	regs->pc = (u64)__efi_rt_asm_recover;
 	return true;
 }
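[Editorial note] In the diff above, the wrapper stashes the frame pointer and (under CONFIG_SHADOW_CALL_STACK) x18 in a per-CPU slot before calling into firmware, and efi_runtime_fixup_exception() rebuilds the return context from those saved values alone rather than from anything the firmware could have scribbled on. The following user-space sketch mirrors that save-then-recover pattern, with sigsetjmp()/siglongjmp() and a SIGSEGV handler standing in for the per-CPU recovery slots and the synchronous exception fixup; it is an illustration of the pattern, not the kernel mechanism.

/* Build on Linux: cc -O2 -o recover recover.c */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf recover_ctx;		/* analogue of the per-CPU recovery slots */
static int * volatile bad_ptr;		/* stays NULL; volatile defeats constant folding */

static void fault_handler(int sig)
{
	/* analogue of efi_runtime_fixup_exception(): jump to the saved context */
	(void)sig;
	siglongjmp(recover_ctx, 1);
}

static int untrusted_service(void)
{
	return *bad_ptr;		/* misbehaving "firmware": NULL dereference */
}

int main(void)
{
	struct sigaction sa = { .sa_handler = fault_handler };

	sigemptyset(&sa.sa_mask);
	sigaction(SIGSEGV, &sa, NULL);

	/* Save the caller's context before calling the untrusted routine. */
	if (sigsetjmp(recover_ctx, 1) == 0) {
		untrusted_service();
		puts("service returned normally");
	} else {
		puts("recovered from fault in service");
	}
	return 0;
}

The property mirrored here is the one the patch tightens: recovery never trusts state the untrusted callee could have modified, only the context captured before the call.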
Prevent abuse of the runtime service wrapper code by avoiding restoring
the shadow call stack pointer from the ordinary stack, or the stack
pointer itself from a GPR. Also, given that the exception recovery
routine is never called in an ordinary way, it doesn't need BTI landing
pads so it can be SYM_CODE rather than SYM_FUNC.

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/efi-rt-wrapper.S | 16 +++++++++-------
 arch/arm64/kernel/efi.c            |  6 +++++-
 2 files changed, 14 insertions(+), 8 deletions(-)