Message ID | 20210402032404.47239-2-madvenka@linux.microsoft.com (mailing list archive)
State      | New, archived
Series     | arm64: Implement stack trace termination record

On Thu, Apr 01, 2021 at 10:24:04PM -0500, madvenka@linux.microsoft.com wrote:
> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
> @@ -447,9 +464,9 @@ SYM_FUNC_START_LOCAL(__primary_switched)
>  #endif
>  	bl	switch_to_vhe		// Prefer VHE if possible
>  	add	sp, sp, #16
> -	mov	x29, #0
> -	mov	x30, #0
> -	b	start_kernel
> +	setup_final_frame
> +	bl	start_kernel
> +	nop
>  SYM_FUNC_END(__primary_switched)
>
>  	.pushsection ".rodata", "a"
> @@ -606,14 +623,14 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
>  	cbz	x2, __secondary_too_slow
>  	msr	sp_el0, x2
>  	scs_load x2, x3
> -	mov	x29, #0
> -	mov	x30, #0
> +	setup_final_frame
>
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	ptrauth_keys_init_cpu x2, x3, x4, x5
>  #endif
>
> -	b	secondary_start_kernel
> +	bl	secondary_start_kernel
> +	nop
>  SYM_FUNC_END(__secondary_switched)

I'm somewhat arm-ignorant, so take the following comments with a grain
of salt.

I don't think changing these to 'bl' is necessary, unless you wanted
__primary_switched() and __secondary_switched() to show up in the
stacktrace for some reason? If so, that seems like a separate patch.

Also, why are nops added after the calls? My guess would be because,
since these are basically tail calls to "noreturn" functions, the stack
dump code would otherwise show the wrong function, i.e. whatever
function happens to be after the 'bl'.

We had the same issue for x86. It can be fixed by using '%pB' instead
of '%pS' when printing the address in dump_backtrace_entry(). See
sprint_backtrace() for more details.

BTW, I think the same issue exists for GCC-generated code. The
following shows several such cases:

  objdump -dr vmlinux |awk '/bl / {bl=1;l=$0;next} bl == 1 && /^$/ {print l; print} // {bl=0}'

However, looking at how arm64 unwinds through exceptions in kernel
space, using '%pB' might have side effects when the exception LR
(elr_el1) points to the beginning of a function. Then '%pB' would show
the end of the previous function, instead of the function which was
interrupted.

So you may need to rethink how to unwind through in-kernel exceptions.

Basically, when printing a stack return address, you want to use '%pB'
for a call return address and '%pS' for an interrupted address.

On x86, with the frame pointer unwinder, we encode the frame pointer by
setting a bit in %rbp which tells the unwinder that it's a special
pt_regs frame. Then instead of treating it like a normal call frame,
the stack dump code prints the registers, and the return address
(regs->ip) gets printed with '%pS'.

>  SYM_FUNC_START_LOCAL(__secondary_too_slow)
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 325c83b1a24d..906baa232a89 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -437,6 +437,11 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
>  	}
>  	p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
>  	p->thread.cpu_context.sp = (unsigned long)childregs;
> +	/*
> +	 * For the benefit of the unwinder, set up childregs->stackframe
> +	 * as the final frame for the new task.
> +	 */
> +	p->thread.cpu_context.fp = (unsigned long)childregs->stackframe;
>
>  	ptrace_hw_copy_thread(p);
>
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index ad20981dfda4..72f5af8c69dc 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -44,16 +44,16 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
>  	unsigned long fp = frame->fp;
>  	struct stack_info info;
>
> -	/* Terminal record; nothing to unwind */
> -	if (!fp)
> +	if (!tsk)
> +		tsk = current;
> +
> +	/* Final frame; nothing to unwind */
> +	if (fp == (unsigned long) task_pt_regs(tsk)->stackframe)
>  		return -ENOENT;

As far as I can tell, the regs stackframe value is initialized to zero
during syscall entry, so isn't this basically just 'if (fp == 0)'?

Shouldn't it instead be comparing with the _address_ of the stackframe
field to make sure it reached the end?

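To make the suggested '%pB' vs '%pS' distinction concrete, here is a
minimal sketch in C. The two-argument signature and the function name
are assumptions made for illustration; the real arm64
dump_backtrace_entry() takes just the address.

	#include <linux/types.h>
	#include <linux/printk.h>

	/*
	 * Sketch only: print a call return address with %pB, which does
	 * the symbol lookup on (address - 1) so it lands inside the
	 * calling function, and print an interrupted address (e.g.
	 * regs->pc from an exception frame) with %pS.
	 */
	static void demo_backtrace_entry(unsigned long where, bool interrupted)
	{
		if (interrupted)
			printk(" %pS\n", (void *)where);  /* interrupted PC */
		else
			printk(" %pB\n", (void *)where);  /* call return address */
	}
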
On 4/3/21 10:59 AM, Josh Poimboeuf wrote:
> On Thu, Apr 01, 2021 at 10:24:04PM -0500, madvenka@linux.microsoft.com wrote:
>> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
>> @@ -447,9 +464,9 @@ SYM_FUNC_START_LOCAL(__primary_switched)
>>  #endif
>>  	bl	switch_to_vhe		// Prefer VHE if possible
>>  	add	sp, sp, #16
>> -	mov	x29, #0
>> -	mov	x30, #0
>> -	b	start_kernel
>> +	setup_final_frame
>> +	bl	start_kernel
>> +	nop
>>  SYM_FUNC_END(__primary_switched)
>>
>>  	.pushsection ".rodata", "a"
>> @@ -606,14 +623,14 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
>>  	cbz	x2, __secondary_too_slow
>>  	msr	sp_el0, x2
>>  	scs_load x2, x3
>> -	mov	x29, #0
>> -	mov	x30, #0
>> +	setup_final_frame
>>
>>  #ifdef CONFIG_ARM64_PTR_AUTH
>>  	ptrauth_keys_init_cpu x2, x3, x4, x5
>>  #endif
>>
>> -	b	secondary_start_kernel
>> +	bl	secondary_start_kernel
>> +	nop
>>  SYM_FUNC_END(__secondary_switched)
>
> I'm somewhat arm-ignorant, so take the following comments with a grain
> of salt.
>
> I don't think changing these to 'bl' is necessary, unless you wanted
> __primary_switched() and __secondary_switched() to show up in the
> stacktrace for some reason? If so, that seems like a separate patch.
>

The problem is with __secondary_switched. If you trace the code back to
where a secondary CPU is started, I don't see any calls anywhere. There
are only branches, if I am not mistaken. So, the return address register
never gets set up with a proper address. The stack trace shows some
hexadecimal value instead of a symbol name.

On ARM64, the call instruction is actually a branch instruction, IIUC.
The only extra thing it does is load the link register (the return
address register) with the return address. That is all.

Instead of the link register pointing to some arbitrary code in startup
that did not call start_kernel() or secondary_start_kernel(), I wanted
to set it up as shown above.

> Also, why are nops added after the calls? My guess would be because,
> since these are basically tail calls to "noreturn" functions, the stack
> dump code would otherwise show the wrong function, i.e. whatever
> function happens to be after the 'bl'.
>

That is correct. The stack trace shows something arbitrary.

> We had the same issue for x86. It can be fixed by using '%pB' instead
> of '%pS' when printing the address in dump_backtrace_entry(). See
> sprint_backtrace() for more details.
>
> BTW, I think the same issue exists for GCC-generated code. The
> following shows several such cases:
>
>   objdump -dr vmlinux |awk '/bl / {bl=1;l=$0;next} bl == 1 && /^$/ {print l; print} // {bl=0}'
>
> However, looking at how arm64 unwinds through exceptions in kernel
> space, using '%pB' might have side effects when the exception LR
> (elr_el1) points to the beginning of a function. Then '%pB' would show
> the end of the previous function, instead of the function which was
> interrupted.
>
> So you may need to rethink how to unwind through in-kernel exceptions.
>
> Basically, when printing a stack return address, you want to use '%pB'
> for a call return address and '%pS' for an interrupted address.
>
> On x86, with the frame pointer unwinder, we encode the frame pointer by
> setting a bit in %rbp which tells the unwinder that it's a special
> pt_regs frame. Then instead of treating it like a normal call frame,
> the stack dump code prints the registers, and the return address
> (regs->ip) gets printed with '%pS'.
>

Yes. But there are objections to that kind of encoding. Having the nop
above does not do any harm. It just adds 4 bytes to the function text.
I would rather keep this simple right now because this is only for
getting a sensible stack trace for idle tasks.

Is there any other problem that you can see?

>>  SYM_FUNC_START_LOCAL(__secondary_too_slow)
>> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
>> index 325c83b1a24d..906baa232a89 100644
>> --- a/arch/arm64/kernel/process.c
>> +++ b/arch/arm64/kernel/process.c
>> @@ -437,6 +437,11 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
>>  	}
>>  	p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
>>  	p->thread.cpu_context.sp = (unsigned long)childregs;
>> +	/*
>> +	 * For the benefit of the unwinder, set up childregs->stackframe
>> +	 * as the final frame for the new task.
>> +	 */
>> +	p->thread.cpu_context.fp = (unsigned long)childregs->stackframe;
>>
>>  	ptrace_hw_copy_thread(p);
>>
>> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
>> index ad20981dfda4..72f5af8c69dc 100644
>> --- a/arch/arm64/kernel/stacktrace.c
>> +++ b/arch/arm64/kernel/stacktrace.c
>> @@ -44,16 +44,16 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
>>  	unsigned long fp = frame->fp;
>>  	struct stack_info info;
>>
>> -	/* Terminal record; nothing to unwind */
>> -	if (!fp)
>> +	if (!tsk)
>> +		tsk = current;
>> +
>> +	/* Final frame; nothing to unwind */
>> +	if (fp == (unsigned long) task_pt_regs(tsk)->stackframe)
>>  		return -ENOENT;
>
> As far as I can tell, the regs stackframe value is initialized to zero
> during syscall entry, so isn't this basically just 'if (fp == 0)'?
>
> Shouldn't it instead be comparing with the _address_ of the stackframe
> field to make sure it reached the end?
>

pt_regs->stackframe is an array of two u64 elements - one for FP and
one for PC:

	u64 stackframe[2];

So, I am comparing the address, not the value, of FP.

Madhavan

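To illustrate the array-decay point with a self-contained sketch (the
struct below is an abbreviated stand-in for the real pt_regs, whose
full layout is in arch/arm64/include/asm/ptrace.h):

	#include <linux/types.h>

	struct pt_regs_demo {
		u64 regs[31];
		u64 sp;
		u64 pc;
		u64 pstate;
		u64 stackframe[2];	/* [0]: saved FP, [1]: saved PC */
	};

	static bool reached_final_frame(struct pt_regs_demo *regs, unsigned long fp)
	{
		/*
		 * "regs->stackframe" names an array, so it evaluates to
		 * the address of the field, not to the zero values
		 * stored in it. The unwinder therefore stops at a known
		 * location, not at whatever frame happens to hold 0.
		 */
		return fp == (unsigned long)regs->stackframe;
	}
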
On 4/3/21 10:46 PM, Madhavan T. Venkataraman wrote:
>> I'm somewhat arm-ignorant, so take the following comments with a grain
>> of salt.
>>
>> I don't think changing these to 'bl' is necessary, unless you wanted
>> __primary_switched() and __secondary_switched() to show up in the
>> stacktrace for some reason? If so, that seems like a separate patch.
>>
> The problem is with __secondary_switched. If you trace the code back to
> where a secondary CPU is started, I don't see any calls anywhere. There
> are only branches, if I am not mistaken. So, the return address register
> never gets set up with a proper address. The stack trace shows some
> hexadecimal value instead of a symbol name.
>

Actually, I take that back. There are calls in that code path. But I
only saw some hexadecimal value instead of a proper address in the
stack trace. Sorry about that confusion.

My reason for converting the branches to calls is this: the value of
the return address register at that point is the return PC of the
previous branch-and-link instruction, wherever that happens to be. I
think that is a little arbitrary. Instead, if I call start_kernel() and
secondary_start_kernel(), the return address gets set up to the next
instruction, which, IMHO, is better. But I am open to other suggestions.

Madhavan

On 4/3/21 11:40 PM, Madhavan T. Venkataraman wrote:
> On 4/3/21 10:46 PM, Madhavan T. Venkataraman wrote:
>>> I'm somewhat arm-ignorant, so take the following comments with a grain
>>> of salt.
>>>
>>> I don't think changing these to 'bl' is necessary, unless you wanted
>>> __primary_switched() and __secondary_switched() to show up in the
>>> stacktrace for some reason? If so, that seems like a separate patch.
>>>
>> The problem is with __secondary_switched. If you trace the code back to
>> where a secondary CPU is started, I don't see any calls anywhere. There
>> are only branches, if I am not mistaken. So, the return address register
>> never gets set up with a proper address. The stack trace shows some
>> hexadecimal value instead of a symbol name.
>>
>
> Actually, I take that back. There are calls in that code path. But I
> only saw some hexadecimal value instead of a proper address in the
> stack trace. Sorry about that confusion.
>

Again, I apologize. I had this confused with something else in my notes.

So, the stack trace looks like this without my change to convert the
branch to secondary_start_kernel() into a call:

	...
	[    0.022492]  secondary_start_kernel+0x188/0x1e0
	[    0.022503]  0xf8689e1cc

It looks like the code calls __enable_mmu before reaching the place
where it branches to secondary_start_kernel():

	bl	__enable_mmu

The return address register should be set to the next instruction's
address. I am guessing that the return address is 0xf8689e1cc because
of the idmap stuff.

Madhavan

Hi Mark Rutland, Mark Brown,

Could you take a look at this version for proper stack termination and
let me know what you think?

Thanks!

Madhavan

On 4/1/21 10:24 PM, madvenka@linux.microsoft.com wrote:
> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
>
> Reliable stacktracing requires that we identify when a stacktrace is
> terminated early. We can do this by ensuring all tasks have a final
> frame record at a known location on their task stack, and checking
> that this is the final frame record in the chain.
>
> Kernel Tasks
> ============
>
> All tasks except the idle task have a pt_regs structure right after the
> task stack. This is called the task pt_regs. The pt_regs structure has
> a special stackframe field. Make this stackframe field the final frame
> in the task stack. This needs to be done in copy_thread(), which
> initializes a new task's pt_regs and initial CPU context.
>
> For the idle task, there is no task pt_regs. For our purpose, we need
> one. So, create a pt_regs just like other kernel tasks and make
> pt_regs->stackframe the final frame in the idle task stack. This needs
> to be done at two places:
>
>	- On the primary CPU, the boot task runs. It calls start_kernel()
>	  and eventually becomes the idle task for the primary CPU. Just
>	  before start_kernel() is called, set up the final frame.
>
>	- On each secondary CPU, a startup task runs that calls
>	  secondary_start_kernel() and eventually becomes the idle task
>	  on the secondary CPU. Just before secondary_start_kernel() is
>	  called, set up the final frame.
>
> User Tasks
> ==========
>
> User tasks are initially set up like kernel tasks when they are
> created. Then, they return to userland after fork via ret_from_fork().
> After that, they enter the kernel only on an EL0 exception. (In arm64,
> system calls are also EL0 exceptions.) The EL0 exception handler stores
> state in the task pt_regs and calls different functions based on the
> type of exception. The stack trace for an EL0 exception must end at the
> task pt_regs. So, make task pt_regs->stackframe the final frame in the
> EL0 exception stack.
>
> In summary, task pt_regs->stackframe is where a successful stack trace
> ends.
>
> Stack trace termination
> =======================
>
> In the unwinder, terminate the stack trace successfully when
> task_pt_regs(task)->stackframe is reached. For stack traces in the
> kernel, this will correctly terminate the stack trace at the right
> place.
>
> However, debuggers terminate the stack trace when FP == 0. In the
> pt_regs->stackframe, the PC is 0 as well. So, stack traces taken in the
> debugger may print an extra record 0x0 at the end. While this is not
> pretty, this does not do any harm. This is a small price to pay for
> having reliable stack trace termination in the kernel.
>
> Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> ---
>  arch/arm64/kernel/entry.S      |  8 +++++---
>  arch/arm64/kernel/head.S       | 29 +++++++++++++++++++++++------
>  arch/arm64/kernel/process.c    |  5 +++++
>  arch/arm64/kernel/stacktrace.c | 10 +++++-----
>  4 files changed, 38 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index a31a0a713c85..e2dc2e998934 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -261,16 +261,18 @@ alternative_else_nop_endif
>  	stp	lr, x21, [sp, #S_LR]
>
>  	/*
> -	 * For exceptions from EL0, terminate the callchain here.
> +	 * For exceptions from EL0, terminate the callchain here at
> +	 * task_pt_regs(current)->stackframe.
> +	 *
>  	 * For exceptions from EL1, create a synthetic frame record so the
>  	 * interrupted code shows up in the backtrace.
>  	 */
>  	.if \el == 0
> -	mov	x29, xzr
> +	stp	xzr, xzr, [sp, #S_STACKFRAME]
>  	.else
>  	stp	x29, x22, [sp, #S_STACKFRAME]
> -	add	x29, sp, #S_STACKFRAME
>  	.endif
> +	add	x29, sp, #S_STACKFRAME
>
>  #ifdef CONFIG_ARM64_SW_TTBR0_PAN
>  alternative_if_not ARM64_HAS_PAN
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 840bda1869e9..743c019a42c7 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -393,6 +393,23 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
>  	ret	x28
>  SYM_FUNC_END(__create_page_tables)
>
> +	/*
> +	 * The boot task becomes the idle task for the primary CPU. The
> +	 * CPU startup task on each secondary CPU becomes the idle task
> +	 * for the secondary CPU.
> +	 *
> +	 * The idle task does not require pt_regs. But create a dummy
> +	 * pt_regs so that task_pt_regs(idle_task)->stackframe can be
> +	 * set up to be the final frame on the idle task stack just like
> +	 * all the other kernel tasks. This helps the unwinder to
> +	 * terminate the stack trace at a well-known stack offset.
> +	 */
> +	.macro setup_final_frame
> +	sub	sp, sp, #PT_REGS_SIZE
> +	stp	xzr, xzr, [sp, #S_STACKFRAME]
> +	add	x29, sp, #S_STACKFRAME
> +	.endm
> +
>  /*
>   * The following fragment of code is executed with the MMU enabled.
>   *
> @@ -447,9 +464,9 @@ SYM_FUNC_START_LOCAL(__primary_switched)
>  #endif
>  	bl	switch_to_vhe		// Prefer VHE if possible
>  	add	sp, sp, #16
> -	mov	x29, #0
> -	mov	x30, #0
> -	b	start_kernel
> +	setup_final_frame
> +	bl	start_kernel
> +	nop
>  SYM_FUNC_END(__primary_switched)
>
>  	.pushsection ".rodata", "a"
> @@ -606,14 +623,14 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
>  	cbz	x2, __secondary_too_slow
>  	msr	sp_el0, x2
>  	scs_load x2, x3
> -	mov	x29, #0
> -	mov	x30, #0
> +	setup_final_frame
>
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	ptrauth_keys_init_cpu x2, x3, x4, x5
>  #endif
>
> -	b	secondary_start_kernel
> +	bl	secondary_start_kernel
> +	nop
>  SYM_FUNC_END(__secondary_switched)
>
>  SYM_FUNC_START_LOCAL(__secondary_too_slow)
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 325c83b1a24d..906baa232a89 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -437,6 +437,11 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
>  	}
>  	p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
>  	p->thread.cpu_context.sp = (unsigned long)childregs;
> +	/*
> +	 * For the benefit of the unwinder, set up childregs->stackframe
> +	 * as the final frame for the new task.
> +	 */
> +	p->thread.cpu_context.fp = (unsigned long)childregs->stackframe;
>
>  	ptrace_hw_copy_thread(p);
>
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index ad20981dfda4..72f5af8c69dc 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -44,16 +44,16 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
>  	unsigned long fp = frame->fp;
>  	struct stack_info info;
>
> -	/* Terminal record; nothing to unwind */
> -	if (!fp)
> +	if (!tsk)
> +		tsk = current;
> +
> +	/* Final frame; nothing to unwind */
> +	if (fp == (unsigned long) task_pt_regs(tsk)->stackframe)
>  		return -ENOENT;
>
>  	if (fp & 0xf)
>  		return -EINVAL;
>
> -	if (!tsk)
> -		tsk = current;
> -
>  	if (!on_accessible_stack(tsk, fp, &info))
>  		return -EINVAL;
>

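As a usage illustration, here is a hedged sketch of a stack trace loop
built on the modified unwind_frame(). The real consumer is
walk_stackframe() in arch/arm64/kernel/stacktrace.c; the printing and
error handling below are assumptions made for the example:

	/* Kernel-context sketch; assumes <linux/sched.h> and <asm/stacktrace.h> */
	static void demo_stack_trace(struct task_struct *tsk, struct stackframe *frame)
	{
		for (;;) {
			printk(" %pS\n", (void *)frame->pc);

			switch (unwind_frame(tsk, frame)) {
			case 0:
				break;		/* more frames; keep walking */
			case -ENOENT:
				return;		/* fp reached task_pt_regs(tsk)->stackframe,
						 * so the trace terminated cleanly */
			default:
				pr_warn("stack trace is unreliable\n");
				return;
			}
		}
	}
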
On Thu, Apr 01, 2021 at 10:24:04PM -0500, madvenka@linux.microsoft.com wrote:

> Reliable stacktracing requires that we identify when a stacktrace is
> terminated early. We can do this by ensuring all tasks have a final
> frame record at a known location on their task stack, and checking
> that this is the final frame record in the chain.

Reviewed-by: Mark Brown <broonie@kernel.org>

Thanks!

Madhavan

On 4/16/21 11:17 AM, Mark Brown wrote:
> On Thu, Apr 01, 2021 at 10:24:04PM -0500, madvenka@linux.microsoft.com wrote:
>
>> Reliable stacktracing requires that we identify when a stacktrace is
>> terminated early. We can do this by ensuring all tasks have a final
>> frame record at a known location on their task stack, and checking
>> that this is the final frame record in the chain.
>
> Reviewed-by: Mark Brown <broonie@kernel.org>

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index a31a0a713c85..e2dc2e998934 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -261,16 +261,18 @@ alternative_else_nop_endif
 	stp	lr, x21, [sp, #S_LR]

 	/*
-	 * For exceptions from EL0, terminate the callchain here.
+	 * For exceptions from EL0, terminate the callchain here at
+	 * task_pt_regs(current)->stackframe.
+	 *
 	 * For exceptions from EL1, create a synthetic frame record so the
 	 * interrupted code shows up in the backtrace.
 	 */
 	.if \el == 0
-	mov	x29, xzr
+	stp	xzr, xzr, [sp, #S_STACKFRAME]
 	.else
 	stp	x29, x22, [sp, #S_STACKFRAME]
-	add	x29, sp, #S_STACKFRAME
 	.endif
+	add	x29, sp, #S_STACKFRAME

 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 alternative_if_not ARM64_HAS_PAN
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 840bda1869e9..743c019a42c7 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -393,6 +393,23 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	ret	x28
 SYM_FUNC_END(__create_page_tables)

+	/*
+	 * The boot task becomes the idle task for the primary CPU. The
+	 * CPU startup task on each secondary CPU becomes the idle task
+	 * for the secondary CPU.
+	 *
+	 * The idle task does not require pt_regs. But create a dummy
+	 * pt_regs so that task_pt_regs(idle_task)->stackframe can be
+	 * set up to be the final frame on the idle task stack just like
+	 * all the other kernel tasks. This helps the unwinder to
+	 * terminate the stack trace at a well-known stack offset.
+	 */
+	.macro setup_final_frame
+	sub	sp, sp, #PT_REGS_SIZE
+	stp	xzr, xzr, [sp, #S_STACKFRAME]
+	add	x29, sp, #S_STACKFRAME
+	.endm
+
 /*
  * The following fragment of code is executed with the MMU enabled.
  *
@@ -447,9 +464,9 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 #endif
 	bl	switch_to_vhe		// Prefer VHE if possible
 	add	sp, sp, #16
-	mov	x29, #0
-	mov	x30, #0
-	b	start_kernel
+	setup_final_frame
+	bl	start_kernel
+	nop
 SYM_FUNC_END(__primary_switched)

 	.pushsection ".rodata", "a"
@@ -606,14 +623,14 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
 	cbz	x2, __secondary_too_slow
 	msr	sp_el0, x2
 	scs_load x2, x3
-	mov	x29, #0
-	mov	x30, #0
+	setup_final_frame

 #ifdef CONFIG_ARM64_PTR_AUTH
 	ptrauth_keys_init_cpu x2, x3, x4, x5
 #endif

-	b	secondary_start_kernel
+	bl	secondary_start_kernel
+	nop
 SYM_FUNC_END(__secondary_switched)

 SYM_FUNC_START_LOCAL(__secondary_too_slow)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 325c83b1a24d..906baa232a89 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -437,6 +437,11 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 	}
 	p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
 	p->thread.cpu_context.sp = (unsigned long)childregs;
+	/*
+	 * For the benefit of the unwinder, set up childregs->stackframe
+	 * as the final frame for the new task.
+	 */
+	p->thread.cpu_context.fp = (unsigned long)childregs->stackframe;

 	ptrace_hw_copy_thread(p);

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ad20981dfda4..72f5af8c69dc 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -44,16 +44,16 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	unsigned long fp = frame->fp;
 	struct stack_info info;

-	/* Terminal record; nothing to unwind */
-	if (!fp)
+	if (!tsk)
+		tsk = current;
+
+	/* Final frame; nothing to unwind */
+	if (fp == (unsigned long) task_pt_regs(tsk)->stackframe)
 		return -ENOENT;

 	if (fp & 0xf)
 		return -EINVAL;

-	if (!tsk)
-		tsk = current;
-
 	if (!on_accessible_stack(tsk, fp, &info))
 		return -EINVAL;

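For reference, a simplified sketch of the two-word frame record format
that the patch relies on, and the core step the unwinder performs per
frame (the alignment and on_accessible_stack() checks from the diff are
omitted here for brevity; the struct and function names are made up for
the example):

	#include <linux/types.h>

	/* An arm64 frame record: two u64s at the address held in x29 */
	struct frame_record_demo {
		u64 fp;		/* link to the caller's frame record */
		u64 pc;		/* saved return address (x30) */
	};

	static void demo_unwind_step(unsigned long *fp, unsigned long *pc)
	{
		struct frame_record_demo *record = (struct frame_record_demo *)*fp;

		*pc = record->pc;	/* address reported for this frame */
		*fp = record->fp;	/* follow the chain; with this patch the
					 * walk ends when this value equals the
					 * address of task_pt_regs()->stackframe */
	}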