
[0/3] arm64: stacktrace: improve robustness

Message ID 20190606125402.10229-1-mark.rutland@arm.com

Mark Rutland June 6, 2019, 12:53 p.m. UTC
The arm64 stacktrace code is careful to only access valid stack
locations, but in the presence of a corrupted stack where frame records
form a loop, it will never terminate.

This series updates the stacktrace code to terminate in finite time even
when a stack is corrupted. A stacktrace will be terminated if the next
record is at a lower (or equal) address on the current stack, or if the
next record is on a stack we've already completed unwinding.

The first two patches come from Dave's prior attempt to fix this [1];
the final patch relies on infrastructure which has been introduced in
the meantime.

I've given this a quick spin with magic-sysrq L in a KVM guest, and
things look fine, but further testing would be appreciated.

This series (based on v5.2-rc1) can also be found in my
arm64/robust-stacktrace branch on kernel.org [2].

Thanks,
Mark.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-April/572685.html
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/robust-stacktrace

Dave Martin (2):
  arm64: stacktrace: Constify stacktrace.h functions
  arm64: stacktrace: Factor out backtrace initialisation

Mark Rutland (1):
  arm64: stacktrace: better handle corrupted stacks

 arch/arm64/include/asm/stacktrace.h | 55 ++++++++++++++++++++++++++++---------
 arch/arm64/kernel/process.c         |  6 +---
 arch/arm64/kernel/stacktrace.c      | 16 ++++++++++-
 arch/arm64/kernel/time.c            |  6 +---
 arch/arm64/kernel/traps.c           | 13 ++++-----
 5 files changed, 65 insertions(+), 31 deletions(-)