
ARM: kprobes: Eliminate test code's use of BX instruction on ARMv4 CPUs

Message ID 1421687930.4201.21.camel@linaro.org (mailing list archive)
State New, archived

Commit Message

Jon Medhurst (Tixy) Jan. 19, 2015, 5:18 p.m. UTC
Non-T variants of ARMv4 CPUs don't support the BX instruction, so
eliminate its use.

Signed-off-by: Jon Medhurst <tixy@linaro.org>
---

This patch applies on top of my kprobe-opt branch [1], for which I plan
to send a pull request shortly.

[1] git://git.linaro.org/people/tixy/kernel.git kprobe-opt

 arch/arm/probes/kprobes/test-arm.c  |  2 ++
 arch/arm/probes/kprobes/test-core.c | 10 +++++++---
 2 files changed, 9 insertions(+), 3 deletions(-)
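
For background: BX was introduced with the Thumb extension in ARMv4T, so
a plain ARMv4 core (a StrongARM, for example) takes an undefined
instruction exception when it executes one. A minimal sketch of the two
return sequences involved, for illustration rather than as an excerpt
from the patch:

	bx	lr		@ ARMv4T and later: branch to LR, selecting
				@ ARM or Thumb state from bit 0 of LR
	mov	pc, lr		@ works on every ARM core, including non-T
				@ ARMv4; never switches instruction set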

Comments

Russell King - ARM Linux Jan. 19, 2015, 5:27 p.m. UTC | #1
On Mon, Jan 19, 2015 at 05:18:50PM +0000, Jon Medhurst (Tixy) wrote:
> diff --git a/arch/arm/probes/kprobes/test-arm.c b/arch/arm/probes/kprobes/test-arm.c
> index e72b07e..5c6e37e 100644
> --- a/arch/arm/probes/kprobes/test-arm.c
> +++ b/arch/arm/probes/kprobes/test-arm.c
> @@ -215,9 +215,11 @@ void kprobe_arm_test_cases(void)
>  	TEST_UNSUPPORTED("msr	cpsr_f, lr")
>  	TEST_UNSUPPORTED("msr	spsr, r0")
>  
> +#if (__LINUX_ARM_ARCH__ >= 5) || defined(CONFIG_CPU_32v4T)
>  	TEST_BF_R("bx	r",0,2f,"")
>  	TEST_BB_R("bx	r",7,2f,"")
>  	TEST_BF_R("bxeq	r",14,2f,"")
> +#endif

Unnecessary ()... and this isn't correct.  With a multi-platform kernel, we
can end up with CONFIG_CPU_32v4 and CONFIG_CPU_32v4T both set.

I think:

#if __LINUX_ARM_ARCH__ >= 5 || \
    (__LINUX_ARM_ARCH__ == 4 && !defined(CONFIG_CPU_32v4))

would cover it.
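
To see why the original guard misfires, take a hypothetical
multi-platform build where CONFIG_CPU_32v4 and CONFIG_CPU_32v4T are both
set, so __LINUX_ARM_ARCH__ is 4. A standalone sketch (the file name and
printouts are made up; only the two #if conditions come from the thread)
that can be compiled with those macros predefined:

/* guard_demo.c - compare the two preprocessor guards.
 * Try: gcc -D__LINUX_ARM_ARCH__=4 -DCONFIG_CPU_32v4 \
 *          -DCONFIG_CPU_32v4T -o guard_demo guard_demo.c
 */
#include <stdio.h>

int main(void)
{
#if (__LINUX_ARM_ARCH__ >= 5) || defined(CONFIG_CPU_32v4T)
	/* Taken for the v4+v4T build above, even though the kernel may
	 * end up running on a BX-less plain ARMv4 CPU. */
	puts("original guard: BX tests compiled in");
#else
	puts("original guard: BX tests skipped");
#endif

#if __LINUX_ARM_ARCH__ >= 5 || \
    (__LINUX_ARM_ARCH__ == 4 && !defined(CONFIG_CPU_32v4))
	puts("suggested guard: BX tests compiled in");
#else
	/* Taken whenever a plain ARMv4 target is possible. */
	puts("suggested guard: BX tests skipped");
#endif
	return 0;
}
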
Jon Medhurst (Tixy) Jan. 19, 2015, 6:19 p.m. UTC | #2
On Mon, 2015-01-19 at 17:27 +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 19, 2015 at 05:18:50PM +0000, Jon Medhurst (Tixy) wrote:
> > diff --git a/arch/arm/probes/kprobes/test-arm.c b/arch/arm/probes/kprobes/test-arm.c
> > index e72b07e..5c6e37e 100644
> > --- a/arch/arm/probes/kprobes/test-arm.c
> > +++ b/arch/arm/probes/kprobes/test-arm.c
> > @@ -215,9 +215,11 @@ void kprobe_arm_test_cases(void)
> >  	TEST_UNSUPPORTED("msr	cpsr_f, lr")
> >  	TEST_UNSUPPORTED("msr	spsr, r0")
> >  
> > +#if (__LINUX_ARM_ARCH__ >= 5) || defined(CONFIG_CPU_32v4T)
> >  	TEST_BF_R("bx	r",0,2f,"")
> >  	TEST_BB_R("bx	r",7,2f,"")
> >  	TEST_BF_R("bxeq	r",14,2f,"")
> > +#endif
> 
> Unnecessary ()... and this isn't correct.  With a multi-platform kernel, we
> can end up with CONFIG_CPU_32v4 and CONFIG_CPU_32v4T both set.
> 
> I think:
> 
> #if __LINUX_ARM_ARCH__ >= 5 || \
>     (__LINUX_ARM_ARCH__ == 4 && !defined(CONFIG_CPU_32v4))
> 
> would cover it.

Multi-platform kernels in general are problematic, e.g. we have quite a
few

#if __LINUX_ARM_ARCH__ >= 7

which would also need

 && !defined(CONFIG_CPU_32v6) && !defined(CONFIG_CPU_32v6K)

which isn't going to scale well. What's really needed are runtime
checks, which would also save the tests from falling back to the
lowest common denominator. Implementing that is something I never got
around to doing, and I probably should. We would need to do something
like export cpu_architecture() for module use; what do you think of
that?

So, for now, I will fix the compile-time tests as you suggest for this
patch, and will look at introducing run-time tests for a wider
solution. That won't be for some weeks yet, though.
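
For reference, a hypothetical sketch of such a runtime check, assuming
cpu_architecture() were exported for module use (it is not, at the time
of this thread); the helper name is made up, and the CPU_ARCH_*
constants are the ones in arch/arm/include/asm/system_info.h:

#include <linux/types.h>
#include <asm/system_info.h>

/* BX exists on ARMv4T and on everything from ARMv5 upwards, so a
 * simple ordering comparison on the detected architecture covers
 * both cases. */
static bool have_bx(void)
{
	return cpu_architecture() >= CPU_ARCH_ARMv4T;
}

The BX test cases would then sit behind "if (have_bx())" at run time
instead of behind the preprocessor, so a multi-platform kernel could
exercise them on CPUs that have the instruction and skip them on those
that don't.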

Patch

diff --git a/arch/arm/probes/kprobes/test-arm.c b/arch/arm/probes/kprobes/test-arm.c
index e72b07e..5c6e37e 100644
--- a/arch/arm/probes/kprobes/test-arm.c
+++ b/arch/arm/probes/kprobes/test-arm.c
@@ -215,9 +215,11 @@ void kprobe_arm_test_cases(void)
 	TEST_UNSUPPORTED("msr	cpsr_f, lr")
 	TEST_UNSUPPORTED("msr	spsr, r0")
 
+#if (__LINUX_ARM_ARCH__ >= 5) || defined(CONFIG_CPU_32v4T)
 	TEST_BF_R("bx	r",0,2f,"")
 	TEST_BB_R("bx	r",7,2f,"")
 	TEST_BF_R("bxeq	r",14,2f,"")
+#endif
 
 #if __LINUX_ARM_ARCH__ >= 5
 	TEST_R("clz	r0, r",0, 0x0,"")
diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c
index e495127..9775de2 100644
--- a/arch/arm/probes/kprobes/test-core.c
+++ b/arch/arm/probes/kprobes/test-core.c
@@ -236,6 +236,8 @@ static int tests_failed;
 
 #ifndef CONFIG_THUMB2_KERNEL
 
+#define RET(reg)	"mov	pc, "#reg
+
 long arm_func(long r0, long r1);
 
 static void __used __naked __arm_kprobes_test_func(void)
@@ -245,7 +247,7 @@ static void __used __naked __arm_kprobes_test_func(void)
 		".type arm_func, %%function		\n\t"
 		"arm_func:				\n\t"
 		"adds	r0, r0, r1			\n\t"
-		"bx	lr				\n\t"
+		"mov	pc, lr				\n\t"
 		".code "NORMAL_ISA	 /* Back to Thumb if necessary */
 		: : : "r0", "r1", "cc"
 	);
@@ -253,6 +255,8 @@ static void __used __naked __arm_kprobes_test_func(void)
 
 #else /* CONFIG_THUMB2_KERNEL */
 
+#define RET(reg)	"bx	"#reg
+
 long thumb16_func(long r0, long r1);
 long thumb32even_func(long r0, long r1);
 long thumb32odd_func(long r0, long r1);
@@ -494,7 +498,7 @@ static void __naked benchmark_nop(void)
 {
 	__asm__ __volatile__ (
 		"nop		\n\t"
-		"bx	lr"
+		RET(lr)"	\n\t"
 	);
 }
 
@@ -977,7 +981,7 @@ void __naked __kprobes_test_case_start(void)
 		"bic	r0, lr, #1  @ r0 = inline data		\n\t"
 		"mov	r1, sp					\n\t"
 		"bl	kprobes_test_case_start			\n\t"
-		"bx	r0					\n\t"
+		RET(r0)"					\n\t"
 	);
 }
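
As a usage note on the RET() helper the patch introduces: it is plain
string pasting, so the test code spells a return once and gets the
configuration-appropriate instruction. A minimal sketch of the
expansion (the macro bodies are from the patch; the commentary is not):

#ifdef CONFIG_THUMB2_KERNEL
#define RET(reg)	"bx	"#reg		/* any CPU that runs a
						   Thumb-2 kernel has BX */
#else
#define RET(reg)	"mov	pc, "#reg	/* safe everywhere, including
						   non-T ARMv4 */
#endif

/* In benchmark_nop() above, the asm body
 *	"nop		\n\t"
 *	RET(lr)"	\n\t"
 * therefore pastes into "nop \n\t mov pc, lr \n\t" for an ARM kernel
 * and "nop \n\t bx lr \n\t" for a Thumb-2 one.
 */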