
[v2] riscv: misaligned: remove CONFIG_RISCV_M_MODE specific code

Message ID 20240206154104.896809-1-cleger@rivosinc.com (mailing list archive)
State New
Series [v2] riscv: misaligned: remove CONFIG_RISCV_M_MODE specific code

Checks

Context Check Description
conchuod/vmtest-for-next-PR success PR summary
conchuod/patch-1-test-1 success .github/scripts/patches/tests/build_rv32_defconfig.sh
conchuod/patch-1-test-2 success .github/scripts/patches/tests/build_rv64_clang_allmodconfig.sh
conchuod/patch-1-test-3 success .github/scripts/patches/tests/build_rv64_gcc_allmodconfig.sh
conchuod/patch-1-test-4 success .github/scripts/patches/tests/build_rv64_nommu_k210_defconfig.sh
conchuod/patch-1-test-5 success .github/scripts/patches/tests/build_rv64_nommu_virt_defconfig.sh
conchuod/patch-1-test-6 success .github/scripts/patches/tests/checkpatch.sh
conchuod/patch-1-test-7 success .github/scripts/patches/tests/dtb_warn_rv64.sh
conchuod/patch-1-test-8 success .github/scripts/patches/tests/header_inline.sh
conchuod/patch-1-test-9 success .github/scripts/patches/tests/kdoc.sh
conchuod/patch-1-test-10 success .github/scripts/patches/tests/module_param.sh
conchuod/patch-1-test-11 success .github/scripts/patches/tests/verify_fixes.sh
conchuod/patch-1-test-12 success .github/scripts/patches/tests/verify_signedoff.sh

Commit Message

Clément Léger Feb. 6, 2024, 3:40 p.m. UTC
While reworking the code to fix sparse errors, it appeared that the
RISCV_M_MODE specific code could actually be removed in favor of the
one for normal mode. Even though RISCV_M_MODE can do direct user memory
access, using the uaccess helpers works as well. Since there is no
longer any need for specific accessors (load_u8()/store_u8()), we can
directly use memcpy()/copy_{to/from}_user() and get rid of the copy
loop entirely. __read_insn() is also fixed to take an unsigned long
instead of a pointer that was cast into the __user address space; the
insn_addr parameter is now cast from unsigned long to the correct
address space directly.
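
For illustration, the access pattern both handlers converge on boils
down to the following condensed sketch (copy_misaligned() is a
hypothetical helper name used here for brevity; the patch open-codes
this dispatch in handle_misaligned_load() and handle_misaligned_store()):

static int copy_misaligned(struct pt_regs *regs, unsigned long addr,
			   union reg_data *val, int len, bool is_store)
{
	if (user_mode(regs)) {
		/* user buffers go through the uaccess helpers */
		if (is_store ? raw_copy_to_user((void __user *)addr, val, len)
			     : raw_copy_from_user(val, (void __user *)addr, len))
			return -1;
	} else {
		/* kernel (and M-mode) addresses can be accessed directly */
		if (is_store)
			memcpy((void *)addr, val, len);
		else
			memcpy(val, (void *)addr, len);
	}

	return 0;
}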

Signed-off-by: Clément Léger <cleger@rivosinc.com>

---

The test used to validate these changes is the one used originally for
S-mode misaligned support:

https://github.com/clementleger/unaligned_test

This test exercises (almost) all the supported instructions and all
the registers for FPU instructions, and it is compiled both with and
without compressed instructions.
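
For reference, the kind of access such a test triggers boils down to
something like the following snippet (a hypothetical minimal
reproducer, not taken from the test suite):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t buf[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };
	/* buf + 1 is not 4-byte aligned: on hardware that traps on
	 * misaligned accesses, this load faults and is emulated by the
	 * kernel's handle_misaligned_load() */
	uint32_t v = *(volatile uint32_t *)(buf + 1);

	printf("misaligned load: 0x%08x\n", v);
	return 0;
}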

For S-mode, you simply need a standard toolchain and to export
CROSS_COMPILE to match it.

For M-mode validation, the following steps can be used:

Build a nommu toolchain with buildroot:
$ git clone https://github.com/buildroot/buildroot.git
$ cd buildroot
$ make O=build_nommu qemu_riscv64_nommu_virt_defconfig
$ make O=build_nommu # builds the toolchain (and initial rootfs)

Test:
$ git clone https://github.com/clementleger/unaligned_test.git
$ cd unaligned_test
$ make CFLAGS="-fPIC -Wl,-elf2flt=-r"
CROSS_COMPILE=<buildroot>/build_nommu/host/bin/riscv64-buildroot-linux-uclibc-

Copy the resulting ELF files (unaligned & unaligned_c) to the buildroot
rootfs and rebuild it:
$ cp unaligned unaligned_c <buildroot>/build_nommu/target/root
$ cd <buildroot>/build_nommu/
$ make

Kernel:
$ make O=build_nommu nommu_virt_defconfig
$ make O=build_nommu loader

Either set the kernel initramfs to the one built with buildroot or
provide it on the spike command line.

Then run it on spike (QEMU always emulates misaligned accesses and
won't generate any misaligned exceptions):

$ spike <kernel>/build_nommu/loader

---

V2:
 - Rebased on master
 - Align the trailing "\" of macro lines

Link to v1: https://lore.kernel.org/linux-riscv/20231128165206.589240-1-cleger@rivosinc.com/

Notes: This patch is a complete rework of a previous one [1] and thus is
not a V3.

[1] https://lore.kernel.org/linux-riscv/d156242a-f104-4925-9736-624a4ba8210d@rivosinc.com/
---
 arch/riscv/kernel/traps_misaligned.c | 106 +++++----------------------
 1 file changed, 17 insertions(+), 89 deletions(-)
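
The key interface change is that __read_insn() now takes the raw
unsigned long program counter plus an access type, instead of a
pre-cast __user pointer. A sketch of its use in get_insn(), mirroring
the diff below:

	/* epc stays an unsigned long; the cast into the right address
	 * space happens inside __read_insn() based on the given type */
	if (__read_insn(regs, insn, epc, u16))
		return -EFAULT;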

Comments

Charlie Jenkins Feb. 7, 2024, 10:08 p.m. UTC | #1
On Tue, Feb 06, 2024 at 04:40:59PM +0100, Clément Léger wrote:
> While reworking the code to fix sparse errors, it appeared that the
> RISCV_M_MODE specific code could actually be removed in favor of the
> one for normal mode. Even though RISCV_M_MODE can do direct user memory
> access, using the uaccess helpers works as well. Since there is no
> longer any need for specific accessors (load_u8()/store_u8()), we can
> directly use memcpy()/copy_{to/from}_user() and get rid of the copy
> loop entirely. __read_insn() is also fixed to take an unsigned long
> instead of a pointer that was cast into the __user address space; the
> insn_addr parameter is now cast from unsigned long to the correct
> address space directly.
> 
> Signed-off-by: Clément Léger <cleger@rivosinc.com>
> 
> ---
> 
> The test used to validate these changes is the one used originally for
> S-mode misaligned support:
> 
> https://github.com/clementleger/unaligned_test
> 
> This test exercises (almost) all the supported instructions and all
> the registers for FPU instructions, and it is compiled both with and
> without compressed instructions.
> 
> For S-mode, you simply need a standard toolchain and to export
> CROSS_COMPILE to match it.
> 
> For M-mode validation, the following steps can be used:
> 
> Build a nommu toolchain with buildroot:
> $ git clone https://github.com/buildroot/buildroot.git
> $ cd buildroot
> $ make O=build_nommu qemu_riscv64_nommu_virt_defconfig
> $ make O=build_nommu # builds the toolchain (and initial rootfs)
> 
> Test:
> $ git clone https://github.com/clementleger/unaligned_test.git
> $ cd unaligned_test
> $ make CFLAGS="-fPIC -Wl,-elf2flt=-r"
> CROSS_COMPILE=<buildroot>/build_nommu/host/bin/riscv64-buildroot-linux-uclibc-
> 
> Copy the resulting ELF files (unaligned & unaligned_c) to the buildroot
> rootfs and rebuild it:
> $ cp unaligned unaligned_c <buildroot>/build_nommu/target/root
> $ cd <buildroot>/build_nommu/
> $ make
> 
> Kernel:
> $ make O=build_nommu nommu_virt_defconfig
> $ make O=build_nommu loader
> 
> Either set the kernel initramfs to the one built with buildroot or
> provide it on the spike command line.
> 
> Then run it on spike (QEMU always emulates misaligned accesses and
> won't generate any misaligned exceptions):
> 
> $ spike <kernel>/build_nommu/loader
> 
> ---
> 
> V2:
>  - Rebased on master
>  - Align the trailing "\" of macro lines
> 
> Link to v1: https://lore.kernel.org/linux-riscv/20231128165206.589240-1-cleger@rivosinc.com/
> 
> Notes: This patch is a complete rework of a previous one [1] and thus is
> not a V3.
> 
> [1] https://lore.kernel.org/linux-riscv/d156242a-f104-4925-9736-624a4ba8210d@rivosinc.com/
> ---
>  arch/riscv/kernel/traps_misaligned.c | 106 +++++----------------------
>  1 file changed, 17 insertions(+), 89 deletions(-)
> 
> diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
> index 8ded225e8c5b..fb202dd18fe5 100644
> --- a/arch/riscv/kernel/traps_misaligned.c
> +++ b/arch/riscv/kernel/traps_misaligned.c
> @@ -264,86 +264,14 @@ static unsigned long get_f32_rs(unsigned long insn, u8 fp_reg_offset,
>  #define GET_F32_RS2C(insn, regs) (get_f32_rs(insn, 2, regs))
>  #define GET_F32_RS2S(insn, regs) (get_f32_rs(RVC_RS2S(insn), 0, regs))
>  
> -#ifdef CONFIG_RISCV_M_MODE
> -static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
> -{
> -	u8 val;
> -
> -	asm volatile("lbu %0, %1" : "=&r" (val) : "m" (*addr));
> -	*r_val = val;
> -
> -	return 0;
> -}
> -
> -static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val)
> -{
> -	asm volatile ("sb %0, %1\n" : : "r" (val), "m" (*addr));
> -
> -	return 0;
> -}
> -
> -static inline int get_insn(struct pt_regs *regs, ulong mepc, ulong *r_insn)
> -{
> -	register ulong __mepc asm ("a2") = mepc;
> -	ulong val, rvc_mask = 3, tmp;
> -
> -	asm ("and %[tmp], %[addr], 2\n"
> -		"bnez %[tmp], 1f\n"
> -#if defined(CONFIG_64BIT)
> -		__stringify(LWU) " %[insn], (%[addr])\n"
> -#else
> -		__stringify(LW) " %[insn], (%[addr])\n"
> -#endif
> -		"and %[tmp], %[insn], %[rvc_mask]\n"
> -		"beq %[tmp], %[rvc_mask], 2f\n"
> -		"sll %[insn], %[insn], %[xlen_minus_16]\n"
> -		"srl %[insn], %[insn], %[xlen_minus_16]\n"
> -		"j 2f\n"
> -		"1:\n"
> -		"lhu %[insn], (%[addr])\n"
> -		"and %[tmp], %[insn], %[rvc_mask]\n"
> -		"bne %[tmp], %[rvc_mask], 2f\n"
> -		"lhu %[tmp], 2(%[addr])\n"
> -		"sll %[tmp], %[tmp], 16\n"
> -		"add %[insn], %[insn], %[tmp]\n"
> -		"2:"
> -	: [insn] "=&r" (val), [tmp] "=&r" (tmp)
> -	: [addr] "r" (__mepc), [rvc_mask] "r" (rvc_mask),
> -	  [xlen_minus_16] "i" (XLEN_MINUS_16));
> -
> -	*r_insn = val;
> -
> -	return 0;
> -}
> -#else
> -static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
> -{
> -	if (user_mode(regs)) {
> -		return __get_user(*r_val, (u8 __user *)addr);
> -	} else {
> -		*r_val = *addr;
> -		return 0;
> -	}
> -}
> -
> -static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val)
> -{
> -	if (user_mode(regs)) {
> -		return __put_user(val, (u8 __user *)addr);
> -	} else {
> -		*addr = val;
> -		return 0;
> -	}
> -}
> -
> -#define __read_insn(regs, insn, insn_addr)		\
> +#define __read_insn(regs, insn, insn_addr, type)	\
>  ({							\
>  	int __ret;					\
>  							\
>  	if (user_mode(regs)) {				\
> -		__ret = __get_user(insn, insn_addr);	\
> +		__ret = __get_user(insn, (type __user *) insn_addr); \
>  	} else {					\
> -		insn = *(__force u16 *)insn_addr;	\
> +		insn = *(type *)insn_addr;		\
>  		__ret = 0;				\
>  	}						\
>  							\
> @@ -356,9 +284,8 @@ static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
>  
>  	if (epc & 0x2) {
>  		ulong tmp = 0;
> -		u16 __user *insn_addr = (u16 __user *)epc;
>  
> -		if (__read_insn(regs, insn, insn_addr))
> +		if (__read_insn(regs, insn, epc, u16))
>  			return -EFAULT;
>  		/* __get_user() uses regular "lw" which sign extend the loaded
>  		 * value make sure to clear higher order bits in case we "or" it
> @@ -369,16 +296,14 @@ static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
>  			*r_insn = insn;
>  			return 0;
>  		}
> -		insn_addr++;
> -		if (__read_insn(regs, tmp, insn_addr))
> +		epc += sizeof(u16);
> +		if (__read_insn(regs, tmp, epc, u16))
>  			return -EFAULT;
>  		*r_insn = (tmp << 16) | insn;
>  
>  		return 0;
>  	} else {
> -		u32 __user *insn_addr = (u32 __user *)epc;
> -
> -		if (__read_insn(regs, insn, insn_addr))
> +		if (__read_insn(regs, insn, epc, u32))
>  			return -EFAULT;
>  		if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) {
>  			*r_insn = insn;
> @@ -390,7 +315,6 @@ static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
>  		return 0;
>  	}
>  }
> -#endif
>  
>  union reg_data {
>  	u8 data_bytes[8];
> @@ -409,7 +333,7 @@ int handle_misaligned_load(struct pt_regs *regs)
>  	unsigned long epc = regs->epc;
>  	unsigned long insn;
>  	unsigned long addr = regs->badaddr;
> -	int i, fp = 0, shift = 0, len = 0;
> +	int fp = 0, shift = 0, len = 0;
>  
>  	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
>  
> @@ -490,9 +414,11 @@ int handle_misaligned_load(struct pt_regs *regs)
>  		return -EOPNOTSUPP;
>  
>  	val.data_u64 = 0;
> -	for (i = 0; i < len; i++) {
> -		if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i]))
> +	if (user_mode(regs)) {
> +		if (raw_copy_from_user(&val, (u8 __user *)addr, len))
>  			return -1;
> +	} else {
> +		memcpy(&val, (u8 *)addr, len);
>  	}
>  
>  	if (!fp)
> @@ -513,7 +439,7 @@ int handle_misaligned_store(struct pt_regs *regs)
>  	unsigned long epc = regs->epc;
>  	unsigned long insn;
>  	unsigned long addr = regs->badaddr;
> -	int i, len = 0, fp = 0;
> +	int len = 0, fp = 0;
>  
>  	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
>  
> @@ -586,9 +512,11 @@ int handle_misaligned_store(struct pt_regs *regs)
>  	if (!IS_ENABLED(CONFIG_FPU) && fp)
>  		return -EOPNOTSUPP;
>  
> -	for (i = 0; i < len; i++) {
> -		if (store_u8(regs, (void *)(addr + i), val.data_bytes[i]))
> +	if (user_mode(regs)) {
> +		if (raw_copy_to_user((u8 __user *)addr, &val, len))
>  			return -1;
> +	} else {
> +		memcpy((u8 *)addr, &val, len);
>  	}
>  
>  	regs->epc = epc + INSN_LEN(insn);
> -- 
> 2.43.0
> 

Thank you for posting the testing instructions! I tested it and it
worked as expected.

Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Conor Dooley April 24, 2024, 10:21 a.m. UTC | #2
On Tue, Feb 06, 2024 at 04:40:59PM +0100, Clément Léger wrote:
> While reworking the code to fix sparse errors, it appeared that the
> RISCV_M_MODE specific code could actually be removed in favor of the
> one for normal mode. Even though RISCV_M_MODE can do direct user memory
> access, using the uaccess helpers works as well. Since there is no
> longer any need for specific accessors (load_u8()/store_u8()), we can
> directly use memcpy()/copy_{to/from}_user() and get rid of the copy
> loop entirely. __read_insn() is also fixed to take an unsigned long
> instead of a pointer that was cast into the __user address space; the
> insn_addr parameter is now cast from unsigned long to the correct
> address space directly.
> 
> Signed-off-by: Clément Léger <cleger@rivosinc.com>

Removing some m-mode only code always feels like a win to me, given how
little testing and attention it usually gets.
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>

Cheers,
Conor.

Patch

diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
index 8ded225e8c5b..fb202dd18fe5 100644
--- a/arch/riscv/kernel/traps_misaligned.c
+++ b/arch/riscv/kernel/traps_misaligned.c
@@ -264,86 +264,14 @@  static unsigned long get_f32_rs(unsigned long insn, u8 fp_reg_offset,
 #define GET_F32_RS2C(insn, regs) (get_f32_rs(insn, 2, regs))
 #define GET_F32_RS2S(insn, regs) (get_f32_rs(RVC_RS2S(insn), 0, regs))
 
-#ifdef CONFIG_RISCV_M_MODE
-static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
-{
-	u8 val;
-
-	asm volatile("lbu %0, %1" : "=&r" (val) : "m" (*addr));
-	*r_val = val;
-
-	return 0;
-}
-
-static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val)
-{
-	asm volatile ("sb %0, %1\n" : : "r" (val), "m" (*addr));
-
-	return 0;
-}
-
-static inline int get_insn(struct pt_regs *regs, ulong mepc, ulong *r_insn)
-{
-	register ulong __mepc asm ("a2") = mepc;
-	ulong val, rvc_mask = 3, tmp;
-
-	asm ("and %[tmp], %[addr], 2\n"
-		"bnez %[tmp], 1f\n"
-#if defined(CONFIG_64BIT)
-		__stringify(LWU) " %[insn], (%[addr])\n"
-#else
-		__stringify(LW) " %[insn], (%[addr])\n"
-#endif
-		"and %[tmp], %[insn], %[rvc_mask]\n"
-		"beq %[tmp], %[rvc_mask], 2f\n"
-		"sll %[insn], %[insn], %[xlen_minus_16]\n"
-		"srl %[insn], %[insn], %[xlen_minus_16]\n"
-		"j 2f\n"
-		"1:\n"
-		"lhu %[insn], (%[addr])\n"
-		"and %[tmp], %[insn], %[rvc_mask]\n"
-		"bne %[tmp], %[rvc_mask], 2f\n"
-		"lhu %[tmp], 2(%[addr])\n"
-		"sll %[tmp], %[tmp], 16\n"
-		"add %[insn], %[insn], %[tmp]\n"
-		"2:"
-	: [insn] "=&r" (val), [tmp] "=&r" (tmp)
-	: [addr] "r" (__mepc), [rvc_mask] "r" (rvc_mask),
-	  [xlen_minus_16] "i" (XLEN_MINUS_16));
-
-	*r_insn = val;
-
-	return 0;
-}
-#else
-static inline int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
-{
-	if (user_mode(regs)) {
-		return __get_user(*r_val, (u8 __user *)addr);
-	} else {
-		*r_val = *addr;
-		return 0;
-	}
-}
-
-static inline int store_u8(struct pt_regs *regs, u8 *addr, u8 val)
-{
-	if (user_mode(regs)) {
-		return __put_user(val, (u8 __user *)addr);
-	} else {
-		*addr = val;
-		return 0;
-	}
-}
-
-#define __read_insn(regs, insn, insn_addr)		\
+#define __read_insn(regs, insn, insn_addr, type)	\
 ({							\
 	int __ret;					\
 							\
 	if (user_mode(regs)) {				\
-		__ret = __get_user(insn, insn_addr);	\
+		__ret = __get_user(insn, (type __user *) insn_addr); \
 	} else {					\
-		insn = *(__force u16 *)insn_addr;	\
+		insn = *(type *)insn_addr;		\
 		__ret = 0;				\
 	}						\
 							\
@@ -356,9 +284,8 @@  static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
 
 	if (epc & 0x2) {
 		ulong tmp = 0;
-		u16 __user *insn_addr = (u16 __user *)epc;
 
-		if (__read_insn(regs, insn, insn_addr))
+		if (__read_insn(regs, insn, epc, u16))
 			return -EFAULT;
 		/* __get_user() uses regular "lw" which sign extend the loaded
 		 * value make sure to clear higher order bits in case we "or" it
@@ -369,16 +296,14 @@  static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
 			*r_insn = insn;
 			return 0;
 		}
-		insn_addr++;
-		if (__read_insn(regs, tmp, insn_addr))
+		epc += sizeof(u16);
+		if (__read_insn(regs, tmp, epc, u16))
 			return -EFAULT;
 		*r_insn = (tmp << 16) | insn;
 
 		return 0;
 	} else {
-		u32 __user *insn_addr = (u32 __user *)epc;
-
-		if (__read_insn(regs, insn, insn_addr))
+		if (__read_insn(regs, insn, epc, u32))
 			return -EFAULT;
 		if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) {
 			*r_insn = insn;
@@ -390,7 +315,6 @@  static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
 		return 0;
 	}
 }
-#endif
 
 union reg_data {
 	u8 data_bytes[8];
@@ -409,7 +333,7 @@  int handle_misaligned_load(struct pt_regs *regs)
 	unsigned long epc = regs->epc;
 	unsigned long insn;
 	unsigned long addr = regs->badaddr;
-	int i, fp = 0, shift = 0, len = 0;
+	int fp = 0, shift = 0, len = 0;
 
 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
 
@@ -490,9 +414,11 @@  int handle_misaligned_load(struct pt_regs *regs)
 		return -EOPNOTSUPP;
 
 	val.data_u64 = 0;
-	for (i = 0; i < len; i++) {
-		if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i]))
+	if (user_mode(regs)) {
+		if (raw_copy_from_user(&val, (u8 __user *)addr, len))
 			return -1;
+	} else {
+		memcpy(&val, (u8 *)addr, len);
 	}
 
 	if (!fp)
@@ -513,7 +439,7 @@  int handle_misaligned_store(struct pt_regs *regs)
 	unsigned long epc = regs->epc;
 	unsigned long insn;
 	unsigned long addr = regs->badaddr;
-	int i, len = 0, fp = 0;
+	int len = 0, fp = 0;
 
 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
 
@@ -586,9 +512,11 @@  int handle_misaligned_store(struct pt_regs *regs)
 	if (!IS_ENABLED(CONFIG_FPU) && fp)
 		return -EOPNOTSUPP;
 
-	for (i = 0; i < len; i++) {
-		if (store_u8(regs, (void *)(addr + i), val.data_bytes[i]))
+	if (user_mode(regs)) {
+		if (raw_copy_to_user((u8 __user *)addr, &val, len))
 			return -1;
+	} else {
+		memcpy((u8 *)addr, &val, len);
 	}
 
 	regs->epc = epc + INSN_LEN(insn);