Message ID | 20210726141141.2839385-9-arnd@kernel.org |
---|---|
State | New |
Series | ARM: remove set_fs callers and implementation |
On Mon, Jul 26, 2021 at 04:11:39PM +0200, Arnd Bergmann wrote:
> From: Arnd Bergmann <arnd@arndb.de>
>
> These mimic the behavior of get_user and put_user, except
> for domain switching, address limit checking and handling
> of mismatched sizes, none of which are relevant here.
>
> To work with pre-Armv6 kernels, this has to avoid TUSER()
> inside of the new macros, the new approach passes the "t"
> string along with the opcode, which is a bit uglier but
> avoids duplicating more code.
>
> As there is no __get_user_asm_dword(), I work around it
> by copying 32 bit at a time, which is possible because
> the output size is known.
>
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>

I've just been bisecting some regressions running the kgdbts tests on
arm and this patch came up. It looks like once this patch is applied,
copy_from_kernel_nofault() starts faulting when it is called from kgdb.

I've put an example stack trace at the bottom of this mail and the most
simplified reproduction I currently have is:

~~~
make multi_v7_defconfig
../scripts/config --enable KGDB --enable KGDB_TESTS
make olddefconfig
make -j `nproc`
qemu-system-arm -M virt -m 1G -nographic \
    -kernel arch/arm/boot/zImage -initrd rootfs.cpio.gz
# Boot and login
echo V1 > /sys/module/kgdbts/parameters/kgdbts
~~~

I suspect this will reproduce on any arm system with CONFIG_KGDB and
CONFIG_KGDB_TESTS enabled simply by running that last echo command...
but I have only tested on QEMU for now.

Daniel.

Stack trace:

~~~
# echo kgdbts=V1F1000 > /sys/module/kgdbts/parameters/kgdbts
[ 34.995507] KGDB: Registered I/O driver kgdbts
[ 35.038102] kgdbts:RUN plant and detach test

Entering kdb (current=0xd4264380, pid 134) on processor 0 due to Keyboard Entry
[0]kdb>
[ 35.056005] kgdbts:RUN sw breakpoint test
[ 35.062309] kgdbts:RUN bad memory access test
[ 35.063619] 8<--- cut here ---
[ 35.064022] Unhandled fault: page domain fault (0x01b) at 0x00000000
[ 35.064212] pgd = (ptrval)
[ 35.064459] [00000000] *pgd=942dc835, *pte=00000000, *ppte=00000000
[ 35.065071] Internal error: : 1b [#1] SMP ARM
[ 35.065381] KGDB: re-enter exception: ALL breakpoints killed
[ 35.065850] ---[ end trace 909d8c43057666be ]---
[ 35.066088] 8<--- cut here ---
[ 35.066189] Unhandled fault: page domain fault (0x01b) at 0x00000000
[ 35.066332] pgd = (ptrval)
[ 35.066406] [00000000] *pgd=942dc835, *pte=00000000, *ppte=00000000
[ 35.066597] Internal error: : 1b [#2] SMP ARM
[ 35.066906] CPU: 0 PID: 134 Comm: sh Tainted: G D 5.14.0-rc1-00013-g2df4c9a741a0 #60
[ 35.067152] Hardware name: ARM-Versatile Express
[ 35.067432] [<c0311bdc>] (unwind_backtrace) from [<c030bdc0>] (show_stack+0x10/0x14)
[ 35.067880] [<c030bdc0>] (show_stack) from [<c114b9c8>] (dump_stack_lvl+0x58/0x70)
[ 35.068054] [<c114b9c8>] (dump_stack_lvl) from [<c0430cdc>] (kgdb_reenter_check+0x104/0x150)
[ 35.068213] [<c0430cdc>] (kgdb_reenter_check) from [<c0430dcc>] (kgdb_handle_exception+0xa4/0x114)
[ 35.068395] [<c0430dcc>] (kgdb_handle_exception) from [<c0311268>] (kgdb_notify+0x30/0x74)
[ 35.068563] [<c0311268>] (kgdb_notify) from [<c037422c>] (atomic_notifier_call_chain+0xac/0x194)
[ 35.068745] [<c037422c>] (atomic_notifier_call_chain) from [<c0374370>] (notify_die+0x5c/0xbc)
[ 35.068933] [<c0374370>] (notify_die) from [<c030bf04>] (die+0x140/0x544)
[ 35.069079] [<c030bf04>] (die) from [<c03164d4>] (do_DataAbort+0xb8/0xbc)
[ 35.069220] [<c03164d4>] (do_DataAbort) from [<c0300afc>] (__dabt_svc+0x5c/0xa0)
[ 35.069434] Exception stack(0xd4249c10 to 0xd4249c58)
[ 35.069616] 9c00: ???????? ???????? ???????? ????????
[ 35.069776] 9c20: ???????? ???????? ???????? ???????? ???????? ???????? ???????? ????????
[ 35.069943] 9c40: ???????? ???????? ???????? ???????? ???????? ????????
[ 35.070107] [<c0300afc>] (__dabt_svc) from [<c049c8c4>] (copy_from_kernel_nofault+0x114/0x13c)
[ 35.070291] [<c049c8c4>] (copy_from_kernel_nofault) from [<c0431688>] (kgdb_mem2hex+0x1c/0x88)
[ 35.070463] [<c0431688>] (kgdb_mem2hex) from [<c04322b0>] (gdb_serial_stub+0x8c4/0x1088)
[ 35.070640] [<c04322b0>] (gdb_serial_stub) from [<c04302e8>] (kgdb_cpu_enter+0x4f4/0x988)
[ 35.070796] [<c04302e8>] (kgdb_cpu_enter) from [<c0430e08>] (kgdb_handle_exception+0xe0/0x114)
[ 35.070982] [<c0430e08>] (kgdb_handle_exception) from [<c0311210>] (kgdb_compiled_brk_fn+0x24/0x2c)
[ 35.071166] [<c0311210>] (kgdb_compiled_brk_fn) from [<c030c40c>] (do_undefinstr+0x104/0x230)
[ 35.071342] [<c030c40c>] (do_undefinstr) from [<c0300c6c>] (__und_svc_finish+0x0/0x54)
[ 35.071502] Exception stack(0xd4249dc8 to 0xd4249e10)
[ 35.071614] 9dc0: ???????? ???????? ???????? ???????? ???????? ????????
[ 35.071778] 9de0: ???????? ???????? ???????? ???????? ???????? ???????? ???????? ????????
[ 35.071944] 9e00: ???????? ???????? ???????? ????????
[ 35.072054] [<c0300c6c>] (__und_svc_finish) from [<c042fd20>] (kgdb_breakpoint+0x30/0x58)
[ 35.072211] [<c042fd20>] (kgdb_breakpoint) from [<c0b14b08>] (configure_kgdbts+0x228/0x68c)
[ 35.072395] [<c0b14b08>] (configure_kgdbts) from [<c036fdcc>] (param_attr_store+0x60/0xb8)
[ 35.072560] [<c036fdcc>] (param_attr_store) from [<c05bcf14>] (kernfs_fop_write_iter+0x110/0x1d4)
[ 35.072745] [<c05bcf14>] (kernfs_fop_write_iter) from [<c050f074>] (vfs_write+0x350/0x508)
[ 35.072920] [<c050f074>] (vfs_write) from [<c050f370>] (ksys_write+0x64/0xdc)
[ 35.073075] [<c050f370>] (ksys_write) from [<c03000c0>] (ret_fast_syscall+0x0/0x2c)
[ 35.073259] Exception stack(0xd4249fa8 to 0xd4249ff0)
[ 35.073372] 9fa0: ???????? ???????? ???????? ???????? ???????? ????????
[ 35.073527] 9fc0: ???????? ???????? ???????? ???????? ???????? ???????? ???????? ????????
[ 35.073679] 9fe0: ???????? ???????? ???????? ????????
[ 35.073960] Kernel panic - not syncing: Recursive entry to debugger
[ 36.286118] SMP: failed to stop secondary CPUs
[ 36.286568] ---[ end Kernel panic - not syncing: Recursive entry to debugger ]---
~~~
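As the trace shows, the fault is taken inside copy_from_kernel_nofault() while the stub services a gdb memory read. For context, here is a minimal sketch of that call path, loosely based on kgdb_mem2hex() in kernel/debug/gdbstub.c (not the verbatim kernel source): the stub must never fault, so every memory read is funnelled through the _nofault() accessor.

~~~
/*
 * Sketch of kgdb's memory-read path, loosely based on
 * kernel/debug/gdbstub.c. gdb asks the stub to dump memory; the stub
 * stages the bytes with the accessor that is supposed to never fault,
 * then converts them to ASCII hex for the wire protocol.
 */
char *kgdb_mem2hex(char *mem, char *buf, int count)
{
	char *tmp;
	int err;

	/* Stage the raw bytes in the second half of the output buffer. */
	tmp = buf + count;

	/* This is where the arm domain fault above is raised. */
	err = copy_from_kernel_nofault(tmp, mem, count);
	if (err)
		return NULL;	/* reported to gdb as an access error */

	while (count > 0) {
		buf = hex_byte_pack(buf, *tmp);
		tmp++;
		count--;
	}
	*buf = 0;

	return buf;
}
~~~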
On Wed, Jan 12, 2022 at 06:08:17PM +0000, Russell King (Oracle) wrote:
> On Wed, Jan 12, 2022 at 05:29:03PM +0000, Daniel Thompson wrote:
> > On Mon, Jul 26, 2021 at 04:11:39PM +0200, Arnd Bergmann wrote:
> > > From: Arnd Bergmann <arnd@arndb.de>
> > >
> > > These mimic the behavior of get_user and put_user, except
> > > for domain switching, address limit checking and handling
> > > of mismatched sizes, none of which are relevant here.
> > >
> > > To work with pre-Armv6 kernels, this has to avoid TUSER()
> > > inside of the new macros, the new approach passes the "t"
> > > string along with the opcode, which is a bit uglier but
> > > avoids duplicating more code.
> > >
> > > As there is no __get_user_asm_dword(), I work around it
> > > by copying 32 bit at a time, which is possible because
> > > the output size is known.
> > >
> > > Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> >
> > I've just been bisecting some regressions running the kgdbts tests on
> > arm and this patch came up.
>
> So the software PAN code is working :)

Interesting. I noticed it was odd that kgdbts works just fine if
launched from the kernel command line. I guess that runs before PAN is
activated. Neat.

> The kernel attempted to access an address that is in the userspace
> domain (NULL pointer) and took an exception.
>
> I suppose we should handle a domain fault more gracefully - what are
> the required semantics if the kernel attempts a userspace access
> using one of the _nofault() accessors?

I think the best answer might well be that, if the arch provides
implementations of hooks such as copy_from_kernel_nofault_allowed()
then the kernel should never attempt a userspace access using the
_nofault() accessors. That means they can do whatever they like!

In other words, something like the patch below looks like a promising
approach.

Daniel.


From f66a63b504ff582f261a506c54ceab8c0e77a98c Mon Sep 17 00:00:00 2001
From: Daniel Thompson <daniel.thompson@linaro.org>
Date: Thu, 13 Jan 2022 09:34:45 +0000
Subject: [PATCH] arm: mm: Implement copy_from_kernel_nofault_allowed()

Currently copy_from_kernel_nofault() can actually fault (due to software
PAN) if we attempt userspace access. In any case, the documented
behaviour for this function is to return -ERANGE if we attempt an access
outside of kernel space.

Implementing copy_from_kernel_nofault_allowed() solves both these
problems.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
---
 arch/arm/mm/Makefile  | 2 +-
 arch/arm/mm/maccess.c | 9 +++++++++
 2 files changed, 10 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm/mm/maccess.c

diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 3510503bc5e6..d1c5f4f256de 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -3,7 +3,7 @@
 # Makefile for the linux arm-specific parts of the memory manager.
 #
 
-obj-y				:= extable.o fault.o init.o iomap.o
+obj-y				:= extable.o fault.o init.o iomap.o maccess.o
 obj-y				+= dma-mapping$(MMUEXT).o
 obj-$(CONFIG_MMU)		+= fault-armv.o flush.o idmap.o ioremap.o \
 				   mmap.o pgd.o mmu.o pageattr.o
diff --git a/arch/arm/mm/maccess.c b/arch/arm/mm/maccess.c
new file mode 100644
index 000000000000..0251062cb40d
--- /dev/null
+++ b/arch/arm/mm/maccess.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/uaccess.h>
+#include <linux/kernel.h>
+
+bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
+{
+	return (unsigned long)unsafe_src >= TASK_SIZE;
+}
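The reason this hook is sufficient is that the generic accessor consults it before touching the address at all. An abridged sketch of the generic helper in mm/maccess.c follows (the real version copies in u64/u32/u16/u8 chunks and carries instrumentation; a byte loop is shown here for brevity). With the arm hook above returning false for userspace addresses, kgdb gets a clean -ERANGE instead of a domain fault:

~~~
/* Abridged sketch of mm/maccess.c, not the verbatim kernel source. */
long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
{
	/* The arch veto: refuse the access before touching memory. */
	if (!copy_from_kernel_nofault_allowed(src, size))
		return -ERANGE;

	pagefault_disable();
	/* Real code selects larger chunk sizes; u8 keeps this short. */
	while (size--) {
		__get_kernel_nofault(dst, src, u8, Efault);
		dst++;
		src++;
	}
	pagefault_enable();
	return 0;

Efault:
	pagefault_enable();
	return -EFAULT;
}
~~~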
On Thu, Jan 13, 2022 at 10:47 AM Daniel Thompson
<daniel.thompson@linaro.org> wrote:
> On Wed, Jan 12, 2022 at 06:08:17PM +0000, Russell King (Oracle) wrote:
> >
> > The kernel attempted to access an address that is in the userspace
> > domain (NULL pointer) and took an exception.
> >
> > I suppose we should handle a domain fault more gracefully - what are
> > the required semantics if the kernel attempts a userspace access
> > using one of the _nofault() accessors?
>
> I think the best answer might well be that, if the arch provides
> implementations of hooks such as copy_from_kernel_nofault_allowed()
> then the kernel should never attempt a userspace access using the
> _nofault() accessors. That means they can do whatever they like!
>
> In other words something like the patch below looks like a promising
> approach.

Right, it seems this is the same as on x86.

> From f66a63b504ff582f261a506c54ceab8c0e77a98c Mon Sep 17 00:00:00 2001
> From: Daniel Thompson <daniel.thompson@linaro.org>
> Date: Thu, 13 Jan 2022 09:34:45 +0000
> Subject: [PATCH] arm: mm: Implement copy_from_kernel_nofault_allowed()
>
> Currently copy_from_kernel_nofault() can actually fault (due to software
> PAN) if we attempt userspace access. In any case, the documented
> behaviour for this function is to return -ERANGE if we attempt an access
> outside of kernel space.
>
> Implementing copy_from_kernel_nofault_allowed() solves both these
> problems.
>
> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>

Reviewed-by: Arnd Bergmann <arnd@arndb.de>
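For comparison, a rough sketch of the x86-64 rule (simplified from arch/x86/mm/maccess.c; the real version also rejects non-canonical addresses, which is omitted here, while 32-bit x86 uses a TASK_SIZE comparison much like the arm patch above):

~~~
/* Simplified sketch of the x86-64 check, not the verbatim source. */
bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
{
	unsigned long vaddr = (unsigned long)unsafe_src;

	/* The user range, including its guard page, is off limits. */
	return vaddr >= TASK_SIZE_MAX + PAGE_SIZE;
}
~~~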
On Thu, Jan 13, 2022 at 12:14:50PM +0100, Arnd Bergmann wrote:
> On Thu, Jan 13, 2022 at 10:47 AM Daniel Thompson
> <daniel.thompson@linaro.org> wrote:
> > On Wed, Jan 12, 2022 at 06:08:17PM +0000, Russell King (Oracle) wrote:
> > >
> > > The kernel attempted to access an address that is in the userspace
> > > domain (NULL pointer) and took an exception.
> > >
> > > I suppose we should handle a domain fault more gracefully - what are
> > > the required semantics if the kernel attempts a userspace access
> > > using one of the _nofault() accessors?
> >
> > I think the best answer might well be that, if the arch provides
> > implementations of hooks such as copy_from_kernel_nofault_allowed()
> > then the kernel should never attempt a userspace access using the
> > _nofault() accessors. That means they can do whatever they like!
> >
> > In other words something like the patch below looks like a promising
> > approach.
>
> Right, it seems this is the same as on x86.

Hmmm... looking a bit deeper into copy_from_kernel_nofault(), there is
an odd asymmetry with copy_to_kernel_nofault(). Basically, there is a
copy_from_kernel_nofault_allowed() but no corresponding
copy_to_kernel_nofault_allowed(), which means we cannot defend memory
pokes using a helper function.

I checked the behaviour of copy_to_kernel_nofault() on arm, arm64, mips,
powerpc, riscv and x86 kernels (which is pretty much everything where I
know how to fire up qemu). All except arm gracefully handle an attempt
to write to userspace (well, to NULL actually) with
copy_to_kernel_nofault(), so I think there are still a few more changes
needed to fully fix this.

It looks like we would need a slightly more assertive change: either
adding a copy_to_kernel_nofault_allowed() or modifying the arm dabt
handlers to avoid faults on userspace access. Any views on which is
better? (A sketch of the first option follows below.)

Daniel.

> > From f66a63b504ff582f261a506c54ceab8c0e77a98c Mon Sep 17 00:00:00 2001
> > From: Daniel Thompson <daniel.thompson@linaro.org>
> > Date: Thu, 13 Jan 2022 09:34:45 +0000
> > Subject: [PATCH] arm: mm: Implement copy_from_kernel_nofault_allowed()
> >
> > Currently copy_from_kernel_nofault() can actually fault (due to software
> > PAN) if we attempt userspace access. In any case, the documented
> > behaviour for this function is to return -ERANGE if we attempt an access
> > outside of kernel space.
> >
> > Implementing copy_from_kernel_nofault_allowed() solves both these
> > problems.
> >
> > Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
>
> Reviewed-by: Arnd Bergmann <arnd@arndb.de>
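To make the first option concrete, here is a hypothetical sketch of the symmetric hook. Note that copy_to_kernel_nofault_allowed() does not exist upstream at this point; both the weak default and the call site below are assumptions that simply mirror the existing copy_from_kernel_nofault_allowed() pattern:

~~~
/*
 * HYPOTHETICAL: copy_to_kernel_nofault_allowed() is not in the tree;
 * this mirrors the existing _from_ hook to illustrate the idea.
 */

/* mm/maccess.c: weak default, all destinations allowed unless overridden. */
bool __weak copy_to_kernel_nofault_allowed(const void *unsafe_dst, size_t size)
{
	return true;
}

/* mm/maccess.c: consult the hook before attempting the write. */
long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
{
	if (!copy_to_kernel_nofault_allowed(dst, size))
		return -ERANGE;
	/* ... existing pagefault_disable()/__put_kernel_nofault() loop ... */
	return 0;
}

/* arch/arm/mm/maccess.c: never poke userspace, matching the _from_ hook. */
bool copy_to_kernel_nofault_allowed(const void *unsafe_dst, size_t size)
{
	return (unsigned long)unsafe_dst >= TASK_SIZE;
}
~~~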
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index a13d90206472..4f60638755c4 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -308,11 +308,11 @@ static inline void set_fs(mm_segment_t fs)
 #define __get_user(x, ptr)					\
 ({								\
 	long __gu_err = 0;					\
-	__get_user_err((x), (ptr), __gu_err);			\
+	__get_user_err((x), (ptr), __gu_err, TUSER());		\
 	__gu_err;						\
 })
 
-#define __get_user_err(x, ptr, err)				\
+#define __get_user_err(x, ptr, err, __t)			\
 do {								\
 	unsigned long __gu_addr = (unsigned long)(ptr);		\
 	unsigned long __gu_val;					\
@@ -321,18 +321,19 @@ do {								\
 	might_fault();						\
 	__ua_flags = uaccess_save_and_enable();			\
 	switch (sizeof(*(ptr))) {				\
-	case 1:	__get_user_asm_byte(__gu_val, __gu_addr, err); break;	\
-	case 2:	__get_user_asm_half(__gu_val, __gu_addr, err); break;	\
-	case 4:	__get_user_asm_word(__gu_val, __gu_addr, err); break;	\
+	case 1:	__get_user_asm_byte(__gu_val, __gu_addr, err, __t); break;	\
+	case 2:	__get_user_asm_half(__gu_val, __gu_addr, err, __t); break;	\
+	case 4:	__get_user_asm_word(__gu_val, __gu_addr, err, __t); break;	\
 	default: (__gu_val) = __get_user_bad();			\
 	}							\
 	uaccess_restore(__ua_flags);				\
 	(x) = (__typeof__(*(ptr)))__gu_val;			\
 } while (0)
+#endif
 
 #define __get_user_asm(x, addr, err, instr)			\
 	__asm__ __volatile__(					\
-	"1:	" TUSER(instr) " %1, [%2], #0\n"		\
+	"1:	" instr " %1, [%2], #0\n"			\
 	"2:\n"							\
 	"	.pushsection .text.fixup,\"ax\"\n"		\
 	"	.align	2\n"					\
@@ -348,40 +349,38 @@ do {								\
 	: "r" (addr), "i" (-EFAULT)				\
 	: "cc")
 
-#define __get_user_asm_byte(x, addr, err)			\
-	__get_user_asm(x, addr, err, ldrb)
+#define __get_user_asm_byte(x, addr, err, __t)			\
+	__get_user_asm(x, addr, err, "ldrb" __t)
 
 #if __LINUX_ARM_ARCH__ >= 6
 
-#define __get_user_asm_half(x, addr, err)			\
-	__get_user_asm(x, addr, err, ldrh)
+#define __get_user_asm_half(x, addr, err, __t)			\
+	__get_user_asm(x, addr, err, "ldrh" __t)
 
 #else
 
 #ifndef __ARMEB__
-#define __get_user_asm_half(x, __gu_addr, err)			\
+#define __get_user_asm_half(x, __gu_addr, err, __t)		\
 ({								\
 	unsigned long __b1, __b2;				\
-	__get_user_asm_byte(__b1, __gu_addr, err);		\
-	__get_user_asm_byte(__b2, __gu_addr + 1, err);		\
+	__get_user_asm_byte(__b1, __gu_addr, err, __t);		\
+	__get_user_asm_byte(__b2, __gu_addr + 1, err, __t);	\
 	(x) = __b1 | (__b2 << 8);				\
 })
 #else
-#define __get_user_asm_half(x, __gu_addr, err)			\
+#define __get_user_asm_half(x, __gu_addr, err, __t)		\
 ({								\
 	unsigned long __b1, __b2;				\
-	__get_user_asm_byte(__b1, __gu_addr, err);		\
-	__get_user_asm_byte(__b2, __gu_addr + 1, err);		\
+	__get_user_asm_byte(__b1, __gu_addr, err, __t);		\
+	__get_user_asm_byte(__b2, __gu_addr + 1, err, __t);	\
 	(x) = (__b1 << 8) | __b2;				\
 })
 #endif
 
 #endif /* __LINUX_ARM_ARCH__ >= 6 */
 
-#define __get_user_asm_word(x, addr, err)			\
-	__get_user_asm(x, addr, err, ldr)
-#endif
-
+#define __get_user_asm_word(x, addr, err, __t)			\
+	__get_user_asm(x, addr, err, "ldr" __t)
 
 #define __put_user_switch(x, ptr, __err, __fn)			\
 	do {							\
@@ -425,7 +424,7 @@ do {								\
 #define __put_user_nocheck(x, __pu_ptr, __err, __size)		\
 	do {							\
 		unsigned long __pu_addr = (unsigned long)__pu_ptr;	\
-		__put_user_nocheck_##__size(x, __pu_addr, __err);	\
+		__put_user_nocheck_##__size(x, __pu_addr, __err, TUSER());\
 	} while (0)
 
 #define __put_user_nocheck_1 __put_user_asm_byte
@@ -433,9 +432,11 @@ do {								\
 #define __put_user_nocheck_4 __put_user_asm_word
 #define __put_user_nocheck_8 __put_user_asm_dword
 
+#endif /* !CONFIG_CPU_SPECTRE */
+
 #define __put_user_asm(x, __pu_addr, err, instr)		\
 	__asm__ __volatile__(					\
-	"1:	" TUSER(instr) " %1, [%2], #0\n"		\
+	"1:	" instr " %1, [%2], #0\n"			\
 	"2:\n"							\
 	"	.pushsection .text.fixup,\"ax\"\n"		\
 	"	.align	2\n"					\
@@ -450,36 +451,36 @@ do {								\
 	: "r" (x), "r" (__pu_addr), "i" (-EFAULT)		\
 	: "cc")
 
-#define __put_user_asm_byte(x, __pu_addr, err)			\
-	__put_user_asm(x, __pu_addr, err, strb)
+#define __put_user_asm_byte(x, __pu_addr, err, __t)		\
+	__put_user_asm(x, __pu_addr, err, "strb" __t)
 
 #if __LINUX_ARM_ARCH__ >= 6
 
-#define __put_user_asm_half(x, __pu_addr, err)			\
-	__put_user_asm(x, __pu_addr, err, strh)
+#define __put_user_asm_half(x, __pu_addr, err, __t)		\
+	__put_user_asm(x, __pu_addr, err, "strh" __t)
 
 #else
 
 #ifndef __ARMEB__
-#define __put_user_asm_half(x, __pu_addr, err)			\
+#define __put_user_asm_half(x, __pu_addr, err, __t)		\
 ({								\
 	unsigned long __temp = (__force unsigned long)(x);	\
-	__put_user_asm_byte(__temp, __pu_addr, err);		\
-	__put_user_asm_byte(__temp >> 8, __pu_addr + 1, err);	\
+	__put_user_asm_byte(__temp, __pu_addr, err, __t);	\
+	__put_user_asm_byte(__temp >> 8, __pu_addr + 1, err, __t);\
 })
 #else
-#define __put_user_asm_half(x, __pu_addr, err)			\
+#define __put_user_asm_half(x, __pu_addr, err, __t)		\
 ({								\
 	unsigned long __temp = (__force unsigned long)(x);	\
-	__put_user_asm_byte(__temp >> 8, __pu_addr, err);	\
-	__put_user_asm_byte(__temp, __pu_addr + 1, err);	\
+	__put_user_asm_byte(__temp >> 8, __pu_addr, err, __t);	\
+	__put_user_asm_byte(__temp, __pu_addr + 1, err, __t);	\
})
 #endif
 
 #endif /* __LINUX_ARM_ARCH__ >= 6 */
 
-#define __put_user_asm_word(x, __pu_addr, err)			\
-	__put_user_asm(x, __pu_addr, err, str)
+#define __put_user_asm_word(x, __pu_addr, err, __t)		\
+	__put_user_asm(x, __pu_addr, err, "str" __t)
 
 #ifndef __ARMEB__
 #define	__reg_oper0	"%R2"
@@ -489,12 +490,12 @@ do {								\
 #define	__reg_oper1	"%R2"
 #endif
 
-#define __put_user_asm_dword(x, __pu_addr, err)			\
+#define __put_user_asm_dword(x, __pu_addr, err, __t)		\
 	__asm__ __volatile__(					\
- ARM(	"1:	" TUSER(str) "	" __reg_oper1 ", [%1], #4\n"	) \
- ARM(	"2:	" TUSER(str) "	" __reg_oper0 ", [%1]\n"	) \
- THUMB(	"1:	" TUSER(str) "	" __reg_oper1 ", [%1]\n"	) \
- THUMB(	"2:	" TUSER(str) "	" __reg_oper0 ", [%1, #4]\n"	) \
+ ARM(	"1:	str" __t "	" __reg_oper1 ", [%1], #4\n"	) \
+ ARM(	"2:	str" __t "	" __reg_oper0 ", [%1]\n"	) \
+ THUMB(	"1:	str" __t "	" __reg_oper1 ", [%1]\n"	) \
+ THUMB(	"2:	str" __t "	" __reg_oper0 ", [%1, #4]\n"	) \
 	"3:\n"							\
 	"	.pushsection .text.fixup,\"ax\"\n"		\
 	"	.align	2\n"					\
@@ -510,7 +511,49 @@ do {								\
 	: "r" (x), "i" (-EFAULT)				\
 	: "cc")
 
-#endif /* !CONFIG_CPU_SPECTRE */
+#define HAVE_GET_KERNEL_NOFAULT
+
+#define __get_kernel_nofault(dst, src, type, err_label)		\
+do {								\
+	const type *__pk_ptr = (src);				\
+	unsigned long __src = (unsigned long)(__pk_ptr);	\
+	type __val;						\
+	int __err = 0;						\
+	switch (sizeof(type)) {					\
+	case 1:	__get_user_asm_byte(__val, __src, __err, ""); break;	\
+	case 2: __get_user_asm_half(__val, __src, __err, ""); break;	\
+	case 4: __get_user_asm_word(__val, __src, __err, ""); break;	\
+	case 8: {						\
+		u32 *__v32 = (u32*)&__val;			\
+		__get_user_asm_word(__v32[0], __src, __err, "");\
+		if (__err)					\
+			break;					\
+		__get_user_asm_word(__v32[1], __src+4, __err, "");\
+		break;						\
+	}							\
+	default: __err = __get_user_bad(); break;		\
+	}							\
+	*(type *)(dst) = __val;					\
+	if (__err)						\
+		goto err_label;					\
+} while (0)
+
+#define __put_kernel_nofault(dst, src, type, err_label)		\
+do {								\
+	const type *__pk_ptr = (dst);				\
+	unsigned long __dst = (unsigned long)__pk_ptr;		\
+	int __err = 0;						\
+	type __val = *(type *)src;				\
+	switch (sizeof(type)) {					\
+	case 1: __put_user_asm_byte(__val, __dst, __err, ""); break;	\
+	case 2: __put_user_asm_half(__val, __dst, __err, ""); break;	\
+	case 4: __put_user_asm_word(__val, __dst, __err, ""); break;	\
+	case 8: __put_user_asm_dword(__val, __dst, __err, ""); break;	\
+	default: __err = __put_user_bad(); break;		\
+	}							\
+	if (__err)						\
+		goto err_label;					\
+} while (0)
 
 #ifdef CONFIG_MMU
 extern unsigned long __must_check