Message ID: 20191121184805.414758-2-pasha.tatashin@soleen.com (mailing list archive)
State: New, archived
Series: Use C inlines for uaccess
On Thu, Nov 21, 2019 at 01:48:03PM -0500, Pavel Tatashin wrote:
> privcmd_call requires to enable access to userspace for the
> duration of the hypercall.
>
> Currently, this is done via assembly macros. Change it to C
> inlines instead.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> ---
>  arch/arm/include/asm/assembler.h |  2 +-
>  arch/arm/include/asm/uaccess.h   | 32 ++++++++++++++++++++++++++------
>  arch/arm/xen/enlighten.c         |  2 +-
>  arch/arm/xen/hypercall.S         | 15 ++-------------
>  arch/arm64/xen/hypercall.S       | 19 ++-----------------
>  include/xen/arm/hypercall.h      | 23 ++++++++++++++++++++---
>  6 files changed, 52 insertions(+), 41 deletions(-)
>
> diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
> index 99929122dad7..8e9262a0f016 100644
> --- a/arch/arm/include/asm/assembler.h
> +++ b/arch/arm/include/asm/assembler.h
> @@ -480,7 +480,7 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
>  	.macro	uaccess_disable, tmp, isb=1
>  #ifdef CONFIG_CPU_SW_DOMAIN_PAN
>  	/*
> -	 * Whenever we re-enter userspace, the domains should always be
> +	 * Whenever we re-enter kernel, the domains should always be
>  	 * set appropriately.
>  	 */
>  	mov	\tmp, #DACR_UACCESS_DISABLE
> diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
> index 98c6b91be4a8..79d4efa3eb62 100644
> --- a/arch/arm/include/asm/uaccess.h
> +++ b/arch/arm/include/asm/uaccess.h
> @@ -16,6 +16,23 @@
>  
>  #include <asm/extable.h>
>  
> +#ifdef CONFIG_CPU_SW_DOMAIN_PAN
> +static __always_inline void uaccess_enable(void)
> +{
> +	unsigned long val = DACR_UACCESS_ENABLE;
> +
> +	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> +	isb();
> +}
> +
> +static __always_inline void uaccess_disable(void)
> +{
> +	unsigned long val = DACR_UACCESS_ENABLE;
> +
> +	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> +	isb();
> +}

Rather than inventing these, why not use uaccess_save_and_enable()..
uaccess_restore() around the Xen call?
> > +#ifdef CONFIG_CPU_SW_DOMAIN_PAN
> > +static __always_inline void uaccess_enable(void)
> > +{
> > +	unsigned long val = DACR_UACCESS_ENABLE;
> > +
> > +	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> > +	isb();
> > +}
> > +
> > +static __always_inline void uaccess_disable(void)
> > +{
> > +	unsigned long val = DACR_UACCESS_ENABLE;

Oops, should be DACR_UACCESS_DISABLE.

> > +
> > +	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> > +	isb();
> > +}
>
> Rather than inventing these, why not use uaccess_save_and_enable()..
> uaccess_restore() around the Xen call?

Thank you for suggestion: uaccess_enable() and uaccess_disable() are
common calls with arm64, so I will need them, but I think I can use
set_domain() with DACR_UACCESS_DISABLE/DACR_UACCESS_ENABLE inside
these inlines.

Pasha
On Thu, Nov 21, 2019 at 07:30:41PM -0500, Pavel Tatashin wrote:
> > > +#ifdef CONFIG_CPU_SW_DOMAIN_PAN
> > > +static __always_inline void uaccess_enable(void)
> > > +{
> > > +	unsigned long val = DACR_UACCESS_ENABLE;
> > > +
> > > +	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> > > +	isb();
> > > +}
> > > +
> > > +static __always_inline void uaccess_disable(void)
> > > +{
> > > +	unsigned long val = DACR_UACCESS_ENABLE;
>
> Oops, should be DACR_UACCESS_DISABLE.
>
> > > +
> > > +	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> > > +	isb();
> > > +}
> >
> > Rather than inventing these, why not use uaccess_save_and_enable()..
> > uaccess_restore() around the Xen call?
>
> Thank you for suggestion: uaccess_enable() and uaccess_disable() are
> common calls with arm64, so I will need them, but I think I can use
> set_domain() with DACR_UACCESS_DISABLE/DACR_UACCESS_ENABLE inside
> these inlines.

That may be, but be very careful that you only use them in ARMv7-only
code. Using them elsewhere is unsafe as the domain register is used
for other purposes, and merely blatting over it (as your
uaccess_enable and uaccess_disable functions do) is unsafe.
On Fri, Nov 22, 2019 at 12:34:03AM +0000, Russell King - ARM Linux admin wrote:
> On Thu, Nov 21, 2019 at 07:30:41PM -0500, Pavel Tatashin wrote:
> > > > +#ifdef CONFIG_CPU_SW_DOMAIN_PAN
> > > > +static __always_inline void uaccess_enable(void)
> > > > +{
> > > > +	unsigned long val = DACR_UACCESS_ENABLE;
> > > > +
> > > > +	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> > > > +	isb();
> > > > +}
> > > > +
> > > > +static __always_inline void uaccess_disable(void)
> > > > +{
> > > > +	unsigned long val = DACR_UACCESS_ENABLE;
> >
> > Oops, should be DACR_UACCESS_DISABLE.
> >
> > > > +
> > > > +	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> > > > +	isb();
> > > > +}
> > >
> > > Rather than inventing these, why not use uaccess_save_and_enable()..
> > > uaccess_restore() around the Xen call?
> >
> > Thank you for suggestion: uaccess_enable() and uaccess_disable() are
> > common calls with arm64, so I will need them, but I think I can use
> > set_domain() with DACR_UACCESS_DISABLE/DACR_UACCESS_ENABLE inside
> > these inlines.
>
> That may be, but be very careful that you only use them in ARMv7-only
> code. Using them elsewhere is unsafe as the domain register is used
> for other purposes, and merely blatting over it (as your
> uaccess_enable and uaccess_disable functions do) is unsafe.

In fact, I'll turn that into a bit more than a suggestion. I'll make
it a NAK on adding them to 32-bit ARM.
> > That may be, but be very careful that you only use them in ARMv7-only
> > code. Using them elsewhere is unsafe as the domain register is used
> > for other purposes, and merely blatting over it (as your
> > uaccess_enable and uaccess_disable functions do) is unsafe.
>
> In fact, I'll turn that into a bit more than a suggestion. I'll make
> it a NAK on adding them to 32-bit ARM.

That's fine, and I also did not want to change ARM 32-bit. But, do you
have a suggestion how differentiate between arm64 and arm in
include/xen/arm/hypercall.h without ugly ifdefs?

Thank you,
Pasha
On Thu, Nov 21, 2019 at 07:39:22PM -0500, Pavel Tatashin wrote:
> > > That may be, but be very careful that you only use them in ARMv7-only
> > > code. Using them elsewhere is unsafe as the domain register is used
> > > for other purposes, and merely blatting over it (as your
> > > uaccess_enable and uaccess_disable functions do) is unsafe.
> >
> > In fact, I'll turn that into a bit more than a suggestion. I'll make
> > it a NAK on adding them to 32-bit ARM.
>
> That's fine, and I also did not want to change ARM 32-bit. But, do you
> have a suggestion how differentiate between arm64 and arm in
> include/xen/arm/hypercall.h without ugly ifdefs?

Sorry, I don't. I'm surprised ARM64 doesn't have anything like that,
but I suspect that's because they don't need to do a save/restore type
operation. Whereas, 32-bit ARM does very much need the save/restore
behaviour (although not in this path.)

The problem is, turning uaccess_enable/disable into C code means that
it's open to being used elsewhere in the kernel (ooh, a couple of
useful looking functions that work on both architectures! I can use
that too!) and then we end up with stuff breaking subtly.

It's the potential for subtle breakage that is making me NAK the idea
of adding the inline C functions.

Given the two have diverged, the only answer is ifdefs, sorry.
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 99929122dad7..8e9262a0f016 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -480,7 +480,7 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 	.macro	uaccess_disable, tmp, isb=1
 #ifdef CONFIG_CPU_SW_DOMAIN_PAN
 	/*
-	 * Whenever we re-enter userspace, the domains should always be
+	 * Whenever we re-enter kernel, the domains should always be
 	 * set appropriately.
 	 */
 	mov	\tmp, #DACR_UACCESS_DISABLE
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 98c6b91be4a8..79d4efa3eb62 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -16,6 +16,23 @@
 
 #include <asm/extable.h>
 
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+static __always_inline void uaccess_enable(void)
+{
+	unsigned long val = DACR_UACCESS_ENABLE;
+
+	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
+	isb();
+}
+
+static __always_inline void uaccess_disable(void)
+{
+	unsigned long val = DACR_UACCESS_ENABLE;
+
+	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
+	isb();
+}
+
 /*
  * These two functions allow hooking accesses to userspace to increase
  * system integrity by ensuring that the kernel can not inadvertantly
@@ -24,7 +41,6 @@
  */
 static __always_inline unsigned int uaccess_save_and_enable(void)
 {
-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
 	unsigned int old_domain = get_domain();
 
 	/* Set the current domain access to permit user accesses */
@@ -32,18 +48,22 @@ static __always_inline unsigned int uaccess_save_and_enable(void)
 		   domain_val(DOMAIN_USER, DOMAIN_CLIENT));
 
 	return old_domain;
-#else
-	return 0;
-#endif
 }
 
 static __always_inline void uaccess_restore(unsigned int flags)
 {
-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
 	/* Restore the user access mask */
 	set_domain(flags);
-#endif
 }
+#else
+static __always_inline void uaccess_enable(void) {}
+static __always_inline void uaccess_disable(void) {}
+static __always_inline unsigned int uaccess_save_and_enable(void)
+{
+	return 0;
+}
+static __always_inline void uaccess_restore(unsigned int flags) {}
+#endif /* CONFIG_CPU_SW_DOMAIN_PAN */
 
 /*
  * These two are intentionally not defined anywhere - if the kernel
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index dd6804a64f1a..e87280c6d25d 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -440,4 +440,4 @@ EXPORT_SYMBOL_GPL(HYPERVISOR_platform_op_raw);
 EXPORT_SYMBOL_GPL(HYPERVISOR_multicall);
 EXPORT_SYMBOL_GPL(HYPERVISOR_vm_assist);
 EXPORT_SYMBOL_GPL(HYPERVISOR_dm_op);
-EXPORT_SYMBOL_GPL(privcmd_call);
+EXPORT_SYMBOL_GPL(arch_privcmd_call);
diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
index b11bba542fac..2f5be0dc6195 100644
--- a/arch/arm/xen/hypercall.S
+++ b/arch/arm/xen/hypercall.S
@@ -94,29 +94,18 @@ HYPERCALL2(multicall);
 HYPERCALL2(vm_assist);
 HYPERCALL3(dm_op);
 
-ENTRY(privcmd_call)
+ENTRY(arch_privcmd_call)
 	stmdb sp!, {r4}
 	mov r12, r0
 	mov r0, r1
 	mov r1, r2
 	mov r2, r3
 	ldr r3, [sp, #8]
-	/*
-	 * Privcmd calls are issued by the userspace. We need to allow the
-	 * kernel to access the userspace memory before issuing the hypercall.
-	 */
-	uaccess_enable r4
 
 	/* r4 is loaded now as we use it as scratch register before */
 	ldr r4, [sp, #4]
 	__HVC(XEN_IMM)
 
-	/*
-	 * Disable userspace access from kernel. This is fine to do it
-	 * unconditionally as no set_fs(KERNEL_DS) is called before.
-	 */
-	uaccess_disable r4
-
 	ldm sp!, {r4}
 	ret lr
-ENDPROC(privcmd_call);
+ENDPROC(arch_privcmd_call);
diff --git a/arch/arm64/xen/hypercall.S b/arch/arm64/xen/hypercall.S
index c5f05c4a4d00..921611778d2a 100644
--- a/arch/arm64/xen/hypercall.S
+++ b/arch/arm64/xen/hypercall.S
@@ -49,7 +49,6 @@
 
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/asm-uaccess.h>
 #include <xen/interface/xen.h>
 
@@ -86,27 +85,13 @@ HYPERCALL2(multicall);
 HYPERCALL2(vm_assist);
 HYPERCALL3(dm_op);
 
-ENTRY(privcmd_call)
+ENTRY(arch_privcmd_call)
 	mov x16, x0
 	mov x0, x1
 	mov x1, x2
 	mov x2, x3
 	mov x3, x4
 	mov x4, x5
-	/*
-	 * Privcmd calls are issued by the userspace. The kernel needs to
-	 * enable access to TTBR0_EL1 as the hypervisor would issue stage 1
-	 * translations to user memory via AT instructions. Since AT
-	 * instructions are not affected by the PAN bit (ARMv8.1), we only
-	 * need the explicit uaccess_enable/disable if the TTBR0 PAN emulation
-	 * is enabled (it implies that hardware UAO and PAN disabled).
-	 */
-	uaccess_ttbr0_enable x6, x7, x8
 	hvc XEN_IMM
-
-	/*
-	 * Disable userspace access from kernel once the hyp call completed.
-	 */
-	uaccess_ttbr0_disable x6, x7
 	ret
-ENDPROC(privcmd_call);
+ENDPROC(arch_privcmd_call);
diff --git a/include/xen/arm/hypercall.h b/include/xen/arm/hypercall.h
index b40485e54d80..cfb704fd78c8 100644
--- a/include/xen/arm/hypercall.h
+++ b/include/xen/arm/hypercall.h
@@ -34,16 +34,33 @@
 #define _ASM_ARM_XEN_HYPERCALL_H
 
 #include <linux/bug.h>
+#include <linux/uaccess.h>
 
 #include <xen/interface/xen.h>
 #include <xen/interface/sched.h>
 #include <xen/interface/platform.h>
 
 struct xen_dm_op_buf;
 
+long arch_privcmd_call(unsigned int call, unsigned long a1,
+		       unsigned long a2, unsigned long a3,
+		       unsigned long a4, unsigned long a5);
-long privcmd_call(unsigned call, unsigned long a1,
-		  unsigned long a2, unsigned long a3,
-		  unsigned long a4, unsigned long a5);
+static inline long privcmd_call(unsigned int call, unsigned long a1,
+				unsigned long a2, unsigned long a3,
+				unsigned long a4, unsigned long a5)
+{
+	long rv;
+
+	/*
+	 * Privcmd calls are issued by the userspace. We need to allow the
+	 * kernel to access the userspace memory before issuing the hypercall.
+	 */
+	uaccess_enable();
+	rv = arch_privcmd_call(call, a1, a2, a3, a4, a5);
+	uaccess_disable();
+
+	return rv;
+}
 
 int HYPERVISOR_xen_version(int cmd, void *arg);
 int HYPERVISOR_console_io(int cmd, int count, char *str);
 int HYPERVISOR_grant_table_op(unsigned int cmd, void *uop, unsigned int count);
privcmd_call requires to enable access to userspace for the
duration of the hypercall.

Currently, this is done via assembly macros. Change it to C
inlines instead.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
 arch/arm/include/asm/assembler.h |  2 +-
 arch/arm/include/asm/uaccess.h   | 32 ++++++++++++++++++++++++++------
 arch/arm/xen/enlighten.c         |  2 +-
 arch/arm/xen/hypercall.S         | 15 ++-------------
 arch/arm64/xen/hypercall.S       | 19 ++-----------------
 include/xen/arm/hypercall.h      | 23 ++++++++++++++++++++---
 6 files changed, 52 insertions(+), 41 deletions(-)