| Message ID | E1ZUGMS-0000BG-8B@rmk-PC.arm.linux.org.uk (mailing list archive) |
|---|---|
| State | New, archived |
Hi Russell,

On Tue, Aug 25, 2015 at 04:42:08PM +0100, Russell King wrote:
> Provide a software-based implementation of the priviledged no access
> support found in ARMv8.1.
>
> Userspace pages are mapped using a different domain number from the
> kernel and IO mappings. If we switch the user domain to "no access"
> when we enter the kernel, we can prevent the kernel from touching
> userspace.
>
> However, the kernel needs to be able to access userspace via the
> various user accessor functions. With the wrapping in the previous
> patch, we can temporarily enable access when the kernel needs user
> access, and re-disable it afterwards.
>
> This allows us to trap non-intended accesses to userspace, eg, caused
> by an inadvertent dereference of the LIST_POISON* values, which, with
> appropriate user mappings setup, can be made to succeed. This in turn
> can allow use-after-free bugs to be further exploited than would
> otherwise be possible.
>
> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
> ---
>  arch/arm/Kconfig                   | 15 +++++++++++++++
>  arch/arm/include/asm/assembler.h   | 30 ++++++++++++++++++++++++++++++
>  arch/arm/include/asm/domain.h      | 21 +++++++++++++++++++--
>  arch/arm/include/asm/uaccess.h     | 14 ++++++++++++++
>  arch/arm/kernel/process.c          | 24 ++++++++++++++++++------
>  arch/arm/lib/csumpartialcopyuser.S | 14 ++++++++++++++
>  6 files changed, 110 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index a750c1425c3a..a898eb72da51 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1694,6 +1694,21 @@ config HIGHPTE
>   bool "Allocate 2nd-level pagetables from highmem"
>   depends on HIGHMEM
>
> +config CPU_SW_DOMAIN_PAN
> + bool "Enable use of CPU domains to implement priviledged no-access"

Minor comment, but you've consistently misspelt "privileged".

Anyway, I tried this on my TC2 board running Debian Jessie armhf and,
whilst it boots to a shell on the console, ssh connections appear to
hang on the client before even trying to auth. I don't see anything
like a domain fault, and the machine is still responsive on the
console. Disabling this option gets things working again for me.

Note that I *do* see undefined instruction exceptions from sshd
regardless of this patch; however, I think they're triggered by
something like libcrypto, which is prepared to handle the SIGILL.

FWIW, I'm using your ten patches from this series on top of 4.2-rc8,
and I've put the .config here:

  http://www.willdeacon.ukfsn.org/bitbucket/oopsen/pan/pan-tc2.config

Will
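To make the LIST_POISON point in the quoted commit message concrete, here is a minimal stand-alone C sketch of the failure mode being defended against. It is illustrative only: the poison constants are the classic values used by kernels of this era and the simplified list handling is a stand-in for the kernel's list_del(), not code taken from this thread.

/*
 * Illustrative sketch (not from the patch): why a dangling list pointer
 * ends up pointing into the 32-bit user address range.  The poison values
 * and the simplified list handling mimic the kernel's list_del(); treat
 * the constants as an assumption.
 */
#include <stdio.h>

#define LIST_POISON1 ((void *) 0x00100100)
#define LIST_POISON2 ((void *) 0x00200200)

struct list_head { struct list_head *next, *prev; };

static void fake_list_del(struct list_head *entry)
{
    /* unlink, then poison the dangling pointers */
    entry->prev->next = entry->next;
    entry->next->prev = entry->prev;
    entry->next = (struct list_head *) LIST_POISON1;
    entry->prev = (struct list_head *) LIST_POISON2;
}

int main(void)
{
    struct list_head head = { &head, &head };
    struct list_head item;

    /* insert item after head, then delete it again */
    item.next = head.next;
    item.prev = &head;
    head.next = &item;
    item.next->prev = &item;
    fake_list_del(&item);

    /*
     * A use-after-free that follows item.next now chases 0x00100100, a low
     * address well inside the user range on 32-bit ARM.  A hostile process
     * can map that page, so the bogus kernel access "succeeds" unless the
     * user domain is switched to no-access on kernel entry.
     */
    printf("poisoned next=%p prev=%p\n", (void *) item.next, (void *) item.prev);
    return 0;
}

Without something like CPU_SW_DOMAIN_PAN, a kernel dereference of such a poisoned pointer can be made to hit attacker-controlled user memory; with it, the access faults because the user domain is set to "no access".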
On 08/25/2015 05:42 PM, Russell King wrote:
> Provide a software-based implementation of the priviledged no access
> support found in ARMv8.1.

[...]

> diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
> index a91177043467..ff74f0b54b0e 100644
> --- a/arch/arm/include/asm/assembler.h
> +++ b/arch/arm/include/asm/assembler.h
> @@ -446,15 +446,45 @@ THUMB( orr \reg , \reg , #PSR_T_BIT )
>   .endm
>
>   .macro uaccess_disable, tmp, isb=1
> +#ifdef CONFIG_CPU_SW_DOMAIN_PAN
> + /*
> +  * Whenever we re-enter userspace, the domains should always be
> +  * set appropriately.
> +  */
> + mov \tmp, #DACR_UACCESS_DISABLE
> + mcr p15, 0, \tmp, c3, c0, 0  @ Set domain register
> + .if \isb
> + isb
> + .endif
> +#endif
>   .endm
>
>   .macro uaccess_enable, tmp, isb=1
> +#ifdef CONFIG_CPU_SW_DOMAIN_PAN
> + /*
> +  * Whenever we re-enter userspace, the domains should always be
> +  * set appropriately.
> +  */
> + mov \tmp, #DACR_UACCESS_ENABLE
> + mcr p15, 0, \tmp, c3, c0, 0
> + .if \isb
> + isb
> + .endif
> +#endif
>   .endm

Thanks for the updated series. On ARMv5, I get the following compile error:

arch/arm/kernel/entry-common.S: Assembler messages:
arch/arm/kernel/entry-common.S:200: Error: selected processor does not support
ARM mode `isb'

Replacing those two "isb" occurrences with "instr_sync" fixed it.

With that change, accesses to LIST_POISON are still correctly caught
when CONFIG_CPU_SW_DOMAIN_PAN is set, and transmitting an IPv6 packet
no longer results in a fault. With CONFIG_CPU_SW_DOMAIN_PAN disabled,
the system now also boots fine.

This has been tested on Linux 4.1 / kirkwood and Linux 4.2-rc8 /
qemu,armv5.

Thanks,
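The instr_sync macro itself is not quoted in this thread, so the snippet below is only a sketch of the idea behind the fix: an instruction barrier that degrades gracefully on cores where the "isb" mnemonic does not assemble. The helper name, the __ARM_ARCH test and the pre-ARMv6 fallback are assumptions.

/*
 * Sketch only: the real instr_sync assembler macro is not shown in this
 * thread.  The idea is an instruction barrier that works across
 * architecture levels: "isb" exists from ARMv7, ARMv6 has an equivalent
 * CP15 "flush prefetch buffer" operation, and older cores get a plain
 * compiler barrier (an assumption on my part).
 */
static inline void instr_barrier(void)
{
#if defined(__ARM_ARCH) && __ARM_ARCH >= 7
    __asm__ volatile("isb" ::: "memory");               /* ARMv7+ */
#elif defined(__ARM_ARCH) && __ARM_ARCH == 6
    __asm__ volatile("mcr p15, 0, %0, c7, c5, 4"        /* ARMv6 */
                     :: "r" (0) : "memory");
#else
    __asm__ volatile("" ::: "memory");                  /* pre-v6 */
#endif
}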
On Tue, Aug 25, 2015 at 07:07:39PM +0200, Nicolas Schichan wrote:
> arch/arm/kernel/entry-common.S: Assembler messages:
> arch/arm/kernel/entry-common.S:200: Error: selected processor does not support
> ARM mode `isb'
>
> Replacing those two "isb" occurrences with "instr_sync" fixed it.

Thanks, that's exactly what I've done. I've pushed that and the other
fixes out for linux-next to pick up, hopefully with fewer failures.

This series passed my own build tests (which included building and
booting SDP4430, LDP4430, Versatile Express and iMX6 platforms).
Unfortunately, they're all Cortex-A8 or A9 platforms.

Olof's builder is showing some build failures in the boot log, but I'll
assume that they're down to the above - I don't yet have the build log,
and that's going to arrive at some point after I've left for a
committee meeting which will extend for most of the evening.

So, I'm hoping that tonight's linux-next will see improvement rather
than deterioration - that's all I can do at this point... hope. I'm out
of time to do any more build checking prior to linux-next pulling my
tree.
On 08/25/2015 07:48 PM, Russell King - ARM Linux wrote:
> On Tue, Aug 25, 2015 at 07:07:39PM +0200, Nicolas Schichan wrote:
>> arch/arm/kernel/entry-common.S: Assembler messages:
>> arch/arm/kernel/entry-common.S:200: Error: selected processor does not support
>> ARM mode `isb'
>>
>> Replacing those two "isb" occurrences with "instr_sync" fixed it.
>
> Thanks, that's exactly what I've done. I've pushed that and the other
> fixes out for linux-next to pick up, hopefully with fewer failures.

For the code in next-20150826:

Tested-by: Nicolas Schichan <nschichan@freebox.fr>

Thanks,
On Tue, Aug 25, 2015 at 5:42 PM, Russell King
<rmk+kernel@arm.linux.org.uk> wrote:
> Provide a software-based implementation of the priviledged no access
> support found in ARMv8.1.
>
> Userspace pages are mapped using a different domain number from the
> kernel and IO mappings. If we switch the user domain to "no access"
> when we enter the kernel, we can prevent the kernel from touching
> userspace.
>
> However, the kernel needs to be able to access userspace via the
> various user accessor functions. With the wrapping in the previous
> patch, we can temporarily enable access when the kernel needs user
> access, and re-disable it afterwards.
>
> This allows us to trap non-intended accesses to userspace, eg, caused
> by an inadvertent dereference of the LIST_POISON* values, which, with
> appropriate user mappings setup, can be made to succeed. This in turn
> can allow use-after-free bugs to be further exploited than would
> otherwise be possible.
>
> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

For some reason this patch explodes on my ARM PB11MPCore, it
is a weird beast and corner case machine so I guess that is why
it wasn't noticed. This happens a bit into the boot when freeing
unused pages:

Freeing unused kernel memory: 2672K (c0448000 - c06e4000)
Unable to handle kernel paging request at virtual address b6f069f4
pgd = c6e58000
[b6f069f4] *pgd=76e09831, *pte=77ff759f, *ppte=77ff7e6e
Internal error: Oops: 17 [#1] SMP ARM
Modules linked in:
CPU: 2 PID: 1 Comm: init Not tainted 4.3.0-rc4-00015-gf6702681a0af #48
Hardware name: ARM-RealView PB11MPCore
task: c7827bc0 ti: c782c000 task.ti: c782c000
PC is at v6wbi_flush_user_tlb_range+0x28/0x48
LR is at on_each_cpu_mask+0x58/0x60
pc : [<c001abf0>] lr : [<c007c18c>] psr: 20000093
sp : c782deb8 ip : 00000000 fp : 00000000
r10: c6e5adc8 r9 : 00000001 r8 : b6f02000
r7 : c7a17180 r6 : c782ded4 r5 : c0015118 r4 : 20000013
r3 : 00000002 r2 : 00100075 r1 : b6f02000 r0 : b6f01002
Flags: nzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment none
Control: 00c5787d Table: 76e5800a DAC: 00000051
Process init (pid: 1, stack limit = 0xc782c190)
Stack: (0xc782deb8 to 0xc782e000)
dea0:                   b6f02000 c6e09408
dec0: c6e09404 b6f02000 b6f02000 c0015378 706db5df c7988f50 b6f01000 b6f02000
dee0: 706db55f c00ad710 00000001 b6f02000 b6f01fff c7988f50 00000181 706db5df
df00: c7fd313c c6e5adc0 c7a17020 b6f01000 c79885b0 00000000 c7988f50 00100075
df20: b6f01000 b6f02000 00000000 00100077 c7a17020 c00ad84c 00000000 00000000
df40: c78c7aa0 00000056 00000000 c7a17058 c782df8c 00000001 00000000 b6f02000
df60: b6f02000 00000005 00000001 b6f01000 c782c000 00000000 bee4ab2c c00ada8c
df80: 00100075 00000000 ffffffff c7988f50 b6f2ef78 b6f2c490 00000000 0000007d
dfa0: c000f624 c000f460 b6f2ef78 b6f2c490 b6f01000 00001000 00000001 b6f01cd8
dfc0: b6f2ef78 b6f2c490 00000000 0000007d b6f2ef78 00000004 00000004 bee4ab2c
dfe0: b6f2d2a8 bee4ab18 b6f24eb0 b6f2214c 80000010 b6f01000 45355559 dd550555
[<c001abf0>] (v6wbi_flush_user_tlb_range) from [<b6f01000>] (0xb6f01000)
Code: e20330ff e1830600 e1a01601 e5922028 (ee080f36)
---[ end trace c90cca4faa737700 ]---
Kernel panic - not syncing: Fatal exception
CPU3: stopping
CPU: 3 PID: 0 Comm: swapper/3 Tainted: G      D 4.3.0-rc4-00015-gf6702681a0af #48
Hardware name: ARM-RealView PB11MPCore
[<c0015f64>] (unwind_backtrace) from [<c0012dc0>] (show_stack+0x10/0x14)
[<c0012dc0>] (show_stack) from [<c01778c4>] (dump_stack+0x84/0x9c)
[<c01778c4>] (dump_stack) from [<c0014f24>] (handle_IPI+0x174/0x1b4)
[<c0014f24>] (handle_IPI) from [<c00094b0>] (gic_handle_irq+0x80/0x8c)
[<c00094b0>] (gic_handle_irq) from [<c00138f4>] (__irq_svc+0x54/0x70)
Exception stack(0xc785bf90 to 0xc785bfd8)
bf80:                   00003228 00000000 00000000 00000000
bfa0: c785a000 c06edac4 00000000 c06eda78 c06e1284 c785bfe8 c033d738 00000001
bfc0: 00000000 c785bfe0 c000ff58 c000ff5c 60000113 ffffffff
[<c00138f4>] (__irq_svc) from [<c000ff5c>] (arch_cpu_idle+0x28/0x30)
[<c000ff5c>] (arch_cpu_idle) from [<c0052c24>] (cpu_startup_entry+0xf8/0x184)
[<c0052c24>] (cpu_startup_entry) from [<70009548>] (0x70009548)
CPU0: stopping
CPU: 0 PID: 0 Comm: swapper/0 Tainted: G      D 4.3.0-rc4-00015-gf6702681a0af #48
Hardware name: ARM-RealView PB11MPCore
[<c0015f64>] (unwind_backtrace) from [<c0012dc0>] (show_stack+0x10/0x14)
[<c0012dc0>] (show_stack) from [<c01778c4>] (dump_stack+0x84/0x9c)
[<c01778c4>] (dump_stack) from [<c0014f24>] (handle_IPI+0x174/0x1b4)
[<c0014f24>] (handle_IPI) from [<c00094b0>] (gic_handle_irq+0x80/0x8c)
[<c00094b0>] (gic_handle_irq) from [<c00138f4>] (__irq_svc+0x54/0x70)
Exception stack(0xc06e5f58 to 0xc06e5fa0)
5f40:                                     00002fa4 00000000
5f60: 00000000 00000000 c06e4000 c06edac4 00000000 c06eda78 c06e1284 c06e5fb0
5f80: c033d738 00000001 00000000 c06e5fa8 c000ff58 c000ff5c 60000013 ffffffff
[<c00138f4>] (__irq_svc) from [<c000ff5c>] (arch_cpu_idle+0x28/0x30)
[<c000ff5c>] (arch_cpu_idle) from [<c0052c24>] (cpu_startup_entry+0xf8/0x184)
[<c0052c24>] (cpu_startup_entry) from [<c0448bec>] (start_kernel+0x32c/0x3a0)
CPU1: stopping
CPU: 1 PID: 0 Comm: swapper/1 Tainted: G      D 4.3.0-rc4-00015-gf6702681a0af #48
Hardware name: ARM-RealView PB11MPCore
[<c0015f64>] (unwind_backtrace) from [<c0012dc0>] (show_stack+0x10/0x14)
[<c0012dc0>] (show_stack) from [<c01778c4>] (dump_stack+0x84/0x9c)
[<c01778c4>] (dump_stack) from [<c0014f24>] (handle_IPI+0x174/0x1b4)
[<c0014f24>] (handle_IPI) from [<c00094b0>] (gic_handle_irq+0x80/0x8c)
[<c00094b0>] (gic_handle_irq) from [<c00138f4>] (__irq_svc+0x54/0x70)
Exception stack(0xc7857f90 to 0xc7857fd8)
7f80:                   0000290a 00000000 00000000 00000000
7fa0: c7856000 c06edac4 00000000 c06eda78 c06e1284 c7857fe8 c033d738 00000001
7fc0: 00000000 c7857fe0 c000ff58 c000ff5c 60000113 ffffffff
[<c00138f4>] (__irq_svc) from [<c000ff5c>] (arch_cpu_idle+0x28/0x30)
[<c000ff5c>] (arch_cpu_idle) from [<c0052c24>] (cpu_startup_entry+0xf8/0x184)
[<c0052c24>] (cpu_startup_entry) from [<70009548>] (0x70009548)
---[ end Kernel panic - not syncing: Fatal exception

(I configured to treat oops as panic so it takes down all CPUs.)

Sometimes I get this instead, earlier:

INFO: rcu_sched detected stalls on CPUs/tasks:
1: (0 ticks this GP) idle=8af/140000000000000/0 softirq=242/244 fqs=1373
(detected by 0, t=2103 jiffies, g=-256, c=-257, q=235)
Task dump for CPU 1:
modprobe        R running     0   351    350 0x00000002
[<c032eab4>] (__schedule) from [<c00a2734>] (handle_mm_fault+0x978/0xa9c)
[<c00a2734>] (handle_mm_fault) from [<c0017218>] (do_page_fault+0x1e0/0x2a4)
[<c0017218>] (do_page_fault) from [<c0009310>] (do_DataAbort+0x34/0xb4)
[<c0009310>] (do_DataAbort) from [<c001361c>] (__dabt_usr+0x3c/0x40)
Exception stack(0xc698dfb0 to 0xc698dff8)
dfa0:                   b6f7cd2c 00000020 0000eed4 b6f7d450
dfc0: b6f7c000 b6f8af78 00000000 00000000 b6f82040 6defe040 be903ec4 be903ebc
dfe0: 00000108 be903cf0 00000021 b6f81490 20000010 ffffffff

Reverting the patch makes everything boot smoothly again.

Feeling kind of clueless on where the problem may be; the first
backtrace seems to be in pure assembly, so I'm a bit lost. The second
one, from RCU, is a bit clearer, but I don't know the context of how
this is affected by the patch.

Been scratching my head for a while... Any ideas?

Yours,
Linus Walleij
On Fri, Oct 09, 2015 at 10:28:14AM +0200, Linus Walleij wrote:
> On Tue, Aug 25, 2015 at 5:42 PM, Russell King
> <rmk+kernel@arm.linux.org.uk> wrote:
>
> > Provide a software-based implementation of the priviledged no access
> > support found in ARMv8.1.
> >
> > Userspace pages are mapped using a different domain number from the
> > kernel and IO mappings. If we switch the user domain to "no access"
> > when we enter the kernel, we can prevent the kernel from touching
> > userspace.
> >
> > However, the kernel needs to be able to access userspace via the
> > various user accessor functions. With the wrapping in the previous
> > patch, we can temporarily enable access when the kernel needs user
> > access, and re-disable it afterwards.
> >
> > This allows us to trap non-intended accesses to userspace, eg, caused
> > by an inadvertent dereference of the LIST_POISON* values, which, with
> > appropriate user mappings setup, can be made to succeed. This in turn
> > can allow use-after-free bugs to be further exploited than would
> > otherwise be possible.
> >
> > Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
>
> For some reason this patch explodes on my ARM PB11MPCore, it
> is a weird beast and corner case machine so I guess that is why
> it wasn't noticed. This happens a bit into the boot when freeing
> unused pages:
>
> Freeing unused kernel memory: 2672K (c0448000 - c06e4000)
> Unable to handle kernel paging request at virtual address b6f069f4
> pgd = c6e58000
> [b6f069f4] *pgd=76e09831, *pte=77ff759f, *ppte=77ff7e6e
> Internal error: Oops: 17 [#1] SMP ARM
> Modules linked in:
> CPU: 2 PID: 1 Comm: init Not tainted 4.3.0-rc4-00015-gf6702681a0af #48
> Hardware name: ARM-RealView PB11MPCore
> task: c7827bc0 ti: c782c000 task.ti: c782c000
> PC is at v6wbi_flush_user_tlb_range+0x28/0x48
> LR is at on_each_cpu_mask+0x58/0x60
> pc : [<c001abf0>] lr : [<c007c18c>] psr: 20000093
> sp : c782deb8 ip : 00000000 fp : 00000000
> r10: c6e5adc8 r9 : 00000001 r8 : b6f02000
> r7 : c7a17180 r6 : c782ded4 r5 : c0015118 r4 : 20000013
> r3 : 00000002 r2 : 00100075 r1 : b6f02000 r0 : b6f01002
> Flags: nzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment none
> Control: 00c5787d Table: 76e5800a DAC: 00000051

It looks like we're faulting on the TLBI instruction, because it's
targeting a userspace address (r0 == 0xb6f01002) and the DAC prohibits
access to userspace.

It's weird that this only seems to happen on 11MPCore though; if this
core was one of the guys getting cross-called, then I could understand
the bug, but the lr suggests that CPU 2 is initiating the flush, so I'd
expect the same problem to appear on any ARMv6 part.

Russell, have you tried the s/w PAN stuff on any v6 CPUs?

Will
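To make Will's reading of the oops concrete, here is a small stand-alone decoder for the DAC value printed above. The mapping of domain numbers to names (kernel=0, user=1, IO=2, vectors=3) is an assumption based on the patch, not something the kernel prints.

/*
 * Stand-alone decoder for a DACR value such as the "DAC: 00000051" line in
 * the oops above.  The domain-number-to-name mapping is an assumption;
 * the 2-bit access encodings are the architectural ones.
 */
#include <stdio.h>

static const char *domain_names[4] = { "kernel", "user", "IO", "vectors" };
static const char *access_types[4] = { "no access", "client", "reserved", "manager" };

static void decode_dacr(unsigned int dacr)
{
    for (int dom = 0; dom < 4; dom++) {
        unsigned int type = (dacr >> (2 * dom)) & 3;
        printf("domain %d (%s): %s\n", dom, domain_names[dom], access_types[type]);
    }
}

int main(void)
{
    decode_dacr(0x00000051);    /* value printed in the PB11MPCore oops */
    return 0;
}

Fed the value 0x00000051 from the trace, it reports the user domain as "no access", which matches the observation that a user-VA access from kernel mode would fault at this point.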
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index a750c1425c3a..a898eb72da51 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1694,6 +1694,21 @@ config HIGHPTE
         bool "Allocate 2nd-level pagetables from highmem"
         depends on HIGHMEM
 
+config CPU_SW_DOMAIN_PAN
+        bool "Enable use of CPU domains to implement priviledged no-access"
+        depends on MMU && !ARM_LPAE
+        default y
+        help
+          Increase kernel security by ensuring that normal kernel accesses
+          are unable to access userspace addresses. This can help prevent
+          use-after-free bugs becoming an exploitable privilege escalation
+          by ensuring that magic values (such as LIST_POISON) will always
+          fault when dereferenced.
+
+          CPUs with low-vector mappings use a best-efforts implementation.
+          Their lower 1MB needs to remain accessible for the vectors, but
+          the remainder of userspace will become appropriately inaccessible.
+
 config HW_PERF_EVENTS
         bool "Enable hardware performance counter support for perf events"
         depends on PERF_EVENTS
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index a91177043467..ff74f0b54b0e 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -446,15 +446,45 @@ THUMB(  orr     \reg , \reg , #PSR_T_BIT )
         .endm
 
         .macro  uaccess_disable, tmp, isb=1
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+        /*
+         * Whenever we re-enter userspace, the domains should always be
+         * set appropriately.
+         */
+        mov     \tmp, #DACR_UACCESS_DISABLE
+        mcr     p15, 0, \tmp, c3, c0, 0         @ Set domain register
+        .if     \isb
+        isb
+        .endif
+#endif
         .endm
 
         .macro  uaccess_enable, tmp, isb=1
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+        /*
+         * Whenever we re-enter userspace, the domains should always be
+         * set appropriately.
+         */
+        mov     \tmp, #DACR_UACCESS_ENABLE
+        mcr     p15, 0, \tmp, c3, c0, 0
+        .if     \isb
+        isb
+        .endif
+#endif
         .endm
 
         .macro  uaccess_save, tmp
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+        mrc     p15, 0, \tmp, c3, c0, 0
+        str     \tmp, [sp, #S_FRAME_SIZE]
+#endif
         .endm
 
         .macro  uaccess_restore
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+        ldr     r0, [sp, #S_FRAME_SIZE]
+        mcr     p15, 0, r0, c3, c0, 0
+#endif
         .endm
 
         .macro  uaccess_save_and_disable, tmp
diff --git a/arch/arm/include/asm/domain.h b/arch/arm/include/asm/domain.h
index 2be929549938..e878129f2fee 100644
--- a/arch/arm/include/asm/domain.h
+++ b/arch/arm/include/asm/domain.h
@@ -57,11 +57,29 @@
 #define domain_mask(dom)        ((3) << (2 * (dom)))
 #define domain_val(dom,type)    ((type) << (2 * (dom)))
 
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+#define DACR_INIT \
+        (domain_val(DOMAIN_USER, DOMAIN_NOACCESS) | \
+         domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \
+         domain_val(DOMAIN_IO, DOMAIN_CLIENT) | \
+         domain_val(DOMAIN_VECTORS, DOMAIN_CLIENT))
+#else
 #define DACR_INIT \
         (domain_val(DOMAIN_USER, DOMAIN_CLIENT) | \
          domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \
          domain_val(DOMAIN_IO, DOMAIN_CLIENT) | \
          domain_val(DOMAIN_VECTORS, DOMAIN_CLIENT))
+#endif
+
+#define __DACR_DEFAULT \
+        domain_val(DOMAIN_KERNEL, DOMAIN_CLIENT) | \
+        domain_val(DOMAIN_IO, DOMAIN_CLIENT) | \
+        domain_val(DOMAIN_VECTORS, DOMAIN_CLIENT)
+
+#define DACR_UACCESS_DISABLE \
+        (__DACR_DEFAULT | domain_val(DOMAIN_USER, DOMAIN_NOACCESS))
+#define DACR_UACCESS_ENABLE \
+        (__DACR_DEFAULT | domain_val(DOMAIN_USER, DOMAIN_CLIENT))
 
 #ifndef __ASSEMBLY__
 
@@ -76,7 +94,6 @@ static inline unsigned int get_domain(void)
         return domain;
 }
 
-#ifdef CONFIG_CPU_USE_DOMAINS
 static inline void set_domain(unsigned val)
 {
         asm volatile(
@@ -85,6 +102,7 @@ static inline void set_domain(unsigned val)
         isb();
 }
 
+#ifdef CONFIG_CPU_USE_DOMAINS
 #define modify_domain(dom,type) \
         do { \
         unsigned int domain = get_domain(); \
@@ -94,7 +112,6 @@ static inline void set_domain(unsigned val)
         } while (0)
 
 #else
-static inline void set_domain(unsigned val) { }
 static inline void modify_domain(unsigned dom, unsigned type) { }
 #endif
 
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 82880132f941..01bae13b2cea 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -57,11 +57,25 @@ extern int fixup_exception(struct pt_regs *regs);
  */
 static inline unsigned int uaccess_save_and_enable(void)
 {
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+        unsigned int old_domain = get_domain();
+
+        /* Set the current domain access to permit user accesses */
+        set_domain((old_domain & ~domain_mask(DOMAIN_USER)) |
+                   domain_val(DOMAIN_USER, DOMAIN_CLIENT));
+
+        return old_domain;
+#else
         return 0;
+#endif
 }
 
 static inline void uaccess_restore(unsigned int flags)
 {
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+        /* Restore the user access mask */
+        set_domain(flags);
+#endif
 }
 
 /*
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index e722f9b3c9b1..b407cc7a7b55 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -129,12 +129,24 @@ void __show_regs(struct pt_regs *regs)
         buf[4] = '\0';
 
 #ifndef CONFIG_CPU_V7M
-        printk("Flags: %s IRQs o%s FIQs o%s Mode %s ISA %s Segment %s\n",
-                buf, interrupts_enabled(regs) ? "n" : "ff",
-                fast_interrupts_enabled(regs) ? "n" : "ff",
-                processor_modes[processor_mode(regs)],
-                isa_modes[isa_mode(regs)],
-                get_fs() == get_ds() ? "kernel" : "user");
+        {
+                unsigned int domain = get_domain();
+                const char *segment;
+
+                if ((domain & domain_mask(DOMAIN_USER)) ==
+                    domain_val(DOMAIN_USER, DOMAIN_NOACCESS))
+                        segment = "none";
+                else if (get_fs() == get_ds())
+                        segment = "kernel";
+                else
+                        segment = "user";
+
+                printk("Flags: %s IRQs o%s FIQs o%s Mode %s ISA %s Segment %s\n",
+                        buf, interrupts_enabled(regs) ? "n" : "ff",
+                        fast_interrupts_enabled(regs) ? "n" : "ff",
+                        processor_modes[processor_mode(regs)],
+                        isa_modes[isa_mode(regs)], segment);
+        }
 #else
         printk("xPSR: %08lx\n", regs->ARM_cpsr);
 #endif
 
diff --git a/arch/arm/lib/csumpartialcopyuser.S b/arch/arm/lib/csumpartialcopyuser.S
index 1d0957e61f89..52784f6f1086 100644
--- a/arch/arm/lib/csumpartialcopyuser.S
+++ b/arch/arm/lib/csumpartialcopyuser.S
@@ -17,6 +17,19 @@
 
         .text
 
+#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+        .macro  save_regs
+        mrc     p15, 0, r3, c3, c0, 0
+        stmfd   sp!, {r1 - r8, lr}
+        uaccess_enable r3
+        .endm
+
+        .macro  load_regs
+        ldmfd   sp!, {r1 - r8, lr}
+        mcr     p15, 0, r3, c3, c0, 0
+        ret     lr
+        .endm
+#else
         .macro  save_regs
         stmfd   sp!, {r1, r2, r4 - r8, lr}
         .endm
@@ -24,6 +37,7 @@
         .macro  load_regs
         ldmfd   sp!, {r1, r2, r4 - r8, pc}
         .endm
+#endif
 
         .macro  load1b, reg1
         ldrusr  \reg1, r0, 1
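The uaccess.h hunk above adds uaccess_save_and_enable()/uaccess_restore(); the actual wrapping of the user accessors happens in the previous patch of the series, which is not shown in this thread. The runnable simulation below only demonstrates the intended pattern - the "domain register" is a plain global flag and every function is an illustrative stand-in, not kernel code.

/*
 * Runnable simulation of the pattern the uaccess.h hunk enables: user
 * access is opened only for the duration of an accessor and closed again
 * afterwards, so a stray dereference elsewhere still "faults".
 */
#include <stdio.h>
#include <string.h>

enum { UACCESS_DISABLED, UACCESS_ENABLED };

static int fake_dacr = UACCESS_DISABLED;    /* stands in for the user domain bits */

static int fake_uaccess_save_and_enable(void)
{
    int old = fake_dacr;
    fake_dacr = UACCESS_ENABLED;            /* like set_domain(... DOMAIN_CLIENT) */
    return old;
}

static void fake_uaccess_restore(int flags)
{
    fake_dacr = flags;                      /* like set_domain(flags) */
}

/* A "user access" that fails unless the domain is open. */
static int fake_copy_from_user(void *to, const void *from, size_t n)
{
    if (fake_dacr != UACCESS_ENABLED) {
        fprintf(stderr, "fault: user domain is no-access\n");
        return -1;
    }
    memcpy(to, from, n);
    return 0;
}

int main(void)
{
    char user_buf[] = "hello";
    char kbuf[8] = "";
    int flags;

    /* Wrapped accessor: permitted. */
    flags = fake_uaccess_save_and_enable();
    fake_copy_from_user(kbuf, user_buf, sizeof(user_buf));
    fake_uaccess_restore(flags);
    printf("wrapped copy got: %s\n", kbuf);

    /* Stray, unwrapped access: trapped. */
    fake_copy_from_user(kbuf, user_buf, sizeof(user_buf));
    return 0;
}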
Provide a software-based implementation of the priviledged no access
support found in ARMv8.1.

Userspace pages are mapped using a different domain number from the
kernel and IO mappings. If we switch the user domain to "no access"
when we enter the kernel, we can prevent the kernel from touching
userspace.

However, the kernel needs to be able to access userspace via the
various user accessor functions. With the wrapping in the previous
patch, we can temporarily enable access when the kernel needs user
access, and re-disable it afterwards.

This allows us to trap non-intended accesses to userspace, eg, caused
by an inadvertent dereference of the LIST_POISON* values, which, with
appropriate user mappings setup, can be made to succeed. This in turn
can allow use-after-free bugs to be further exploited than would
otherwise be possible.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
 arch/arm/Kconfig                   | 15 +++++++++++++++
 arch/arm/include/asm/assembler.h   | 30 ++++++++++++++++++++++++++++++
 arch/arm/include/asm/domain.h      | 21 +++++++++++++++++++--
 arch/arm/include/asm/uaccess.h     | 14 ++++++++++++++
 arch/arm/kernel/process.c          | 24 ++++++++++++++++++------
 arch/arm/lib/csumpartialcopyuser.S | 14 ++++++++++++++
 6 files changed, 110 insertions(+), 8 deletions(-)
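As a worked example of the constants defined in the domain.h hunk, the program below re-evaluates domain_val() with an assumed DOMAIN_* numbering (kernel=0, user=1, IO=2, vectors=3) and the usual 2-bit access types (0 = no access, 1 = client, 3 = manager); those numeric assignments are assumptions, not quoted from the patch.

/*
 * Worked evaluation of the DACR constants from the domain.h hunk.  The
 * domain_val() macro is taken from the patch; the DOMAIN_* numbering and
 * access-type values below are assumptions, chosen only so the arithmetic
 * can be shown end to end.
 */
#include <stdio.h>

#define domain_val(dom, type)   ((type) << (2 * (dom)))

enum { DOMAIN_KERNEL = 0, DOMAIN_USER = 1, DOMAIN_IO = 2, DOMAIN_VECTORS = 3 };
enum { DOMAIN_NOACCESS = 0, DOMAIN_CLIENT = 1, DOMAIN_MANAGER = 3 };

#define __DACR_DEFAULT \
    (domain_val(DOMAIN_KERNEL, DOMAIN_CLIENT) | \
     domain_val(DOMAIN_IO, DOMAIN_CLIENT) | \
     domain_val(DOMAIN_VECTORS, DOMAIN_CLIENT))

#define DACR_UACCESS_DISABLE (__DACR_DEFAULT | domain_val(DOMAIN_USER, DOMAIN_NOACCESS))
#define DACR_UACCESS_ENABLE  (__DACR_DEFAULT | domain_val(DOMAIN_USER, DOMAIN_CLIENT))

int main(void)
{
    printf("DACR_UACCESS_DISABLE = 0x%08x\n", DACR_UACCESS_DISABLE);
    printf("DACR_UACCESS_ENABLE  = 0x%08x\n", DACR_UACCESS_ENABLE);
    return 0;
}

Under these assumptions DACR_UACCESS_DISABLE evaluates to 0x00000051 and DACR_UACCESS_ENABLE to 0x00000055.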