From patchwork Tue Jan 25 09:14:46 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12723550
From: Ard Biesheuvel <ardb@kernel.org>
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Arnd Bergmann,
 Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Marc Zyngier
Subject: [PATCH v6 1/8] ARM: mm: switch to swapper_pg_dir early for vmap'ed stack
Date: Tue, 25 Jan 2022 10:14:46 +0100
Message-Id: <20220125091453.1475246-2-ardb@kernel.org>
In-Reply-To: <20220125091453.1475246-1-ardb@kernel.org>
References: <20220125091453.1475246-1-ardb@kernel.org>
List-ID: linux-hardening@vger.kernel.org

When onlining a CPU, switch to swapper_pg_dir as soon as possible so
that it is guaranteed that the vmap'ed stack is mapped before it is
used.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/Kconfig        | 2 +-
 arch/arm/kernel/head.S  | 7 +++++++
 arch/arm/kernel/sleep.S | 7 +++++++
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c32b79453ddf..359a3b85c8b3 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -128,7 +128,7 @@ config ARM
 	select RTC_LIB
 	select SYS_SUPPORTS_APM_EMULATION
 	select THREAD_INFO_IN_TASK
-	select HAVE_ARCH_VMAP_STACK if MMU && (!LD_IS_LLD || LLD_VERSION >= 140000) && !PM_SLEEP_SMP
+	select HAVE_ARCH_VMAP_STACK if MMU && (!LD_IS_LLD || LLD_VERSION >= 140000)
 	select TRACE_IRQFLAGS_SUPPORT if !CPU_V7M
 	# Above selects are sorted alphabetically; please add new ones
 	# according to that.  Thanks.
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index c04dd94630c7..500612d3da2e 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -424,6 +424,13 @@ ENDPROC(secondary_startup)
 ENDPROC(secondary_startup_arm)

 ENTRY(__secondary_switched)
+#if defined(CONFIG_VMAP_STACK) && !defined(CONFIG_ARM_LPAE)
+	@ Before using the vmap'ed stack, we have to switch to swapper_pg_dir
+	@ as the ID map does not cover the vmalloc region.
+	mrc	p15, 0, ip, c2, c0, 1	@ read TTBR1
+	mcr	p15, 0, ip, c2, c0, 0	@ set TTBR0
+	instr_sync
+#endif
 	adr_l	r7, secondary_data + 12		@ get secondary_data.stack
 	ldr	sp, [r7]
 	ldr	r0, [r7, #4]			@ get secondary_data.task
diff --git a/arch/arm/kernel/sleep.S b/arch/arm/kernel/sleep.S
index f909baf17912..a86a1d4f3461 100644
--- a/arch/arm/kernel/sleep.S
+++ b/arch/arm/kernel/sleep.S
@@ -119,6 +119,13 @@ ENTRY(cpu_resume_mmu)
 ENDPROC(cpu_resume_mmu)
 	.popsection
 cpu_resume_after_mmu:
+#if defined(CONFIG_VMAP_STACK) && !defined(CONFIG_ARM_LPAE)
+	@ Before using the vmap'ed stack, we have to switch to swapper_pg_dir
+	@ as the ID map does not cover the vmalloc region.
+	mrc	p15, 0, ip, c2, c0, 1	@ read TTBR1
+	mcr	p15, 0, ip, c2, c0, 0	@ set TTBR0
+	instr_sync
+#endif
 	bl	cpu_init		@ restore the und/abt/irq banked regs
 	mov	r0, #0			@ return zero on success
 	ldmfd	sp!, {r4 - r11, pc}

From patchwork Tue Jan 25 09:14:47 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12723548
From: Ard Biesheuvel <ardb@kernel.org>
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Arnd Bergmann,
 Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Marc Zyngier
Subject: [PATCH v6 2/8] ARM: assembler: define a Kconfig symbol for group
 relocation support
Date: Tue, 25 Jan 2022 10:14:47 +0100
Message-Id: <20220125091453.1475246-3-ardb@kernel.org>
In-Reply-To: <20220125091453.1475246-1-ardb@kernel.org>
References: <20220125091453.1475246-1-ardb@kernel.org>
List-ID: linux-hardening@vger.kernel.org

Nathan reports the group relocations go out of range in pathological
cases such as allyesconfig kernels, which have little chance of
actually booting but are still used in validation. So add a Kconfig
symbol for this feature, and make it depend on !COMPILE_TEST.
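[Editorial note: for readers unfamiliar with group relocations, the psABI partitioning they rely on can be sketched in a few lines of C. This is an illustrative stand-in with an invented function name, not the kernel's implementation (which lives in get_group_rem() in arch/arm/kernel/module.c in the patch below), and it ignores wrap-around rotations for simplicity.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Peel one "group" off a 32-bit offset: the most significant 8-bit
 * chunk starting at an even bit position, i.e. a value that fits the
 * ARM modified-immediate encoding (imm8 rotated right by an even
 * amount). Successive calls yield the G0, G1, G2, ... immediates.
 */
static uint32_t take_group(uint32_t *residual)
{
	uint32_t val = *residual;
	int msb, lsb;

	if (!val)
		return 0;

	/* locate the most significant set bit */
	for (msb = 31; !(val & (1u << msb)); msb--)
		;
	/* the chunk must cover bit msb and start at an even position */
	lsb = msb > 7 ? msb - 7 : 0;
	lsb = (lsb + 1) & ~1;		/* round up to an even rotation */

	*residual = val & ~(0xffu << lsb);
	return val & (0xffu << lsb);
}
```

Each group immediate can then be applied with a single ALU instruction; the number of groups a relocation sequence uses is what bounds its total reach, which is how the combined range mentioned in the new Kconfig help text comes about.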
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/Kconfig                 | 13 ++++++++++++-
 arch/arm/include/asm/assembler.h |  8 ++++----
 arch/arm/include/asm/current.h   |  8 ++++----
 arch/arm/include/asm/percpu.h    |  4 ++--
 arch/arm/kernel/module.c         |  7 ++++++-
 5 files changed, 28 insertions(+), 12 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 359a3b85c8b3..70ab8d807032 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -128,7 +128,7 @@ config ARM
 	select RTC_LIB
 	select SYS_SUPPORTS_APM_EMULATION
 	select THREAD_INFO_IN_TASK
-	select HAVE_ARCH_VMAP_STACK if MMU && (!LD_IS_LLD || LLD_VERSION >= 140000)
+	select HAVE_ARCH_VMAP_STACK if MMU && ARM_HAS_GROUP_RELOCS
 	select TRACE_IRQFLAGS_SUPPORT if !CPU_V7M
 	# Above selects are sorted alphabetically; please add new ones
 	# according to that.  Thanks.
@@ -140,6 +140,17 @@ config ARM
 	  Europe.  There is an ARM Linux project with a web page at
 	  .

+config ARM_HAS_GROUP_RELOCS
+	def_bool y
+	depends on !LD_IS_LLD || LLD_VERSION >= 140000
+	depends on !COMPILE_TEST
+	help
+	  Whether or not to use R_ARM_ALU_PC_Gn or R_ARM_LDR_PC_Gn group
+	  relocations, which have been around for a long time, but were not
+	  supported in LLD until version 14. The combined range is -/+ 256 MiB,
+	  which is usually sufficient, but not for allyesconfig, so we disable
+	  this feature when doing compile testing.
+
 config ARM_HAS_SG_CHAIN
 	bool

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 59d7b9e81934..9998718a49ca 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -656,8 +656,8 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 	.macro		__ldst_va, op, reg, tmp, sym, cond
 #if __LINUX_ARM_ARCH__ >= 7 || \
-    (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) || \
-    (defined(CONFIG_LD_IS_LLD) && CONFIG_LLD_VERSION < 140000)
+    !defined(CONFIG_ARM_HAS_GROUP_RELOCS) || \
+    (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS))
 	mov_l		\tmp, \sym, \cond
 	\op\cond	\reg, [\tmp]
 #else
@@ -716,8 +716,8 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
 	 */
 	.macro		ldr_this_cpu, rd:req, sym:req, t1:req, t2:req
 #if __LINUX_ARM_ARCH__ >= 7 || \
-    (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) || \
-    (defined(CONFIG_LD_IS_LLD) && CONFIG_LLD_VERSION < 140000)
+    !defined(CONFIG_ARM_HAS_GROUP_RELOCS) || \
+    (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS))
 	this_cpu_offset	\t1
 	mov_l		\t2, \sym
 	ldr	\rd, [\t1, \t2]
diff --git a/arch/arm/include/asm/current.h b/arch/arm/include/asm/current.h
index 2f9d79214b25..131a89bbec6b 100644
--- a/arch/arm/include/asm/current.h
+++ b/arch/arm/include/asm/current.h
@@ -37,8 +37,8 @@ static inline __attribute_const__ struct task_struct *get_current(void)
 #ifdef CONFIG_CPU_V6
 	    "1:							\n\t"
 	    "	.subsection 1					\n\t"
-#if !(defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) && \
-    !(defined(CONFIG_LD_IS_LLD) && CONFIG_LLD_VERSION < 140000)
+#if defined(CONFIG_ARM_HAS_GROUP_RELOCS) && \
+    !(defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS))
 	    "2: " LOAD_SYM_ARMV6(%0, __current) "		\n\t"
 	    "	b	1b					\n\t"
 #else
@@ -55,8 +55,8 @@ static inline __attribute_const__ struct task_struct *get_current(void)
 #endif
 	    : "=r"(cur));
 #elif __LINUX_ARM_ARCH__ >= 7 || \
-    (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) || \
-    (defined(CONFIG_LD_IS_LLD) && CONFIG_LLD_VERSION < 140000)
+    !defined(CONFIG_ARM_HAS_GROUP_RELOCS) || \
+    (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS))
 	cur = __current;
 #else
 	asm(LOAD_SYM_ARMV6(%0, __current) : "=r"(cur));
diff --git a/arch/arm/include/asm/percpu.h b/arch/arm/include/asm/percpu.h
index 28961d60877d..a09034ae45a1 100644
--- a/arch/arm/include/asm/percpu.h
+++ b/arch/arm/include/asm/percpu.h
@@ -38,8 +38,8 @@ static inline unsigned long __my_cpu_offset(void)
 #ifdef CONFIG_CPU_V6
 	"1:							\n\t"
 	"	.subsection 1					\n\t"
-#if !(defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) && \
-    !(defined(CONFIG_LD_IS_LLD) && CONFIG_LLD_VERSION < 140000)
+#if defined(CONFIG_ARM_HAS_GROUP_RELOCS) && \
+    !(defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS))
 	"2: " LOAD_SYM_ARMV6(%0, __per_cpu_offset) "	\n\t"
 	"	b	1b					\n\t"
 #else
diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index 4d33a7acf617..549abcedf795 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -68,6 +68,7 @@ bool module_exit_section(const char *name)
 		strstarts(name, ".ARM.exidx.exit");
 }

+#ifdef CONFIG_ARM_HAS_GROUP_RELOCS
 /*
  * This implements the partitioning algorithm for group relocations as
  * documented in the ARM AArch32 ELF psABI (IHI 0044).
@@ -103,6 +104,7 @@ static u32 get_group_rem(u32 group, u32 *offset)
 	} while (group--);
 	return shift;
 }
+#endif

 int
 apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
@@ -118,7 +120,9 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
 		unsigned long loc;
 		Elf32_Sym *sym;
 		const char *symname;
+#ifdef CONFIG_ARM_HAS_GROUP_RELOCS
 		u32 shift, group = 1;
+#endif
 		s32 offset;
 		u32 tmp;
 #ifdef CONFIG_THUMB2_KERNEL
@@ -249,6 +253,7 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
 			*(u32 *)loc = __opcode_to_mem_arm(tmp);
 			break;

+#ifdef CONFIG_ARM_HAS_GROUP_RELOCS
 		case R_ARM_ALU_PC_G0_NC:
 			group = 0;
 			fallthrough;
@@ -296,7 +301,7 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
 			}
 			*(u32 *)loc = __opcode_to_mem_arm((tmp & ~0xfff) | offset);
 			break;
-
+#endif
 #ifdef CONFIG_THUMB2_KERNEL
 		case R_ARM_THM_CALL:
 		case R_ARM_THM_JUMP24:

From patchwork Tue Jan 25 09:14:48 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12723549
From: Ard Biesheuvel <ardb@kernel.org>
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Arnd Bergmann,
 Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Marc Zyngier
Subject: [PATCH v6 3/8] ARM: smp: elide HWCAP_TLS checks or __entry_task
 updates on SMP+v6
Date: Tue, 25 Jan 2022 10:14:48 +0100
Message-Id: <20220125091453.1475246-4-ardb@kernel.org>
In-Reply-To: <20220125091453.1475246-1-ardb@kernel.org>
References: <20220125091453.1475246-1-ardb@kernel.org>
List-ID: linux-hardening@vger.kernel.org

Use the SMP_ON_UP patching framework to elide HWCAP_TLS tests from the
context switch and return to userspace code paths, as SMP systems are
guaranteed to have this h/w capability. At the same time, omit the
update of __entry_task if the system is detected to be UP at runtime,
as in that case, the value is never used.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/include/asm/switch_to.h |  4 ++--
 arch/arm/include/asm/tls.h       | 22 ++++++++++++++------
 arch/arm/kernel/entry-header.S   | 17 +++++++--------
 3 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/arch/arm/include/asm/switch_to.h b/arch/arm/include/asm/switch_to.h
index a482c99934ff..f67ae946a3c6 100644
--- a/arch/arm/include/asm/switch_to.h
+++ b/arch/arm/include/asm/switch_to.h
@@ -3,6 +3,7 @@
 #define __ASM_ARM_SWITCH_TO_H

 #include
+#include

 /*
  * For v7 SMP cores running a preemptible kernel we may be pre-empted
@@ -40,8 +41,7 @@ static inline void set_ti_cpu(struct task_struct *p)
 do {									\
 	__complete_pending_tlbi();					\
 	set_ti_cpu(next);						\
-	if (IS_ENABLED(CONFIG_CURRENT_POINTER_IN_TPIDRURO) ||		\
-	    IS_ENABLED(CONFIG_SMP))					\
+	if (IS_ENABLED(CONFIG_CURRENT_POINTER_IN_TPIDRURO) || is_smp())	\
 		__this_cpu_write(__entry_task, next);			\
 	last = __switch_to(prev,task_thread_info(prev), task_thread_info(next)); \
 } while (0)
diff --git a/arch/arm/include/asm/tls.h b/arch/arm/include/asm/tls.h
index d712c170c095..3dcd0f71a0da 100644
--- a/arch/arm/include/asm/tls.h
+++ b/arch/arm/include/asm/tls.h
@@ -18,22 +18,32 @@
 	.endm

 	.macro switch_tls_v6, base, tp, tpuser, tmp1, tmp2
+#ifdef CONFIG_SMP
+ALT_SMP(nop)
+ALT_UP_B(.L0_\@)
+	.subsection 1
+#endif
+.L0_\@:	ldr_va	\tmp1, elf_hwcap
 	mov	\tmp2, #0xffff0fff
 	tst	\tmp1, #HWCAP_TLS		@ hardware TLS available?
 	streq	\tp, [\tmp2, #-15]		@ set TLS value at 0xffff0ff0
-	mrcne	p15, 0, \tmp2, c13, c0, 2	@ get the user r/w register
-#ifndef CONFIG_SMP
-	mcrne	p15, 0, \tp, c13, c0, 3	@ yes, set TLS register
+	beq	.L2_\@
+	mcr	p15, 0, \tp, c13, c0, 3	@ yes, set TLS register
+#ifdef CONFIG_SMP
+	b	.L1_\@
+	.previous
 #endif
-	mcrne	p15, 0, \tpuser, c13, c0, 2	@ set user r/w register
-	strne	\tmp2, [\base, #TI_TP_VALUE + 4] @ save it
+.L1_\@:	switch_tls_v6k \base, \tp, \tpuser, \tmp1, \tmp2
+.L2_\@:
 	.endm

 	.macro switch_tls_software, base, tp, tpuser, tmp1, tmp2
 	mov	\tmp1, #0xffff0fff
 	str	\tp, [\tmp1, #-15]	@ set TLS value at 0xffff0ff0
 	.endm
+#else
+#include
 #endif

 #ifdef CONFIG_TLS_REG_EMUL
@@ -44,7 +54,7 @@
 #elif defined(CONFIG_CPU_V6)
 #define tls_emu		0
 #define has_tls_reg		(elf_hwcap & HWCAP_TLS)
-#define defer_tls_reg_update	IS_ENABLED(CONFIG_SMP)
+#define defer_tls_reg_update	is_smp()
 #define switch_tls	switch_tls_v6
 #elif defined(CONFIG_CPU_32v6K)
 #define tls_emu		0
diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
index cb82ff5adec1..9a1dc142f782 100644
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -292,21 +292,18 @@

 	.macro	restore_user_regs, fast = 0, offset = 0
-#if defined(CONFIG_CPU_32v6K) || defined(CONFIG_SMP)
-#if defined(CONFIG_CPU_V6) && defined(CONFIG_SMP)
-ALT_SMP(b	.L1_\@	)
-ALT_UP(	nop	)
-	ldr_va	r1, elf_hwcap
-	tst	r1, #HWCAP_TLS		@ hardware TLS available?
-	beq	.L2_\@
-.L1_\@:
+#if defined(CONFIG_CPU_32v6K) && \
+    (!defined(CONFIG_CPU_V6) || defined(CONFIG_SMP))
+#ifdef CONFIG_CPU_V6
+ALT_SMP(nop)
+ALT_UP_B(.L1_\@)
 #endif
 	@ The TLS register update is deferred until return to user space so we
 	@ can use it for other things while running in the kernel
-	get_thread_info r1
+	mrc	p15, 0, r1, c13, c0, 3	@ get current_thread_info pointer
 	ldr	r1, [r1, #TI_TP_VALUE]
 	mcr	p15, 0, r1, c13, c0, 3	@ set TLS register
-.L2_\@:
+.L1_\@:
 #endif

 	uaccess_enable r1, isb=0

From patchwork Tue Jan 25 09:14:49 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12723552
From: Ard Biesheuvel <ardb@kernel.org>
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Arnd Bergmann,
 Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Marc Zyngier
Subject: [PATCH v6 4/8] ARM: entry: avoid clobbering R9 in IRQ handler
Date: Tue, 25 Jan 2022 10:14:49 +0100
Message-Id: <20220125091453.1475246-5-ardb@kernel.org>
In-Reply-To: <20220125091453.1475246-1-ardb@kernel.org>
References: <20220125091453.1475246-1-ardb@kernel.org>
List-ID: linux-hardening@vger.kernel.org

Avoid using R9 in the IRQ handler code, as the entry code uses it for
tsk, and expects it to remain untouched between the IRQ entry and exit
code.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/kernel/entry-armv.S | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index b58bda51e4b8..038aabb6578f 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -38,11 +38,10 @@
 #ifdef CONFIG_UNWINDER_ARM
 	mov	fpreg, sp		@ Preserve original SP
 #else
-	mov	r8, fp			@ Preserve original FP
-	mov	r9, sp			@ Preserve original SP
+	mov	r7, fp			@ Preserve original FP
+	mov	r8, sp			@ Preserve original SP
 #endif
 	ldr_this_cpu sp, irq_stack_ptr, r2, r3
-
 	.if	\from_user == 0
 UNWIND(	.setfp	fpreg, sp	)
 	@
@@ -82,8 +81,8 @@ UNWIND(	.setfp	fpreg, sp	)
 #ifdef CONFIG_UNWINDER_ARM
 	mov	sp, fpreg		@ Restore original SP
 #else
-	mov	fp, r8			@ Restore original FP
-	mov	sp, r9			@ Restore original SP
+	mov	fp, r7			@ Restore original FP
+	mov	sp, r8			@ Restore original SP
 #endif // CONFIG_UNWINDER_ARM
 #endif // CONFIG_IRQSTACKS
 	.endm

From patchwork Tue Jan 25 09:14:50 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12723555
From: Ard Biesheuvel <ardb@kernel.org>
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Arnd Bergmann,
 Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Marc Zyngier
Subject: [PATCH v6 5/8] ARM: mm: make vmalloc_seq handling SMP safe
Date: Tue, 25 Jan 2022 10:14:50 +0100
Message-Id: <20220125091453.1475246-6-ardb@kernel.org>
In-Reply-To: <20220125091453.1475246-1-ardb@kernel.org>
References: <20220125091453.1475246-1-ardb@kernel.org>
List-ID: linux-hardening@vger.kernel.org

Rework the vmalloc_seq handling so it can be used safely under SMP, as
we started using it to ensure that vmap'ed stacks are guaranteed to be
mapped by the active mm before switching to a task, and here we need to
ensure that changes to the page tables are visible to other CPUs when
they observe a change in the sequence count.

Since LPAE needs none of this, fold a check against it into the
vmalloc_seq counter check after breaking it out into a separate static
inline helper.

Given that vmap'ed stacks are now also supported on !SMP configurations,
let's drop the WARN() that could potentially now fire spuriously.
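[Editorial note: the counter scheme described above can be modeled in a few lines of C11. This is a minimal sketch with invented names — `kernel_update` stands in for arch_sync_kernel_mappings() bumping init_mm's counter, `mm_sync` for __check_vmalloc_seq() — and only the release/acquire pairing mirrors the patch.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

/*
 * Miniature model of the vmalloc_seq scheme: a global "kernel" table
 * plus per-mm copies kept coherent via a sequence counter.
 */
enum { NENT = 8 };

static int kernel_tbl[NENT];
static atomic_int kernel_seq;

struct mm { int tbl[NENT]; atomic_int seq; };

static void kernel_update(int idx, int val)
{
	kernel_tbl[idx] = val;
	/* release: the table write above is ordered before the bump */
	atomic_fetch_add_explicit(&kernel_seq, 1, memory_order_release);
}

static int mm_is_stale(struct mm *mm)
{
	return atomic_load_explicit(&mm->seq, memory_order_relaxed) !=
	       atomic_load_explicit(&kernel_seq, memory_order_relaxed);
}

static void mm_sync(struct mm *mm)
{
	int seq;

	do {
		/* acquire pairs with the release in kernel_update() */
		seq = atomic_load_explicit(&kernel_seq, memory_order_acquire);
		memcpy(mm->tbl, kernel_tbl, sizeof(kernel_tbl));
		atomic_store_explicit(&mm->seq, seq, memory_order_release);
		/* retry if the kernel table moved on while we were copying */
	} while (seq != atomic_load_explicit(&kernel_seq, memory_order_acquire));
}
```

Note how, as in the patch's __check_vmalloc_seq(), the copy is bracketed by two reads of the global counter so a concurrent update forces another pass, and the per-mm counter is only published with release semantics once the copy it describes is complete.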
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/include/asm/mmu.h         |  2 +-
 arch/arm/include/asm/mmu_context.h | 22 +++++++++++++++--
 arch/arm/include/asm/page.h        |  3 +--
 arch/arm/kernel/traps.c            | 25 ++++++--------------
 arch/arm/mm/context.c              |  3 +--
 arch/arm/mm/ioremap.c              | 18 ++++++++------
 6 files changed, 41 insertions(+), 32 deletions(-)

diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h
index 1592a4264488..e049723840d3 100644
--- a/arch/arm/include/asm/mmu.h
+++ b/arch/arm/include/asm/mmu.h
@@ -10,7 +10,7 @@ typedef struct {
 #else
 	int		switch_pending;
 #endif
-	unsigned int	vmalloc_seq;
+	atomic_t	vmalloc_seq;
 	unsigned long	sigpage;
 #ifdef CONFIG_VDSO
 	unsigned long	vdso;
diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
index 84e58956fcab..db2cb06aa8cf 100644
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -23,6 +23,16 @@

 void __check_vmalloc_seq(struct mm_struct *mm);

+#ifdef CONFIG_MMU
+static inline void check_vmalloc_seq(struct mm_struct *mm)
+{
+	if (!IS_ENABLED(CONFIG_ARM_LPAE) &&
+	    unlikely(atomic_read(&mm->context.vmalloc_seq) !=
+		     atomic_read(&init_mm.context.vmalloc_seq)))
+		__check_vmalloc_seq(mm);
+}
+#endif
+
 #ifdef CONFIG_CPU_HAS_ASID
 void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk);
@@ -52,8 +62,7 @@ static inline void a15_erratum_get_cpumask(int this_cpu, struct mm_struct *mm,
 static inline void check_and_switch_context(struct mm_struct *mm,
 					    struct task_struct *tsk)
 {
-	if (unlikely(mm->context.vmalloc_seq != init_mm.context.vmalloc_seq))
-		__check_vmalloc_seq(mm);
+	check_vmalloc_seq(mm);

 	if (irqs_disabled())
 		/*
@@ -129,6 +138,15 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 #endif
 }

+#ifdef CONFIG_VMAP_STACK
+static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+{
+	if (mm != &init_mm)
+		check_vmalloc_seq(mm);
+}
+#define enter_lazy_tlb enter_lazy_tlb
+#endif
+
 #include

 #endif
diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index 7b871ed99ccf..5fcc8a600e36 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -147,11 +147,10 @@ extern void copy_page(void *to, const void *from);
 #include
 #else
 #include
-#endif
-
 #ifdef CONFIG_VMAP_STACK
 #define ARCH_PAGE_TABLE_SYNC_MASK	PGTBL_PMD_MODIFIED
 #endif
+#endif

 #endif /* CONFIG_MMU */
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index 3f38357efc46..08612032aefe 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -885,6 +885,7 @@ asmlinkage void handle_bad_stack(struct pt_regs *regs)
 	die("kernel stack overflow", regs, 0);
 }

+#ifndef CONFIG_ARM_LPAE
 /*
  * Normally, we rely on the logic in do_translation_fault() to update stale PMD
  * entries covering the vmalloc space in a task's page tables when it first
@@ -895,26 +896,14 @@ asmlinkage void handle_bad_stack(struct pt_regs *regs)
  * So we need to ensure that these PMD entries are up to date *before* the MM
  * switch. As we already have some logic in the MM switch path that takes care
  * of this, let's trigger it by bumping the counter every time the core vmalloc
- * code modifies a PMD entry in the vmalloc region.
+ * code modifies a PMD entry in the vmalloc region. Use release semantics on
+ * the store so that other CPUs observing the counter's new value are
+ * guaranteed to see the updated page table entries as well.
  */
 void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 {
-	if (start > VMALLOC_END || end < VMALLOC_START)
-		return;
-
-	/*
-	 * This hooks into the core vmalloc code to receive notifications of
-	 * any PMD level changes that have been made to the kernel page tables.
-	 * This means it should only be triggered once for every MiB worth of
-	 * vmalloc space, given that we don't support huge vmalloc/vmap on ARM,
-	 * and that kernel PMD level table entries are rarely (if ever)
-	 * updated.
- * - * This means that the counter is going to max out at ~250 for the - * typical case. If it overflows, something entirely unexpected has - * occurred so let's throw a warning if that happens. - */ - WARN_ON(++init_mm.context.vmalloc_seq == UINT_MAX); + if (start < VMALLOC_END && end > VMALLOC_START) + atomic_inc_return_release(&init_mm.context.vmalloc_seq); } - +#endif #endif diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c index 48091870db89..4204ffa2d104 100644 --- a/arch/arm/mm/context.c +++ b/arch/arm/mm/context.c @@ -240,8 +240,7 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk) unsigned int cpu = smp_processor_id(); u64 asid; - if (unlikely(mm->context.vmalloc_seq != init_mm.context.vmalloc_seq)) - __check_vmalloc_seq(mm); + check_vmalloc_seq(mm); /* * We cannot update the pgd and the ASID atomicly with classic diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c index 6e830b9418c9..8963c8c63471 100644 --- a/arch/arm/mm/ioremap.c +++ b/arch/arm/mm/ioremap.c @@ -117,16 +117,21 @@ EXPORT_SYMBOL(ioremap_page); void __check_vmalloc_seq(struct mm_struct *mm) { - unsigned int seq; + int seq; do { - seq = init_mm.context.vmalloc_seq; + seq = atomic_read(&init_mm.context.vmalloc_seq); memcpy(pgd_offset(mm, VMALLOC_START), pgd_offset_k(VMALLOC_START), sizeof(pgd_t) * (pgd_index(VMALLOC_END) - pgd_index(VMALLOC_START))); - mm->context.vmalloc_seq = seq; - } while (seq != init_mm.context.vmalloc_seq); + /* + * Use a store-release so that other CPUs that observe the + * counter's new value are guaranteed to see the results of the + * memcpy as well. + */ + atomic_set_release(&mm->context.vmalloc_seq, seq); + } while (seq != atomic_read(&init_mm.context.vmalloc_seq)); } #if !defined(CONFIG_SMP) && !defined(CONFIG_ARM_LPAE) @@ -157,7 +162,7 @@ static void unmap_area_sections(unsigned long virt, unsigned long size) * Note: this is still racy on SMP machines. 
*/ pmd_clear(pmdp); - init_mm.context.vmalloc_seq++; + atomic_inc_return_release(&init_mm.context.vmalloc_seq); /* * Free the page table, if there was one. @@ -174,8 +179,7 @@ static void unmap_area_sections(unsigned long virt, unsigned long size) * Ensure that the active_mm is up to date - we want to * catch any use-after-iounmap cases. */ - if (current->active_mm->context.vmalloc_seq != init_mm.context.vmalloc_seq) - __check_vmalloc_seq(current->active_mm); + check_vmalloc_seq(current->active_mm); flush_tlb_kernel_range(virt, end); } From patchwork Tue Jan 25 09:14:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12723551 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8955AC433FE for ; Tue, 25 Jan 2022 09:27:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1573837AbiAYJ1X (ORCPT ); Tue, 25 Jan 2022 04:27:23 -0500 Received: from dfw.source.kernel.org ([139.178.84.217]:49102 "EHLO dfw.source.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345927AbiAYJPT (ORCPT ); Tue, 25 Jan 2022 04:15:19 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 115C961521 for ; Tue, 25 Jan 2022 09:15:18 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6531AC36AE3; Tue, 25 Jan 2022 09:15:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1643102117; bh=HQZi1kpAiUghsUmiFU/5moPXJddGBhsChA//yx43VSQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
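[Editorial note: the retry loop in __check_vmalloc_seq() in the preceding patch is essentially a generation-counter handshake: the writer bumps init_mm's counter with release semantics after touching the master page tables, and the reader copies the tables and re-checks the counter until it observes a stable value. A stand-alone C11 sketch of that pattern follows; all names are hypothetical, and plain arrays stand in for the PGD entries — this is not kernel code.]

```c
#include <stdatomic.h>
#include <string.h>

/*
 * Hypothetical model of the vmalloc_seq scheme: a writer bumps a generation
 * counter with release semantics after updating the master table, and a
 * reader copies the master table into its own copy, retrying until the
 * counter is stable across the copy.
 */

#define TABLE_SIZE 16

static int master_table[TABLE_SIZE];	/* stands in for init_mm's PGD entries */
static atomic_int master_seq;		/* stands in for init_mm.context.vmalloc_seq */

/* Writer side: update an entry, then publish with a release increment. */
static void update_master(int idx, int val)
{
	master_table[idx] = val;
	atomic_fetch_add_explicit(&master_seq, 1, memory_order_release);
}

struct local_ctx {
	int table[TABLE_SIZE];
	atomic_int seq;
};

/*
 * Reader side, mirroring __check_vmalloc_seq(): copy the master table, record
 * the generation we copied, and retry if the master moved on underneath us.
 */
static void check_seq(struct local_ctx *ctx)
{
	int seq;

	do {
		seq = atomic_load_explicit(&master_seq, memory_order_acquire);
		memcpy(ctx->table, master_table, sizeof(master_table));
		/* Release: anyone observing ctx->seq also sees the copy. */
		atomic_store_explicit(&ctx->seq, seq, memory_order_release);
	} while (seq != atomic_load_explicit(&master_seq, memory_order_acquire));
}
```

Single-threaded this converges in one pass; under concurrency the acquire/release pairing guarantees that a reader who sees the new counter value also sees the table updates that preceded it.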
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Marc Zyngier
Subject: [PATCH v6 6/8] ARM: iop: make iop_handle_irq() static
Date: Tue, 25 Jan 2022 10:14:51 +0100
Message-Id: <20220125091453.1475246-7-ardb@kernel.org>
In-Reply-To: <20220125091453.1475246-1-ardb@kernel.org>
References: <20220125091453.1475246-1-ardb@kernel.org>
X-Mailing-List: linux-hardening@vger.kernel.org

The build bots complain about iop_handle_irq() not being declared, so
let's make it static instead.
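[Editorial note: the fix boils down to giving a function internal linkage when its only consumers live in the same translation unit, for example because it is reached through a registered function pointer; the missing-prototype warning the build bots trip over only fires for extern functions that lack a prior declaration. A minimal hypothetical single-file sketch, not the actual iop32x code:]

```c
/*
 * Hypothetical sketch: a handler only ever referenced inside this
 * translation unit (here, through a function pointer) needs no global
 * symbol. Internal linkage documents that and silences
 * -Wmissing-prototypes, which applies only to extern functions.
 */
struct regs {
	unsigned int mask;
};

static unsigned int last_mask;	/* records what the handler last saw */

/* static: no prototype required, no symbol exported. */
static void iop_style_handler(struct regs *regs)
{
	last_mask = regs->mask;
}

/* The only consumer: an irq-entry hook holding the handler's address. */
static void (*handle_arch_irq)(struct regs *) = iop_style_handler;
```

Had iop_style_handler stayed extern, every caller-visible translation unit would need a declaration in a shared header; static removes that obligation entirely.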
Signed-off-by: Ard Biesheuvel
---
 arch/arm/mach-iop32x/irq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/mach-iop32x/irq.c b/arch/arm/mach-iop32x/irq.c
index b820839eaae8..6dca7e97d81f 100644
--- a/arch/arm/mach-iop32x/irq.c
+++ b/arch/arm/mach-iop32x/irq.c
@@ -59,7 +59,7 @@ struct irq_chip ext_chip = {
 	.irq_unmask = iop32x_irq_unmask,
 };
 
-void iop_handle_irq(struct pt_regs *regs)
+static void iop_handle_irq(struct pt_regs *regs)
 {
 	u32 mask;

From patchwork Tue Jan 25 09:14:52 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12723553
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Marc Zyngier
Subject: [PATCH v6 7/8] ARM: drop pointless SMP check on secondary startup path
Date: Tue, 25 Jan 2022 10:14:52 +0100
Message-Id: <20220125091453.1475246-8-ardb@kernel.org>
In-Reply-To: <20220125091453.1475246-1-ardb@kernel.org>
References: <20220125091453.1475246-1-ardb@kernel.org>
X-Mailing-List: linux-hardening@vger.kernel.org

Only SMP systems use the secondary startup path by definition, so there
is no need for SMP conditionals there.
Signed-off-by: Ard Biesheuvel
---
 arch/arm/kernel/smp.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 951559e5bea3..e34efa96cea1 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -405,11 +405,6 @@ static void smp_store_cpu_info(unsigned int cpuid)
 
 static void set_current(struct task_struct *cur)
 {
-	if (!IS_ENABLED(CONFIG_CURRENT_POINTER_IN_TPIDRURO) && !is_smp()) {
-		__current = cur;
-		return;
-	}
-
 	/* Set TPIDRURO */
 	asm("mcr p15, 0, %0, c13, c0, 3" :: "r"(cur) : "memory");
 }

From patchwork Tue Jan 25 09:14:53 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12723554
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Marc Zyngier
Subject: [PATCH v6 8/8] ARM: make get_current() and __my_cpu_offset() __always_inline
Date: Tue, 25 Jan 2022 10:14:53 +0100
Message-Id: <20220125091453.1475246-9-ardb@kernel.org>
In-Reply-To: <20220125091453.1475246-1-ardb@kernel.org>
References: <20220125091453.1475246-1-ardb@kernel.org>
X-Mailing-List: linux-hardening@vger.kernel.org

The get_current() and __my_cpu_offset() accessors evaluate to only a
single instruction emitted inline, but due to the size of the asm string
that is created for SMP+v6 configurations, the compiler assumes
otherwise, and may emit the functions out of line instead. So use
__always_inline to avoid this.

Signed-off-by: Ard Biesheuvel
Reviewed-by: Nick Desaulniers
---
 arch/arm/include/asm/current.h | 2 +-
 arch/arm/include/asm/percpu.h  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/current.h b/arch/arm/include/asm/current.h
index 131a89bbec6b..1e1178bf176d 100644
--- a/arch/arm/include/asm/current.h
+++ b/arch/arm/include/asm/current.h
@@ -14,7 +14,7 @@ struct task_struct;
 
 extern struct task_struct *__current;
 
-static inline __attribute_const__ struct task_struct *get_current(void)
+static __always_inline __attribute_const__ struct task_struct *get_current(void)
 {
 	struct task_struct *cur;
 
diff --git a/arch/arm/include/asm/percpu.h b/arch/arm/include/asm/percpu.h
index a09034ae45a1..7545c87c251f 100644
--- a/arch/arm/include/asm/percpu.h
+++ b/arch/arm/include/asm/percpu.h
@@ -25,7 +25,7 @@ static inline void set_my_cpu_offset(unsigned long off)
 	asm volatile("mcr p15, 0, %0, c13, c0, 4" : : "r" (off) : "memory");
 }
 
-static inline unsigned long __my_cpu_offset(void)
+static __always_inline unsigned long __my_cpu_offset(void)
 {
 	unsigned long off;
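[Editorial note: the heuristic being worked around is that GCC and Clang estimate inlining cost partly from the textual size of an asm statement, not from the single instruction it emits, so a long asm string can push a one-instruction accessor past the inlining threshold. A stand-alone sketch of the technique, with a hypothetical accessor rather than the kernel's actual mrc-based one:]

```c
/*
 * Hypothetical GCC/Clang sketch: __always_inline overrides the inliner's
 * size-based cost estimate, guaranteeing the accessor is folded into its
 * callers even when its asm text looks "large".
 */
#define __always_inline inline __attribute__((__always_inline__))

static __always_inline unsigned long read_counter(const unsigned long *p)
{
	unsigned long v;

	/*
	 * Stands in for the single-instruction accessor; a real kernel build
	 * would read TPIDRURO/TPIDRPRW here via a (possibly long) asm string.
	 * The "0" constraint ties the input to the output register, so the
	 * empty asm simply passes *p through.
	 */
	__asm__ volatile("" : "=r"(v) : "0"(*p));
	return v;
}
```

With plain `inline`, the compiler remains free to emit an out-of-line copy and call it; `__always_inline` removes that freedom, which matters for accessors that must be cheap on every path.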