From patchwork Mon Jan 24 17:47:13 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722567
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 01/32] ARM: riscpc: drop support for IOMD_IRQREQC/IOMD_IRQREQD IRQ groups
Date: Mon, 24 Jan 2022 18:47:13 +0100
Message-Id: <20220124174744.1054712-2-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
X-Mailing-List: linux-hardening@vger.kernel.org

Neither IOMD_IRQREQC nor IOMD_IRQREQD is ever defined, so any conditionally compiled code that depends on them is dead code and can be removed.
Suggested-by: Russell King Signed-off-by: Ard Biesheuvel --- arch/arm/include/asm/hardware/entry-macro-iomd.S | 47 -------------------- 1 file changed, 47 deletions(-) diff --git a/arch/arm/include/asm/hardware/entry-macro-iomd.S b/arch/arm/include/asm/hardware/entry-macro-iomd.S index f7692731e514..81441dfa5282 100644 --- a/arch/arm/include/asm/hardware/entry-macro-iomd.S +++ b/arch/arm/include/asm/hardware/entry-macro-iomd.S @@ -24,16 +24,6 @@ ldrbeq \irqstat, [\base, #IOMD_IRQREQA] @ get low priority addeq \tmp, \tmp, #256 @ irq_prio_d table size teqeq \irqstat, #0 -#ifdef IOMD_IRQREQC - ldrbeq \irqstat, [\base, #IOMD_IRQREQC] - addeq \tmp, \tmp, #256 @ irq_prio_l table size - teqeq \irqstat, #0 -#endif -#ifdef IOMD_IRQREQD - ldrbeq \irqstat, [\base, #IOMD_IRQREQD] - addeq \tmp, \tmp, #256 @ irq_prio_lc table size - teqeq \irqstat, #0 -#endif 2406: ldrbne \irqnr, [\tmp, \irqstat] @ get IRQ number .endm @@ -92,40 +82,3 @@ irq_prio_l: .byte 0, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3 .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 -#ifdef IOMD_IRQREQC -irq_prio_lc: .byte 24,24,25,24,26,26,26,26,27,27,27,27,27,27,27,27 - .byte 28,24,25,24,26,26,26,26,27,27,27,27,27,27,27,27 - .byte 29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29 - .byte 29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29 - .byte 30,30,30,30,30,30,30,30,27,27,27,27,27,27,27,27 - .byte 30,30,30,30,30,30,30,30,27,27,27,27,27,27,27,27 - .byte 29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29 - .byte 29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29 - .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31 - .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31 - .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31 - .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31 - .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31 - .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31 - .byte 
31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31 - .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31 -#endif -#ifdef IOMD_IRQREQD -irq_prio_ld: .byte 40,40,41,40,42,42,42,42,43,43,43,43,43,43,43,43 - .byte 44,40,41,40,42,42,42,42,43,43,43,43,43,43,43,43 - .byte 45,45,45,45,45,45,45,45,45,45,45,45,45,45,45,45 - .byte 45,45,45,45,45,45,45,45,45,45,45,45,45,45,45,45 - .byte 46,46,46,46,46,46,46,46,43,43,43,43,43,43,43,43 - .byte 46,46,46,46,46,46,46,46,43,43,43,43,43,43,43,43 - .byte 45,45,45,45,45,45,45,45,45,45,45,45,45,45,45,45 - .byte 45,45,45,45,45,45,45,45,45,45,45,45,45,45,45,45 - .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47 - .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47 - .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47 - .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47 - .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47 - .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47 - .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47 - .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47 -#endif -

From patchwork Mon Jan 24 17:47:14 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722568
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 02/32] ARM: riscpc: use GENERIC_IRQ_MULTI_HANDLER
Date: Mon, 24 Jan 2022 18:47:14 +0100
Message-Id: <20220124174744.1054712-3-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
X-Mailing-List: linux-hardening@vger.kernel.org

From: Arnd Bergmann

This is one of the last platforms using the old entry path. While this code path is spread over a few files, it is fairly straightforward to convert it into an equivalent C version, leaving the existing algorithm and all the priority handling the same. Unlike most irqchip drivers, this means reading the status register(s) in a loop and always handling the highest-priority IRQ first.

The IOMD_IRQREQC and IOMD_IRQREQD registers are not actually used here, but I left the code in place for the time being, to keep the conversion as direct as possible. It could be removed in a cleanup on top.
Signed-off-by: Arnd Bergmann [ardb: drop obsolete IOMD_IRQREQC/IOMD_IRQREQD handling] Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/Kconfig | 1 + arch/arm/include/asm/hardware/entry-macro-iomd.S | 84 ----------------- arch/arm/mach-rpc/fiq.S | 5 +- arch/arm/mach-rpc/include/mach/entry-macro.S | 13 --- arch/arm/mach-rpc/irq.c | 95 ++++++++++++++++++++ 5 files changed, 99 insertions(+), 99 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index fabe39169b12..40193ec76f1a 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -443,6 +443,7 @@ config ARCH_RPC select ARM_HAS_SG_CHAIN select CPU_SA110 select FIQ + select GENERIC_IRQ_MULTI_HANDLER select HAVE_PATA_PLATFORM select ISA_DMA_API select LEGACY_TIMER_TICK diff --git a/arch/arm/include/asm/hardware/entry-macro-iomd.S b/arch/arm/include/asm/hardware/entry-macro-iomd.S deleted file mode 100644 index 81441dfa5282..000000000000 --- a/arch/arm/include/asm/hardware/entry-macro-iomd.S +++ /dev/null @@ -1,84 +0,0 @@ -/* - * arch/arm/include/asm/hardware/entry-macro-iomd.S - * - * Low-level IRQ helper macros for IOC/IOMD based platforms - * - * This file is licensed under the terms of the GNU General Public - * License version 2. This program is licensed "as is" without any - * warranty of any kind, whether express or implied. - */ - -/* IOC / IOMD based hardware */ -#include - - .macro get_irqnr_and_base, irqnr, irqstat, base, tmp - ldrb \irqstat, [\base, #IOMD_IRQREQB] @ get high priority first - ldr \tmp, =irq_prio_h - teq \irqstat, #0 -#ifdef IOMD_BASE - ldrbeq \irqstat, [\base, #IOMD_DMAREQ] @ get dma - addeq \tmp, \tmp, #256 @ irq_prio_h table size - teqeq \irqstat, #0 - bne 2406f -#endif - ldrbeq \irqstat, [\base, #IOMD_IRQREQA] @ get low priority - addeq \tmp, \tmp, #256 @ irq_prio_d table size - teqeq \irqstat, #0 -2406: ldrbne \irqnr, [\tmp, \irqstat] @ get IRQ number - .endm - -/* - * Interrupt table (incorporates priority). 
Please note that we - * rely on the order of these tables (see above code). - */ - .align 5 -irq_prio_h: .byte 0, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 12, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10 - .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10 -#ifdef IOMD_BASE -irq_prio_d: .byte 0,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 20,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 23,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 23,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 - .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16 -#endif -irq_prio_l: .byte 0, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3 - 
.byte 4, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3 - .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 - .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 - .byte 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3 - .byte 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3 - .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 - .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 - .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 - .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 - .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 - .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 - .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 - .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 - .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 - .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 diff --git a/arch/arm/mach-rpc/fiq.S b/arch/arm/mach-rpc/fiq.S index 0de83e9b0b39..087bdf4bc093 100644 --- a/arch/arm/mach-rpc/fiq.S +++ b/arch/arm/mach-rpc/fiq.S @@ -2,10 +2,11 @@ #include #include #include -#include - .text + .equ ioc_base_high, IOC_BASE & 0xff000000 + .equ ioc_base_low, IOC_BASE & 0x00ff0000 + .text .global rpc_default_fiq_end ENTRY(rpc_default_fiq_start) mov r12, #ioc_base_high diff --git a/arch/arm/mach-rpc/include/mach/entry-macro.S b/arch/arm/mach-rpc/include/mach/entry-macro.S deleted file mode 100644 index a6d1a9f4bb79..000000000000 --- a/arch/arm/mach-rpc/include/mach/entry-macro.S +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#include -#include - - .equ ioc_base_high, IOC_BASE & 0xff000000 - .equ ioc_base_low, IOC_BASE & 0x00ff0000 - - .macro get_irqnr_preamble, base, tmp - mov \base, #ioc_base_high @ point at IOC - .if ioc_base_low - orr \base, \base, #ioc_base_low - .endif - .endm diff --git a/arch/arm/mach-rpc/irq.c b/arch/arm/mach-rpc/irq.c index 803aeb126f0e..dc29384b6ef8 100644 --- a/arch/arm/mach-rpc/irq.c +++ b/arch/arm/mach-rpc/irq.c @@ -14,6 +14,99 @@ #define CLR 0x04 #define MASK 0x08 +static 
const u8 irq_prio_h[256] = { + 0, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10, + 12, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10, + 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10, + 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10, + 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10, + 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10, + 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10, + 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10, + 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10, + 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10, + 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10, + 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10, + 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10, + 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10, + 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10, + 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10, +}; + +static const u8 irq_prio_d[256] = { + 0,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 20,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 23,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 23,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, + 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16, +}; + +static const u8 irq_prio_l[256] = { + 0, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, + 4, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, + 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, + 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, + 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3, + 6, 
6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3, + 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, + 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, +}; + +static int iomd_get_irq_nr(void) +{ + int irq; + u8 reg; + + /* get highest priority first */ + reg = readb(IOC_BASE + IOMD_IRQREQB); + irq = irq_prio_h[reg]; + if (irq) + return irq; + + /* get DMA */ + reg = readb(IOC_BASE + IOMD_DMAREQ); + irq = irq_prio_d[reg]; + if (irq) + return irq; + + /* get low priority */ + reg = readb(IOC_BASE + IOMD_IRQREQA); + irq = irq_prio_l[reg]; + if (irq) + return irq; + return 0; +} + +static void iomd_handle_irq(struct pt_regs *regs) +{ + int irq; + + do { + irq = iomd_get_irq_nr(); + if (irq) + generic_handle_irq(irq); + } while (irq); +} + static void __iomem *iomd_get_base(struct irq_data *d) { void *cd = irq_data_get_irq_chip_data(d); @@ -82,6 +175,8 @@ void __init rpc_init_irq(void) set_fiq_handler(&rpc_default_fiq_start, &rpc_default_fiq_end - &rpc_default_fiq_start); + set_handle_irq(iomd_handle_irq); + for (irq = 0; irq < NR_IRQS; irq++) { clr = IRQ_NOREQUEST; set = 0; From patchwork Mon Jan 24 17:47:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722569 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2F52DC433EF for ; Mon, 24 Jan 2022 17:48:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) 
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 03/32] ARM: footbridge: use GENERIC_IRQ_MULTI_HANDLER
Date: Mon, 24 Jan 2022 18:47:15 +0100
Message-Id: <20220124174744.1054712-4-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
X-Mailing-List: linux-hardening@vger.kernel.org

From: Arnd Bergmann

Footbridge still uses the classic IRQ entry path in assembler, but this is easily converted into an equivalent C version. In this case, the correlation between IRQ numbers and bits in the status register is non-obvious, and the priorities are handled by manually checking each bit in a static order, re-reading the status register after each handled event. I moved the code into the new file and edited the syntax without changing this sequence, to keep the behavior as close as possible to what it traditionally did.
Signed-off-by: Arnd Bergmann Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M Reviewed-by: Linus Walleij --- arch/arm/Kconfig | 1 + arch/arm/mach-footbridge/common.c | 87 ++++++++++++++++ arch/arm/mach-footbridge/include/mach/entry-macro.S | 107 -------------------- 3 files changed, 88 insertions(+), 107 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 40193ec76f1a..bef5085f2ce7 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -361,6 +361,7 @@ config ARCH_FOOTBRIDGE select FOOTBRIDGE select NEED_MACH_IO_H if !MMU select NEED_MACH_MEMORY_H + select GENERIC_IRQ_MULTI_HANDLER help Support for systems based on the DC21285 companion chip ("FootBridge"), such as the Simtec CATS and the Rebel NetWinder. diff --git a/arch/arm/mach-footbridge/common.c b/arch/arm/mach-footbridge/common.c index eee095f0e2f6..322495df271d 100644 --- a/arch/arm/mach-footbridge/common.c +++ b/arch/arm/mach-footbridge/common.c @@ -27,6 +27,91 @@ #include "common.h" +#include +#include +#include + +static int dc21285_get_irq(void) +{ + void __iomem *irqstatus = (void __iomem *)CSR_IRQ_STATUS; + u32 mask = readl(irqstatus); + + if (mask & IRQ_MASK_SDRAMPARITY) + return IRQ_SDRAMPARITY; + + if (mask & IRQ_MASK_UART_RX) + return IRQ_CONRX; + + if (mask & IRQ_MASK_DMA1) + return IRQ_DMA1; + + if (mask & IRQ_MASK_DMA2) + return IRQ_DMA2; + + if (mask & IRQ_MASK_IN0) + return IRQ_IN0; + + if (mask & IRQ_MASK_IN1) + return IRQ_IN1; + + if (mask & IRQ_MASK_IN2) + return IRQ_IN2; + + if (mask & IRQ_MASK_IN3) + return IRQ_IN3; + + if (mask & IRQ_MASK_PCI) + return IRQ_PCI; + + if (mask & IRQ_MASK_DOORBELLHOST) + return IRQ_DOORBELLHOST; + + if (mask & IRQ_MASK_I2OINPOST) + return IRQ_I2OINPOST; + + if (mask & IRQ_MASK_TIMER1) + return IRQ_TIMER1; + + if (mask & IRQ_MASK_TIMER2) + return IRQ_TIMER2; + + if (mask & IRQ_MASK_TIMER3) + return IRQ_TIMER3; + + if (mask & IRQ_MASK_UART_TX) + return IRQ_CONTX; + + if (mask & IRQ_MASK_PCI_ABORT) + 
return IRQ_PCI_ABORT; + + if (mask & IRQ_MASK_PCI_SERR) + return IRQ_PCI_SERR; + + if (mask & IRQ_MASK_DISCARD_TIMER) + return IRQ_DISCARD_TIMER; + + if (mask & IRQ_MASK_PCI_DPERR) + return IRQ_PCI_DPERR; + + if (mask & IRQ_MASK_PCI_PERR) + return IRQ_PCI_PERR; + + return 0; +} + +static void dc21285_handle_irq(struct pt_regs *regs) +{ + int irq; + do { + irq = dc21285_get_irq(); + if (!irq) + break; + + generic_handle_irq(irq); + } while (1); +} + + unsigned int mem_fclk_21285 = 50000000; EXPORT_SYMBOL(mem_fclk_21285); @@ -108,6 +193,8 @@ static void __init __fb_init_irq(void) void __init footbridge_init_irq(void) { + set_handle_irq(dc21285_handle_irq); + __fb_init_irq(); if (!footbridge_cfn_mode()) diff --git a/arch/arm/mach-footbridge/include/mach/entry-macro.S b/arch/arm/mach-footbridge/include/mach/entry-macro.S deleted file mode 100644 index dabbd5c54a78..000000000000 --- a/arch/arm/mach-footbridge/include/mach/entry-macro.S +++ /dev/null @@ -1,107 +0,0 @@ -/* - * arch/arm/mach-footbridge/include/mach/entry-macro.S - * - * Low-level IRQ helper macros for footbridge-based platforms - * - * This file is licensed under the terms of the GNU General Public - * License version 2. This program is licensed "as is" without any - * warranty of any kind, whether express or implied. 
- */ -#include -#include -#include - - .equ dc21285_high, ARMCSR_BASE & 0xff000000 - .equ dc21285_low, ARMCSR_BASE & 0x00ffffff - - .macro get_irqnr_preamble, base, tmp - mov \base, #dc21285_high - .if dc21285_low - orr \base, \base, #dc21285_low - .endif - .endm - - .macro get_irqnr_and_base, irqnr, irqstat, base, tmp - ldr \irqstat, [\base, #0x180] @ get interrupts - - mov \irqnr, #IRQ_SDRAMPARITY - tst \irqstat, #IRQ_MASK_SDRAMPARITY - bne 1001f - - tst \irqstat, #IRQ_MASK_UART_RX - movne \irqnr, #IRQ_CONRX - bne 1001f - - tst \irqstat, #IRQ_MASK_DMA1 - movne \irqnr, #IRQ_DMA1 - bne 1001f - - tst \irqstat, #IRQ_MASK_DMA2 - movne \irqnr, #IRQ_DMA2 - bne 1001f - - tst \irqstat, #IRQ_MASK_IN0 - movne \irqnr, #IRQ_IN0 - bne 1001f - - tst \irqstat, #IRQ_MASK_IN1 - movne \irqnr, #IRQ_IN1 - bne 1001f - - tst \irqstat, #IRQ_MASK_IN2 - movne \irqnr, #IRQ_IN2 - bne 1001f - - tst \irqstat, #IRQ_MASK_IN3 - movne \irqnr, #IRQ_IN3 - bne 1001f - - tst \irqstat, #IRQ_MASK_PCI - movne \irqnr, #IRQ_PCI - bne 1001f - - tst \irqstat, #IRQ_MASK_DOORBELLHOST - movne \irqnr, #IRQ_DOORBELLHOST - bne 1001f - - tst \irqstat, #IRQ_MASK_I2OINPOST - movne \irqnr, #IRQ_I2OINPOST - bne 1001f - - tst \irqstat, #IRQ_MASK_TIMER1 - movne \irqnr, #IRQ_TIMER1 - bne 1001f - - tst \irqstat, #IRQ_MASK_TIMER2 - movne \irqnr, #IRQ_TIMER2 - bne 1001f - - tst \irqstat, #IRQ_MASK_TIMER3 - movne \irqnr, #IRQ_TIMER3 - bne 1001f - - tst \irqstat, #IRQ_MASK_UART_TX - movne \irqnr, #IRQ_CONTX - bne 1001f - - tst \irqstat, #IRQ_MASK_PCI_ABORT - movne \irqnr, #IRQ_PCI_ABORT - bne 1001f - - tst \irqstat, #IRQ_MASK_PCI_SERR - movne \irqnr, #IRQ_PCI_SERR - bne 1001f - - tst \irqstat, #IRQ_MASK_DISCARD_TIMER - movne \irqnr, #IRQ_DISCARD_TIMER - bne 1001f - - tst \irqstat, #IRQ_MASK_PCI_DPERR - movne \irqnr, #IRQ_PCI_DPERR - bne 1001f - - tst \irqstat, #IRQ_MASK_PCI_PERR - movne \irqnr, #IRQ_PCI_PERR -1001: - .endm - From patchwork Mon Jan 24 17:47:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722570
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 04/32] ARM: iop32x: offset IRQ numbers by 1
Date: Mon, 24 Jan 2022 18:47:16 +0100
Message-Id: <20220124174744.1054712-5-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
X-Mailing-List: linux-hardening@vger.kernel.org

From: Arnd Bergmann

iop32x is one of the last platforms to use IRQ 0, and this has apparently stopped working in a 2014 cleanup without anyone noticing. This interrupt is used for the DMA engine, so most likely this has not actually worked in the past 7 years, but it is also not essential for using this board. I'm splitting out this change from my GENERIC_IRQ_MULTI_HANDLER conversion so it can be backported if anyone cares.
Fixes: a71b092a9c68 ("ARM: Convert handle_IRQ to use __handle_domain_irq") Signed-off-by: Arnd Bergmann [ardb: take +1 offset into account in mask/unmask and init as well] Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M Reviewed-by: Linus Walleij --- arch/arm/mach-iop32x/include/mach/entry-macro.S | 2 +- arch/arm/mach-iop32x/include/mach/irqs.h | 2 +- arch/arm/mach-iop32x/irq.c | 6 +- arch/arm/mach-iop32x/irqs.h | 60 +++++++++++--------- 4 files changed, 37 insertions(+), 33 deletions(-) diff --git a/arch/arm/mach-iop32x/include/mach/entry-macro.S b/arch/arm/mach-iop32x/include/mach/entry-macro.S index 8e6766d4621e..341e5d9a6616 100644 --- a/arch/arm/mach-iop32x/include/mach/entry-macro.S +++ b/arch/arm/mach-iop32x/include/mach/entry-macro.S @@ -20,7 +20,7 @@ mrc p6, 0, \irqstat, c8, c0, 0 @ Read IINTSRC cmp \irqstat, #0 clzne \irqnr, \irqstat - rsbne \irqnr, \irqnr, #31 + rsbne \irqnr, \irqnr, #32 .endm .macro arch_ret_to_user, tmp1, tmp2 diff --git a/arch/arm/mach-iop32x/include/mach/irqs.h b/arch/arm/mach-iop32x/include/mach/irqs.h index c4e78df428e8..e09ae5f48aec 100644 --- a/arch/arm/mach-iop32x/include/mach/irqs.h +++ b/arch/arm/mach-iop32x/include/mach/irqs.h @@ -9,6 +9,6 @@ #ifndef __IRQS_H #define __IRQS_H -#define NR_IRQS 32 +#define NR_IRQS 33 #endif diff --git a/arch/arm/mach-iop32x/irq.c b/arch/arm/mach-iop32x/irq.c index 2d48bf1398c1..d1e8824cbd82 100644 --- a/arch/arm/mach-iop32x/irq.c +++ b/arch/arm/mach-iop32x/irq.c @@ -32,14 +32,14 @@ static void intstr_write(u32 val) static void iop32x_irq_mask(struct irq_data *d) { - iop32x_mask &= ~(1 << d->irq); + iop32x_mask &= ~(1 << (d->irq - 1)); intctl_write(iop32x_mask); } static void iop32x_irq_unmask(struct irq_data *d) { - iop32x_mask |= 1 << d->irq; + iop32x_mask |= 1 << (d->irq - 1); intctl_write(iop32x_mask); } @@ -65,7 +65,7 @@ void __init iop32x_init_irq(void) machine_is_em7210()) *IOP3XX_PCIIRSR = 0x0f; - for (i = 0; i < NR_IRQS; i++) { + for (i = 1; i < 
NR_IRQS; i++) { irq_set_chip_and_handler(i, &ext_chip, handle_level_irq); irq_clear_status_flags(i, IRQ_NOREQUEST | IRQ_NOPROBE); } diff --git a/arch/arm/mach-iop32x/irqs.h b/arch/arm/mach-iop32x/irqs.h index 69858e4e905d..e1dfc8b4e7d7 100644 --- a/arch/arm/mach-iop32x/irqs.h +++ b/arch/arm/mach-iop32x/irqs.h @@ -7,36 +7,40 @@ #ifndef __IOP32X_IRQS_H #define __IOP32X_IRQS_H +/* Interrupts in Linux start at 1, hardware starts at 0 */ + +#define IOP_IRQ(x) ((x) + 1) + /* * IOP80321 chipset interrupts */ -#define IRQ_IOP32X_DMA0_EOT 0 -#define IRQ_IOP32X_DMA0_EOC 1 -#define IRQ_IOP32X_DMA1_EOT 2 -#define IRQ_IOP32X_DMA1_EOC 3 -#define IRQ_IOP32X_AA_EOT 6 -#define IRQ_IOP32X_AA_EOC 7 -#define IRQ_IOP32X_CORE_PMON 8 -#define IRQ_IOP32X_TIMER0 9 -#define IRQ_IOP32X_TIMER1 10 -#define IRQ_IOP32X_I2C_0 11 -#define IRQ_IOP32X_I2C_1 12 -#define IRQ_IOP32X_MESSAGING 13 -#define IRQ_IOP32X_ATU_BIST 14 -#define IRQ_IOP32X_PERFMON 15 -#define IRQ_IOP32X_CORE_PMU 16 -#define IRQ_IOP32X_BIU_ERR 17 -#define IRQ_IOP32X_ATU_ERR 18 -#define IRQ_IOP32X_MCU_ERR 19 -#define IRQ_IOP32X_DMA0_ERR 20 -#define IRQ_IOP32X_DMA1_ERR 21 -#define IRQ_IOP32X_AA_ERR 23 -#define IRQ_IOP32X_MSG_ERR 24 -#define IRQ_IOP32X_SSP 25 -#define IRQ_IOP32X_XINT0 27 -#define IRQ_IOP32X_XINT1 28 -#define IRQ_IOP32X_XINT2 29 -#define IRQ_IOP32X_XINT3 30 -#define IRQ_IOP32X_HPI 31 +#define IRQ_IOP32X_DMA0_EOT IOP_IRQ(0) +#define IRQ_IOP32X_DMA0_EOC IOP_IRQ(1) +#define IRQ_IOP32X_DMA1_EOT IOP_IRQ(2) +#define IRQ_IOP32X_DMA1_EOC IOP_IRQ(3) +#define IRQ_IOP32X_AA_EOT IOP_IRQ(6) +#define IRQ_IOP32X_AA_EOC IOP_IRQ(7) +#define IRQ_IOP32X_CORE_PMON IOP_IRQ(8) +#define IRQ_IOP32X_TIMER0 IOP_IRQ(9) +#define IRQ_IOP32X_TIMER1 IOP_IRQ(10) +#define IRQ_IOP32X_I2C_0 IOP_IRQ(11) +#define IRQ_IOP32X_I2C_1 IOP_IRQ(12) +#define IRQ_IOP32X_MESSAGING IOP_IRQ(13) +#define IRQ_IOP32X_ATU_BIST IOP_IRQ(14) +#define IRQ_IOP32X_PERFMON IOP_IRQ(15) +#define IRQ_IOP32X_CORE_PMU IOP_IRQ(16) +#define IRQ_IOP32X_BIU_ERR IOP_IRQ(17) +#define 
IRQ_IOP32X_ATU_ERR IOP_IRQ(18) +#define IRQ_IOP32X_MCU_ERR IOP_IRQ(19) +#define IRQ_IOP32X_DMA0_ERR IOP_IRQ(20) +#define IRQ_IOP32X_DMA1_ERR IOP_IRQ(21) +#define IRQ_IOP32X_AA_ERR IOP_IRQ(23) +#define IRQ_IOP32X_MSG_ERR IOP_IRQ(24) +#define IRQ_IOP32X_SSP IOP_IRQ(25) +#define IRQ_IOP32X_XINT0 IOP_IRQ(27) +#define IRQ_IOP32X_XINT1 IOP_IRQ(28) +#define IRQ_IOP32X_XINT2 IOP_IRQ(29) +#define IRQ_IOP32X_XINT3 IOP_IRQ(30) +#define IRQ_IOP32X_HPI IOP_IRQ(31) #endif

From patchwork Mon Jan 24 17:47:17 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722571
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org
Subject: [PATCH v5 05/32] ARM: iop32x: use GENERIC_IRQ_MULTI_HANDLER
Date: Mon, 24 Jan 2022 18:47:17 +0100
Message-Id: <20220124174744.1054712-6-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

From: Arnd Bergmann

iop32x uses the entry-macro.S file for both the IRQ entry and for hooking into the arch_ret_to_user code path.
This is done because the cp6 registers have to be enabled before accessing any of the interrupt controller registers but have to be disabled when running in user space. There is also a lazy-enable logic in cp6.c, but during a hardirq, we know it has to be enabled. Both the cp6-enable code and the code to read the IRQ status can be lifted into the normal generic_handle_arch_irq() path, but the cp6-disable code has to remain in the user return code. As nothing other than iop32x uses this hook, just open-code it there with an ifdef for the platform that can eventually be removed when iop32x has reached the end of its life. The cp6-enable path in the IRQ entry has an extra cp_wait barrier that the trap version does not have, but it is harmless to do it in both cases to simplify the logic here at the cost of a few extra cycles for the trap. Signed-off-by: Arnd Bergmann Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/Kconfig | 5 +--- arch/arm/kernel/entry-common.S | 16 +++++----- arch/arm/mach-iop32x/cp6.c | 10 ++++++- arch/arm/mach-iop32x/include/mach/entry-macro.S | 31 -------------------- arch/arm/mach-iop32x/iop3xx.h | 1 + arch/arm/mach-iop32x/irq.c | 23 +++++++++++++++ 6 files changed, 43 insertions(+), 43 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index bef5085f2ce7..ac2f88ce0b9a 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -226,9 +226,6 @@ config GENERIC_ISA_DMA config FIQ bool -config NEED_RET_TO_USER - bool - config ARCH_MTD_XIP bool @@ -370,9 +367,9 @@ config ARCH_IOP32X bool "IOP32x-based" depends on MMU select CPU_XSCALE + select GENERIC_IRQ_MULTI_HANDLER select GPIO_IOP select GPIOLIB - select NEED_RET_TO_USER select FORCE_PCI select PLAT_IOP help diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S index ac86c34682bb..c928d6b04cce 100644 --- a/arch/arm/kernel/entry-common.S +++ b/arch/arm/kernel/entry-common.S @@ -16,12 +16,14 @@ .equ NR_syscalls, 
__NR_syscalls -#ifdef CONFIG_NEED_RET_TO_USER -#include -#else - .macro arch_ret_to_user, tmp1, tmp2 - .endm + .macro arch_ret_to_user, tmp +#ifdef CONFIG_ARCH_IOP32X + mrc p15, 0, \tmp, c15, c1, 0 + tst \tmp, #(1 << 6) + bicne \tmp, \tmp, #(1 << 6) + mcrne p15, 0, \tmp, c15, c1, 0 @ Disable cp6 access #endif + .endm #include "entry-header.S" @@ -55,7 +57,7 @@ __ret_fast_syscall: /* perform architecture specific actions before user return */ - arch_ret_to_user r1, lr + arch_ret_to_user r1 restore_user_regs fast = 1, offset = S_OFF UNWIND(.fnend ) @@ -128,7 +130,7 @@ no_work_pending: asm_trace_hardirqs_on save = 0 /* perform architecture specific actions before user return */ - arch_ret_to_user r1, lr + arch_ret_to_user r1 ct_user_enter save = 0 restore_user_regs fast = 0, offset = 0 diff --git a/arch/arm/mach-iop32x/cp6.c b/arch/arm/mach-iop32x/cp6.c index ec74b07fb7e3..2882674a1c39 100644 --- a/arch/arm/mach-iop32x/cp6.c +++ b/arch/arm/mach-iop32x/cp6.c @@ -7,7 +7,7 @@ #include #include -static int cp6_trap(struct pt_regs *regs, unsigned int instr) +void iop_enable_cp6(void) { u32 temp; @@ -16,7 +16,15 @@ static int cp6_trap(struct pt_regs *regs, unsigned int instr) "mrc p15, 0, %0, c15, c1, 0\n\t" "orr %0, %0, #(1 << 6)\n\t" "mcr p15, 0, %0, c15, c1, 0\n\t" + "mrc p15, 0, %0, c15, c1, 0\n\t" + "mov %0, %0\n\t" + "sub pc, pc, #4 @ cp_wait\n\t" : "=r"(temp)); +} + +static int cp6_trap(struct pt_regs *regs, unsigned int instr) +{ + iop_enable_cp6(); return 0; } diff --git a/arch/arm/mach-iop32x/include/mach/entry-macro.S b/arch/arm/mach-iop32x/include/mach/entry-macro.S deleted file mode 100644 index 341e5d9a6616..000000000000 --- a/arch/arm/mach-iop32x/include/mach/entry-macro.S +++ /dev/null @@ -1,31 +0,0 @@ -/* - * arch/arm/mach-iop32x/include/mach/entry-macro.S - * - * Low-level IRQ helper macros for IOP32x-based platforms - * - * This file is licensed under the terms of the GNU General Public - * License version 2. 
This program is licensed "as is" without any - * warranty of any kind, whether express or implied. - */ - .macro get_irqnr_preamble, base, tmp - mrc p15, 0, \tmp, c15, c1, 0 - orr \tmp, \tmp, #(1 << 6) - mcr p15, 0, \tmp, c15, c1, 0 @ Enable cp6 access - mrc p15, 0, \tmp, c15, c1, 0 - mov \tmp, \tmp - sub pc, pc, #4 @ cp_wait - .endm - - .macro get_irqnr_and_base, irqnr, irqstat, base, tmp - mrc p6, 0, \irqstat, c8, c0, 0 @ Read IINTSRC - cmp \irqstat, #0 - clzne \irqnr, \irqstat - rsbne \irqnr, \irqnr, #32 - .endm - - .macro arch_ret_to_user, tmp1, tmp2 - mrc p15, 0, \tmp1, c15, c1, 0 - ands \tmp2, \tmp1, #(1 << 6) - bicne \tmp1, \tmp1, #(1 << 6) - mcrne p15, 0, \tmp1, c15, c1, 0 @ Disable cp6 access - .endm diff --git a/arch/arm/mach-iop32x/iop3xx.h b/arch/arm/mach-iop32x/iop3xx.h index 46b4b34a4ad2..a6ec7ebadb35 100644 --- a/arch/arm/mach-iop32x/iop3xx.h +++ b/arch/arm/mach-iop32x/iop3xx.h @@ -225,6 +225,7 @@ extern int iop3xx_get_init_atu(void); #include void iop3xx_map_io(void); +void iop_enable_cp6(void); void iop_init_cp6_handler(void); void iop_init_time(unsigned long tickrate); void iop3xx_restart(enum reboot_mode, const char *); diff --git a/arch/arm/mach-iop32x/irq.c b/arch/arm/mach-iop32x/irq.c index d1e8824cbd82..6dca7e97d81f 100644 --- a/arch/arm/mach-iop32x/irq.c +++ b/arch/arm/mach-iop32x/irq.c @@ -29,6 +29,15 @@ static void intstr_write(u32 val) asm volatile("mcr p6, 0, %0, c4, c0, 0" : : "r" (val)); } +static u32 iintsrc_read(void) +{ + int irq; + + asm volatile("mrc p6, 0, %0, c8, c0, 0" : "=r" (irq)); + + return irq; +} + static void iop32x_irq_mask(struct irq_data *d) { @@ -50,11 +59,25 @@ struct irq_chip ext_chip = { .irq_unmask = iop32x_irq_unmask, }; +static void iop_handle_irq(struct pt_regs *regs) +{ + u32 mask; + + iop_enable_cp6(); + + do { + mask = iintsrc_read(); + if (mask) + generic_handle_irq(fls(mask)); + } while (mask); +} + void __init iop32x_init_irq(void) { int i; iop_init_cp6_handler(); + set_handle_irq(iop_handle_irq); 
intctl_write(0); intstr_write(0);

From patchwork Mon Jan 24 17:47:18 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722572
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org
Subject: [PATCH v5 06/32] ARM: remove old-style irq entry
Date: Mon, 24 Jan 2022 18:47:18 +0100
Message-Id: <20220124174744.1054712-7-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

From: Arnd Bergmann

The last user of arch_irq_handler_default is gone now, so the entry-macro-multi.S file and all references to mach/entry-macro.S can be removed, as well as the asm_do_IRQ() entrypoint into the interrupt handling routines implemented in C. Note: The ARMv7-M entry still uses its own top-level IRQ entry, calling nvic_handle_irq() from assembly.
This could be changed to go through generic_handle_arch_irq() as well, but it's unclear to me if there are any benefits. Signed-off-by: Arnd Bergmann [ardb: keep irq_handler macro as it will carry the IRQ stack handling] Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M Reviewed-by: Linus Walleij --- arch/arm/Kconfig | 12 +----- arch/arm/include/asm/entry-macro-multi.S | 40 -------------------- arch/arm/include/asm/irq.h | 1 - arch/arm/include/asm/mach/arch.h | 2 - arch/arm/include/asm/smp.h | 5 --- arch/arm/kernel/entry-armv.S | 8 ---- arch/arm/kernel/irq.c | 17 --------- arch/arm/kernel/smp.c | 5 --- 8 files changed, 1 insertion(+), 89 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index ac2f88ce0b9a..7528cbdb90a1 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -58,6 +58,7 @@ config ARM select GENERIC_CPU_AUTOPROBE select GENERIC_EARLY_IOREMAP select GENERIC_IDLE_POLL_SETUP + select GENERIC_IRQ_MULTI_HANDLER if MMU select GENERIC_IRQ_PROBE select GENERIC_IRQ_SHOW select GENERIC_IRQ_SHOW_LEVEL @@ -319,7 +320,6 @@ config ARCH_MULTIPLATFORM select AUTO_ZRELADDR select TIMER_OF select COMMON_CLK - select GENERIC_IRQ_MULTI_HANDLER select HAVE_PCI select PCI_DOMAINS_GENERIC if PCI select SPARSE_IRQ @@ -343,7 +343,6 @@ config ARCH_EP93XX select ARM_AMBA imply ARM_PATCH_PHYS_VIRT select ARM_VIC - select GENERIC_IRQ_MULTI_HANDLER select AUTO_ZRELADDR select CLKSRC_MMIO select CPU_ARM920T @@ -358,7 +357,6 @@ config ARCH_FOOTBRIDGE select FOOTBRIDGE select NEED_MACH_IO_H if !MMU select NEED_MACH_MEMORY_H - select GENERIC_IRQ_MULTI_HANDLER help Support for systems based on the DC21285 companion chip ("FootBridge"), such as the Simtec CATS and the Rebel NetWinder. 
@@ -367,7 +365,6 @@ config ARCH_IOP32X bool "IOP32x-based" depends on MMU select CPU_XSCALE - select GENERIC_IRQ_MULTI_HANDLER select GPIO_IOP select GPIOLIB select FORCE_PCI @@ -383,7 +380,6 @@ config ARCH_IXP4XX select ARCH_SUPPORTS_BIG_ENDIAN select CPU_XSCALE select DMABOUNCE if PCI - select GENERIC_IRQ_MULTI_HANDLER select GPIO_IXP4XX select GPIOLIB select HAVE_PCI @@ -399,7 +395,6 @@ config ARCH_IXP4XX config ARCH_DOVE bool "Marvell Dove" select CPU_PJ4 - select GENERIC_IRQ_MULTI_HANDLER select GPIOLIB select HAVE_PCI select MVEBU_MBUS @@ -422,7 +417,6 @@ config ARCH_PXA select CLKSRC_MMIO select TIMER_OF select CPU_XSCALE if !CPU_XSC3 - select GENERIC_IRQ_MULTI_HANDLER select GPIO_PXA select GPIOLIB select IRQ_DOMAIN @@ -441,7 +435,6 @@ config ARCH_RPC select ARM_HAS_SG_CHAIN select CPU_SA110 select FIQ - select GENERIC_IRQ_MULTI_HANDLER select HAVE_PATA_PLATFORM select ISA_DMA_API select LEGACY_TIMER_TICK @@ -462,7 +455,6 @@ config ARCH_SA1100 select COMMON_CLK select CPU_FREQ select CPU_SA1100 - select GENERIC_IRQ_MULTI_HANDLER select GPIOLIB select IRQ_DOMAIN select ISA @@ -477,7 +469,6 @@ config ARCH_S3C24XX select CLKSRC_SAMSUNG_PWM select GPIO_SAMSUNG select GPIOLIB - select GENERIC_IRQ_MULTI_HANDLER select NEED_MACH_IO_H select S3C2410_WATCHDOG select SAMSUNG_ATAGS @@ -495,7 +486,6 @@ config ARCH_OMAP1 select ARCH_OMAP select CLKSRC_MMIO select GENERIC_IRQ_CHIP - select GENERIC_IRQ_MULTI_HANDLER select GPIOLIB select HAVE_LEGACY_CLK select IRQ_DOMAIN diff --git a/arch/arm/include/asm/entry-macro-multi.S b/arch/arm/include/asm/entry-macro-multi.S deleted file mode 100644 index dfc6bfa43012..000000000000 --- a/arch/arm/include/asm/entry-macro-multi.S +++ /dev/null @@ -1,40 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#include - -/* - * Interrupt handling. 
Preserves r7, r8, r9 - */ - .macro arch_irq_handler_default - get_irqnr_preamble r6, lr -1: get_irqnr_and_base r0, r2, r6, lr - movne r1, sp - @ - @ routine called with r0 = irq number, r1 = struct pt_regs * - @ - badrne lr, 1b - bne asm_do_IRQ - -#ifdef CONFIG_SMP - /* - * XXX - * - * this macro assumes that irqstat (r2) and base (r6) are - * preserved from get_irqnr_and_base above - */ - ALT_SMP(test_for_ipi r0, r2, r6, lr) - ALT_UP_B(9997f) - movne r1, sp - badrne lr, 1b - bne do_IPI -#endif -9997: - .endm - - .macro arch_irq_handler, symbol_name - .align 5 - .global \symbol_name -\symbol_name: - mov r8, lr - arch_irq_handler_default - ret r8 - .endm diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h index 1cbcc462b07e..a7c2337b0c7d 100644 --- a/arch/arm/include/asm/irq.h +++ b/arch/arm/include/asm/irq.h @@ -26,7 +26,6 @@ struct irqaction; struct pt_regs; -extern void asm_do_IRQ(unsigned int, struct pt_regs *); void handle_IRQ(unsigned int, struct pt_regs *); void init_IRQ(void); diff --git a/arch/arm/include/asm/mach/arch.h b/arch/arm/include/asm/mach/arch.h index eec0c0bda766..9349e7a82c9c 100644 --- a/arch/arm/include/asm/mach/arch.h +++ b/arch/arm/include/asm/mach/arch.h @@ -56,9 +56,7 @@ struct machine_desc { void (*init_time)(void); void (*init_machine)(void); void (*init_late)(void); -#ifdef CONFIG_GENERIC_IRQ_MULTI_HANDLER void (*handle_irq)(struct pt_regs *); -#endif void (*restart)(enum reboot_mode, const char *); }; diff --git a/arch/arm/include/asm/smp.h b/arch/arm/include/asm/smp.h index f16cbbd5cda4..7c1c90d9f582 100644 --- a/arch/arm/include/asm/smp.h +++ b/arch/arm/include/asm/smp.h @@ -24,11 +24,6 @@ struct seq_file; */ extern void show_ipi_list(struct seq_file *, int); -/* - * Called from assembly code, this handles an IPI. - */ -asmlinkage void do_IPI(int ipinr, struct pt_regs *regs); - /* * Called from C code, this handles an IPI. 
*/ diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 5cd057859fe9..9d9372781408 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -19,9 +19,6 @@ #include #include #include -#ifndef CONFIG_GENERIC_IRQ_MULTI_HANDLER -#include -#endif #include #include #include @@ -30,19 +27,14 @@ #include #include "entry-header.S" -#include #include /* * Interrupt handling. */ .macro irq_handler -#ifdef CONFIG_GENERIC_IRQ_MULTI_HANDLER mov r0, sp bl generic_handle_arch_irq -#else - arch_irq_handler_default -#endif .endm .macro pabt_helper diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c index b79975bd988c..5a1e52a4ee11 100644 --- a/arch/arm/kernel/irq.c +++ b/arch/arm/kernel/irq.c @@ -80,23 +80,6 @@ void handle_IRQ(unsigned int irq, struct pt_regs *regs) ack_bad_irq(irq); } -/* - * asm_do_IRQ is the interface to be used from assembly code. - */ -asmlinkage void __exception_irq_entry -asm_do_IRQ(unsigned int irq, struct pt_regs *regs) -{ - struct pt_regs *old_regs; - - irq_enter(); - old_regs = set_irq_regs(regs); - - handle_IRQ(irq, regs); - - set_irq_regs(old_regs); - irq_exit(); -} - void __init init_IRQ(void) { int ret; diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c index 97ee6b1567e9..ed2b168ff46c 100644 --- a/arch/arm/kernel/smp.c +++ b/arch/arm/kernel/smp.c @@ -628,11 +628,6 @@ static void ipi_complete(unsigned int cpu) /* * Main handler for inter-processor interrupts */ -asmlinkage void __exception_irq_entry do_IPI(int ipinr, struct pt_regs *regs) -{ - handle_IPI(ipinr, regs); -} - static void do_handle_IPI(int ipinr) { unsigned int cpu = smp_processor_id(); From patchwork Mon Jan 24 17:47:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722573 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org 
(vger.kernel.org [23.128.96.18])
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org
Subject: [PATCH v5 07/32] irqchip: nvic: Use GENERIC_IRQ_MULTI_HANDLER
Date: Mon, 24 Jan 2022 18:47:19 +0100
Message-Id: <20220124174744.1054712-8-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

From: Vladimir Murzin

Rather than restructuring the ARMv7M entry logic per the TODO, just move NVIC to GENERIC_IRQ_MULTI_HANDLER.
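The interesting part of this conversion is how the handler recovers the hardware IRQ number once the assembly entry no longer passes it in: on ARMv7-M the active exception number sits in the ICSR VECTACTIVE field, and external interrupts start at exception 16. A stand-alone sketch of that decode step (decode_nvic_hwirq() is a hypothetical helper; the patch does this inline in nvic_handle_irq()):

```c
#include <assert.h>
#include <stdint.h>

/* Low 9 bits of the ICSR hold the active exception number. */
#define V7M_SCB_ICSR_VECTACTIVE 0x000001ff

/*
 * Exceptions 0..15 are architectural (reset, NMI, SysTick, ...);
 * external interrupts begin at exception 16, so the NVIC hwirq is
 * VECTACTIVE minus 16.
 */
static uint32_t decode_nvic_hwirq(uint32_t icsr)
{
	return (icsr & V7M_SCB_ICSR_VECTACTIVE) - 16;
}
```

This is why the entry code no longer needs the ipsr/V7M_xPSR_EXCEPTIONNO dance: the C handler can read the ICSR itself and feed the result to generic_handle_domain_irq().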
Signed-off-by: Vladimir Murzin Acked-by: Mark Rutland Acked-by: Arnd Bergmann Acked-by: Marc Zyngier Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/include/asm/v7m.h | 3 ++- arch/arm/kernel/entry-v7m.S | 10 +++------ drivers/irqchip/Kconfig | 1 + drivers/irqchip/irq-nvic.c | 22 +++++--------------- 4 files changed, 11 insertions(+), 25 deletions(-) diff --git a/arch/arm/include/asm/v7m.h b/arch/arm/include/asm/v7m.h index 2cb00d15831b..4512f7e1918f 100644 --- a/arch/arm/include/asm/v7m.h +++ b/arch/arm/include/asm/v7m.h @@ -13,6 +13,7 @@ #define V7M_SCB_ICSR_PENDSVSET (1 << 28) #define V7M_SCB_ICSR_PENDSVCLR (1 << 27) #define V7M_SCB_ICSR_RETTOBASE (1 << 11) +#define V7M_SCB_ICSR_VECTACTIVE 0x000001ff #define V7M_SCB_VTOR 0x08 @@ -38,7 +39,7 @@ #define V7M_SCB_SHCSR_MEMFAULTENA (1 << 16) #define V7M_xPSR_FRAMEPTRALIGN 0x00000200 -#define V7M_xPSR_EXCEPTIONNO 0x000001ff +#define V7M_xPSR_EXCEPTIONNO V7M_SCB_ICSR_VECTACTIVE /* * When branching to an address that has bits [31:28] == 0xf an exception return diff --git a/arch/arm/kernel/entry-v7m.S b/arch/arm/kernel/entry-v7m.S index 7bde93c10962..520dd43e7e08 100644 --- a/arch/arm/kernel/entry-v7m.S +++ b/arch/arm/kernel/entry-v7m.S @@ -39,14 +39,10 @@ __irq_entry: @ @ Invoke the IRQ handler @ - mrs r0, ipsr - ldr r1, =V7M_xPSR_EXCEPTIONNO - and r0, r1 - sub r0, #16 - mov r1, sp + mov r0, sp stmdb sp!, {lr} - @ routine called with r0 = irq number, r1 = struct pt_regs * - bl nvic_handle_irq + @ routine called with r0 = struct pt_regs * + bl generic_handle_arch_irq pop {lr} @ diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig index 7038957f4a77..488eaa14d3a7 100644 --- a/drivers/irqchip/Kconfig +++ b/drivers/irqchip/Kconfig @@ -58,6 +58,7 @@ config ARM_NVIC bool select IRQ_DOMAIN_HIERARCHY select GENERIC_IRQ_CHIP + select GENERIC_IRQ_MULTI_HANDLER config ARM_VIC bool diff --git a/drivers/irqchip/irq-nvic.c b/drivers/irqchip/irq-nvic.c index 
ba4759b3e269..125f9c1cf0c3 100644 --- a/drivers/irqchip/irq-nvic.c +++ b/drivers/irqchip/irq-nvic.c @@ -37,25 +37,12 @@ static struct irq_domain *nvic_irq_domain; -static void __nvic_handle_irq(irq_hw_number_t hwirq) +static void __irq_entry nvic_handle_irq(struct pt_regs *regs) { - generic_handle_domain_irq(nvic_irq_domain, hwirq); -} + unsigned long icsr = readl_relaxed(BASEADDR_V7M_SCB + V7M_SCB_ICSR); + irq_hw_number_t hwirq = (icsr & V7M_SCB_ICSR_VECTACTIVE) - 16; -/* - * TODO: restructure the ARMv7M entry logic so that this entry logic can live - * in arch code. - */ -asmlinkage void __exception_irq_entry -nvic_handle_irq(irq_hw_number_t hwirq, struct pt_regs *regs) -{ - struct pt_regs *old_regs; - - irq_enter(); - old_regs = set_irq_regs(regs); - __nvic_handle_irq(hwirq); - set_irq_regs(old_regs); - irq_exit(); + generic_handle_domain_irq(nvic_irq_domain, hwirq); } static int nvic_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, @@ -141,6 +128,7 @@ static int __init nvic_of_init(struct device_node *node, for (i = 0; i < irqs; i += 4) writel_relaxed(0, nvic_base + NVIC_IPR + i); + set_handle_irq(nvic_handle_irq); return 0; } IRQCHIP_DECLARE(armv7m_nvic, "arm,armv7m-nvic", nvic_of_init); From patchwork Mon Jan 24 17:47:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722574 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28E94C433EF for ; Mon, 24 Jan 2022 17:48:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241412AbiAXRsY (ORCPT ); Mon, 24 Jan 2022 12:48:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56348 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S241414AbiAXRsX (ORCPT ); Mon, 24 Jan 2022 12:48:23 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 55DC7C06173B for ; Mon, 24 Jan 2022 09:48:23 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 11A06B811AC for ; Mon, 24 Jan 2022 17:48:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E216CC340E5; Mon, 24 Jan 2022 17:48:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1643046500; bh=7DENFUqX6QUcNqYG/pBBlPLnnfMOmgh0iQQhcDzMsQM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jHOi4VRlkF2qKjHQzK6ewS5e/MKEYXnoQJP11Hjvb5UoI6uxipX3uUhzmM+G1TewU 3rGfaOwK7pJwGZ5qWLooUSoHTt/jUw9qjE/VofWimO15I6qS7O9jhjGp2WMz5OAVVG XLHlHz+UTHuJhnxmuV1vJaZCpe1VZlkxj+MiKbYUCP6TezcntFzi0SdFRbGVTx6VY9 3pvO90WF5U0S00SQY7wLN78bmWf6wQX0vzD6S+waAyQWkjfG3gDI5qkxryVLTN6fFd bGqU/bciDZiS0HdSEIYKq2HftS3F+rfpDFNFFZmegjDXv6KIIMT/xlb6n7f6TCmAz5 0bf3pezXKdH2w== From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 08/32] ARM: decompressor: disable stack protector Date: Mon, 24 Jan 2022 18:47:20 +0100 Message-Id: <20220124174744.1054712-9-ardb@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2287; h=from:subject; bh=7DENFUqX6QUcNqYG/pBBlPLnnfMOmgh0iQQhcDzMsQM=; 
b=owEB7QES/pANAwAKAcNPIjmS2Y8kAcsmYgBh7uYRw1vg9bTWrzxjXxct6bFRDnUedQU8GtMkJfgP H7BPCmmJAbMEAAEKAB0WIQT72WJ8QGnJQhU3VynDTyI5ktmPJAUCYe7mEQAKCRDDTyI5ktmPJIbQC/ 9VmBkgyHhKLbiux61LpaifaTnN5QBo9JOHIzL2TlbP0QlVjuJIr7dj0ToJMb/dLZJWEeLaC9UL/yt8 hUq2Y48C6y3j8JmWAFa59MLb0ydXoc6wd9UydwmFTW2uCYoqYi45MxxJ7V8e+Ar/6TH2kRWRdp8HyX MRBVBWVtoQcozJzRBuSzljR+JwQX6DYZG5SeTMWe+jMqCT8y7i86hCL8AOi5MOrpv8rIzbyFjsCMom u0x9w1NXBL1y/ObHlW9c3CSags7XvEDYnAD9olWirLdgd/hbvSw7oX62A6otFbqMBCfyd8Lk/QYvsf Txb9aUiidG+R4ly0QZMbF+oosuQjIL1ygZXW0K4j8QSh+XlPs7Ezz5JlAFHlaacRKgET6418zAGUOy g7ut3zUs+aDF9iqw/mwNQwRpMiQjtWHZWUNdelfx8GXJM/5pKRocbZ++fnqzKGDJI8lbrcT9OD9EUz LSw/AzxQSFZFEvF9bwS8yJSaOnstLaSWOc1hmh5AAFv1o= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 Precedence: bulk List-ID: X-Mailing-List: linux-hardening@vger.kernel.org Enabling the stack protector in the decompressor is of dubious value, given that it uses a fixed value for the canary, cannot print any output unless CONFIG_DEBUG_LL is enabled (which relies on board specific build time settings), and is already disabled for a good chunk of the code (libfdt). So let's just disable it in the decompressor. This will make it easier in the future to manage the command line options that would need to be removed again in this context for the TLS register based stack protector. Signed-off-by: Ard Biesheuvel --- arch/arm/boot/compressed/Makefile | 6 +----- arch/arm/boot/compressed/misc.c | 7 ------- 2 files changed, 1 insertion(+), 12 deletions(-) diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile index 954eee8a785a..187a187706cb 100644 --- a/arch/arm/boot/compressed/Makefile +++ b/arch/arm/boot/compressed/Makefile @@ -92,17 +92,13 @@ ifeq ($(CONFIG_USE_OF),y) OBJS += $(libfdt_objs) fdt_check_mem_start.o endif -# -fstack-protector-strong triggers protection checks in this code, -# but it is being used too early to link to meaningful stack_chk logic. 
-$(foreach o, $(libfdt_objs) atags_to_fdt.o fdt_check_mem_start.o, \ - $(eval CFLAGS_$(o) := -I $(srctree)/scripts/dtc/libfdt -fno-stack-protector)) - targets := vmlinux vmlinux.lds piggy_data piggy.o \ head.o $(OBJS) KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING ccflags-y := -fpic $(call cc-option,-mno-single-pic-base,) -fno-builtin \ + -I$(srctree)/scripts/dtc/libfdt -fno-stack-protector \ -I$(obj) $(DISABLE_ARM_SSP_PER_TASK_PLUGIN) ccflags-remove-$(CONFIG_FUNCTION_TRACER) += -pg asflags-y := -DZIMAGE diff --git a/arch/arm/boot/compressed/misc.c b/arch/arm/boot/compressed/misc.c index e1e9a5dde853..c3c66ff2d696 100644 --- a/arch/arm/boot/compressed/misc.c +++ b/arch/arm/boot/compressed/misc.c @@ -128,13 +128,6 @@ asmlinkage void __div0(void) error("Attempting division by 0!"); } -const unsigned long __stack_chk_guard = 0x000a0dff; - -void __stack_chk_fail(void) -{ - error("stack-protector: Kernel stack is corrupted\n"); -} - extern int do_decompress(u8 *input, int len, u8 *output, void (*error)(char *x)); From patchwork Mon Jan 24 17:47:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722575 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C300C433EF for ; Mon, 24 Jan 2022 17:48:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241369AbiAXRs0 (ORCPT ); Mon, 24 Jan 2022 12:48:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56362 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241340AbiAXRsZ (ORCPT ); Mon, 24 Jan 2022 12:48:25 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 
05FC6C06173D for ; Mon, 24 Jan 2022 09:48:25 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 97CE261312 for ; Mon, 24 Jan 2022 17:48:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 366AFC340E7; Mon, 24 Jan 2022 17:48:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1643046504; bh=f8Nqb3G0T6NtLnnu+yWZJPJPQixJ1gZNM6Ac/klURW4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=MSsX4x4jLuafE9svL1ozqMSj2vIHSUogLZprQu6STa5uDvHyhKPR5Ui6MET5UFLL0 M1lYVsFzSCSRCYhWC+7pvhcMP1JmeIvYkO7rRGd6dqeC0I/RsORXI5u5sX3jYVipkm NO2dihXElHgvnFpGoX2SqUNAtu33VTpbxGwiBjOJQEpnuQf313hTLDSjqO8jCmcYFM Xk5OKRdDWo1ArXYOM900lc14Etipw6/Inc8T4UL4JZqgaiqMMwrUuoIem9CfpanGir sUnQbJe5dsvUmA5rCRTHXJrHSBslR+zlDh/gTWJiwXIzQMwtU8h4DHEyry8xSVEGFx r1Ch+ZWdFSudA== From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 09/32] ARM: stackprotector: prefer compiler for TLS based per-task protector Date: Mon, 24 Jan 2022 18:47:21 +0100 Message-Id: <20220124174744.1054712-10-ardb@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2513; h=from:subject; bh=f8Nqb3G0T6NtLnnu+yWZJPJPQixJ1gZNM6Ac/klURW4=; b=owEB7QES/pANAwAKAcNPIjmS2Y8kAcsmYgBh7uYTZlMJ78iBD7DO80vedjoony6HHekMDH4mewn2 +dW9rqOJAbMEAAEKAB0WIQT72WJ8QGnJQhU3VynDTyI5ktmPJAUCYe7mEwAKCRDDTyI5ktmPJBGDC/ 9PWWz0hXeVkofk4FE7To0upAfk6Hjl707Nfk9hM+c5OyhsKiqh5d4pMQ/qElIST02QtYUeeZFiBX/U 
Currently, we implement the per-task stack protector for ARM using a GCC plugin, due to lack of native compiler support. However, work is underway to get this implemented in the compiler, which means we will be able to deprecate the GCC plugin at some point. In the meantime, we will need to support both, where the native compiler implementation is obviously preferred. So let's wire this up in Kconfig and the Makefile. Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/Kconfig | 8 ++++++-- arch/arm/Makefile | 9 +++++++++ 2 files changed, 15 insertions(+), 2 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 7528cbdb90a1..99ac5d75dcec 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1596,10 +1596,14 @@ config XEN help Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
+config CC_HAVE_STACKPROTECTOR_TLS + def_bool $(cc-option,-mtp=cp15 -mstack-protector-guard=tls -mstack-protector-guard-offset=0) + config STACKPROTECTOR_PER_TASK bool "Use a unique stack canary value for each task" - depends on GCC_PLUGINS && STACKPROTECTOR && THREAD_INFO_IN_TASK && !XIP_DEFLATED_DATA - select GCC_PLUGIN_ARM_SSP_PER_TASK + depends on STACKPROTECTOR && THREAD_INFO_IN_TASK && !XIP_DEFLATED_DATA + depends on GCC_PLUGINS || CC_HAVE_STACKPROTECTOR_TLS + select GCC_PLUGIN_ARM_SSP_PER_TASK if !CC_HAVE_STACKPROTECTOR_TLS default y help Due to the fact that GCC uses an ordinary symbol reference from diff --git a/arch/arm/Makefile b/arch/arm/Makefile index 77172d555c7e..e943624cbf87 100644 --- a/arch/arm/Makefile +++ b/arch/arm/Makefile @@ -275,6 +275,14 @@ endif ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y) prepare: stack_protector_prepare +ifeq ($(CONFIG_CC_HAVE_STACKPROTECTOR_TLS),y) +stack_protector_prepare: prepare0 + $(eval KBUILD_CFLAGS += \ + -mstack-protector-guard=tls \ + -mstack-protector-guard-offset=$(shell \ + awk '{if ($$2 == "TSK_STACK_CANARY") print $$3;}'\ + include/generated/asm-offsets.h)) +else stack_protector_prepare: prepare0 $(eval SSP_PLUGIN_CFLAGS := \ -fplugin-arg-arm_ssp_per_task_plugin-offset=$(shell \ @@ -283,6 +291,7 @@ stack_protector_prepare: prepare0 $(eval KBUILD_CFLAGS += $(SSP_PLUGIN_CFLAGS)) $(eval GCC_PLUGINS_CFLAGS += $(SSP_PLUGIN_CFLAGS)) endif +endif all: $(notdir $(KBUILD_IMAGE)) From patchwork Mon Jan 24 17:47:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722576 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93BA6C433F5 for ; Mon, 24 Jan 2022 17:48:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 10/32] ARM: entry: preserve thread_info pointer in switch_to Date: Mon, 24 Jan 2022 18:47:22 +0100 Message-Id: <20220124174744.1054712-11-ardb@kernel.org> In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org>
Tweak the UP stack protector handling code so that the thread info pointer is preserved in R7 until set_current is called. This is needed for a subsequent patch that implements THREAD_INFO_IN_TASK and set_current for UP as well. This also means we will prefer the per-task protector on UP systems that implement the thread ID registers, so tweak the preprocessor conditionals to reflect this.
Acked-by: Linus Walleij Acked-by: Nicolas Pitre Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/kernel/entry-armv.S | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 9d9372781408..5e01a34369a0 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -744,16 +744,16 @@ ENTRY(__switch_to) ldr r6, [r2, #TI_CPU_DOMAIN] #endif switch_tls r1, r4, r5, r3, r7 -#if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_SMP) - ldr r7, [r2, #TI_TASK] +#if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_SMP) && \ + !defined(CONFIG_STACKPROTECTOR_PER_TASK) + ldr r9, [r2, #TI_TASK] ldr r8, =__stack_chk_guard .if (TSK_STACK_CANARY > IMM12_MASK) - add r7, r7, #TSK_STACK_CANARY & ~IMM12_MASK + add r9, r9, #TSK_STACK_CANARY & ~IMM12_MASK .endif - ldr r7, [r7, #TSK_STACK_CANARY & IMM12_MASK] -#elif defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO) - mov r7, r2 @ Preserve 'next' + ldr r9, [r9, #TSK_STACK_CANARY & IMM12_MASK] #endif + mov r7, r2 @ Preserve 'next' #ifdef CONFIG_CPU_USE_DOMAINS mcr p15, 0, r6, c3, c0, 0 @ Set domain register #endif @@ -762,8 +762,9 @@ ENTRY(__switch_to) ldr r0, =thread_notify_head mov r1, #THREAD_NOTIFY_SWITCH bl atomic_notifier_call_chain -#if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_SMP) - str r7, [r8] +#if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_SMP) && \ + !defined(CONFIG_STACKPROTECTOR_PER_TASK) + str r9, [r8] #endif THUMB( mov ip, r4 ) mov r0, r5 From patchwork Mon Jan 24 17:47:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722593 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
4E2A5C433EF for ; Mon, 24 Jan 2022 17:48:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241269AbiAXRsd (ORCPT ); Mon, 24 Jan 2022 12:48:33 -0500 Received: from ams.source.kernel.org ([145.40.68.75]:50118 "EHLO ams.source.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241414AbiAXRsc (ORCPT ); Mon, 24 Jan 2022 12:48:32 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id E309CB811A5 for ; Mon, 24 Jan 2022 17:48:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C5F9CC340E8; Mon, 24 Jan 2022 17:48:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1643046510; bh=kBmjChGfVHtfptt13fVh04pzGlJ1Uj7ogIqlHIXKQj4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LWX8RkR0btj9Tw53wmyh1lFH3wcXvtwFtulZPXOrWN1AHagnq8Zqv11cyGyk4ad5t n0PtTtCbTZA5ttekO22Z9kcAnwksVlnOBcvKO6FXoMfc5C2C7+F8peXufvyMYD0/8z l8IR7XBfzTi/06T98t6HxXZBN2/dY49BhSezjPWM5B7ssOX05AmYStWj74AjPBsiv+ BRC4v1QGausEM/aVV1ML0r7CeLXzVxG+NaEXEaebUxZLINl5P26kRASZHweVHIdgNS ngqjmPMXW/774cEbBIK1f7mtCSc6v00dbZx/nJL6DMi68nA72h1BCVcF4jLyFGaRRN vcZ72SKQ/f1Lg== From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 11/32] ARM: module: implement support for PC-relative group relocations Date: Mon, 24 Jan 2022 18:47:23 +0100 Message-Id: <20220124174744.1054712-12-ardb@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; 
Add support for the R_ARM_ALU_PC_Gn_NC and R_ARM_LDR_PC_G2 group relocations [0] so we can use them in modules. These will be used to load the current task pointer from a global variable without having to rely on a literal pool entry to carry the address of this variable, which may have a significant negative impact on cache utilization for variables that are used often and in many different places, as each occurrence will result in a literal pool entry and therefore a line in the D-cache.
[0] 'ELF for the ARM architecture' https://github.com/ARM-software/abi-aa/releases Acked-by: Linus Walleij Acked-by: Nicolas Pitre Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/include/asm/elf.h | 3 + arch/arm/kernel/module.c | 90 ++++++++++++++++++++ 2 files changed, 93 insertions(+) diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h index b8102a6ddf16..d68101655b74 100644 --- a/arch/arm/include/asm/elf.h +++ b/arch/arm/include/asm/elf.h @@ -61,6 +61,9 @@ typedef struct user_fp elf_fpregset_t; #define R_ARM_MOVT_ABS 44 #define R_ARM_MOVW_PREL_NC 45 #define R_ARM_MOVT_PREL 46 +#define R_ARM_ALU_PC_G0_NC 57 +#define R_ARM_ALU_PC_G1_NC 59 +#define R_ARM_LDR_PC_G2 63 #define R_ARM_THM_CALL 10 #define R_ARM_THM_JUMP24 30 diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c index beac45e89ba6..49ff7fd18f0c 100644 --- a/arch/arm/kernel/module.c +++ b/arch/arm/kernel/module.c @@ -68,6 +68,44 @@ bool module_exit_section(const char *name) strstarts(name, ".ARM.exidx.exit"); } +#ifdef CONFIG_ARM_HAS_GROUP_RELOCS +/* + * This implements the partitioning algorithm for group relocations as + * documented in the ARM AArch32 ELF psABI (IHI 0044). + * + * A single PC-relative symbol reference is divided in up to 3 add or subtract + * operations, where the final one could be incorporated into a load/store + * instruction with immediate offset. E.g., + * + * ADD Rd, PC, #... or ADD Rd, PC, #... + * ADD Rd, Rd, #... ADD Rd, Rd, #... + * LDR Rd, [Rd, #...] ADD Rd, Rd, #... + * + * The latter has a guaranteed range of only 16 MiB (3x8 == 24 bits), so it is + * of limited use in the kernel. However, the ADD/ADD/LDR combo has a range of + * -/+ 256 MiB, (2x8 + 12 == 28 bits), which means it has sufficient range for + * any in-kernel symbol reference (unless module PLTs are being used). 
+ * + * The main advantage of this approach over the typical pattern using a literal + * load is that literal loads may miss in the D-cache, and generally lead to + * lower cache efficiency for variables that are referenced often from many + * different places in the code. + */ +static u32 get_group_rem(u32 group, u32 *offset) +{ + u32 val = *offset; + u32 shift; + do { + shift = val ? (31 - __fls(val)) & ~1 : 32; + *offset = val; + if (!val) + break; + val &= 0xffffff >> shift; + } while (group--); + return shift; +} +#endif + int apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex, unsigned int relindex, struct module *module) @@ -87,6 +125,9 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex, #ifdef CONFIG_THUMB2_KERNEL u32 upper, lower, sign, j1, j2; #endif +#ifdef CONFIG_ARM_HAS_GROUP_RELOCS + u32 shift, group = 1; +#endif offset = ELF32_R_SYM(rel->r_info); if (offset < 0 || offset > (symsec->sh_size / sizeof(Elf32_Sym))) { @@ -331,6 +372,55 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex, *(u16 *)(loc + 2) = __opcode_to_mem_thumb16(lower); break; #endif +#ifdef CONFIG_ARM_HAS_GROUP_RELOCS + case R_ARM_ALU_PC_G0_NC: + group = 0; + fallthrough; + case R_ARM_ALU_PC_G1_NC: + tmp = __mem_to_opcode_arm(*(u32 *)loc); + offset = ror32(tmp & 0xff, (tmp & 0xf00) >> 7); + if (tmp & BIT(22)) + offset = -offset; + offset += sym->st_value - loc; + if (offset < 0) { + offset = -offset; + tmp = (tmp & ~BIT(23)) | BIT(22); // SUB opcode + } else { + tmp = (tmp & ~BIT(22)) | BIT(23); // ADD opcode + } + + shift = get_group_rem(group, &offset); + if (shift < 24) { + offset >>= 24 - shift; + offset |= (shift + 8) << 7; + } + *(u32 *)loc = __opcode_to_mem_arm((tmp & ~0xfff) | offset); + break; + + case R_ARM_LDR_PC_G2: + tmp = __mem_to_opcode_arm(*(u32 *)loc); + offset = tmp & 0xfff; + if (~tmp & BIT(23)) // U bit cleared? 
+ offset = -offset; + offset += sym->st_value - loc; + if (offset < 0) { + offset = -offset; + tmp &= ~BIT(23); // clear U bit + } else { + tmp |= BIT(23); // set U bit + } + get_group_rem(2, &offset); + + if (offset > 0xfff) { + pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n", + module->name, relindex, i, symname, + ELF32_R_TYPE(rel->r_info), loc, + sym->st_value); + return -ENOEXEC; + } + *(u32 *)loc = __opcode_to_mem_arm((tmp & ~0xfff) | offset); + break; +#endif default: pr_err("%s: unknown relocation: %u\n", From patchwork Mon Jan 24 17:47:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722594 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6499CC4332F for ; Mon, 24 Jan 2022 17:48:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241340AbiAXRsf (ORCPT ); Mon, 24 Jan 2022 12:48:35 -0500 Received: from dfw.source.kernel.org ([139.178.84.217]:43556 "EHLO dfw.source.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241414AbiAXRse (ORCPT ); Mon, 24 Jan 2022 12:48:34 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7CDC761312 for ; Mon, 24 Jan 2022 17:48:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 18E16C340EB; Mon, 24 Jan 2022 17:48:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1643046513; bh=YhN3QJGzpYUXpQwGEXw+ci07ZoekkfIoc5ZDfaXpolY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 12/32] ARM: assembler: add optimized ldr/str macros to load variables from memory Date: Mon, 24 Jan 2022 18:47:24 +0100 Message-Id: <20220124174744.1054712-13-ardb@kernel.org> In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org> We will be adding variable loads
to various hot paths, so it makes sense to add a helper macro that can load variables from asm code without the use of literal pool entries. On v7 or later, we can simply use MOVW/MOVT pairs, but on earlier cores, this requires a bit of hackery to emit an instruction sequence that implements this using a sequence of ADD/LDR instructions. Acked-by: Linus Walleij Acked-by: Nicolas Pitre Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/Kconfig | 11 +++++ arch/arm/include/asm/assembler.h | 48 ++++++++++++++++++-- 2 files changed, 55 insertions(+), 4 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 99ac5d75dcec..9586636289d2 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -139,6 +139,17 @@ config ARM Europe. There is an ARM Linux project with a web page at . +config ARM_HAS_GROUP_RELOCS + def_bool y + depends on !LD_IS_LLD || LLD_VERSION >= 140000 + depends on !COMPILE_TEST + help + Whether or not to use R_ARM_ALU_PC_Gn or R_ARM_LDR_PC_Gn group + relocations, which have been around for a long time, but were not + supported in LLD until version 14. The combined range is -/+ 256 MiB, + which is usually sufficient, but not for allyesconfig, so we disable + this feature when doing compile testing.
+ config ARM_HAS_SG_CHAIN bool diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index 7d23d4bb2168..7a4e292b68e4 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -564,12 +564,12 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) /* * mov_l - move a constant value or [relocated] address into a register */ - .macro mov_l, dst:req, imm:req + .macro mov_l, dst:req, imm:req, cond .if __LINUX_ARM_ARCH__ < 7 - ldr \dst, =\imm + ldr\cond \dst, =\imm .else - movw \dst, #:lower16:\imm - movt \dst, #:upper16:\imm + movw\cond \dst, #:lower16:\imm + movt\cond \dst, #:upper16:\imm .endif .endm @@ -607,6 +607,46 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) __adldst_l str, \src, \sym, \tmp, \cond .endm + .macro __ldst_va, op, reg, tmp, sym, offset=0, cond +#if __LINUX_ARM_ARCH__ >= 7 || \ + !defined(CONFIG_ARM_HAS_GROUP_RELOCS) || \ + (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) + mov_l \tmp, \sym, \cond +#else + /* + * Avoid a literal load, by emitting a sequence of ADD/LDR instructions + * with the appropriate relocations. The combined sequence has a range + * of -/+ 256 MiB, which should be sufficient for the core kernel and + * for modules loaded into the module region. 
+ */ + .globl \sym + .reloc .L0_\@, R_ARM_ALU_PC_G0_NC, \sym + .reloc .L1_\@, R_ARM_ALU_PC_G1_NC, \sym + .reloc .L2_\@, R_ARM_LDR_PC_G2, \sym +.L0_\@: sub\cond \tmp, pc, #8 - \offset +.L1_\@: sub\cond \tmp, \tmp, #4 - \offset +#endif +.L2_\@: \op\cond \reg, [\tmp, #\offset] + .endm + + /* + * ldr_va - load a 32-bit word from the virtual address of \sym + */ + .macro ldr_va, rd:req, sym:req, cond, tmp, offset + .ifb \tmp + __ldst_va ldr, \rd, \rd, \sym, \offset, \cond + .else + __ldst_va ldr, \rd, \tmp, \sym, \offset, \cond + .endif + .endm + + /* + * str_va - store a 32-bit word to the virtual address of \sym + */ + .macro str_va, rn:req, sym:req, tmp:req + __ldst_va str, \rn, \tmp, \sym + .endm + /* * rev_l - byte-swap a 32-bit value * From patchwork Mon Jan 24 17:47:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722595 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0FFAC433EF for ; Mon, 24 Jan 2022 17:48:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241414AbiAXRsj (ORCPT ); Mon, 24 Jan 2022 12:48:39 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56412 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241434AbiAXRsi (ORCPT ); Mon, 24 Jan 2022 12:48:38 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 404FAC06173B for ; Mon, 24 Jan 2022 09:48:38 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube
Subject: [PATCH v5 13/32] ARM: percpu: add SMP_ON_UP support
Date: Mon, 24 Jan 2022 18:47:25 +0100
Message-Id: <20220124174744.1054712-14-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
Precedence: bulk
List-ID: X-Mailing-List: linux-hardening@vger.kernel.org

Permit the use of the TPIDRPRW system register for carrying the per-CPU offset in generic SMP configurations that also target non-SMP capable ARMv6 cores. This uses the SMP_ON_UP code patching framework to turn all TPIDRPRW accesses into reads/writes of entry #0 in the __per_cpu_offset array. While at it, switch over some existing direct TPIDRPRW accesses in asm code to invocations of a new helper that is patched in the same way when necessary. Note that CPU_V6+SMP without SMP_ON_UP results in a kernel that does not boot on v6 CPUs without SMP extensions, so add this dependency to Kconfig as well. Acked-by: Linus Walleij Acked-by: Nicolas Pitre Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/include/asm/assembler.h | 61 +++++++++++++++++++- arch/arm/include/asm/insn.h | 17 ++++++ arch/arm/include/asm/percpu.h | 36 ++++++++++-- arch/arm/mm/Kconfig | 1 + 4 files changed, 108 insertions(+), 7 deletions(-) diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index 7a4e292b68e4..30752c4427d4 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -216,9 +216,7 @@ .macro reload_current, t1:req, t2:req #ifdef CONFIG_CURRENT_POINTER_IN_TPIDRURO - adr_l \t1, __entry_task @ get __entry_task base address - mrc p15, 0, \t2, c13, c0, 4 @ get per-CPU offset - ldr \t1, [\t1, \t2] @ load variable + ldr_this_cpu \t1, __entry_task, \t1, \t2 mcr p15, 0, \t1, c13, c0, 3 @ store in TPIDRURO #endif .endm @@ -308,6 +306,26 @@ #define ALT_UP_B(label) b label #endif + /* + * this_cpu_offset - load the per-CPU offset of this CPU into + * register 'rd' + */ + .macro this_cpu_offset, rd:req
+#ifdef CONFIG_SMP +ALT_SMP(mrc p15, 0, \rd, c13, c0, 4) +#ifdef CONFIG_CPU_V6 +ALT_UP_B(.L0_\@) + .subsection 1 +.L0_\@: ldr_va \rd, __per_cpu_offset + b .L1_\@ + .previous +.L1_\@: +#endif +#else + mov \rd, #0 +#endif + .endm + /* * Instruction barrier */ @@ -647,6 +665,43 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) __ldst_va str, \rn, \tmp, \sym .endm + /* + * ldr_this_cpu_armv6 - Load a 32-bit word from the per-CPU variable 'sym', + * without using a temp register. Supported in ARM mode + * only. + */ + .macro ldr_this_cpu_armv6, rd:req, sym:req + this_cpu_offset \rd + .globl \sym + .reloc .L0_\@, R_ARM_ALU_PC_G0_NC, \sym + .reloc .L1_\@, R_ARM_ALU_PC_G1_NC, \sym + .reloc .L2_\@, R_ARM_LDR_PC_G2, \sym + add \rd, \rd, pc +.L0_\@: sub \rd, \rd, #4 +.L1_\@: sub \rd, \rd, #0 +.L2_\@: ldr \rd, [\rd, #4] + .endm + + /* + * ldr_this_cpu - Load a 32-bit word from the per-CPU variable 'sym' + * into register 'rd', which may be the stack pointer, + * using 't1' and 't2' as general temp registers. These + * are permitted to overlap with 'rd' if != sp + */ + .macro ldr_this_cpu, rd:req, sym:req, t1:req, t2:req +#ifndef CONFIG_SMP + ldr_va \rd, \sym,, \t1 @ CPU offset == 0x0 +#elif __LINUX_ARM_ARCH__ >= 7 || \ + !defined(CONFIG_ARM_HAS_GROUP_RELOCS) || \ + (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) + this_cpu_offset \t1 + mov_l \t2, \sym + ldr \rd, [\t1, \t2] +#else + ldr_this_cpu_armv6 \rd, \sym +#endif + .endm + /* * rev_l - byte-swap a 32-bit value * diff --git a/arch/arm/include/asm/insn.h b/arch/arm/include/asm/insn.h index 5475cbf9fb6b..faf3d1c28368 100644 --- a/arch/arm/include/asm/insn.h +++ b/arch/arm/include/asm/insn.h @@ -2,6 +2,23 @@ #ifndef __ASM_ARM_INSN_H #define __ASM_ARM_INSN_H +#include + +/* + * Avoid a literal load by emitting a sequence of ADD/LDR instructions with the + * appropriate relocations. 
The combined sequence has a range of -/+ 256 MiB, + * which should be sufficient for the core kernel as well as modules loaded + * into the module region. (Not supported by LLD before release 14) + */ +#define LOAD_SYM_ARMV6(reg, sym) \ + " .globl " #sym " \n\t" \ + " .reloc 10f, R_ARM_ALU_PC_G0_NC, " #sym " \n\t" \ + " .reloc 11f, R_ARM_ALU_PC_G1_NC, " #sym " \n\t" \ + " .reloc 12f, R_ARM_LDR_PC_G2, " #sym " \n\t" \ + "10: sub " #reg ", pc, #8 \n\t" \ + "11: sub " #reg ", " #reg ", #4 \n\t" \ + "12: ldr " #reg ", [" #reg ", #0] \n\t" + static inline unsigned long arm_gen_nop(void) { diff --git a/arch/arm/include/asm/percpu.h b/arch/arm/include/asm/percpu.h index e2fcb3cfd3de..7feba9d65e85 100644 --- a/arch/arm/include/asm/percpu.h +++ b/arch/arm/include/asm/percpu.h @@ -5,20 +5,27 @@ #ifndef _ASM_ARM_PERCPU_H_ #define _ASM_ARM_PERCPU_H_ +#include + register unsigned long current_stack_pointer asm ("sp"); /* * Same as asm-generic/percpu.h, except that we store the per cpu offset * in the TPIDRPRW. TPIDRPRW only exists on V6K and V7 */ -#if defined(CONFIG_SMP) && !defined(CONFIG_CPU_V6) +#ifdef CONFIG_SMP static inline void set_my_cpu_offset(unsigned long off) { + extern unsigned int smp_on_up; + + if (IS_ENABLED(CONFIG_CPU_V6) && !smp_on_up) + return; + /* Set TPIDRPRW */ asm volatile("mcr p15, 0, %0, c13, c0, 4" : : "r" (off) : "memory"); } -static inline unsigned long __my_cpu_offset(void) +static __always_inline unsigned long __my_cpu_offset(void) { unsigned long off; @@ -27,8 +34,29 @@ static inline unsigned long __my_cpu_offset(void) * We want to allow caching the value, so avoid using volatile and * instead use a fake stack read to hazard against barrier(). 
*/ - asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) - : "Q" (*(const unsigned long *)current_stack_pointer)); + asm("0: mrc p15, 0, %0, c13, c0, 4 \n\t" +#ifdef CONFIG_CPU_V6 + "1: \n\t" + " .subsection 1 \n\t" +#if defined(CONFIG_ARM_HAS_GROUP_RELOCS) && \ + !(defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) + "2: " LOAD_SYM_ARMV6(%0, __per_cpu_offset) " \n\t" + " b 1b \n\t" +#else + "2: ldr %0, 3f \n\t" + " ldr %0, [%0] \n\t" + " b 1b \n\t" + "3: .long __per_cpu_offset \n\t" +#endif + " .previous \n\t" + " .pushsection \".alt.smp.init\", \"a\" \n\t" + " .align 2 \n\t" + " .long 0b - . \n\t" + " b . + (2b - 0b) \n\t" + " .popsection \n\t" +#endif + : "=r" (off) + : "Q" (*(const unsigned long *)current_stack_pointer)); return off; } diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig index 58afba346729..a91ff22c6c2e 100644 --- a/arch/arm/mm/Kconfig +++ b/arch/arm/mm/Kconfig @@ -386,6 +386,7 @@ config CPU_V6 select CPU_PABRT_V6 select CPU_THUMB_CAPABLE select CPU_TLB_V6 if MMU + select SMP_ON_UP if SMP # ARMv6k config CPU_V6K
From patchwork Mon Jan 24 17:47:26 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722596
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube
Subject: [PATCH v5 14/32] ARM: use TLS register for 'current' on !SMP as well
Date: Mon, 24 Jan 2022 18:47:26 +0100
Message-Id: <20220124174744.1054712-15-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
Enable the use of the TLS register to hold the 'current' pointer also on non-SMP configurations that target v6k or later CPUs. This will permit the use of THREAD_INFO_IN_TASK as well as IRQ stacks and vmap'ed stacks for such configurations. Acked-by: Linus Walleij Acked-by: Nicolas Pitre Acked-by: Arnd Bergmann Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 9586636289d2..0e1b93de10b4 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1163,7 +1163,7 @@ config SMP_ON_UP config CURRENT_POINTER_IN_TPIDRURO def_bool y - depends on SMP && CPU_32v6K && !CPU_V6 + depends on CPU_32v6K && !CPU_V6 config ARM_CPU_TOPOLOGY bool "Support cpu topology definition"
From patchwork Mon Jan 24 17:47:27 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722597
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube
Subject: [PATCH v5 15/32] ARM: smp: defer TPIDRURO update for SMP v6 configurations too
Date: Mon, 24 Jan 2022 18:47:27 +0100
Message-Id: <20220124174744.1054712-16-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
Defer TPIDRURO updates for user space until exit also for CPU_V6+SMP configurations, so that we can decide at runtime whether to use it to carry the current pointer, provided that we are running on a CPU that actually implements this register. This is needed for THREAD_INFO_IN_TASK support for UP systems, which requires that all SMP-capable systems use the TPIDRURO-based access to 'current', as the only remaining alternative will be a global variable, which only works on UP. Given that SMP implies support for HWCAP_TLS, we can patch away the hwcap test entirely from the context switch path, rather than just the TPIDRURO assignment, when running on SMP hardware.
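The control flow that this patch introduces into set_tls() can be modelled in plain C. The sketch below is a hedged user-space model, not kernel code: tls_emu, has_tls_reg, and defer_tls_reg_update become plain flags, and the two hardware write paths are reduced to enum results. The point it demonstrates is the new case: a CPU that has the TLS register but runs an SMP-capable kernel no longer writes TPIDRURO eagerly, because the write is deferred to kernel exit.

```c
#include <stdbool.h>

/* Possible outcomes of a set_tls() call, modelling the patched logic. */
enum tls_action {
    TLS_NONE,          /* TLS emulation, or update deferred to return-to-user */
    TLS_WRITE_REG,     /* write TPIDRURO directly (mcr p15, 0, ..., c13, c0, 3) */
    TLS_WRITE_KUSER,   /* store the value in the kuser helper page at 0xffff0ff0 */
};

/*
 * Model of the patched decision tree:
 *
 *   if (!tls_emu) {
 *           if (has_tls_reg && !defer_tls_reg_update)
 *                   write the register;
 *           else if (!has_tls_reg)
 *                   write the kuser helper page;
 *   }
 */
static enum tls_action set_tls_action(bool tls_emu, bool has_tls_reg,
                                      bool defer_tls_reg_update)
{
    if (tls_emu)
        return TLS_NONE;
    if (has_tls_reg && !defer_tls_reg_update)
        return TLS_WRITE_REG;
    if (!has_tls_reg)
        return TLS_WRITE_KUSER;
    return TLS_NONE;    /* has_tls_reg on SMP: deferred until exit */
}
```

With the patch, defer_tls_reg_update becomes is_smp() on CPU_V6 builds, so the third case (register present, update deferred) is exactly the SMP-on-v6K configuration this series enables.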
Acked-by: Linus Walleij Acked-by: Nicolas Pitre Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/include/asm/tls.h | 30 +++++++++++++------- arch/arm/kernel/entry-header.S | 8 +++++- 2 files changed, 27 insertions(+), 11 deletions(-) diff --git a/arch/arm/include/asm/tls.h b/arch/arm/include/asm/tls.h index c3296499176c..de254347acf6 100644 --- a/arch/arm/include/asm/tls.h +++ b/arch/arm/include/asm/tls.h @@ -18,21 +18,31 @@ .endm .macro switch_tls_v6, base, tp, tpuser, tmp1, tmp2 - ldr \tmp1, =elf_hwcap - ldr \tmp1, [\tmp1, #0] +#ifdef CONFIG_SMP +ALT_SMP(nop) +ALT_UP_B(.L0_\@) + .subsection 1 +#endif +.L0_\@: ldr_va \tmp1, elf_hwcap mov \tmp2, #0xffff0fff tst \tmp1, #HWCAP_TLS @ hardware TLS available? streq \tp, [\tmp2, #-15] @ set TLS value at 0xffff0ff0 - mrcne p15, 0, \tmp2, c13, c0, 2 @ get the user r/w register - mcrne p15, 0, \tp, c13, c0, 3 @ yes, set TLS register - mcrne p15, 0, \tpuser, c13, c0, 2 @ set user r/w register - strne \tmp2, [\base, #TI_TP_VALUE + 4] @ save it + beq .L2_\@ + mcr p15, 0, \tp, c13, c0, 3 @ yes, set TLS register +#ifdef CONFIG_SMP + b .L1_\@ + .previous +#endif +.L1_\@: switch_tls_v6k \base, \tp, \tpuser, \tmp1, \tmp2 +.L2_\@: .endm .macro switch_tls_software, base, tp, tpuser, tmp1, tmp2 mov \tmp1, #0xffff0fff str \tp, [\tmp1, #-15] @ set TLS value at 0xffff0ff0 .endm +#else +#include #endif #ifdef CONFIG_TLS_REG_EMUL @@ -43,7 +53,7 @@ #elif defined(CONFIG_CPU_V6) #define tls_emu 0 #define has_tls_reg (elf_hwcap & HWCAP_TLS) -#define defer_tls_reg_update 0 +#define defer_tls_reg_update is_smp() #define switch_tls switch_tls_v6 #elif defined(CONFIG_CPU_32v6K) #define tls_emu 0 @@ -81,11 +91,11 @@ static inline void set_tls(unsigned long val) */ barrier(); - if (!tls_emu && !defer_tls_reg_update) { - if (has_tls_reg) { + if (!tls_emu) { + if (has_tls_reg && !defer_tls_reg_update) { asm("mcr p15, 0, %0, c13, c0, 3" : : "r" (val)); - } else { + } else if (!has_tls_reg) { #ifdef 
CONFIG_KUSER_HELPERS /* * User space must never try to access this diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S index ae24dd54e9ef..da206bd4f194 100644 --- a/arch/arm/kernel/entry-header.S +++ b/arch/arm/kernel/entry-header.S @@ -292,12 +292,18 @@ .macro restore_user_regs, fast = 0, offset = 0 -#if defined(CONFIG_CPU_32v6K) && !defined(CONFIG_CPU_V6) +#if defined(CONFIG_CPU_32v6K) && \ + (!defined(CONFIG_CPU_V6) || defined(CONFIG_SMP)) +#ifdef CONFIG_CPU_V6 +ALT_SMP(nop) +ALT_UP_B(.L1_\@) +#endif @ The TLS register update is deferred until return to user space so we @ can use it for other things while running in the kernel get_thread_info r1 ldr r1, [r1, #TI_TP_VALUE] mcr p15, 0, r1, c13, c0, 3 @ set TLS register +.L1_\@: #endif uaccess_enable r1, isb=0
From patchwork Mon Jan 24 17:47:28 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722598
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube
Subject: [PATCH v5 16/32] ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems
Date: Mon, 24 Jan 2022 18:47:28 +0100
Message-Id: <20220124174744.1054712-17-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
On UP systems, only a single task can be 'current' at the same time, which means we can use a global variable to track it. This means we can also enable THREAD_INFO_IN_TASK for those systems, as in that case, thread_info is accessed via current rather than the other way around, removing the need to store thread_info at the base of the task stack. This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP systems as well. To partially mitigate the performance overhead of this arrangement, use an ADD/ADD/LDR sequence with the appropriate PC-relative group relocations to load the value of current when needed. This means that accessing current will still only require a single load as before, avoiding the need for a literal to carry the address of the global variable in each function. However, accessing thread_info will now require this load as well.
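The arrangement described above leans on thread_info being the first member of struct task_struct, which is what lets the patched asm get_thread_info macro simply invoke get_current and use the result unchanged. A minimal C illustration of that layout rule follows; the structure contents are stand-ins, not the real kernel definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in structures; the real ones live in the kernel headers. */
struct thread_info {
    unsigned long flags;
    int preempt_count;
};

struct task_struct {
    struct thread_info thread_info;  /* must remain the first member */
    int pid;                         /* stand-in for the rest of the struct */
};

/*
 * Because thread_info sits at offset 0, a task pointer and its thread_info
 * pointer are the same address, so the conversion is a plain cast - no load
 * or offset arithmetic is needed.
 */
static struct thread_info *task_thread_info(struct task_struct *task)
{
    return (struct thread_info *)task;
}
```

This is why the patch can delete the TI_TASK field and asm-offset: converting in either direction between task and thread_info no longer requires a memory access.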
Acked-by: Linus Walleij Acked-by: Nicolas Pitre Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/Kconfig | 4 +- arch/arm/include/asm/assembler.h | 83 +++++++++++++------- arch/arm/include/asm/current.h | 47 +++++++---- arch/arm/include/asm/switch_to.h | 3 +- arch/arm/include/asm/thread_info.h | 27 ------- arch/arm/kernel/asm-offsets.c | 3 - arch/arm/kernel/entry-armv.S | 9 ++- arch/arm/kernel/entry-header.S | 2 +- arch/arm/kernel/entry-v7m.S | 10 ++- arch/arm/kernel/head-common.S | 4 +- arch/arm/kernel/process.c | 7 +- arch/arm/kernel/smp.c | 6 ++ 12 files changed, 115 insertions(+), 90 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 0e1b93de10b4..108a7a872084 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -127,7 +127,7 @@ config ARM select PERF_USE_VMALLOC select RTC_LIB select SYS_SUPPORTS_APM_EMULATION - select THREAD_INFO_IN_TASK if CURRENT_POINTER_IN_TPIDRURO + select THREAD_INFO_IN_TASK select TRACE_IRQFLAGS_SUPPORT if !CPU_V7M # Above selects are sorted alphabetically; please add new ones # according to that. Thanks. 
@@ -1612,7 +1612,7 @@ config CC_HAVE_STACKPROTECTOR_TLS config STACKPROTECTOR_PER_TASK bool "Use a unique stack canary value for each task" - depends on STACKPROTECTOR && THREAD_INFO_IN_TASK && !XIP_DEFLATED_DATA + depends on STACKPROTECTOR && CURRENT_POINTER_IN_TPIDRURO && !XIP_DEFLATED_DATA depends on GCC_PLUGINS || CC_HAVE_STACKPROTECTOR_TLS select GCC_PLUGIN_ARM_SSP_PER_TASK if !CC_HAVE_STACKPROTECTOR_TLS default y diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index 30752c4427d4..bf304596f87e 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -199,41 +199,12 @@ .endm .endr - .macro get_current, rd -#ifdef CONFIG_CURRENT_POINTER_IN_TPIDRURO - mrc p15, 0, \rd, c13, c0, 3 @ get TPIDRURO register -#else - get_thread_info \rd - ldr \rd, [\rd, #TI_TASK] -#endif - .endm - - .macro set_current, rn -#ifdef CONFIG_CURRENT_POINTER_IN_TPIDRURO - mcr p15, 0, \rn, c13, c0, 3 @ set TPIDRURO register -#endif - .endm - - .macro reload_current, t1:req, t2:req -#ifdef CONFIG_CURRENT_POINTER_IN_TPIDRURO - ldr_this_cpu \t1, __entry_task, \t1, \t2 - mcr p15, 0, \t1, c13, c0, 3 @ store in TPIDRURO -#endif - .endm - /* * Get current thread_info. 
*/ .macro get_thread_info, rd -#ifdef CONFIG_THREAD_INFO_IN_TASK /* thread_info is the first member of struct task_struct */ get_current \rd -#else - ARM( mov \rd, sp, lsr #THREAD_SIZE_ORDER + PAGE_SHIFT ) - THUMB( mov \rd, sp ) - THUMB( lsr \rd, \rd, #THREAD_SIZE_ORDER + PAGE_SHIFT ) - mov \rd, \rd, lsl #THREAD_SIZE_ORDER + PAGE_SHIFT -#endif .endm /* @@ -326,6 +297,60 @@ ALT_UP_B(.L0_\@) #endif .endm + /* + * set_current - store the task pointer of this CPU's current task + */ + .macro set_current, rn:req, tmp:req +#if defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO) || defined(CONFIG_SMP) +9998: mcr p15, 0, \rn, c13, c0, 3 @ set TPIDRURO register +#ifdef CONFIG_CPU_V6 +ALT_UP_B(.L0_\@) + .subsection 1 +.L0_\@: str_va \rn, __current, \tmp + b .L1_\@ + .previous +.L1_\@: +#endif +#else + str_va \rn, __current, \tmp +#endif + .endm + + /* + * get_current - load the task pointer of this CPU's current task + */ + .macro get_current, rd:req +#if defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO) || defined(CONFIG_SMP) +9998: mrc p15, 0, \rd, c13, c0, 3 @ get TPIDRURO register +#ifdef CONFIG_CPU_V6 +ALT_UP_B(.L0_\@) + .subsection 1 +.L0_\@: ldr_va \rd, __current + b .L1_\@ + .previous +.L1_\@: +#endif +#else + ldr_va \rd, __current +#endif + .endm + + /* + * reload_current - reload the task pointer of this CPU's current task + * into the TLS register + */ + .macro reload_current, t1:req, t2:req +#if defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO) || defined(CONFIG_SMP) +#ifdef CONFIG_CPU_V6 +ALT_SMP(nop) +ALT_UP_B(.L0_\@) +#endif + ldr_this_cpu \t1, __entry_task, \t1, \t2 + mcr p15, 0, \t1, c13, c0, 3 @ store in TPIDRURO +.L0_\@: +#endif + .endm + /* * Instruction barrier */ diff --git a/arch/arm/include/asm/current.h b/arch/arm/include/asm/current.h index 6bf0aad672c3..c03706869384 100644 --- a/arch/arm/include/asm/current.h +++ b/arch/arm/include/asm/current.h @@ -8,25 +8,18 @@ #define _ASM_ARM_CURRENT_H #ifndef __ASSEMBLY__ +#include struct task_struct; -static inline void 
set_current(struct task_struct *cur) -{ - if (!IS_ENABLED(CONFIG_CURRENT_POINTER_IN_TPIDRURO)) - return; - - /* Set TPIDRURO */ - asm("mcr p15, 0, %0, c13, c0, 3" :: "r"(cur) : "memory"); -} - -#ifdef CONFIG_CURRENT_POINTER_IN_TPIDRURO +extern struct task_struct *__current; -static inline struct task_struct *get_current(void) +static __always_inline __attribute_const__ struct task_struct *get_current(void) { struct task_struct *cur; #if __has_builtin(__builtin_thread_pointer) && \ + defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO) && \ !(defined(CONFIG_THUMB2_KERNEL) && \ defined(CONFIG_CC_IS_CLANG) && CONFIG_CLANG_VERSION < 130001) /* @@ -39,16 +32,40 @@ static inline struct task_struct *get_current(void) * https://github.com/ClangBuiltLinux/linux/issues/1485 */ cur = __builtin_thread_pointer(); +#elif defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO) || defined(CONFIG_SMP) + asm("0: mrc p15, 0, %0, c13, c0, 3 \n\t" +#ifdef CONFIG_CPU_V6 + "1: \n\t" + " .subsection 1 \n\t" +#if defined(CONFIG_ARM_HAS_GROUP_RELOCS) && \ + !(defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) + "2: " LOAD_SYM_ARMV6(%0, __current) " \n\t" + " b 1b \n\t" #else - asm("mrc p15, 0, %0, c13, c0, 3" : "=r"(cur)); + "2: ldr %0, 3f \n\t" + " ldr %0, [%0] \n\t" + " b 1b \n\t" + "3: .long __current \n\t" +#endif + " .previous \n\t" + " .pushsection \".alt.smp.init\", \"a\" \n\t" + " .align 2 \n\t" + " .long 0b - . \n\t" + " b . 
+ (2b - 0b) \n\t" + " .popsection \n\t" +#endif + : "=r"(cur)); +#elif __LINUX_ARM_ARCH__>= 7 || \ + !defined(CONFIG_ARM_HAS_GROUP_RELOCS) || \ + (defined(MODULE) && defined(CONFIG_ARM_MODULE_PLTS)) + cur = __current; +#else + asm(LOAD_SYM_ARMV6(%0, __current) : "=r"(cur)); #endif return cur; } #define current get_current() -#else -#include -#endif /* CONFIG_CURRENT_POINTER_IN_TPIDRURO */ #endif /* __ASSEMBLY__ */ diff --git a/arch/arm/include/asm/switch_to.h b/arch/arm/include/asm/switch_to.h index 61e4a3c4ca6e..9372348516ce 100644 --- a/arch/arm/include/asm/switch_to.h +++ b/arch/arm/include/asm/switch_to.h @@ -3,6 +3,7 @@ #define __ASM_ARM_SWITCH_TO_H #include +#include /* * For v7 SMP cores running a preemptible kernel we may be pre-empted @@ -26,7 +27,7 @@ extern struct task_struct *__switch_to(struct task_struct *, struct thread_info #define switch_to(prev,next,last) \ do { \ __complete_pending_tlbi(); \ - if (IS_ENABLED(CONFIG_CURRENT_POINTER_IN_TPIDRURO)) \ + if (IS_ENABLED(CONFIG_CURRENT_POINTER_IN_TPIDRURO) || is_smp()) \ __this_cpu_write(__entry_task, next); \ last = __switch_to(prev,task_thread_info(prev), task_thread_info(next)); \ } while (0) diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h index 164e15f26485..e039d8f12d9b 100644 --- a/arch/arm/include/asm/thread_info.h +++ b/arch/arm/include/asm/thread_info.h @@ -54,9 +54,6 @@ struct cpu_context_save { struct thread_info { unsigned long flags; /* low level flags */ int preempt_count; /* 0 => preemptable, <0 => bug */ -#ifndef CONFIG_THREAD_INFO_IN_TASK - struct task_struct *task; /* main task structure */ -#endif __u32 cpu; /* cpu */ __u32 cpu_domain; /* cpu domain */ struct cpu_context_save cpu_context; /* cpu context */ @@ -72,39 +69,15 @@ struct thread_info { #define INIT_THREAD_INFO(tsk) \ { \ - INIT_THREAD_INFO_TASK(tsk) \ .flags = 0, \ .preempt_count = INIT_PREEMPT_COUNT, \ } -#ifdef CONFIG_THREAD_INFO_IN_TASK -#define INIT_THREAD_INFO_TASK(tsk) - static 
inline struct task_struct *thread_task(struct thread_info* ti) { return (struct task_struct *)ti; } -#else -#define INIT_THREAD_INFO_TASK(tsk) .task = &(tsk), - -static inline struct task_struct *thread_task(struct thread_info* ti) -{ - return ti->task; -} - -/* - * how to get the thread information struct from C - */ -static inline struct thread_info *current_thread_info(void) __attribute_const__; - -static inline struct thread_info *current_thread_info(void) -{ - return (struct thread_info *) - (current_stack_pointer & ~(THREAD_SIZE - 1)); -} -#endif - #define thread_saved_pc(tsk) \ ((unsigned long)(task_thread_info(tsk)->cpu_context.pc)) #define thread_saved_sp(tsk) \ diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c index 645845e4982a..2c8d76fd7c66 100644 --- a/arch/arm/kernel/asm-offsets.c +++ b/arch/arm/kernel/asm-offsets.c @@ -43,9 +43,6 @@ int main(void) BLANK(); DEFINE(TI_FLAGS, offsetof(struct thread_info, flags)); DEFINE(TI_PREEMPT, offsetof(struct thread_info, preempt_count)); -#ifndef CONFIG_THREAD_INFO_IN_TASK - DEFINE(TI_TASK, offsetof(struct thread_info, task)); -#endif DEFINE(TI_CPU, offsetof(struct thread_info, cpu)); DEFINE(TI_CPU_DOMAIN, offsetof(struct thread_info, cpu_domain)); DEFINE(TI_CPU_SAVE, offsetof(struct thread_info, cpu_context)); diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 5e01a34369a0..2f912c509e0d 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -746,12 +746,13 @@ ENTRY(__switch_to) switch_tls r1, r4, r5, r3, r7 #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_SMP) && \ !defined(CONFIG_STACKPROTECTOR_PER_TASK) - ldr r9, [r2, #TI_TASK] ldr r8, =__stack_chk_guard .if (TSK_STACK_CANARY > IMM12_MASK) - add r9, r9, #TSK_STACK_CANARY & ~IMM12_MASK - .endif + add r9, r2, #TSK_STACK_CANARY & ~IMM12_MASK ldr r9, [r9, #TSK_STACK_CANARY & IMM12_MASK] + .else + ldr r9, [r2, #TSK_STACK_CANARY & IMM12_MASK] + .endif #endif mov r7, r2 @ Preserve 
'next' #ifdef CONFIG_CPU_USE_DOMAINS @@ -768,7 +769,7 @@ ENTRY(__switch_to) #endif THUMB( mov ip, r4 ) mov r0, r5 - set_current r7 + set_current r7, r8 ARM( ldmia r4, {r4 - sl, fp, sp, pc} ) @ Load all regs saved previously THUMB( ldmia ip!, {r4 - sl, fp} ) @ Load all regs saved previously THUMB( ldr sp, [ip], #4 ) diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S index da206bd4f194..9f01b229841a 100644 --- a/arch/arm/kernel/entry-header.S +++ b/arch/arm/kernel/entry-header.S @@ -300,7 +300,7 @@ ALT_UP_B(.L1_\@) #endif @ The TLS register update is deferred until return to user space so we @ can use it for other things while running in the kernel - get_thread_info r1 + mrc p15, 0, r1, c13, c0, 3 @ get current_thread_info ldr r1, [r1, #TI_TP_VALUE] mcr p15, 0, r1, c13, c0, 3 @ set TLS register .L1_\@: diff --git a/arch/arm/kernel/entry-v7m.S b/arch/arm/kernel/entry-v7m.S index 520dd43e7e08..4e0d318b67c6 100644 --- a/arch/arm/kernel/entry-v7m.S +++ b/arch/arm/kernel/entry-v7m.S @@ -97,15 +97,17 @@ ENTRY(__switch_to) str sp, [ip], #4 str lr, [ip], #4 mov r5, r0 + mov r6, r2 @ Preserve 'next' add r4, r2, #TI_CPU_SAVE ldr r0, =thread_notify_head mov r1, #THREAD_NOTIFY_SWITCH bl atomic_notifier_call_chain - mov ip, r4 mov r0, r5 - ldmia ip!, {r4 - r11} @ Load all regs saved previously - ldr sp, [ip] - ldr pc, [ip, #4]! 
+ mov r1, r6 + ldmia r4, {r4 - r12, lr} @ Load all regs saved previously + set_current r1, r2 + mov sp, ip + bx lr .fnend ENDPROC(__switch_to) diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S index da18e0a17dc2..42cae73fcc19 100644 --- a/arch/arm/kernel/head-common.S +++ b/arch/arm/kernel/head-common.S @@ -105,10 +105,8 @@ __mmap_switched: mov r1, #0 bl __memset @ clear .bss -#ifdef CONFIG_CURRENT_POINTER_IN_TPIDRURO adr_l r0, init_task @ get swapper task_struct - set_current r0 -#endif + set_current r0, r1 ldmia r4, {r0, r1, r2, r3} str r9, [r0] @ Save processor ID diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index d47159f3791c..0617af11377f 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -36,7 +36,7 @@ #include "signal.h" -#ifdef CONFIG_CURRENT_POINTER_IN_TPIDRURO +#if defined(CONFIG_CURRENT_POINTER_IN_TPIDRURO) || defined(CONFIG_SMP) DEFINE_PER_CPU(struct task_struct *, __entry_task); #endif @@ -46,6 +46,11 @@ unsigned long __stack_chk_guard __read_mostly; EXPORT_SYMBOL(__stack_chk_guard); #endif +#ifndef CONFIG_CURRENT_POINTER_IN_TPIDRURO +asmlinkage struct task_struct *__current; +EXPORT_SYMBOL(__current); +#endif + static const char *processor_modes[] __maybe_unused = { "USER_26", "FIQ_26" , "IRQ_26" , "SVC_26" , "UK4_26" , "UK5_26" , "UK6_26" , "UK7_26" , "UK8_26" , "UK9_26" , "UK10_26", "UK11_26", "UK12_26", "UK13_26", "UK14_26", "UK15_26", diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c index ed2b168ff46c..73fc645fc4c7 100644 --- a/arch/arm/kernel/smp.c +++ b/arch/arm/kernel/smp.c @@ -400,6 +400,12 @@ static void smp_store_cpu_info(unsigned int cpuid) check_cpu_icache_size(cpuid); } +static void set_current(struct task_struct *cur) +{ + /* Set TPIDRURO */ + asm("mcr p15, 0, %0, c13, c0, 3" :: "r"(cur) : "memory"); +} + /* * This is the secondary CPU boot entry. We're using this CPUs * idle thread stack, but a set of temporary page tables. 
From patchwork Mon Jan 24 17:47:29 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722599
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren
, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 17/32] ARM: assembler: introduce bl_r macro
Date: Mon, 24 Jan 2022 18:47:29 +0100
Message-Id: <20220124174744.1054712-18-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
List-ID: linux-hardening@vger.kernel.org

Add a bl_r macro that abstracts the difference between the ways indirect calls are performed on older and newer ARM architecture revisions. The main difference is to prefer blx instructions over explicit LR assignments when possible, as these tend to confuse the prediction logic in out-of-order cores when speculating across a function return.
Signed-off-by: Ard Biesheuvel
Reviewed-by: Arnd Bergmann
Acked-by: Linus Walleij
Tested-by: Keith Packard
Tested-by: Marc Zyngier
Tested-by: Vladimir Murzin # ARMv7M
---
 arch/arm/include/asm/assembler.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index bf304596f87e..7242e9a56650 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -744,4 +744,19 @@ THUMB( orr \reg , \reg , #PSR_T_BIT )
 	.endif
 	.endm

+	/*
+	 * bl_r - branch and link to register
+	 *
+	 * @dst: target to branch to
+	 * @c: conditional opcode suffix
+	 */
+	.macro		bl_r, dst:req, c
+	.if		__LINUX_ARM_ARCH__ < 6
+	mov\c		lr, pc
+	mov\c		pc, \dst
+	.else
+	blx\c		\dst
+	.endif
+	.endm
+
 #endif /* __ASM_ASSEMBLER_H__ */

From patchwork Mon Jan 24 17:47:30 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722600
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 18/32] ARM: unwind: support unwinding across multiple stacks
Date: Mon, 24 Jan 2022 18:47:30 +0100
Message-Id: <20220124174744.1054712-19-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
Implement support in the unwinder for dealing with multiple stacks. This will be needed once we add support for IRQ stacks, or for the overflow stack used by the vmap'ed stacks code. This involves tracking the unwind opcodes that either update the virtual stack pointer from another virtual register, or perform an explicit subtract on the virtual stack pointer, and updating the low and high bounds that we use to sanitize the stack pointer accordingly.

Signed-off-by: Ard Biesheuvel
Reviewed-by: Arnd Bergmann
Acked-by: Linus Walleij
Tested-by: Keith Packard
Tested-by: Marc Zyngier
Tested-by: Vladimir Murzin # ARMv7M
---
 arch/arm/kernel/unwind.c | 25 +++++++++++++-------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index 59fdf257bf8b..9cb9af3fc433 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -52,6 +52,7 @@ EXPORT_SYMBOL(__aeabi_unwind_cpp_pr2);
 struct unwind_ctrl_block {
 	unsigned long vrs[16];		/* virtual register set */
 	const unsigned long *insn;	/* pointer to the current instructions word */
+	unsigned long sp_low;		/* lowest value of sp allowed */
 	unsigned long sp_high;		/* highest value of sp allowed */
 	/*
 	 * 1 : check for stack overflow for each register pop.
@@ -256,8 +257,12 @@ static int unwind_exec_pop_subset_r4_to_r13(struct unwind_ctrl_block *ctrl, mask >>= 1; reg++; } - if (!load_sp) + if (!load_sp) { ctrl->vrs[SP] = (unsigned long)vsp; + } else { + ctrl->sp_low = ctrl->vrs[SP]; + ctrl->sp_high = ALIGN(ctrl->sp_low, THREAD_SIZE); + } return URC_OK; } @@ -313,9 +318,10 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl) if ((insn & 0xc0) == 0x00) ctrl->vrs[SP] += ((insn & 0x3f) << 2) + 4; - else if ((insn & 0xc0) == 0x40) + else if ((insn & 0xc0) == 0x40) { ctrl->vrs[SP] -= ((insn & 0x3f) << 2) + 4; - else if ((insn & 0xf0) == 0x80) { + ctrl->sp_low = ctrl->vrs[SP]; + } else if ((insn & 0xf0) == 0x80) { unsigned long mask; insn = (insn << 8) | unwind_get_byte(ctrl); @@ -330,9 +336,11 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl) if (ret) goto error; } else if ((insn & 0xf0) == 0x90 && - (insn & 0x0d) != 0x0d) + (insn & 0x0d) != 0x0d) { ctrl->vrs[SP] = ctrl->vrs[insn & 0x0f]; - else if ((insn & 0xf0) == 0xa0) { + ctrl->sp_low = ctrl->vrs[SP]; + ctrl->sp_high = ALIGN(ctrl->sp_low, THREAD_SIZE); + } else if ((insn & 0xf0) == 0xa0) { ret = unwind_exec_pop_r4_to_rN(ctrl, insn); if (ret) goto error; @@ -375,13 +383,12 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl) */ int unwind_frame(struct stackframe *frame) { - unsigned long low; const struct unwind_idx *idx; struct unwind_ctrl_block ctrl; /* store the highest address on the stack to avoid crossing it*/ - low = frame->sp; - ctrl.sp_high = ALIGN(low, THREAD_SIZE); + ctrl.sp_low = frame->sp; + ctrl.sp_high = ALIGN(ctrl.sp_low, THREAD_SIZE); pr_debug("%s(pc = %08lx lr = %08lx sp = %08lx)\n", __func__, frame->pc, frame->lr, frame->sp); @@ -437,7 +444,7 @@ int unwind_frame(struct stackframe *frame) urc = unwind_exec_insn(&ctrl); if (urc < 0) return urc; - if (ctrl.vrs[SP] < low || ctrl.vrs[SP] >= ctrl.sp_high) + if (ctrl.vrs[SP] < ctrl.sp_low || ctrl.vrs[SP] > ctrl.sp_high) return -URC_FAILURE; } From patchwork Mon Jan 24 
17:47:31 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722601
From: Ard Biesheuvel
To: linux@armlinux.org.uk,
linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 19/32] ARM: export dump_mem() to other objects
Date: Mon, 24 Jan 2022 18:47:31 +0100
Message-Id: <20220124174744.1054712-20-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

The unwind info based stack unwinder will make its own call to dump_mem() to dump the exception stack, so give it external linkage.
Signed-off-by: Ard Biesheuvel Reviewed-by: Arnd Bergmann Acked-by: Linus Walleij Tested-by: Keith Packard Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/include/asm/stacktrace.h | 2 ++ arch/arm/kernel/traps.c | 7 +++---- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/arch/arm/include/asm/stacktrace.h b/arch/arm/include/asm/stacktrace.h index 8f54f9ad8a9b..33ee1aa4b8c0 100644 --- a/arch/arm/include/asm/stacktrace.h +++ b/arch/arm/include/asm/stacktrace.h @@ -36,5 +36,7 @@ void arm_get_current_stackframe(struct pt_regs *regs, struct stackframe *frame) extern int unwind_frame(struct stackframe *frame); extern void walk_stackframe(struct stackframe *frame, int (*fn)(struct stackframe *, void *), void *data); +extern void dump_mem(const char *lvl, const char *str, unsigned long bottom, + unsigned long top); #endif /* __ASM_STACKTRACE_H */ diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c index da04ed85855a..710306eac71f 100644 --- a/arch/arm/kernel/traps.c +++ b/arch/arm/kernel/traps.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include @@ -60,8 +61,6 @@ static int __init user_debug_setup(char *str) __setup("user_debug=", user_debug_setup); #endif -static void dump_mem(const char *, const char *, unsigned long, unsigned long); - void dump_backtrace_entry(unsigned long where, unsigned long from, unsigned long frame, const char *loglvl) { @@ -120,8 +119,8 @@ static int verify_stack(unsigned long sp) /* * Dump out the contents of some memory nicely... 
 */
-static void dump_mem(const char *lvl, const char *str, unsigned long bottom,
-		     unsigned long top)
+void dump_mem(const char *lvl, const char *str, unsigned long bottom,
+	      unsigned long top)
 {
 	unsigned long first;
 	int i;

From patchwork Mon Jan 24 17:47:32 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12722602
From: Ard Biesheuvel
To:
linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 20/32] ARM: unwind: dump exception stack from calling frame
Date: Mon, 24 Jan 2022 18:47:32 +0100
Message-Id: <20220124174744.1054712-21-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

The existing code that dumps the contents of the pt_regs structure passed to __entry routines does so while unwinding the callee frame, and dereferences the stack pointer as a struct pt_regs*. This will no longer work when we enable support for IRQ or overflow stacks, because the struct pt_regs may live on the task stack, while we are executing from another stack.
The unwinder has access to this information, but only while unwinding the calling frame. So let's combine the exception stack dumping code with the handling of the calling frame as well. By printing it before dumping the caller/callee addresses, the output order is preserved. Signed-off-by: Ard Biesheuvel Reviewed-by: Arnd Bergmann Acked-by: Linus Walleij Tested-by: Keith Packard Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/include/asm/stacktrace.h | 10 ++++++++++ arch/arm/kernel/traps.c | 3 ++- arch/arm/kernel/unwind.c | 8 +++++++- 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/arch/arm/include/asm/stacktrace.h b/arch/arm/include/asm/stacktrace.h index 33ee1aa4b8c0..d87d60532b86 100644 --- a/arch/arm/include/asm/stacktrace.h +++ b/arch/arm/include/asm/stacktrace.h @@ -18,6 +18,16 @@ struct stackframe { struct llist_node *kr_cur; struct task_struct *tsk; #endif +#ifdef CONFIG_ARM_UNWIND + /* + * This field is used to track the stack pointer value when calling + * __entry routines. This is needed when IRQ stacks and overflow stacks + * are used, because in that case, the struct pt_regs passed to these + * __entry routines may be at the top of the task stack, while we are + * executing from another stack. 
+ */ + unsigned long sp_low; +#endif }; static __always_inline diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c index 710306eac71f..c51b87f6fc3e 100644 --- a/arch/arm/kernel/traps.c +++ b/arch/arm/kernel/traps.c @@ -76,7 +76,8 @@ void dump_backtrace_entry(unsigned long where, unsigned long from, printk("%s %ps from %pS\n", loglvl, (void *)where, (void *)from); #endif - if (in_entry_text(from) && end <= ALIGN(frame, THREAD_SIZE)) + if (!IS_ENABLED(CONFIG_UNWINDER_ARM) && + in_entry_text(from) && end <= ALIGN(frame, THREAD_SIZE)) dump_mem(loglvl, "Exception stack", frame + 4, end); } diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c index 9cb9af3fc433..b7a6141c342f 100644 --- a/arch/arm/kernel/unwind.c +++ b/arch/arm/kernel/unwind.c @@ -29,6 +29,7 @@ #include #include +#include #include #include #include @@ -459,6 +460,7 @@ int unwind_frame(struct stackframe *frame) frame->sp = ctrl.vrs[SP]; frame->lr = ctrl.vrs[LR]; frame->pc = ctrl.vrs[PC]; + frame->sp_low = ctrl.sp_low; return URC_OK; } @@ -502,7 +504,11 @@ void unwind_backtrace(struct pt_regs *regs, struct task_struct *tsk, urc = unwind_frame(&frame); if (urc < 0) break; - dump_backtrace_entry(where, frame.pc, frame.sp - 4, loglvl); + if (in_entry_text(where)) + dump_mem(loglvl, "Exception stack", frame.sp_low, + frame.sp_low + sizeof(struct pt_regs)); + + dump_backtrace_entry(where, frame.pc, 0, loglvl); } } From patchwork Mon Jan 24 17:47:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722603 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5516C433F5 for ; Mon, 24 Jan 2022 17:49:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244239AbiAXRtF (ORCPT ); Mon, 
24 Jan 2022 12:49:05 -0500
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 21/32] ARM: backtrace-clang: avoid crash on bogus frame pointer
Date: Mon, 24 Jan 2022 18:47:33 +0100
Message-Id: <20220124174744.1054712-22-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>
The Clang backtrace code dereferences the link register value pulled from the stack to decide whether the caller was a branch-and-link instruction, in order to subsequently decode the offset to find the start of the calling function. Unlike other loads in this routine, this one is not protected by a fixup, and may therefore cause a crash if the address in question is bogus. So let's fix this, by treating the fault as a failure to decode the 'bl' instruction. To avoid having to renumber the local labels, reuse a fixup label that guards an instruction that cannot fault to begin with.
Signed-off-by: Ard Biesheuvel Reviewed-by: Nick Desaulniers Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/lib/backtrace-clang.S | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/arm/lib/backtrace-clang.S b/arch/arm/lib/backtrace-clang.S index 5b2cdb1003e3..5b4bca85d06d 100644 --- a/arch/arm/lib/backtrace-clang.S +++ b/arch/arm/lib/backtrace-clang.S @@ -144,7 +144,7 @@ for_each_frame: tst frame, mask @ Check for address exceptions */ 1003: ldr sv_lr, [sv_fp, #4] @ get saved lr from next frame - ldr r0, [sv_lr, #-4] @ get call instruction +1004: ldr r0, [sv_lr, #-4] @ get call instruction ldr r3, .Lopcode+4 and r2, r3, r0 @ is this a bl call teq r2, r3 @@ -164,7 +164,7 @@ finished_setup: /* * Print the function (sv_pc) and where it was called from (sv_lr). */ -1004: mov r0, sv_pc + mov r0, sv_pc mov r1, sv_lr mov r2, frame @@ -210,7 +210,7 @@ ENDPROC(c_backtrace) .long 1001b, 1006b .long 1002b, 1006b .long 1003b, 1006b - .long 1004b, 1006b + .long 1004b, finished_setup .long 1005b, 1006b .popsection From patchwork Mon Jan 24 17:47:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 12722604 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 524D9C433F5 for ; Mon, 24 Jan 2022 17:49:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244414AbiAXRtK (ORCPT ); Mon, 24 Jan 2022 12:49:10 -0500 Received: from dfw.source.kernel.org ([139.178.84.217]:43938 "EHLO dfw.source.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244376AbiAXRtI (ORCPT ); Mon, 24 Jan 2022 12:49:08 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher 
From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 22/32] ARM: implement IRQ stacks Date: Mon, 24 Jan 2022 18:47:34 +0100 Message-Id: <20220124174744.1054712-23-ardb@kernel.org> In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org> 
Now that we no longer rely on the stack pointer to access the current task struct or thread info, we can implement support for IRQ stacks cleanly as well. Define a per-CPU IRQ stack and switch to this stack when taking an IRQ, provided that we were not already using that stack in the interrupted context. This is never the case for IRQs taken from user space, but ones taken while running in the kernel could fire while one taken from user space has not completed yet. Signed-off-by: Ard Biesheuvel Acked-by: Linus Walleij Tested-by: Keith Packard Acked-by: Nick Desaulniers Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/include/asm/assembler.h | 4 ++ arch/arm/kernel/entry-armv.S | 48 ++++++++++++++++++-- arch/arm/kernel/entry-v7m.S | 17 ++++++- arch/arm/kernel/irq.c | 17 +++++++ arch/arm/kernel/traps.c | 15 +++++- arch/arm/lib/backtrace-clang.S | 7 +++ arch/arm/lib/backtrace.S | 7 +++ 7 files changed, 109 insertions(+), 6 deletions(-) diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index 7242e9a56650..f961f99721dd 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -86,6 +86,10 @@ #define IMM12_MASK 0xfff +/* the frame pointer used for stack unwinding */ +ARM( fpreg .req r11 ) +THUMB( fpreg .req r7 ) + /* * Enable and disable interrupts */ diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 2f912c509e0d..38e3978a50a9 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -32,9 +32,51 
@@ /* * Interrupt handling. */ - .macro irq_handler + .macro irq_handler, from_user:req mov r0, sp +#ifdef CONFIG_UNWINDER_ARM + mov fpreg, sp @ Preserve original SP +#else + mov r7, fp @ Preserve original FP + mov r8, sp @ Preserve original SP +#endif + ldr_this_cpu sp, irq_stack_ptr, r2, r3 + .if \from_user == 0 +UNWIND( .setfp fpreg, sp ) + @ + @ If we took the interrupt while running in the kernel, we may already + @ be using the IRQ stack, so revert to the original value in that case. + @ + subs r2, sp, r0 @ SP above bottom of IRQ stack? + rsbscs r2, r2, #THREAD_SIZE @ ... and below the top? + movcs sp, r0 @ If so, revert to incoming SP + +#ifndef CONFIG_UNWINDER_ARM + @ + @ Inform the frame pointer unwinder where the next frame lives + @ + movcc lr, pc @ Make LR point into .entry.text so + @ that we will get a dump of the + @ exception stack for this frame. +#ifdef CONFIG_CC_IS_GCC + movcc ip, r0 @ Store the old SP in the frame record. + stmdbcc sp!, {fp, ip, lr, pc} @ Push frame record + addcc fp, sp, #12 +#else + stmdbcc sp!, {fp, lr} @ Push frame record + movcc fp, sp +#endif // CONFIG_CC_IS_GCC +#endif // CONFIG_UNWINDER_ARM + .endif + bl generic_handle_arch_irq + +#ifdef CONFIG_UNWINDER_ARM + mov sp, fpreg @ Restore original SP +#else + mov fp, r7 @ Restore original FP + mov sp, r8 @ Restore original SP +#endif // CONFIG_UNWINDER_ARM .endm .macro pabt_helper @@ -191,7 +233,7 @@ ENDPROC(__dabt_svc) .align 5 __irq_svc: svc_entry - irq_handler + irq_handler from_user=0 #ifdef CONFIG_PREEMPTION ldr r8, [tsk, #TI_PREEMPT] @ get preempt count @@ -418,7 +460,7 @@ ENDPROC(__dabt_usr) __irq_usr: usr_entry kuser_cmpxchg_check - irq_handler + irq_handler from_user=1 get_thread_info tsk mov why, #0 b ret_to_user_from_irq diff --git a/arch/arm/kernel/entry-v7m.S b/arch/arm/kernel/entry-v7m.S index 4e0d318b67c6..de8a60363c85 100644 --- a/arch/arm/kernel/entry-v7m.S +++ b/arch/arm/kernel/entry-v7m.S @@ -40,11 +40,24 @@ __irq_entry: @ Invoke the IRQ handler @ mov r0, sp 
- stmdb sp!, {lr} + ldr_this_cpu sp, irq_stack_ptr, r1, r2 + + @ + @ If we took the interrupt while running in the kernel, we may already + @ be using the IRQ stack, so revert to the original value in that case. + @ + subs r2, sp, r0 @ SP above bottom of IRQ stack? + rsbscs r2, r2, #THREAD_SIZE @ ... and below the top? + movcs sp, r0 + + push {r0, lr} @ preserve LR and original SP + @ routine called with r0 = struct pt_regs * bl generic_handle_arch_irq - pop {lr} + pop {r0, lr} + mov sp, r0 + @ @ Check for any pending work if returning to user @ diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c index 5a1e52a4ee11..92ae80a8e5b4 100644 --- a/arch/arm/kernel/irq.c +++ b/arch/arm/kernel/irq.c @@ -43,6 +43,21 @@ unsigned long irq_err_count; +asmlinkage DEFINE_PER_CPU_READ_MOSTLY(u8 *, irq_stack_ptr); + +static void __init init_irq_stacks(void) +{ + u8 *stack; + int cpu; + + for_each_possible_cpu(cpu) { + stack = (u8 *)__get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER); + if (WARN_ON(!stack)) + break; + per_cpu(irq_stack_ptr, cpu) = &stack[THREAD_SIZE]; + } +} + int arch_show_interrupts(struct seq_file *p, int prec) { #ifdef CONFIG_FIQ @@ -84,6 +99,8 @@ void __init init_IRQ(void) { int ret; + init_irq_stacks(); + if (IS_ENABLED(CONFIG_OF) && !machine_desc->init_irq) irqchip_init(); else diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c index c51b87f6fc3e..1b8bef286fbc 100644 --- a/arch/arm/kernel/traps.c +++ b/arch/arm/kernel/traps.c @@ -66,6 +66,19 @@ void dump_backtrace_entry(unsigned long where, unsigned long from, { unsigned long end = frame + 4 + sizeof(struct pt_regs); + if (IS_ENABLED(CONFIG_UNWINDER_FRAME_POINTER) && + IS_ENABLED(CONFIG_CC_IS_GCC) && + end > ALIGN(frame, THREAD_SIZE)) { + /* + * If we are walking past the end of the stack, it may be due + * to the fact that we are on an IRQ or overflow stack. In this + * case, we can load the address of the other stack from the + * frame record. 
+ */ + frame = ((unsigned long *)frame)[-2] - 4; + end = frame + 4 + sizeof(struct pt_regs); + } + #ifndef CONFIG_KALLSYMS printk("%sFunction entered at [<%08lx>] from [<%08lx>]\n", loglvl, where, from); @@ -280,7 +293,7 @@ static int __die(const char *str, int err, struct pt_regs *regs) if (!user_mode(regs) || in_interrupt()) { dump_mem(KERN_EMERG, "Stack: ", regs->ARM_sp, - THREAD_SIZE + (unsigned long)task_stack_page(tsk)); + ALIGN(regs->ARM_sp, THREAD_SIZE)); dump_backtrace(regs, tsk, KERN_EMERG); dump_instr(KERN_EMERG, regs); } diff --git a/arch/arm/lib/backtrace-clang.S b/arch/arm/lib/backtrace-clang.S index 5b4bca85d06d..1f0814b41bcf 100644 --- a/arch/arm/lib/backtrace-clang.S +++ b/arch/arm/lib/backtrace-clang.S @@ -197,6 +197,13 @@ finished_setup: cmp sv_fp, frame @ next frame must be mov frame, sv_fp @ above the current frame + + @ + @ Kernel stacks may be discontiguous in memory. If the next + @ frame is below the previous frame, accept it as long as it + @ lives in kernel memory. + @ + cmpls sv_fp, #PAGE_OFFSET bhi for_each_frame 1006: adr r0, .Lbad diff --git a/arch/arm/lib/backtrace.S b/arch/arm/lib/backtrace.S index e8408f22d4dc..e6e8451c5cb3 100644 --- a/arch/arm/lib/backtrace.S +++ b/arch/arm/lib/backtrace.S @@ -98,6 +98,13 @@ for_each_frame: tst frame, mask @ Check for address exceptions cmp sv_fp, frame @ next frame must be mov frame, sv_fp @ above the current frame + + @ + @ Kernel stacks may be discontiguous in memory. If the next + @ frame is below the previous frame, accept it as long as it + @ lives in kernel memory. 
+ @ + cmpls sv_fp, #PAGE_OFFSET bhi for_each_frame 1006: adr r0, .Lbad From patchwork Mon Jan 24 17:47:35 2022 
From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 23/32] ARM: call_with_stack: add unwind support Date: Mon, 24 Jan 2022 18:47:35 +0100 Message-Id: <20220124174744.1054712-24-ardb@kernel.org> In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org> Restructure the code and add the unwind annotations so that both the frame pointer unwinder as well as the EHABI unwind info based unwinder will be able to follow the call stack through call_with_stack(). 
Since GCC and Clang use different formats for the stack frame, two methods are implemented: a GCC version that pushes fp, sp, lr and pc for compatibility with the frame pointer unwinder, and a second version that works with Clang, as well as with the EHABI unwinder both in ARM and Thumb2 modes. Signed-off-by: Ard Biesheuvel Acked-by: Linus Walleij Tested-by: Keith Packard Reviewed-by: Nick Desaulniers Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/lib/call_with_stack.S | 33 +++++++++++++++----- 1 file changed, 25 insertions(+), 8 deletions(-) diff --git a/arch/arm/lib/call_with_stack.S b/arch/arm/lib/call_with_stack.S index 28b0341ae786..0a268a6c513c 100644 --- a/arch/arm/lib/call_with_stack.S +++ b/arch/arm/lib/call_with_stack.S @@ -8,25 +8,42 @@ #include #include +#include /* * void call_with_stack(void (*fn)(void *), void *arg, void *sp) * * Change the stack to that pointed at by sp, then invoke fn(arg) with * the new stack. + * + * The sequence below follows the APCS frame convention for frame pointer + * unwinding, and implements the unwinder annotations needed by the EABI + * unwinder. */ -ENTRY(call_with_stack) - str sp, [r2, #-4]! - str lr, [r2, #-4]! 
+ENTRY(call_with_stack) +#if defined(CONFIG_UNWINDER_FRAME_POINTER) && defined(CONFIG_CC_IS_GCC) + mov ip, sp + push {fp, ip, lr, pc} + sub fp, ip, #4 +#else +UNWIND( .fnstart ) +UNWIND( .save {fpreg, lr} ) + push {fpreg, lr} +UNWIND( .setfp fpreg, sp ) + mov fpreg, sp +#endif mov sp, r2 mov r2, r0 mov r0, r1 - badr lr, 1f - ret r2 + bl_r r2 -1: ldr lr, [sp] - ldr sp, [sp, #4] - ret lr +#if defined(CONFIG_UNWINDER_FRAME_POINTER) && defined(CONFIG_CC_IS_GCC) + ldmdb fp, {fp, sp, pc} +#else + mov sp, fpreg + pop {fpreg, pc} +UNWIND( .fnend ) +#endif ENDPROC(call_with_stack) From patchwork Mon Jan 24 17:47:36 2022 
From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 24/32] ARM: run softirqs on the per-CPU IRQ stack Date: Mon, 24 Jan 2022 18:47:36 +0100 Message-Id: <20220124174744.1054712-25-ardb@kernel.org> In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org> 
Now that we have enabled IRQ stacks, any softIRQs that are handled over the back of a hard IRQ will run from the IRQ stack as well. However, any synchronous softirq processing that happens when re-enabling softIRQs from task context will still execute on that task's stack. Since any call to local_bh_enable() at any level in the task's call stack may trigger a softIRQ processing run, which could potentially cause a task stack overflow if the combined stack footprints exceed the stack's size, let's run these synchronous invocations of do_softirq() on the IRQ stack as well. Signed-off-by: Ard Biesheuvel Reviewed-by: Arnd Bergmann Acked-by: Linus Walleij Tested-by: Keith Packard Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/Kconfig | 2 ++ arch/arm/kernel/irq.c | 14 ++++++++++++++ 2 files changed, 16 insertions(+) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 108a7a872084..b959249dd716 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -128,6 +128,8 @@ config ARM select RTC_LIB select SYS_SUPPORTS_APM_EMULATION select THREAD_INFO_IN_TASK + select HAVE_IRQ_EXIT_ON_IRQ_STACK + select HAVE_SOFTIRQ_ON_OWN_STACK select TRACE_IRQFLAGS_SUPPORT if !CPU_V7M # Above selects are sorted alphabetically; please add new ones # according to that. Thanks. 
diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c index 92ae80a8e5b4..380376f55554 100644 --- a/arch/arm/kernel/irq.c +++ b/arch/arm/kernel/irq.c @@ -36,11 +36,14 @@ #include #include #include +#include #include #include #include #include +#include "reboot.h" + unsigned long irq_err_count; asmlinkage DEFINE_PER_CPU_READ_MOSTLY(u8 *, irq_stack_ptr); @@ -58,6 +61,17 @@ static void __init init_irq_stacks(void) } } +static void ____do_softirq(void *arg) +{ + __do_softirq(); +} + +void do_softirq_own_stack(void) +{ + call_with_stack(____do_softirq, NULL, + __this_cpu_read(irq_stack_ptr)); +} + int arch_show_interrupts(struct seq_file *p, int prec) { #ifdef CONFIG_FIQ From patchwork Mon Jan 24 17:47:37 2022 
From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 25/32] ARM: memcpy: use frame pointer as unwind anchor Date: Mon, 24 Jan 2022 18:47:37 +0100 Message-Id: <20220124174744.1054712-26-ardb@kernel.org> In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org> 
The memcpy template is a bit unusual in the way it manages the stack pointer: depending on the execution path through the function, the SP assumes different values as different subsets of the register file are preserved and restored again. This is problematic when it comes to EHABI unwind info, as it is not instruction accurate, and does not allow tracking the SP value as it changes. Commit 279f487e0b471 ("ARM: 8225/1: Add unwinding support for memory copy functions") addressed this by carving up the function in different chunks as far as the unwinder is concerned, and keeping a set of unwind directives for each of them, each corresponding with the state of the stack pointer during execution of the chunk in question. This not only duplicates unwind info unnecessarily, but it also complicates unwinding the stack upon overflow. Instead, let's do what the compiler does when the SP is updated halfway through a function, which is to use a frame pointer and emit the appropriate unwind directives to communicate this to the unwinder. Note that Thumb-2 uses R7 for this, while ARM uses R11 aka FP. So let's avoid touching R7 in the body of the template, so that Thumb-2 can use it as the frame pointer. R11 was not modified in the first place. 
Signed-off-by: Ard Biesheuvel Tested-by: Keith Packard Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/lib/copy_from_user.S | 13 ++-- arch/arm/lib/copy_template.S | 67 +++++++------------- arch/arm/lib/copy_to_user.S | 13 ++-- arch/arm/lib/memcpy.S | 13 ++-- 4 files changed, 38 insertions(+), 68 deletions(-) diff --git a/arch/arm/lib/copy_from_user.S b/arch/arm/lib/copy_from_user.S index 480a20766137..270de7debd0f 100644 --- a/arch/arm/lib/copy_from_user.S +++ b/arch/arm/lib/copy_from_user.S @@ -91,18 +91,15 @@ strb\cond \reg, [\ptr], #1 .endm - .macro enter reg1 reg2 + .macro enter regs:vararg mov r3, #0 - stmdb sp!, {r0, r2, r3, \reg1, \reg2} +UNWIND( .save {r0, r2, r3, \regs} ) + stmdb sp!, {r0, r2, r3, \regs} .endm - .macro usave reg1 reg2 - UNWIND( .save {r0, r2, r3, \reg1, \reg2} ) - .endm - - .macro exit reg1 reg2 + .macro exit regs:vararg add sp, sp, #8 - ldmfd sp!, {r0, \reg1, \reg2} + ldmfd sp!, {r0, \regs} .endm .text diff --git a/arch/arm/lib/copy_template.S b/arch/arm/lib/copy_template.S index 810a805d36dc..8fbafb074fe9 100644 --- a/arch/arm/lib/copy_template.S +++ b/arch/arm/lib/copy_template.S @@ -69,13 +69,10 @@ * than one 32bit instruction in Thumb-2) */ - - UNWIND( .fnstart ) - enter r4, lr - UNWIND( .fnend ) - UNWIND( .fnstart ) - usave r4, lr @ in first stmdb block + enter r4, UNWIND(fpreg,) lr + UNWIND( .setfp fpreg, sp ) + UNWIND( mov fpreg, sp ) subs r2, r2, #4 blt 8f @@ -86,12 +83,7 @@ bne 10f 1: subs r2, r2, #(28) - stmfd sp!, {r5 - r8} - UNWIND( .fnend ) - - UNWIND( .fnstart ) - usave r4, lr - UNWIND( .save {r5 - r8} ) @ in second stmfd block + stmfd sp!, {r5, r6, r8, r9} blt 5f CALGN( ands ip, r0, #31 ) @@ -110,9 +102,9 @@ PLD( pld [r1, #92] ) 3: PLD( pld [r1, #124] ) -4: ldr8w r1, r3, r4, r5, r6, r7, r8, ip, lr, abort=20f +4: ldr8w r1, r3, r4, r5, r6, r8, r9, ip, lr, abort=20f subs r2, r2, #32 - str8w r0, r3, r4, r5, r6, r7, r8, ip, lr, abort=20f + str8w r0, r3, r4, r5, r6, r8, r9, ip, lr, abort=20f bge 3b PLD( 
cmn r2, #96 ) PLD( bge 4b ) @@ -132,8 +124,8 @@ ldr1w r1, r4, abort=20f ldr1w r1, r5, abort=20f ldr1w r1, r6, abort=20f - ldr1w r1, r7, abort=20f ldr1w r1, r8, abort=20f + ldr1w r1, r9, abort=20f ldr1w r1, lr, abort=20f #if LDR1W_SHIFT < STR1W_SHIFT @@ -150,17 +142,14 @@ str1w r0, r4, abort=20f str1w r0, r5, abort=20f str1w r0, r6, abort=20f - str1w r0, r7, abort=20f str1w r0, r8, abort=20f + str1w r0, r9, abort=20f str1w r0, lr, abort=20f CALGN( bcs 2b ) -7: ldmfd sp!, {r5 - r8} - UNWIND( .fnend ) @ end of second stmfd block +7: ldmfd sp!, {r5, r6, r8, r9} - UNWIND( .fnstart ) - usave r4, lr @ still in first stmdb block 8: movs r2, r2, lsl #31 ldr1b r1, r3, ne, abort=21f ldr1b r1, r4, cs, abort=21f @@ -169,7 +158,7 @@ str1b r0, r4, cs, abort=21f str1b r0, ip, cs, abort=21f - exit r4, pc + exit r4, UNWIND(fpreg,) pc 9: rsb ip, ip, #4 cmp ip, #2 @@ -189,13 +178,10 @@ ldr1w r1, lr, abort=21f beq 17f bgt 18f - UNWIND( .fnend ) .macro forward_copy_shift pull push - UNWIND( .fnstart ) - usave r4, lr @ still in first stmdb block subs r2, r2, #28 blt 14f @@ -205,12 +191,8 @@ CALGN( subcc r2, r2, ip ) CALGN( bcc 15f ) -11: stmfd sp!, {r5 - r9} - UNWIND( .fnend ) +11: stmfd sp!, {r5, r6, r8 - r10} - UNWIND( .fnstart ) - usave r4, lr - UNWIND( .save {r5 - r9} ) @ in new second stmfd block PLD( pld [r1, #0] ) PLD( subs r2, r2, #96 ) PLD( pld [r1, #28] ) @@ -219,35 +201,32 @@ PLD( pld [r1, #92] ) 12: PLD( pld [r1, #124] ) -13: ldr4w r1, r4, r5, r6, r7, abort=19f +13: ldr4w r1, r4, r5, r6, r8, abort=19f mov r3, lr, lspull #\pull subs r2, r2, #32 - ldr4w r1, r8, r9, ip, lr, abort=19f + ldr4w r1, r9, r10, ip, lr, abort=19f orr r3, r3, r4, lspush #\push mov r4, r4, lspull #\pull orr r4, r4, r5, lspush #\push mov r5, r5, lspull #\pull orr r5, r5, r6, lspush #\push mov r6, r6, lspull #\pull - orr r6, r6, r7, lspush #\push - mov r7, r7, lspull #\pull - orr r7, r7, r8, lspush #\push + orr r6, r6, r8, lspush #\push mov r8, r8, lspull #\pull orr r8, r8, r9, lspush #\push mov r9, r9, 
lspull #\pull - orr r9, r9, ip, lspush #\push + orr r9, r9, r10, lspush #\push + mov r10, r10, lspull #\pull + orr r10, r10, ip, lspush #\push mov ip, ip, lspull #\pull orr ip, ip, lr, lspush #\push - str8w r0, r3, r4, r5, r6, r7, r8, r9, ip, abort=19f + str8w r0, r3, r4, r5, r6, r8, r9, r10, ip, abort=19f bge 12b PLD( cmn r2, #96 ) PLD( bge 13b ) - ldmfd sp!, {r5 - r9} - UNWIND( .fnend ) @ end of the second stmfd block + ldmfd sp!, {r5, r6, r8 - r10} - UNWIND( .fnstart ) - usave r4, lr @ still in first stmdb block 14: ands ip, r2, #28 beq 16f @@ -262,7 +241,6 @@ 16: sub r1, r1, #(\push / 8) b 8b - UNWIND( .fnend ) .endm @@ -273,6 +251,7 @@ 18: forward_copy_shift pull=24 push=8 + UNWIND( .fnend ) /* * Abort preamble and completion macros. @@ -282,13 +261,13 @@ */ .macro copy_abort_preamble -19: ldmfd sp!, {r5 - r9} +19: ldmfd sp!, {r5, r6, r8 - r10} b 21f -20: ldmfd sp!, {r5 - r8} +20: ldmfd sp!, {r5, r6, r8, r9} 21: .endm .macro copy_abort_end - ldmfd sp!, {r4, pc} + ldmfd sp!, {r4, UNWIND(fpreg,) pc} .endm diff --git a/arch/arm/lib/copy_to_user.S b/arch/arm/lib/copy_to_user.S index 842ea5ede485..fac49e57cc0b 100644 --- a/arch/arm/lib/copy_to_user.S +++ b/arch/arm/lib/copy_to_user.S @@ -90,18 +90,15 @@ strusr \reg, \ptr, 1, \cond, abort=\abort .endm - .macro enter reg1 reg2 + .macro enter regs:vararg mov r3, #0 - stmdb sp!, {r0, r2, r3, \reg1, \reg2} +UNWIND( .save {r0, r2, r3, \regs} ) + stmdb sp!, {r0, r2, r3, \regs} .endm - .macro usave reg1 reg2 - UNWIND( .save {r0, r2, r3, \reg1, \reg2} ) - .endm - - .macro exit reg1 reg2 + .macro exit regs:vararg add sp, sp, #8 - ldmfd sp!, {r0, \reg1, \reg2} + ldmfd sp!, {r0, \regs} .endm .text diff --git a/arch/arm/lib/memcpy.S b/arch/arm/lib/memcpy.S index e4caf48c089f..90f2b645aa0d 100644 --- a/arch/arm/lib/memcpy.S +++ b/arch/arm/lib/memcpy.S @@ -42,16 +42,13 @@ strb\cond \reg, [\ptr], #1 .endm - .macro enter reg1 reg2 - stmdb sp!, {r0, \reg1, \reg2} + .macro enter regs:vararg +UNWIND( .save {r0, \regs} ) + stmdb sp!, 
{r0, \regs} .endm - .macro usave reg1 reg2 - UNWIND( .save {r0, \reg1, \reg2} ) - .endm - - .macro exit reg1 reg2 - ldmfd sp!, {r0, \reg1, \reg2} + .macro exit regs:vararg + ldmfd sp!, {r0, \regs} .endm .text From patchwork Mon Jan 24 17:47:38 2022 
I+zzmezPaa/biK9pQWoQ/7ONfFVlNNWQYOi+qnwbKSf/Jkd3a7vCnslj432iBjO3sN 9agB5qoGZN8nPwngk8zZW5p0hvNqh+5nfen2z1Tg6dnYMzmGunkMDafzI50Y9LS4Sx 6ZM46ruvSat1Tb8lea/rOs0OZlusCO3ede3ByI9ABBMZzlhzWx8WuSTuQtjq6Pm8AC /slrOeLAo4YyA== From: Ard Biesheuvel To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel , Nicolas Pitre , Arnd Bergmann , Kees Cook , Keith Packard , Linus Walleij , Nick Desaulniers , Tony Lindgren , Marc Zyngier , Vladimir Murzin , Jesse Taube Subject: [PATCH v5 26/32] ARM: memmove: use frame pointer as unwind anchor Date: Mon, 24 Jan 2022 18:47:38 +0100 Message-Id: <20220124174744.1054712-27-ardb@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org> References: <20220124174744.1054712-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6060; h=from:subject; bh=S9VAgvS7Dq3pQmUaE1sjfY4F28GrRzw4QsaVdM6p2cE=; b=owEB7QES/pANAwAKAcNPIjmS2Y8kAcsmYgBh7uYz0OjoiDentb6t7pZtc080VYIAGPoHXehiwE+w IX4gUl6JAbMEAAEKAB0WIQT72WJ8QGnJQhU3VynDTyI5ktmPJAUCYe7mMwAKCRDDTyI5ktmPJBavDA CSrS7Cos4dYkj0uU4By5+ZpsWY+H6R4VS+UV6mUUUeLYfxdzcImuarmQTSV2aIagBKV7LhSEVDlH1p pYdEf6exNAtbDwUvt+0UxhzhXVbYkH9fOeG58RSDZev1cuk/pyGsL7bo54RXPb6X96NlYox/yOwg6N DcpZJAi/SsfamP1KZWT1VEVudzRxJ1dOz/cO6B+CfsINrTEpH11sSFqD/sTFb8ISP5j5JN2rIj+9j3 Q2HyEhexeQfuaCzZORwecK2fDOo3NyQmk3vWzUr9rXFB2UlbMjL5T7FPcS6cuDzqosEUbMLfuWQXT4 soCXjO8RAhc9f0/9nfpx2PaVarPXj8qBORobSsbsxdTh+sZ1ClZUwQEoAROcoYq4aPYtViZMSDJGhB 9xfiCvDpI0gp14QB6YcQpT564kNg6WSu4+49RwF/lWbhHzAIuN+XDRGHFwpmICMkJwjLidra/LORpw iotYP1PBsJET8JoErr+ZU7ksK5RrcNGlX3Clpqay8mYck= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 Precedence: bulk List-ID: X-Mailing-List: linux-hardening@vger.kernel.org The memmove routine is a bit unusual in the way it manages the stack pointer: depending on the execution path through the function, the SP assumes different values as different subsets of the 
register file are preserved and restored again. This is problematic when it comes to EHABI unwind info, as it is not instruction accurate, and does not allow tracking the SP value as it changes. Commit 207a6cb06990c ("ARM: 8224/1: Add unwinding support for memmove function") addressed this by carving up the function in different chunks as far as the unwinder is concerned, and keeping a set of unwind directives for each of them, each corresponding with the state of the stack pointer during execution of the chunk in question. This not only duplicates unwind info unnecessarily, but it also complicates unwinding the stack upon overflow. Instead, let's do what the compiler does when the SP is updated halfway through a function, which is to use a frame pointer and emit the appropriate unwind directives to communicate this to the unwinder. Note that Thumb-2 uses R7 for this, while ARM uses R11 aka FP. So let's avoid touching R7 in the body of the function, so that Thumb-2 can use it as the frame pointer. R11 was not modified in the first place. 
Signed-off-by: Ard Biesheuvel Tested-by: Keith Packard Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/lib/memmove.S | 60 +++++++------------- 1 file changed, 20 insertions(+), 40 deletions(-) diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S index 6fecc12a1f51..6410554039fd 100644 --- a/arch/arm/lib/memmove.S +++ b/arch/arm/lib/memmove.S @@ -31,12 +31,13 @@ WEAK(memmove) subs ip, r0, r1 cmphi r2, ip bls __memcpy - - stmfd sp!, {r0, r4, lr} UNWIND( .fnend ) UNWIND( .fnstart ) - UNWIND( .save {r0, r4, lr} ) @ in first stmfd block + UNWIND( .save {r0, r4, fpreg, lr} ) + stmfd sp!, {r0, r4, UNWIND(fpreg,) lr} + UNWIND( .setfp fpreg, sp ) + UNWIND( mov fpreg, sp ) add r1, r1, r2 add r0, r0, r2 subs r2, r2, #4 @@ -48,12 +49,7 @@ WEAK(memmove) bne 10f 1: subs r2, r2, #(28) - stmfd sp!, {r5 - r8} - UNWIND( .fnend ) - - UNWIND( .fnstart ) - UNWIND( .save {r0, r4, lr} ) - UNWIND( .save {r5 - r8} ) @ in second stmfd block + stmfd sp!, {r5, r6, r8, r9} blt 5f CALGN( ands ip, r0, #31 ) @@ -72,9 +68,9 @@ WEAK(memmove) PLD( pld [r1, #-96] ) 3: PLD( pld [r1, #-128] ) -4: ldmdb r1!, {r3, r4, r5, r6, r7, r8, ip, lr} +4: ldmdb r1!, {r3, r4, r5, r6, r8, r9, ip, lr} subs r2, r2, #32 - stmdb r0!, {r3, r4, r5, r6, r7, r8, ip, lr} + stmdb r0!, {r3, r4, r5, r6, r8, r9, ip, lr} bge 3b PLD( cmn r2, #96 ) PLD( bge 4b ) @@ -88,8 +84,8 @@ WEAK(memmove) W(ldr) r4, [r1, #-4]! W(ldr) r5, [r1, #-4]! W(ldr) r6, [r1, #-4]! - W(ldr) r7, [r1, #-4]! W(ldr) r8, [r1, #-4]! + W(ldr) r9, [r1, #-4]! W(ldr) lr, [r1, #-4]! add pc, pc, ip @@ -99,17 +95,13 @@ WEAK(memmove) W(str) r4, [r0, #-4]! W(str) r5, [r0, #-4]! W(str) r6, [r0, #-4]! - W(str) r7, [r0, #-4]! W(str) r8, [r0, #-4]! + W(str) r9, [r0, #-4]! W(str) lr, [r0, #-4]! CALGN( bcs 2b ) -7: ldmfd sp!, {r5 - r8} - UNWIND( .fnend ) @ end of second stmfd block - - UNWIND( .fnstart ) - UNWIND( .save {r0, r4, lr} ) @ still in first stmfd block +7: ldmfd sp!, {r5, r6, r8, r9} 8: movs r2, r2, lsl #31 ldrbne r3, [r1, #-1]! 
@@ -118,7 +110,7 @@ WEAK(memmove) strbne r3, [r0, #-1]! strbcs r4, [r0, #-1]! strbcs ip, [r0, #-1] - ldmfd sp!, {r0, r4, pc} + ldmfd sp!, {r0, r4, UNWIND(fpreg,) pc} 9: cmp ip, #2 ldrbgt r3, [r1, #-1]! @@ -137,13 +129,10 @@ WEAK(memmove) ldr r3, [r1, #0] beq 17f blt 18f - UNWIND( .fnend ) .macro backward_copy_shift push pull - UNWIND( .fnstart ) - UNWIND( .save {r0, r4, lr} ) @ still in first stmfd block subs r2, r2, #28 blt 14f @@ -152,12 +141,7 @@ WEAK(memmove) CALGN( subcc r2, r2, ip ) CALGN( bcc 15f ) -11: stmfd sp!, {r5 - r9} - UNWIND( .fnend ) - - UNWIND( .fnstart ) - UNWIND( .save {r0, r4, lr} ) - UNWIND( .save {r5 - r9} ) @ in new second stmfd block +11: stmfd sp!, {r5, r6, r8 - r10} PLD( pld [r1, #-4] ) PLD( subs r2, r2, #96 ) @@ -167,35 +151,31 @@ WEAK(memmove) PLD( pld [r1, #-96] ) 12: PLD( pld [r1, #-128] ) -13: ldmdb r1!, {r7, r8, r9, ip} +13: ldmdb r1!, {r8, r9, r10, ip} mov lr, r3, lspush #\push subs r2, r2, #32 ldmdb r1!, {r3, r4, r5, r6} orr lr, lr, ip, lspull #\pull mov ip, ip, lspush #\push - orr ip, ip, r9, lspull #\pull + orr ip, ip, r10, lspull #\pull + mov r10, r10, lspush #\push + orr r10, r10, r9, lspull #\pull mov r9, r9, lspush #\push orr r9, r9, r8, lspull #\pull mov r8, r8, lspush #\push - orr r8, r8, r7, lspull #\pull - mov r7, r7, lspush #\push - orr r7, r7, r6, lspull #\pull + orr r8, r8, r6, lspull #\pull mov r6, r6, lspush #\push orr r6, r6, r5, lspull #\pull mov r5, r5, lspush #\push orr r5, r5, r4, lspull #\pull mov r4, r4, lspush #\push orr r4, r4, r3, lspull #\pull - stmdb r0!, {r4 - r9, ip, lr} + stmdb r0!, {r4 - r6, r8 - r10, ip, lr} bge 12b PLD( cmn r2, #96 ) PLD( bge 13b ) - ldmfd sp!, {r5 - r9} - UNWIND( .fnend ) @ end of the second stmfd block - - UNWIND( .fnstart ) - UNWIND( .save {r0, r4, lr} ) @ still in first stmfd block + ldmfd sp!, {r5, r6, r8 - r10} 14: ands ip, r2, #28 beq 16f @@ -211,7 +191,6 @@ WEAK(memmove) 16: add r1, r1, #(\pull / 8) b 8b - UNWIND( .fnend ) .endm @@ -222,5 +201,6 @@ WEAK(memmove) 18: 
backward_copy_shift push=24 pull=8 + UNWIND( .fnend ) ENDPROC(memmove) ENDPROC(__memmove)

From patchwork Mon Jan 24 17:47:39 2022
X-Patchwork-Id: 12722608
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 27/32] ARM: memset: clean up unwind annotations
Date: Mon, 24 Jan 2022 18:47:39 +0100
Message-Id: <20220124174744.1054712-28-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

The memset implementation carves up the code in different sections, each covered with their own unwind info.
In this case, it is done in a way similar to how the compiler might do it, to disambiguate between parts where the return address is in LR and the SP is unmodified, and parts where a stack frame is live, and the unwinder needs to know the size of the stack frame and the location of the return address within it. Only the placement of the unwind directives is slightly odd: the stack pushes are placed in the wrong sections, which may confuse the unwinder when attempting to unwind with PC pointing at the stack push in question. So let's fix this up, by reordering the directives and instructions as appropriate. Signed-off-by: Ard Biesheuvel Tested-by: Keith Packard Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/lib/memset.S | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S index 9817cb258c1a..d71ab61430b2 100644 --- a/arch/arm/lib/memset.S +++ b/arch/arm/lib/memset.S @@ -28,16 +28,16 @@ UNWIND( .fnstart ) mov r3, r1 7: cmp r2, #16 blt 4f +UNWIND( .fnend ) #if ! CALGN(1)+0 /* * We need 2 extra registers for this loop - use r8 and the LR */ - stmfd sp!, {r8, lr} -UNWIND( .fnend ) UNWIND( .fnstart ) UNWIND( .save {r8, lr} ) + stmfd sp!, {r8, lr} mov r8, r1 mov lr, r3 @@ -66,10 +66,9 @@ UNWIND( .fnend ) * whole cache lines at once. 
*/ - stmfd sp!, {r4-r8, lr} -UNWIND( .fnend ) UNWIND( .fnstart ) UNWIND( .save {r4-r8, lr} ) + stmfd sp!, {r4-r8, lr} mov r4, r1 mov r5, r3 mov r6, r1

From patchwork Mon Jan 24 17:47:40 2022
X-Patchwork-Id: 12722610
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 28/32] ARM: unwind: disregard unwind info before stack frame is set up
Date: Mon, 24 Jan 2022 18:47:40 +0100
Message-Id: <20220124174744.1054712-29-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

When unwinding the stack from a stack overflow, we are likely to start from a stack push instruction, given that this is the most common way to grow the stack for compiler emitted code. This push instruction rarely appears anywhere else than at offset 0x0 of the function, and if it doesn't, the compiler tends to split up the unwind annotations, given that the stack frame layout is apparently not the same throughout the function.
This means that, in the general case, if the frame's PC points at the first instruction covered by a certain unwind entry, there is no way the stack frame that the unwind entry describes could have been created yet, and so we are still on the stack frame of the caller in that case. So treat this as a special case, and return with the new PC taken from the frame's LR, without applying the unwind transformations to the virtual register set. This permits us to unwind the call stack on stack overflow when the overflow was caused by a stack push on function entry. Signed-off-by: Ard Biesheuvel Tested-by: Keith Packard Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/kernel/unwind.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c index b7a6141c342f..e8d729975f12 100644 --- a/arch/arm/kernel/unwind.c +++ b/arch/arm/kernel/unwind.c @@ -411,7 +411,21 @@ int unwind_frame(struct stackframe *frame) if (idx->insn == 1) /* can't unwind */ return -URC_FAILURE; - else if ((idx->insn & 0x80000000) == 0) + else if (frame->pc == prel31_to_addr(&idx->addr_offset)) { + /* + * Unwinding is tricky when we're halfway through the prologue, + * since the stack frame that the unwinder expects may not be + * fully set up yet. However, one thing we do know for sure is + * that if we are unwinding from the very first instruction of + * a function, we are still effectively in the stack frame of + * the caller, and the unwind info has no relevance yet. 
+ */ + if (frame->pc == frame->lr) + return -URC_FAILURE; + frame->sp_low = frame->sp; + frame->pc = frame->lr; + return URC_OK; + } else if ((idx->insn & 0x80000000) == 0) /* prel31 to the unwind table */ ctrl.insn = (unsigned long *)prel31_to_addr(&idx->insn); else if ((idx->insn & 0xff000000) == 0x80000000)

From patchwork Mon Jan 24 17:47:41 2022
X-Patchwork-Id: 12722611
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 29/32] ARM: entry: rework stack realignment code in svc_entry
Date: Mon, 24 Jan 2022 18:47:41 +0100
Message-Id: <20220124174744.1054712-30-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

The original Thumb-2 enablement patches updated the stack realignment code in svc_entry to work around the lack of a
STMIB instruction in Thumb-2, by subtracting 4 from the frame size, inverting the sense of the misalignment check, and changing to a STMIA instruction and a final stack push of a 4-byte quantity that results in the stack becoming aligned at the end of the sequence. It also pushes and pops R0 to the stack in order to have a temp register that Thumb-2 allows in general purpose ALU instructions, as TST using SP is not permitted. Both are a bit problematic for vmap'ed stacks, as using the stack is only permitted after we decide that we did not overflow the stack, or have already switched to the overflow stack. As for the alignment check: the current approach creates a corner case where, if the initial SUB of SP ends up right at the start of the stack, we will end up subtracting another 8 bytes and overflowing it. This means we would need to add the overflow check *after* the SUB that deliberately misaligns the stack. However, this would require us to keep local state (i.e., whether we performed the subtract or not) across the overflow check, but without any GPRs or stack available. So let's switch to an approach where we don't use the stack, and where the alignment check of the stack pointer occurs in the usual way, as this is guaranteed not to result in overflow. This means we will be able to do the overflow check first. While at it, switch to R1 so the mode stack pointer in R0 remains accessible.
Acked-by: Nicolas Pitre Signed-off-by: Ard Biesheuvel Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/kernel/entry-armv.S | 25 +++++++++++--------- 1 file changed, 14 insertions(+), 11 deletions(-) diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 38e3978a50a9..a4009e4302bb 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -177,24 +177,27 @@ ENDPROC(__und_invalid) .macro svc_entry, stack_hole=0, trace=1, uaccess=1 UNWIND(.fnstart ) UNWIND(.save {r0 - pc} ) - sub sp, sp, #(SVC_REGS_SIZE + \stack_hole - 4) + sub sp, sp, #(SVC_REGS_SIZE + \stack_hole) #ifdef CONFIG_THUMB2_KERNEL - SPFIX( str r0, [sp] ) @ temporarily saved - SPFIX( mov r0, sp ) - SPFIX( tst r0, #4 ) @ test original stack alignment - SPFIX( ldr r0, [sp] ) @ restored + add sp, r1 @ get SP in a GPR without + sub r1, sp, r1 @ using a temp register + tst r1, #4 @ test stack pointer alignment + sub r1, sp, r1 @ restore original R1 + sub sp, r1 @ restore original SP #else SPFIX( tst sp, #4 ) #endif - SPFIX( subeq sp, sp, #4 ) - stmia sp, {r1 - r12} + SPFIX( subne sp, sp, #4 ) + + ARM( stmib sp, {r1 - r12} ) + THUMB( stmia sp, {r0 - r12} ) @ No STMIB in Thumb-2 ldmia r0, {r3 - r5} - add r7, sp, #S_SP - 4 @ here for interlock avoidance + add r7, sp, #S_SP @ here for interlock avoidance mov r6, #-1 @ "" "" "" "" - add r2, sp, #(SVC_REGS_SIZE + \stack_hole - 4) - SPFIX( addeq r2, r2, #4 ) - str r3, [sp, #-4]! 
@ save the "real" r0 copied + add r2, sp, #(SVC_REGS_SIZE + \stack_hole) + SPFIX( addne r2, r2, #4 ) + str r3, [sp] @ save the "real" r0 copied @ from the exception stack mov r3, lr

From patchwork Mon Jan 24 17:47:42 2022
X-Patchwork-Id: 12722612
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 30/32] ARM: switch_to: clean up Thumb2 code path
Date: Mon, 24 Jan 2022 18:47:42 +0100
Message-Id: <20220124174744.1054712-31-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

The load/store-multiple instructions that essentially perform the switch_to operation in ARM mode, by loading/storing all callee save registers as well as the stack pointer and the link register or program counter, are split into 3 separate loads or stores for Thumb-2,
with the IP register used as a temporary to capture the target address. We can clean this up a bit, by sticking with a single STMIA or LDMIA instruction, but one that uses IP instead of SP. While at it, switch to a MOVW/MOVT pair to load thread_notify_head. Signed-off-by: Ard Biesheuvel --- arch/arm/kernel/entry-armv.S | 24 +++++++++++--------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index a4009e4302bb..86be80159c14 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -773,14 +773,14 @@ ENDPROC(__fiq_usr) * r0 = previous task_struct, r1 = previous thread_info, r2 = next thread_info * previous and next are guaranteed not to be the same. */ + .align 5 ENTRY(__switch_to) UNWIND(.fnstart ) UNWIND(.cantunwind ) - add ip, r1, #TI_CPU_SAVE - ARM( stmia ip!, {r4 - sl, fp, sp, lr} ) @ Store most regs on stack - THUMB( stmia ip!, {r4 - sl, fp} ) @ Store most regs on stack - THUMB( str sp, [ip], #4 ) - THUMB( str lr, [ip], #4 ) + add r3, r1, #TI_CPU_SAVE + ARM( stmia r3, {r4 - sl, fp, sp, lr} ) @ Store most regs on stack + THUMB( mov ip, sp ) + THUMB( stmia r3, {r4 - sl, fp, ip, lr} ) @ Thumb2 does not permit SP here ldr r4, [r2, #TI_TP_VALUE] ldr r5, [r2, #TI_TP_VALUE + 4] #ifdef CONFIG_CPU_USE_DOMAINS @@ -805,20 +805,22 @@ ENTRY(__switch_to) #endif mov r5, r0 add r4, r2, #TI_CPU_SAVE - ldr r0, =thread_notify_head + mov_l r0, thread_notify_head mov r1, #THREAD_NOTIFY_SWITCH bl atomic_notifier_call_chain #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_SMP) && \ !defined(CONFIG_STACKPROTECTOR_PER_TASK) str r9, [r8] #endif - THUMB( mov ip, r4 ) mov r0, r5 set_current r7, r8 - ARM( ldmia r4, {r4 - sl, fp, sp, pc} ) @ Load all regs saved previously - THUMB( ldmia ip!, {r4 - sl, fp} ) @ Load all regs saved previously - THUMB( ldr sp, [ip], #4 ) - THUMB( ldr pc, [ip] ) +#if !defined(CONFIG_THUMB2_KERNEL) + ldmia r4, {r4 - sl, fp, sp, pc} @ Load all regs saved 
previously
+#else
+	ldmia	r4, {r4 - sl, fp, ip, lr}	@ Thumb2 does not permit SP here
+	mov	sp, ip
+	ret	lr
+#endif
 UNWIND(.fnend		)
ENDPROC(__switch_to)

From patchwork Mon Jan 24 17:47:43 2022
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 31/32] ARM: mm: prepare vmalloc_seq handling for use under SMP
Date: Mon, 24 Jan 2022 18:47:43 +0100
Message-Id: <20220124174744.1054712-32-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

Currently, the vmalloc_seq counter is only used to keep track of changes in the vmalloc region on !SMP builds, which means there is no need to deal with concurrency explicitly.
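The concurrency-safe scheme this patch moves to can be sketched in userspace C11 atomics. This is a hedged illustration, not the kernel code: `struct tbl`, `master_update()` and `sync_from_master()` are invented stand-ins for init_mm's vmalloc page-table entries plus its vmalloc_seq counter and for the `__check_vmalloc_seq()` resync loop.

```c
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

/* Hypothetical stand-ins: "master" plays the role of init_mm's copy of
 * the vmalloc page tables plus its vmalloc_seq counter, and "mine" the
 * per-mm copy kept in sync. None of these names are kernel APIs. */
enum { TBL_ENTRIES = 8 };

struct tbl {
	atomic_int seq;
	int entries[TBL_ENTRIES];
};

/* Writer: update an entry, then bump the counter with release semantics,
 * so a reader that observes the new count also observes the new entry. */
static void master_update(struct tbl *master, int idx, int val)
{
	master->entries[idx] = val;
	atomic_fetch_add_explicit(&master->seq, 1, memory_order_release);
}

/* Reader: mirror of the resync loop in the patch: sample the counter,
 * copy the entries, publish the sampled count with a release store, and
 * retry if the master moved underneath us mid-copy. */
static void sync_from_master(struct tbl *master, struct tbl *mine)
{
	int seq;

	do {
		seq = atomic_load_explicit(&master->seq, memory_order_acquire);
		memcpy(mine->entries, master->entries, sizeof(mine->entries));
		atomic_store_explicit(&mine->seq, seq, memory_order_release);
	} while (seq != atomic_load_explicit(&master->seq, memory_order_acquire));
}
```

The kernel expresses the same idea with `atomic_t`, `atomic_read()` and `atomic_set_release()`; the do/while covers the window where the master table changes while the copy is in flight, exactly as in the `__check_vmalloc_seq()` hunk below.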
However, in a subsequent patch, we will wire up this same mechanism for ensuring that vmap'ed stacks are guaranteed to be mapped by the active mm before switching to a task, and here we need to ensure that changes to the page tables are visible to other CPUs when they observe a change in the sequence count. Since LPAE needs none of this, fold a check against it into the vmalloc_seq counter check after breaking it out into a separate static inline helper. Signed-off-by: Ard Biesheuvel --- arch/arm/include/asm/mmu.h | 2 +- arch/arm/include/asm/mmu_context.h | 13 +++++++++++-- arch/arm/mm/context.c | 3 +-- arch/arm/mm/ioremap.c | 18 +++++++++++------- 4 files changed, 24 insertions(+), 12 deletions(-) diff --git a/arch/arm/include/asm/mmu.h b/arch/arm/include/asm/mmu.h index 1592a4264488..e049723840d3 100644 --- a/arch/arm/include/asm/mmu.h +++ b/arch/arm/include/asm/mmu.h @@ -10,7 +10,7 @@ typedef struct { #else int switch_pending; #endif - unsigned int vmalloc_seq; + atomic_t vmalloc_seq; unsigned long sigpage; #ifdef CONFIG_VDSO unsigned long vdso; diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h index 84e58956fcab..71a26986efb9 100644 --- a/arch/arm/include/asm/mmu_context.h +++ b/arch/arm/include/asm/mmu_context.h @@ -23,6 +23,16 @@ void __check_vmalloc_seq(struct mm_struct *mm); +#ifdef CONFIG_MMU +static inline void check_vmalloc_seq(struct mm_struct *mm) +{ + if (!IS_ENABLED(CONFIG_ARM_LPAE) && + unlikely(atomic_read(&mm->context.vmalloc_seq) != + atomic_read(&init_mm.context.vmalloc_seq))) + __check_vmalloc_seq(mm); +} +#endif + #ifdef CONFIG_CPU_HAS_ASID void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk); @@ -52,8 +62,7 @@ static inline void a15_erratum_get_cpumask(int this_cpu, struct mm_struct *mm, static inline void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk) { - if (unlikely(mm->context.vmalloc_seq != init_mm.context.vmalloc_seq)) - __check_vmalloc_seq(mm); + 
check_vmalloc_seq(mm); if (irqs_disabled()) /* diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c index 48091870db89..4204ffa2d104 100644 --- a/arch/arm/mm/context.c +++ b/arch/arm/mm/context.c @@ -240,8 +240,7 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk) unsigned int cpu = smp_processor_id(); u64 asid; - if (unlikely(mm->context.vmalloc_seq != init_mm.context.vmalloc_seq)) - __check_vmalloc_seq(mm); + check_vmalloc_seq(mm); /* * We cannot update the pgd and the ASID atomicly with classic diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c index 197f8eb3a775..aa08bcb72db9 100644 --- a/arch/arm/mm/ioremap.c +++ b/arch/arm/mm/ioremap.c @@ -117,16 +117,21 @@ EXPORT_SYMBOL(ioremap_page); void __check_vmalloc_seq(struct mm_struct *mm) { - unsigned int seq; + int seq; do { - seq = init_mm.context.vmalloc_seq; + seq = atomic_read(&init_mm.context.vmalloc_seq); memcpy(pgd_offset(mm, VMALLOC_START), pgd_offset_k(VMALLOC_START), sizeof(pgd_t) * (pgd_index(VMALLOC_END) - pgd_index(VMALLOC_START))); - mm->context.vmalloc_seq = seq; - } while (seq != init_mm.context.vmalloc_seq); + /* + * Use a store-release so that other CPUs that observe the + * counter's new value are guaranteed to see the results of the + * memcpy as well. + */ + atomic_set_release(&mm->context.vmalloc_seq, seq); + } while (seq != atomic_read(&init_mm.context.vmalloc_seq)); } #if !defined(CONFIG_SMP) && !defined(CONFIG_ARM_LPAE) @@ -157,7 +162,7 @@ static void unmap_area_sections(unsigned long virt, unsigned long size) * Note: this is still racy on SMP machines. */ pmd_clear(pmdp); - init_mm.context.vmalloc_seq++; + atomic_inc_return_release(&init_mm.context.vmalloc_seq); /* * Free the page table, if there was one. @@ -174,8 +179,7 @@ static void unmap_area_sections(unsigned long virt, unsigned long size) * Ensure that the active_mm is up to date - we want to * catch any use-after-iounmap cases. 
*/
-	if (current->active_mm->context.vmalloc_seq != init_mm.context.vmalloc_seq)
-		__check_vmalloc_seq(current->active_mm);
+	check_vmalloc_seq(current->active_mm);
 	flush_tlb_kernel_range(virt, end);
 }

From patchwork Mon Jan 24 17:47:44 2022
From: Ard Biesheuvel
To: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: linux-hardening@vger.kernel.org, Ard Biesheuvel, Nicolas Pitre, Arnd Bergmann, Kees Cook, Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren, Marc Zyngier, Vladimir Murzin, Jesse Taube
Subject: [PATCH v5 32/32] ARM: implement support for vmap'ed stacks
Date: Mon, 24 Jan 2022 18:47:44 +0100
Message-Id: <20220124174744.1054712-33-ardb@kernel.org>
In-Reply-To: <20220124174744.1054712-1-ardb@kernel.org>
References: <20220124174744.1054712-1-ardb@kernel.org>

Wire up the generic support for managing task stack allocations via vmalloc, and implement the entry code that detects whether we faulted because of a stack overrun (or a future stack overrun caused by pushing the pt_regs array). While this adds a fair amount of tricky entry asm code, it should be noted that it only adds a TST + branch to the svc_entry path.
The code implementing the non-trivial handling of the overflow stack is emitted out-of-line into the .text section. Since on !LPAE, we rely on do_translation_fault() to keep PMD level page table entries that cover the vmalloc region up to date, we need to ensure that we don't hit such a stale PMD entry when accessing the stack, as the fault handler itself needs a stack to run as well. So let's bump the vmalloc_seq counter when PMD level entries in the vmalloc range are modified, so that the MM switch fetches the latest version of the entries. To ensure that kernel threads executing with an active_mm other than init_mm are up to date, add an implementation of enter_lazy_tlb() to handle this case. Note that the page table walker is not an ordinary observer in terms of concurrency, which means that memory barriers alone are not sufficient to prevent spurious translation faults from occurring when accessing the stack after a context switch. For this reason, a dummy read from the new stack is added to __switch_to() right before switching to it, so that any faults can be dealt with by do_translation_fault() while the old stack is still active. Also note that we need to increase the per-mode stack by 1 word, to gain some space to stash a GPR until we know it is safe to touch the stack. However, due to the cacheline alignment of the struct, this does not actually increase the memory footprint of the struct stack array at all. 
Signed-off-by: Ard Biesheuvel Tested-by: Keith Packard Tested-by: Marc Zyngier Tested-by: Vladimir Murzin # ARMv7M --- arch/arm/Kconfig | 1 + arch/arm/include/asm/mmu_context.h | 9 ++ arch/arm/include/asm/page.h | 3 + arch/arm/include/asm/thread_info.h | 8 ++ arch/arm/kernel/entry-armv.S | 91 ++++++++++++++++++-- arch/arm/kernel/entry-header.S | 37 ++++++++ arch/arm/kernel/head.S | 7 ++ arch/arm/kernel/irq.c | 9 +- arch/arm/kernel/setup.c | 8 +- arch/arm/kernel/sleep.S | 13 +++ arch/arm/kernel/traps.c | 69 ++++++++++++++- arch/arm/kernel/unwind.c | 3 +- arch/arm/kernel/vmlinux.lds.S | 4 +- 13 files changed, 247 insertions(+), 15 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index b959249dd716..cbbe38f55088 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -130,6 +130,7 @@ config ARM select THREAD_INFO_IN_TASK select HAVE_IRQ_EXIT_ON_IRQ_STACK select HAVE_SOFTIRQ_ON_OWN_STACK + select HAVE_ARCH_VMAP_STACK if MMU && ARM_HAS_GROUP_RELOCS select TRACE_IRQFLAGS_SUPPORT if !CPU_V7M # Above selects are sorted alphabetically; please add new ones # according to that. Thanks. 
diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h index 71a26986efb9..db2cb06aa8cf 100644 --- a/arch/arm/include/asm/mmu_context.h +++ b/arch/arm/include/asm/mmu_context.h @@ -138,6 +138,15 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next, #endif } +#ifdef CONFIG_VMAP_STACK +static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk) +{ + if (mm != &init_mm) + check_vmalloc_seq(mm); +} +#define enter_lazy_tlb enter_lazy_tlb +#endif + #include #endif diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h index 11b058a72a5b..5fcc8a600e36 100644 --- a/arch/arm/include/asm/page.h +++ b/arch/arm/include/asm/page.h @@ -147,6 +147,9 @@ extern void copy_page(void *to, const void *from); #include #else #include +#ifdef CONFIG_VMAP_STACK +#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED +#endif #endif #endif /* CONFIG_MMU */ diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h index e039d8f12d9b..aecc403b2880 100644 --- a/arch/arm/include/asm/thread_info.h +++ b/arch/arm/include/asm/thread_info.h @@ -25,6 +25,14 @@ #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER) #define THREAD_START_SP (THREAD_SIZE - 8) +#ifdef CONFIG_VMAP_STACK +#define THREAD_ALIGN (2 * THREAD_SIZE) +#else +#define THREAD_ALIGN THREAD_SIZE +#endif + +#define OVERFLOW_STACK_SIZE SZ_4K + #ifndef __ASSEMBLY__ struct task_struct; diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 86be80159c14..e098cc4de426 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -49,6 +49,10 @@ UNWIND( .setfp fpreg, sp ) @ subs r2, sp, r0 @ SP above bottom of IRQ stack? rsbscs r2, r2, #THREAD_SIZE @ ... and below the top? +#ifdef CONFIG_VMAP_STACK + ldr_va r2, high_memory, cc @ End of the linear region + cmpcc r2, r0 @ Stack pointer was below it? 
+#endif movcs sp, r0 @ If so, revert to incoming SP #ifndef CONFIG_UNWINDER_ARM @@ -174,13 +178,18 @@ ENDPROC(__und_invalid) #define SPFIX(code...) #endif - .macro svc_entry, stack_hole=0, trace=1, uaccess=1 + .macro svc_entry, stack_hole=0, trace=1, uaccess=1, overflow_check=1 UNWIND(.fnstart ) - UNWIND(.save {r0 - pc} ) sub sp, sp, #(SVC_REGS_SIZE + \stack_hole) + THUMB( add sp, r1 ) @ get SP in a GPR without + THUMB( sub r1, sp, r1 ) @ using a temp register + + .if \overflow_check + UNWIND(.save {r0 - pc} ) + do_overflow_check (SVC_REGS_SIZE + \stack_hole) + .endif + #ifdef CONFIG_THUMB2_KERNEL - add sp, r1 @ get SP in a GPR without - sub r1, sp, r1 @ using a temp register tst r1, #4 @ test stack pointer alignment sub r1, sp, r1 @ restore original R1 sub sp, r1 @ restore original SP @@ -814,16 +823,88 @@ ENTRY(__switch_to) #endif mov r0, r5 set_current r7, r8 -#if !defined(CONFIG_THUMB2_KERNEL) +#if !defined(CONFIG_THUMB2_KERNEL) && \ + !(defined(CONFIG_VMAP_STACK) && !defined(CONFIG_ARM_LPAE)) ldmia r4, {r4 - sl, fp, sp, pc} @ Load all regs saved previously #else ldmia r4, {r4 - sl, fp, ip, lr} @ Thumb2 does not permit SP here +#if defined(CONFIG_VMAP_STACK) && !defined(CONFIG_ARM_LPAE) + @ Even though we take care to ensure that the previous task's active_mm + @ has the correct translation for next's task stack, the architecture + @ permits that a translation fault caused by a speculative access is + @ taken once the result of the access should become architecturally + @ visible. Usually, we rely on do_translation_fault() to fix this up + @ transparently, but that only works for the stack if we are not using + @ it when taking the fault. So do a dummy read from next's stack while + @ still running from prev's stack, so that any faults get taken here. + ldr r2, [ip] +#endif mov sp, ip ret lr #endif UNWIND(.fnend ) ENDPROC(__switch_to) +#ifdef CONFIG_VMAP_STACK + .text + .align 2 +__bad_stack: + @ + @ We've just detected an overflow. 
We need to load the address of this + @ CPU's overflow stack into the stack pointer register. We have only one + @ register available so let's switch to ARM mode and use the per-CPU + @ variable accessor that does not require any scratch registers. + @ + @ We enter here with IP clobbered and its value stashed on the mode + @ stack. + @ +THUMB( bx pc ) +THUMB( nop ) +THUMB( .arm ) + ldr_this_cpu_armv6 ip, overflow_stack_ptr + + str sp, [ip, #-4]! @ Preserve original SP value + mov sp, ip @ Switch to overflow stack + pop {ip} @ Original SP in IP + +#if defined(CONFIG_UNWINDER_FRAME_POINTER) && defined(CONFIG_CC_IS_GCC) + mov ip, ip @ mov expected by unwinder + push {fp, ip, lr, pc} @ GCC flavor frame record +#else + str ip, [sp, #-8]! @ store original SP + push {fpreg, lr} @ Clang flavor frame record +#endif +UNWIND( ldr ip, [r0, #4] ) @ load exception LR +UNWIND( str ip, [sp, #12] ) @ store in the frame record + ldr ip, [r0, #12] @ reload IP + + @ Store the original GPRs to the new stack. 
+ svc_entry uaccess=0, overflow_check=0 + +UNWIND( .save {sp, pc} ) +UNWIND( .save {fpreg, lr} ) +UNWIND( .setfp fpreg, sp ) + + ldr fpreg, [sp, #S_SP] @ Add our frame record + @ to the linked list +#if defined(CONFIG_UNWINDER_FRAME_POINTER) && defined(CONFIG_CC_IS_GCC) + ldr r1, [fp, #4] @ reload SP at entry + add fp, fp, #12 +#else + ldr r1, [fpreg, #8] +#endif + str r1, [sp, #S_SP] @ store in pt_regs + + @ Stash the regs for handle_bad_stack + mov r0, sp + + @ Time to die + bl handle_bad_stack + nop +UNWIND( .fnend ) +ENDPROC(__bad_stack) +#endif + __INIT /* diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S index 9f01b229841a..347c975c5d9d 100644 --- a/arch/arm/kernel/entry-header.S +++ b/arch/arm/kernel/entry-header.S @@ -429,3 +429,40 @@ scno .req r7 @ syscall number tbl .req r8 @ syscall table pointer why .req r8 @ Linux syscall (!= 0) tsk .req r9 @ current thread_info + + .macro do_overflow_check, frame_size:req +#ifdef CONFIG_VMAP_STACK + @ + @ Test whether the SP has overflowed. Task and IRQ stacks are aligned + @ so that SP & BIT(THREAD_SIZE_ORDER + PAGE_SHIFT) should always be + @ zero. + @ +ARM( tst sp, #1 << (THREAD_SIZE_ORDER + PAGE_SHIFT) ) +THUMB( tst r1, #1 << (THREAD_SIZE_ORDER + PAGE_SHIFT) ) +THUMB( it ne ) + bne .Lstack_overflow_check\@ + + .pushsection .text +.Lstack_overflow_check\@: + @ + @ The stack pointer is not pointing to a valid vmap'ed stack, but it + @ may be pointing into the linear map instead, which may happen if we + @ are already running from the overflow stack. We cannot detect overflow + @ in such cases so just carry on. + @ + str ip, [r0, #12] @ Stash IP on the mode stack + ldr_va ip, high_memory @ Start of VMALLOC space +ARM( cmp sp, ip ) @ SP in vmalloc space? 
+THUMB( cmp r1, ip ) +THUMB( itt lo ) + ldrlo ip, [r0, #12] @ Restore IP + blo .Lout\@ @ Carry on + +THUMB( sub r1, sp, r1 ) @ Restore original R1 +THUMB( sub sp, r1 ) @ Restore original SP + add sp, sp, #\frame_size @ Undo svc_entry's SP change + b __bad_stack @ Handle VMAP stack overflow + .popsection +.Lout\@: +#endif + .endm diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S index c04dd94630c7..500612d3da2e 100644 --- a/arch/arm/kernel/head.S +++ b/arch/arm/kernel/head.S @@ -424,6 +424,13 @@ ENDPROC(secondary_startup) ENDPROC(secondary_startup_arm) ENTRY(__secondary_switched) +#if defined(CONFIG_VMAP_STACK) && !defined(CONFIG_ARM_LPAE) + @ Before using the vmap'ed stack, we have to switch to swapper_pg_dir + @ as the ID map does not cover the vmalloc region. + mrc p15, 0, ip, c2, c0, 1 @ read TTBR1 + mcr p15, 0, ip, c2, c0, 0 @ set TTBR0 + instr_sync +#endif adr_l r7, secondary_data + 12 @ get secondary_data.stack ldr sp, [r7] ldr r0, [r7, #4] @ get secondary_data.task diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c index 380376f55554..74a1c878bc7a 100644 --- a/arch/arm/kernel/irq.c +++ b/arch/arm/kernel/irq.c @@ -54,7 +54,14 @@ static void __init init_irq_stacks(void) int cpu; for_each_possible_cpu(cpu) { - stack = (u8 *)__get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER); + if (!IS_ENABLED(CONFIG_VMAP_STACK)) + stack = (u8 *)__get_free_pages(GFP_KERNEL, + THREAD_SIZE_ORDER); + else + stack = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN, + THREADINFO_GFP, NUMA_NO_NODE, + __builtin_return_address(0)); + if (WARN_ON(!stack)) break; per_cpu(irq_stack_ptr, cpu) = &stack[THREAD_SIZE]; diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c index 284a80c0b6e1..039feb7cd590 100644 --- a/arch/arm/kernel/setup.c +++ b/arch/arm/kernel/setup.c @@ -141,10 +141,10 @@ EXPORT_SYMBOL(outer_cache); int __cpu_architecture __read_mostly = CPU_ARCH_UNKNOWN; struct stack { - u32 irq[3]; - u32 abt[3]; - u32 und[3]; - u32 fiq[3]; + u32 irq[4]; + u32 abt[4]; + u32 
und[4]; + u32 fiq[4]; } ____cacheline_aligned; #ifndef CONFIG_CPU_V7M diff --git a/arch/arm/kernel/sleep.S b/arch/arm/kernel/sleep.S index 43077e11dafd..a86a1d4f3461 100644 --- a/arch/arm/kernel/sleep.S +++ b/arch/arm/kernel/sleep.S @@ -67,6 +67,12 @@ ENTRY(__cpu_suspend) ldr r4, =cpu_suspend_size #endif mov r5, sp @ current virtual SP +#ifdef CONFIG_VMAP_STACK + @ Run the suspend code from the overflow stack so we don't have to rely + @ on vmalloc-to-phys conversions anywhere in the arch suspend code. + @ The original SP value captured in R5 will be restored on the way out. + ldr_this_cpu sp, overflow_stack_ptr, r6, r7 +#endif add r4, r4, #12 @ Space for pgd, virt sp, phys resume fn sub sp, sp, r4 @ allocate CPU state on stack ldr r3, =sleep_save_sp @@ -113,6 +119,13 @@ ENTRY(cpu_resume_mmu) ENDPROC(cpu_resume_mmu) .popsection cpu_resume_after_mmu: +#if defined(CONFIG_VMAP_STACK) && !defined(CONFIG_ARM_LPAE) + @ Before using the vmap'ed stack, we have to switch to swapper_pg_dir + @ as the ID map does not cover the vmalloc region. 
+ mrc p15, 0, ip, c2, c0, 1 @ read TTBR1 + mcr p15, 0, ip, c2, c0, 0 @ set TTBR0 + instr_sync +#endif bl cpu_init @ restore the und/abt/irq banked regs mov r0, #0 @ return zero on success ldmfd sp!, {r4 - r11, pc} diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c index 1b8bef286fbc..8b076eaeaf61 100644 --- a/arch/arm/kernel/traps.c +++ b/arch/arm/kernel/traps.c @@ -123,7 +123,8 @@ void dump_backtrace_stm(u32 *stack, u32 instruction, const char *loglvl) static int verify_stack(unsigned long sp) { if (sp < PAGE_OFFSET || - (sp > (unsigned long)high_memory && high_memory != NULL)) + (!IS_ENABLED(CONFIG_VMAP_STACK) && + sp > (unsigned long)high_memory && high_memory != NULL)) return -EFAULT; return 0; @@ -293,7 +294,8 @@ static int __die(const char *str, int err, struct pt_regs *regs) if (!user_mode(regs) || in_interrupt()) { dump_mem(KERN_EMERG, "Stack: ", regs->ARM_sp, - ALIGN(regs->ARM_sp, THREAD_SIZE)); + ALIGN(regs->ARM_sp - THREAD_SIZE, THREAD_ALIGN) + + THREAD_SIZE); dump_backtrace(regs, tsk, KERN_EMERG); dump_instr(KERN_EMERG, regs); } @@ -840,3 +842,66 @@ void __init early_trap_init(void *vectors_base) */ #endif } + +#ifdef CONFIG_VMAP_STACK + +DECLARE_PER_CPU(u8 *, irq_stack_ptr); + +asmlinkage DEFINE_PER_CPU(u8 *, overflow_stack_ptr); + +static int __init allocate_overflow_stacks(void) +{ + u8 *stack; + int cpu; + + for_each_possible_cpu(cpu) { + stack = (u8 *)__get_free_page(GFP_KERNEL); + if (WARN_ON(!stack)) + return -ENOMEM; + per_cpu(overflow_stack_ptr, cpu) = &stack[OVERFLOW_STACK_SIZE]; + } + return 0; +} +early_initcall(allocate_overflow_stacks); + +asmlinkage void handle_bad_stack(struct pt_regs *regs) +{ + unsigned long tsk_stk = (unsigned long)current->stack; + unsigned long irq_stk = (unsigned long)this_cpu_read(irq_stack_ptr); + unsigned long ovf_stk = (unsigned long)this_cpu_read(overflow_stack_ptr); + + console_verbose(); + pr_emerg("Insufficient stack space to handle exception!"); + + pr_emerg("Task stack: [0x%08lx..0x%08lx]\n", 
+ tsk_stk, tsk_stk + THREAD_SIZE); + pr_emerg("IRQ stack: [0x%08lx..0x%08lx]\n", + irq_stk - THREAD_SIZE, irq_stk); + pr_emerg("Overflow stack: [0x%08lx..0x%08lx]\n", + ovf_stk - OVERFLOW_STACK_SIZE, ovf_stk); + + die("kernel stack overflow", regs, 0); +} + +#ifndef CONFIG_ARM_LPAE +/* + * Normally, we rely on the logic in do_translation_fault() to update stale PMD + * entries covering the vmalloc space in a task's page tables when it first + * accesses the region in question. Unfortunately, this is not sufficient when + * the task stack resides in the vmalloc region, as do_translation_fault() is a + * C function that needs a stack to run. + * + * So we need to ensure that these PMD entries are up to date *before* the MM + * switch. As we already have some logic in the MM switch path that takes care + * of this, let's trigger it by bumping the counter every time the core vmalloc + * code modifies a PMD entry in the vmalloc region. Use release semantics on + * the store so that other CPUs observing the counter's new value are + * guaranteed to see the updated page table entries as well. 
+ */ +void arch_sync_kernel_mappings(unsigned long start, unsigned long end) +{ + if (start < VMALLOC_END && end > VMALLOC_START) + atomic_inc_return_release(&init_mm.context.vmalloc_seq); +} +#endif +#endif diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c index e8d729975f12..c5ea328c428d 100644 --- a/arch/arm/kernel/unwind.c +++ b/arch/arm/kernel/unwind.c @@ -389,7 +389,8 @@ int unwind_frame(struct stackframe *frame) /* store the highest address on the stack to avoid crossing it*/ ctrl.sp_low = frame->sp; - ctrl.sp_high = ALIGN(ctrl.sp_low, THREAD_SIZE); + ctrl.sp_high = ALIGN(ctrl.sp_low - THREAD_SIZE, THREAD_ALIGN) + + THREAD_SIZE; pr_debug("%s(pc = %08lx lr = %08lx sp = %08lx)\n", __func__, frame->pc, frame->lr, frame->sp); diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S index f02d617e3359..aa12b65a7fd6 100644 --- a/arch/arm/kernel/vmlinux.lds.S +++ b/arch/arm/kernel/vmlinux.lds.S @@ -138,12 +138,12 @@ SECTIONS #ifdef CONFIG_STRICT_KERNEL_RWX . = ALIGN(1<