From patchwork Wed May 29 19:03:26 2019
From: Kristina Martsenko
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC v2 1/7] arm64: cpufeature: add pointer auth meta-capabilities
Date: Wed, 29 May 2019 20:03:26 +0100
Message-Id: <20190529190332.29753-2-kristina.martsenko@arm.com>
In-Reply-To: <20190529190332.29753-1-kristina.martsenko@arm.com>
References: <20190529190332.29753-1-kristina.martsenko@arm.com>
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
    Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
    Dave Martin

To enable pointer auth for the kernel, we're going to need to check for
the presence of address auth and generic auth using alternative_if. We
currently have two cpucaps for each, but alternative_if needs to check a
single cpucap. So define meta-capabilities that are present when either
of the two underlying capabilities is present.

Leave the existing four cpucaps in place, as they are still needed to
check for mismatched systems where one CPU has the architected algorithm
but another has the IMP DEF algorithm.

Note that the meta-capabilities were present before but were removed in
commits a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth
CPU caps from 6 to 4") and 1e013d06120c ("arm64: cpufeature: Rework ptr
auth hwcaps using multi_entry_cap_matches"), as they were not needed
then. Note also that unlike before, this patch checks the cpucap values
directly, instead of reading the CPU ID register value.

Signed-off-by: Kristina Martsenko
Reviewed-by: Kees Cook
Reviewed-by: Suzuki K Poulose
---

Changes since RFC v1:
 - New patch, as the meta-capabilities have been removed upstream
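[Editorial note, not part of the patch: a minimal userspace sketch of the
meta-capability idea. The names mirror the cpucaps this patch touches, but
the booleans and helpers below are toy stand-ins for the kernel's cpucap
machinery, shown only to spell out what the new matcher reduces to.]

```c
#include <stdbool.h>

/* Toy stand-ins for the two algorithm-specific cpucaps. */
static bool addr_auth_arch;      /* models ARM64_HAS_ADDRESS_AUTH_ARCH    */
static bool addr_auth_imp_def;   /* models ARM64_HAS_ADDRESS_AUTH_IMP_DEF */

/* The meta-capability is simply "either of the above", which is what the
 * new has_address_auth() matcher checks. */
static bool has_address_auth(void)
{
	return addr_auth_arch || addr_auth_imp_def;
}

/* Callers (and, in the kernel, alternative_if) now test one condition
 * instead of two; system_supports_address_auth() reduces to this after
 * the patch. */
static bool system_supports_address_auth_model(void)
{
	return has_address_auth();
}
```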
 arch/arm64/include/asm/cpucaps.h    |  4 +++-
 arch/arm64/include/asm/cpufeature.h |  6 ++----
 arch/arm64/kernel/cpufeature.c      | 25 ++++++++++++++++++++++++-
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f6a76e43f39e..601183b7b484 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -61,7 +61,9 @@
 #define ARM64_HAS_GENERIC_AUTH_ARCH		40
 #define ARM64_HAS_GENERIC_AUTH_IMP_DEF		41
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
+#define ARM64_HAS_ADDRESS_AUTH			43
+#define ARM64_HAS_GENERIC_AUTH			44

-#define ARM64_NCAPS				43
+#define ARM64_NCAPS				45

 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e505e1fbd2b9..0522ea674253 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -605,15 +605,13 @@ static inline bool system_supports_cnp(void)
 static inline bool system_supports_address_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
 }

 static inline bool system_supports_generic_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }

 static inline bool system_uses_irq_prio_masking(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4061de10cea6..166584deaed2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1205,6 +1205,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
 				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
 }
+
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
+}
+
+static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
+			     int __unused)
+{
+	return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
+	       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
+}
 #endif /* CONFIG_ARM64_PTR_AUTH */

 #ifdef CONFIG_ARM64_PSEUDO_NMI
@@ -1466,7 +1480,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
 		.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
 		.matches = has_cpuid_feature,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
@@ -1477,6 +1490,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
 		.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
 		.matches = has_cpuid_feature,
+	},
+	{
+		.capability = ARM64_HAS_ADDRESS_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_address_auth,
 		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
@@ -1499,6 +1517,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
 		.matches = has_cpuid_feature,
 	},
+	{
+		.capability = ARM64_HAS_GENERIC_AUTH,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_generic_auth,
+	},
 #endif /* CONFIG_ARM64_PTR_AUTH */
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	{
From patchwork Wed May 29 19:03:27 2019
From: Kristina Martsenko
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
    Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
    Dave Martin
Subject: [RFC v2 2/7] arm64: install user ptrauth keys at kernel exit time
Date: Wed, 29 May 2019 20:03:27 +0100
Message-Id: <20190529190332.29753-3-kristina.martsenko@arm.com>
In-Reply-To: <20190529190332.29753-1-kristina.martsenko@arm.com>
References: <20190529190332.29753-1-kristina.martsenko@arm.com>

Since we're going to enable pointer auth within the kernel and use a
different APIAKey for the kernel itself, move the user APIAKey switch to
EL0 exception return. The other four keys could remain switched at task
switch time, but are moved as well to keep things simple.

Signed-off-by: Kristina Martsenko
Reviewed-by: Kees Cook
---

Changes since RFC v1:
 - Install all 5 keys on kernel_exit
 - Updated prctl to have keys installed on kernel exit
 - Renamed ptrauth-asm.h to asm_pointer_auth.h
 - Minor cleanups
 - Updated the commit message
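[Editorial note, not part of the patch: the new ptrauth_keys_install_user
macro addresses each key as tsk + THREAD_KEYS_USER + PTRAUTH_KEY_xxx, with
both offsets generated by asm-offsets.c below. A standalone C sketch of that
address arithmetic, using toy structures in place of the kernel's
task_struct/thread_struct (field sizes here are invented for illustration):]

```c
#include <stdio.h>
#include <stddef.h>

/* Toy layouts standing in for the kernel's structures. */
struct ptrauth_key  { unsigned long lo, hi; };
struct ptrauth_keys { struct ptrauth_key apia, apib, apda, apdb, apga; };
struct thread_model { unsigned long cpu_context[13]; struct ptrauth_keys keys_user; };
struct task_model   { unsigned long other_fields[32]; struct thread_model thread; };

int main(void)
{
	/* These play the roles of THREAD_KEYS_USER and PTRAUTH_KEY_APIA. */
	size_t thread_keys_user = offsetof(struct task_model, thread.keys_user);
	size_t ptrauth_key_apia = offsetof(struct ptrauth_keys, apia);

	/* kernel_exit's "ldp tmp2, tmp3, [tsk + THREAD_KEYS_USER + PTRAUTH_KEY_APIA]"
	 * loads the 128-bit APIA key from this byte offset within the task: */
	printf("APIA key at task offset %zu\n", thread_keys_user + ptrauth_key_apia);
	return 0;
}
```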
 arch/arm64/include/asm/asm_pointer_auth.h | 43 +++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 23 +----------------
 arch/arm64/kernel/asm-offsets.c           | 11 ++++++++
 arch/arm64/kernel/entry.S                 |  3 +++
 arch/arm64/kernel/pointer_auth.c          |  3 ---
 arch/arm64/kernel/process.c               |  1 -
 6 files changed, 58 insertions(+), 26 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
new file mode 100644
index 000000000000..e3bfddfe80b6
--- /dev/null
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_POINTER_AUTH_H
+#define __ASM_ASM_POINTER_AUTH_H
+
+#include
+#include
+#include
+#include
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_USER
+	add	\tmp1, \tsk, \tmp1
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIB]
+	msr_s	SYS_APIBKEYLO_EL1, \tmp2
+	msr_s	SYS_APIBKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDA]
+	msr_s	SYS_APDAKEYLO_EL1, \tmp2
+	msr_s	SYS_APDAKEYHI_EL1, \tmp3
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
+	msr_s	SYS_APDBKEYLO_EL1, \tmp2
+	msr_s	SYS_APDBKEYHI_EL1, \tmp3
+alternative_else_nop_endif
+alternative_if ARM64_HAS_GENERIC_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
+	msr_s	SYS_APGAKEYLO_EL1, \tmp2
+	msr_s	SYS_APGAKEYHI_EL1, \tmp3
+alternative_else_nop_endif
+	.endm
+
+#else /* CONFIG_ARM64_PTR_AUTH */
+
+	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	.endm
+
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 15d49515efdd..fc8dc70cc19f 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -50,19 +50,6 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)

-static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
-{
-	if (system_supports_address_auth()) {
-		__ptrauth_key_install(APIA, keys->apia);
-		__ptrauth_key_install(APIB, keys->apib);
-		__ptrauth_key_install(APDA, keys->apda);
-		__ptrauth_key_install(APDB, keys->apdb);
-	}
-
-	if (system_supports_generic_auth())
-		__ptrauth_key_install(APGA, keys->apga);
-}
-
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);

 /*
@@ -78,20 +65,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }

 #define ptrauth_thread_init_user(tsk)					\
-do {									\
-	struct task_struct *__ptiu_tsk = (tsk);				\
-	ptrauth_keys_init(&__ptiu_tsk->thread.keys_user);		\
-	ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user);		\
-} while (0)
-
-#define ptrauth_thread_switch(tsk)	\
-	ptrauth_keys_switch(&(tsk)->thread.keys_user)
+	ptrauth_keys_init(&(tsk)->thread.keys_user)

 #else /* CONFIG_ARM64_PTR_AUTH */
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
-#define ptrauth_thread_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */

 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7f40dcbdd51d..edc471e4acb1 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -51,6 +51,9 @@ int main(void)
 #endif
   BLANK();
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+#endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
   DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
@@ -153,5 +156,13 @@ int main(void)
   DEFINE(SDEI_EVENT_INTREGS,	offsetof(struct sdei_registered_event, interrupted_regs));
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
 #endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+  DEFINE(PTRAUTH_KEY_APIA,	offsetof(struct ptrauth_keys, apia));
+  DEFINE(PTRAUTH_KEY_APIB,	offsetof(struct ptrauth_keys, apib));
+  DEFINE(PTRAUTH_KEY_APDA,	offsetof(struct ptrauth_keys, apda));
+  DEFINE(PTRAUTH_KEY_APDB,	offsetof(struct ptrauth_keys, apdb));
+  DEFINE(PTRAUTH_KEY_APGA,	offsetof(struct ptrauth_keys, apga));
+  BLANK();
+#endif
   return 0;
 }
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index c50a7a75f2e0..73a28d88f78d 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -336,6 +337,8 @@ alternative_if ARM64_WORKAROUND_845719
 alternative_else_nop_endif
 #endif
 3:
+	ptrauth_keys_install_user tsk, x0, x1, x2
+
 	apply_ssbd 0, x0, x1
 	.endif
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index c507b584259d..95985be67891 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -19,7 +19,6 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)

 	if (!arg) {
 		ptrauth_keys_init(keys);
-		ptrauth_keys_switch(keys);
 		return 0;
 	}

@@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 	if (arg & PR_PAC_APGAKEY)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));

-	ptrauth_keys_switch(keys);
-
 	return 0;
 }
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 3767fb21a5b8..a9b30111b725 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -481,7 +481,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
-	ptrauth_thread_switch(next);

 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
From patchwork Wed May 29 19:03:28 2019
From: Kristina Martsenko
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
    Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
    Dave Martin
Subject: [RFC v2 3/7] arm64: cpufeature: handle conflicts based on capability
Date: Wed, 29 May 2019 20:03:28 +0100
Message-Id: <20190529190332.29753-4-kristina.martsenko@arm.com>
In-Reply-To: <20190529190332.29753-1-kristina.martsenko@arm.com>
References: <20190529190332.29753-1-kristina.martsenko@arm.com>

Each system capability can be of either boot, local, or system scope,
depending on when the state of the capability is finalized. When we
detect a conflict on a late CPU, we either offline the CPU or panic the
system. We currently always panic if the conflict is caused by a boot
scope capability, and offline the CPU if the conflict is caused by a
local or system scope capability.

We're going to want to add a new capability (for pointer authentication)
which needs to be boot scope but doesn't need to panic the system when a
conflict is detected. So add a new flag to specify whether the capability
requires the system to panic or not. Current boot scope capabilities are
updated to set the flag, so there should be no functional change as a
result of this patch.

Signed-off-by: Kristina Martsenko
Reviewed-by: Kees Cook
Reviewed-by: Suzuki K Poulose
---

Changes since RFC v1:
 - New patch, to have ptrauth mismatches disable secondaries instead of
   panicking
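[Editorial note, not part of the patch: a toy model of the new type flag,
with made-up bit positions (the real values live in cpufeature.h), just to
spell out the resulting conflict handling.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit assignments only. */
#define SCOPE_BOOT_CPU           ((uint16_t)(1u << 0))
#define PERMITTED_FOR_LATE_CPU   ((uint16_t)(1u << 1))
#define PANIC_ON_CONFLICT        ((uint16_t)(1u << 2))

/* Strict boot-CPU features now carry the panic flag, as in the patch. */
#define STRICT_BOOT_CPU_FEATURE  (SCOPE_BOOT_CPU | PANIC_ON_CONFLICT)

static bool panic_on_conflict(uint16_t type)
{
	return type & PANIC_ON_CONFLICT;
}

/* What verify_local_cpu_caps() now does when a late CPU mismatches: */
static const char *conflict_action(uint16_t type)
{
	return panic_on_conflict(type) ? "cpu_panic_kernel()" : "cpu_die_early()";
}
```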
 arch/arm64/include/asm/cpufeature.h | 15 ++++++++++++++-
 arch/arm64/kernel/cpufeature.c      | 23 +++++++++--------------
 2 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 0522ea674253..ad952f2e0a2b 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -217,6 +217,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  *	In some non-typical cases either both (a) and (b), or neither,
  *	should be permitted. This can be described by including neither
  *	or both flags in the capability's type field.
+ *
+ *	In case of a conflict, the CPU is prevented from booting. If the
+ *	ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability,
+ *	then a kernel panic is triggered.
  */
@@ -249,6 +253,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU	((u16)BIT(4))
 /* Is it safe for a late CPU to miss this capability when system has it */
 #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	((u16)BIT(5))
+/* Panic when a conflict is detected */
+#define ARM64_CPUCAP_PANIC_ON_CONFLICT		((u16)BIT(6))

 /*
  * CPU errata workarounds that need to be enabled at boot time if one or
@@ -290,7 +296,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  * CPU feature used early in the boot based on the boot CPU. All secondary
  * CPUs must match the state of the capability as detected by the boot CPU.
  */
-#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
+#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
+	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)

 struct arm64_cpu_capabilities {
 	const char *desc;
@@ -354,6 +361,12 @@ cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
 	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
 }

+static inline bool
+cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
+}
+
 /*
  * Generic helper for handling capabilties with multiple (match,enable) pairs
  * of call backs, sharing the same capability bit.
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 166584deaed2..8a595b4cb0aa 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1796,10 +1796,8 @@ static void __init enable_cpu_capabilities(u16 scope_mask)
  * Run through the list of capabilities to check for conflicts.
  * If the system has already detected a capability, take necessary
  * action on this CPU.
- *
- * Returns "false" on conflicts.
  */
-static bool verify_local_cpu_caps(u16 scope_mask)
+static void verify_local_cpu_caps(u16 scope_mask)
 {
 	int i;
 	bool cpu_has_cap, system_has_cap;
@@ -1844,10 +1842,12 @@ static bool verify_local_cpu_caps(u16 scope_mask)
 		pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n",
 			smp_processor_id(), caps->capability,
 			caps->desc, system_has_cap, cpu_has_cap);
-		return false;
-	}

-	return true;
+		if (cpucap_panic_on_conflict(caps))
+			cpu_panic_kernel();
+		else
+			cpu_die_early();
+	}
 }

 /*
@@ -1857,12 +1857,8 @@ static bool verify_local_cpu_caps(u16 scope_mask)
 static void check_early_cpu_features(void)
 {
 	verify_cpu_asid_bits();
-	/*
-	 * Early features are used by the kernel already. If there
-	 * is a conflict, we cannot proceed further.
-	 */
-	if (!verify_local_cpu_caps(SCOPE_BOOT_CPU))
-		cpu_panic_kernel();
+
+	verify_local_cpu_caps(SCOPE_BOOT_CPU);
 }

 static void
@@ -1910,8 +1906,7 @@ static void verify_local_cpu_capabilities(void)
 	 * check_early_cpu_features(), as they need to be verified
 	 * on all secondary CPUs.
 	 */
-	if (!verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU))
-		cpu_die_early();
+	verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);

 	verify_local_elf_hwcaps(arm64_elf_hwcaps);
From patchwork Wed May 29 19:03:29 2019
From: Kristina Martsenko
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
    Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
    Dave Martin
Subject: [RFC v2 4/7] arm64: enable ptrauth earlier
Date: Wed, 29 May 2019 20:03:29 +0100
Message-Id: <20190529190332.29753-5-kristina.martsenko@arm.com>
In-Reply-To: <20190529190332.29753-1-kristina.martsenko@arm.com>
References: <20190529190332.29753-1-kristina.martsenko@arm.com>

When the kernel is compiled with pointer auth instructions, the boot CPU
needs to start using address auth very early, so change the cpucap to
account for this.

A function that enables pointer auth cannot return, so compile such
functions without pointer auth (using a compiler function attribute).
The __no_ptrauth macro will be defined to the required function
attribute in a later patch. Do not use the cpu_enable callback, to avoid
compiling the whole call chain down to cpu_enable without pointer auth.

Note the change in behavior: if the boot CPU has address auth and a late
CPU does not, then we offline the late CPU. Until now we would have just
disabled address auth in this case.

Leave generic authentication as a "system scope" cpucap for now, since
initially the kernel will only use address authentication.

Signed-off-by: Kristina Martsenko
Reviewed-by: Kees Cook
---

Changes since RFC v1:
 - Enable instructions for all 5 keys
 - Replaced __always_inline with __no_ptrauth as it turns out
   __always_inline is only a hint (and could therefore result in crashes)
 - Left the __no_ptrauth definition blank for now as it needs to be
   determined with more complex logic in a later patch
 - Updated the Kconfig symbol description
 - Minor cleanups
 - Updated the commit message
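[Editorial note, not part of the patch: one reading of why the enabling path
must be built without pointer auth. With return-address signing, the
prologue's PACIASP would run while SCTLR_EL1.EnIA is still clear (adding no
PAC), and the epilogue's AUTIASP would run after the function had set EnIA,
authenticating an unsigned return address. A hedged C sketch of the shape of
such a function; __no_ptrauth and the helper below are placeholders:]

```c
/* Placeholder; defined to a real compiler attribute in the last patch. */
#define __no_ptrauth

/* Stand-in for sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_EN*) + isb(). */
static void write_sctlr_enable_bits(void)
{
	/* flips SCTLR_EL1.EnIA/EnIB/EnDA/EnDB in the real code */
}

static void __no_ptrauth ptrauth_cpu_enable_sketch(void)
{
	/* Were this function compiled *with* return-address signing:
	 *   - PACIASP at entry: EnIA == 0, so the saved LR gets no PAC;
	 *   - the call below flips EnIA to 1;
	 *   - AUTIASP before return: authenticates an LR that was never
	 *     signed, corrupting the return address.
	 * Building it __no_ptrauth keeps both instructions out of it. */
	write_sctlr_enable_bits();
}
```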
 arch/arm64/Kconfig                    |  4 ++++
 arch/arm64/include/asm/cpufeature.h   |  9 +++++++++
 arch/arm64/include/asm/pointer_auth.h | 19 +++++++++++++++++++
 arch/arm64/kernel/cpufeature.c        | 13 +++----------
 arch/arm64/kernel/smp.c               |  7 ++++++-
 5 files changed, 41 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9eba5de..f4c1e9b30129 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1304,6 +1304,10 @@ config ARM64_PTR_AUTH
 	  hardware it will not be advertised to userspace nor will it be
 	  enabled.

+	  If the feature is present on the primary CPU but not a secondary CPU,
+	  then the secondary CPU will be offlined. On such a system, this
+	  option should not be selected.
+
 endmenu

 config ARM64_SVE
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index ad952f2e0a2b..e36a7948b763 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -299,6 +299,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE		\
 	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)

+/*
+ * CPU feature used early in the boot based on the boot CPU. It is safe for a
+ * late CPU to have this feature even though the boot CPU hasn't enabled it,
+ * although the feature will not be used by Linux in this case. If the boot CPU
+ * has enabled this feature already, then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_BOOT_CPU_FEATURE \
+	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index fc8dc70cc19f..a97b7dc10bdb 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -11,6 +11,13 @@
 #ifdef CONFIG_ARM64_PTR_AUTH

 /*
+ * Compile the function without pointer authentication instructions. This
+ * allows pointer authentication to be enabled/disabled within the function
+ * (but leaves the function unprotected by pointer authentication).
+ */
+#define __no_ptrauth
+
+/*
  * Each key is a 128-bit quantity which is split across a pair of 64-bit
  * registers (Lo and Hi).
  */
@@ -50,6 +57,16 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)

+static inline void __no_ptrauth ptrauth_cpu_enable(void)
+{
+	if (!system_supports_address_auth())
+		return;
+
+	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
+				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
+	isb();
+}
+
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);

 /*
@@ -68,6 +85,8 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 	ptrauth_keys_init(&(tsk)->thread.keys_user)

 #else /* CONFIG_ARM64_PTR_AUTH */
+#define __no_ptrauth
+#define ptrauth_cpu_enable(tsk)
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 8a595b4cb0aa..2cf042ebb237 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1200,12 +1200,6 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */

 #ifdef CONFIG_ARM64_PTR_AUTH
-static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
-{
-	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
-				       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
-}
-
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
 			     int __unused)
 {
@@ -1474,7 +1468,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (architected algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
@@ -1484,7 +1478,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
@@ -1493,9 +1487,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.matches = has_address_auth,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 	{
 		.desc = "Generic authentication (architected algorithm)",
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 824de7038967..eca6aa05ac66 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -54,6 +54,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -238,6 +239,8 @@ asmlinkage notrace void secondary_start_kernel(void)
 	 */
 	check_local_cpu_capabilities();

+	ptrauth_cpu_enable();
+
 	if (cpu_ops[cpu]->cpu_postboot)
 		cpu_ops[cpu]->cpu_postboot();

@@ -432,7 +435,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
 	mark_linear_text_alias_ro();
 }

-void __init smp_prepare_boot_cpu(void)
+void __init __no_ptrauth smp_prepare_boot_cpu(void)
 {
 	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
 	/*
@@ -452,6 +455,8 @@ void __init smp_prepare_boot_cpu(void)
 	/* Conditionally switch to GIC PMR for interrupt masking */
 	if (system_uses_irq_prio_masking())
 		init_gic_priority_masking();
+
+	ptrauth_cpu_enable();
 }

 static u64 __init of_get_cpu_mpidr(struct device_node *dn)
From patchwork Wed May 29 19:03:30 2019
From: Kristina Martsenko
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
    Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
    Dave Martin
Subject: [RFC v2 5/7] arm64: initialize and switch ptrauth kernel keys
Date: Wed, 29 May 2019 20:03:30 +0100
Message-Id: <20190529190332.29753-6-kristina.martsenko@arm.com>
In-Reply-To: <20190529190332.29753-1-kristina.martsenko@arm.com>
References: <20190529190332.29753-1-kristina.martsenko@arm.com>

Set up keys to use pointer authentication within the kernel. The kernel
will be compiled with APIAKey instructions; the other keys are currently
unused. Each task is given its own APIAKey, which is initialized during
fork. The key is changed during context switch and on kernel entry from
EL0.

A function that changes the key cannot return, so compile such functions
without pointer auth (using __no_ptrauth, which will be defined to a
compiler function attribute later).

Signed-off-by: Kristina Martsenko
Reviewed-by: Kees Cook
---

Changes since RFC v1:
 - Updated ptrauth_keys_init to not randomly initialize the other 4
   (unused) kernel-space keys, to save entropy
 - Replaced __always_inline with __no_ptrauth as it turns out
   __always_inline is only a hint (and could therefore result in crashes)
 - Moved ptrauth_keys_install_kernel earlier in kernel_entry, in case in
   the future C function calls are added in kernel_entry
 - Added ISB after key install in kernel_exit, in case in the future C
   function calls are added after the macro
 - Dropped an outdated comment
 - Updated the commit message
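[Editorial note, not from the patch: a userspace model of the new
two-argument ptrauth_keys_init(), showing that kernel tasks only spend
entropy on APIAKey (the one key the kernel uses) while user tasks get all
five. getrandom() stands in for get_random_bytes(), and the
system_supports_*() checks are omitted for brevity.]

```c
#include <stdbool.h>
#include <string.h>
#include <sys/random.h>		/* getrandom(), standing in for get_random_bytes() */

struct ptrauth_key  { unsigned long lo, hi; };
struct ptrauth_keys { struct ptrauth_key apia, apib, apda, apdb, apga; };

static void keys_init_model(struct ptrauth_keys *keys, bool user)
{
	memset(keys, 0, sizeof(*keys));

	/* APIAKey is always generated: it also protects kernel return addresses. */
	(void)getrandom(&keys->apia, sizeof(keys->apia), 0);

	/* The remaining keys only matter for userspace, so kernel tasks skip
	 * them (the "save entropy" change listed above). */
	if (user) {
		(void)getrandom(&keys->apib, sizeof(keys->apib), 0);
		(void)getrandom(&keys->apda, sizeof(keys->apda), 0);
		(void)getrandom(&keys->apdb, sizeof(keys->apdb), 0);
		(void)getrandom(&keys->apga, sizeof(keys->apga), 0);
	}
}
```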
 arch/arm64/include/asm/asm_pointer_auth.h | 16 +++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 33 +++++++++++++++++++----------
 arch/arm64/include/asm/processor.h        |  1 +
 arch/arm64/kernel/asm-offsets.c           |  1 +
 arch/arm64/kernel/entry.S                 |  1 +
 arch/arm64/kernel/pointer_auth.c          |  2 +-
 arch/arm64/kernel/process.c               |  3 +++
 arch/arm64/kernel/smp.c                   |  3 +++
 8 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
index e3bfddfe80b6..f595da9661a4 100644
--- a/arch/arm64/include/asm/asm_pointer_auth.h
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -25,11 +25,24 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
 	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APDB]
 	msr_s	SYS_APDBKEYLO_EL1, \tmp2
 	msr_s	SYS_APDBKEYHI_EL1, \tmp3
+	isb
 alternative_else_nop_endif
 alternative_if ARM64_HAS_GENERIC_AUTH
 	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APGA]
 	msr_s	SYS_APGAKEYLO_EL1, \tmp2
 	msr_s	SYS_APGAKEYHI_EL1, \tmp3
+	isb
+alternative_else_nop_endif
+	.endm
+
+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	mov	\tmp1, #THREAD_KEYS_KERNEL
+	add	\tmp1, \tsk, \tmp1
+alternative_if ARM64_HAS_ADDRESS_AUTH
+	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KEY_APIA]
+	msr_s	SYS_APIAKEYLO_EL1, \tmp2
+	msr_s	SYS_APIAKEYHI_EL1, \tmp3
+	isb
 alternative_else_nop_endif
 	.endm

@@ -38,6 +51,9 @@ alternative_else_nop_endif
 	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
 	.endm

+	.macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+	.endm
+
 #endif /* CONFIG_ARM64_PTR_AUTH */

 #endif /* __ASM_ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index a97b7dc10bdb..79f35f5ecff5 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -25,10 +25,6 @@ struct ptrauth_key {
 	unsigned long lo, hi;
 };

-/*
- * We give each process its own keys, which are shared by all threads. The keys
- * are inherited upon fork(), and reinitialised upon exec*().
- */
 struct ptrauth_keys {
 	struct ptrauth_key apia;
 	struct ptrauth_key apib;
@@ -37,16 +33,18 @@ struct ptrauth_keys {
 	struct ptrauth_key apga;
 };

-static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+static inline void ptrauth_keys_init(struct ptrauth_keys *keys, bool user)
 {
 	if (system_supports_address_auth()) {
 		get_random_bytes(&keys->apia, sizeof(keys->apia));
-		get_random_bytes(&keys->apib, sizeof(keys->apib));
-		get_random_bytes(&keys->apda, sizeof(keys->apda));
-		get_random_bytes(&keys->apdb, sizeof(keys->apdb));
+		if (user) {
+			get_random_bytes(&keys->apib, sizeof(keys->apib));
+			get_random_bytes(&keys->apda, sizeof(keys->apda));
+			get_random_bytes(&keys->apdb, sizeof(keys->apdb));
+		}
 	}

-	if (system_supports_generic_auth())
+	if (system_supports_generic_auth() && user)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
 }

@@ -57,6 +55,15 @@ do {								\
 	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
 } while (0)

+static inline void __no_ptrauth ptrauth_keys_switch(struct ptrauth_keys *keys)
+{
+	if (!system_supports_address_auth())
+		return;
+
+	__ptrauth_key_install(APIA, keys->apia);
+	isb();
+}
+
 static inline void __no_ptrauth ptrauth_cpu_enable(void)
 {
 	if (!system_supports_address_auth())
@@ -82,7 +89,11 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 }

 #define ptrauth_thread_init_user(tsk)					\
-	ptrauth_keys_init(&(tsk)->thread.keys_user)
+	ptrauth_keys_init(&(tsk)->thread.keys_user, true)
+#define ptrauth_thread_init_kernel(tsk)					\
+	ptrauth_keys_init(&(tsk)->thread.keys_kernel, false)
+#define ptrauth_thread_switch(tsk)					\
+	ptrauth_keys_switch(&(tsk)->thread.keys_kernel)

 #else /* CONFIG_ARM64_PTR_AUTH */
 #define __no_ptrauth
@@ -90,6 +101,8 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 #define ptrauth_prctl_reset_keys(tsk, arg)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
+#define ptrauth_thread_init_kernel(tsk)
+#define ptrauth_thread_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */

 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5d9ce62bdebd..f7684a021ca2 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -149,6 +149,7 @@ struct thread_struct {
 	struct debug_info	debug;		/* debugging */
 #ifdef CONFIG_ARM64_PTR_AUTH
 	struct ptrauth_keys	keys_user;
+	struct ptrauth_keys	keys_kernel;
 #endif
 };
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index edc471e4acb1..7dfebecd387d 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -53,6 +53,7 @@ int main(void)
   DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
 #ifdef CONFIG_ARM64_PTR_AUTH
   DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
+  DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
 #endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 73a28d88f78d..001d508cd63f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -184,6 +184,7 @@ alternative_cb_end

 	apply_ssbd 1, x22, x23

+	ptrauth_keys_install_kernel tsk, x20, x22, x23
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index 95985be67891..4ca141b8c581 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -18,7 +18,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 		return -EINVAL;

 	if (!arg) {
-		ptrauth_keys_init(keys);
+		ptrauth_keys_init(keys, true);
 		return 0;
 	}
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index a9b30111b725..890d7185641b 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -378,6 +378,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 	 */
 	fpsimd_flush_task_state(p);

+	ptrauth_thread_init_kernel(p);
+
 	if (likely(!(p->flags & PF_KTHREAD))) {
 		*childregs = *current_pt_regs();
 		childregs->regs[0] = 0;
@@ -481,6 +483,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	contextidr_thread_switch(next);
 	entry_task_switch(next);
 	uao_thread_switch(next);
+	ptrauth_thread_switch(next);

 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index eca6aa05ac66..c5a6f3e8660b 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -239,6 +239,7 @@ asmlinkage notrace void secondary_start_kernel(void)
 	 */
 	check_local_cpu_capabilities();

+	ptrauth_thread_switch(current);
 	ptrauth_cpu_enable();

 	if (cpu_ops[cpu]->cpu_postboot)
@@ -456,6 +457,8 @@ void __init __no_ptrauth smp_prepare_boot_cpu(void)
 	if (system_uses_irq_prio_masking())
 		init_gic_priority_masking();

+	ptrauth_thread_init_kernel(current);
+	ptrauth_thread_switch(current);
 	ptrauth_cpu_enable();
 }
From patchwork Wed May 29 19:03:31 2019
From: Kristina Martsenko
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
    Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
    Dave Martin
Subject: [RFC v2 6/7] arm64: unwind: strip PAC from kernel addresses
Date: Wed, 29 May 2019 20:03:31 +0100
Message-Id: <20190529190332.29753-7-kristina.martsenko@arm.com>
In-Reply-To: <20190529190332.29753-1-kristina.martsenko@arm.com>
References: <20190529190332.29753-1-kristina.martsenko@arm.com>

From: Mark Rutland

When we enable pointer authentication in the kernel, LR values saved to
the stack will have a PAC which we must strip in order to retrieve the
real return address. Strip PACs when unwinding the stack in order to
account for this.

Reviewed-by: Amit Daniel Kachhap
Signed-off-by: Mark Rutland
Signed-off-by: Kristina Martsenko
Reviewed-by: Kees Cook
---

Changes since RFC v1:
 - Moved the patch later in the series
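[Editorial note, not from the patch: a worked example of the stripping
logic below, assuming a 48-bit kernel VA and 48-bit user VA configuration
(so VA_BITS and vabits_user are both 48) and using bit 55 to distinguish
TTBR1 (kernel) from TTBR0 (user) pointers. The pointer values are invented
for illustration.]

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define GENMASK64(h, l)	(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

/* With VA_BITS = vabits_user = 48 (assumption for this example): */
#define KERNEL_PAC_MASK	(GENMASK64(63, 56) | GENMASK64(54, 48))	/* 0xff7f000000000000 */
#define USER_PAC_MASK	GENMASK64(54, 48)			/* 0x007f000000000000 */

static uint64_t strip_pac(uint64_t ptr)
{
	if (ptr & (1ULL << 55))			/* kernel (TTBR1) address */
		return ptr | KERNEL_PAC_MASK;	/* restore the all-ones prefix */
	return ptr & ~USER_PAC_MASK;		/* user (TTBR0): clear the PAC bits */
}

int main(void)
{
	/* Kernel LR with PAC bits 0x3a/0x12: prints ffff000010081234 */
	printf("%" PRIx64 "\n", strip_pac(0x3a92000010081234ULL));
	/* User LR with PAC bits 0x35: prints aaaac0de1234 */
	printf("%" PRIx64 "\n", strip_pac(0x0035aaaac0de1234ULL));
	return 0;
}
```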
 arch/arm64/include/asm/pointer_auth.h | 10 +++++++---
 arch/arm64/kernel/stacktrace.c        |  3 +++
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 79f35f5ecff5..5491c34b4dc3 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -80,12 +80,16 @@ extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
  * The EL0 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
  */
-#define ptrauth_user_pac_mask()	GENMASK(54, vabits_user)
+#define ptrauth_user_pac_mask()		GENMASK(54, vabits_user)
+
+#define ptrauth_kernel_pac_mask()	(GENMASK(63, 56) | GENMASK(54, VA_BITS))

-/* Only valid for EL0 TTBR0 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-	return ptr & ~ptrauth_user_pac_mask();
+	if (ptr & BIT_ULL(55))
+		return ptr | ptrauth_kernel_pac_mask();
+	else
+		return ptr & ~ptrauth_user_pac_mask();
 }

 #define ptrauth_thread_init_user(tsk)					\
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index d908b5e9e949..df07c27a9673 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -24,6 +24,7 @@
 #include
 #include

+#include
 #include
 #include

@@ -56,6 +57,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
 	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));

+	frame->pc = ptrauth_strip_insn_pac(frame->pc);
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	if (tsk->ret_stack &&
 			(frame->pc == (unsigned long)return_to_handler)) {
From: Kristina Martsenko
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, Catalin Marinas,
 Suzuki K Poulose, Will Deacon, Ramana Radhakrishnan, Amit Kachhap,
 Dave Martin
Subject: [RFC v2 7/7] arm64: compile the kernel with ptrauth return address signing
Date: Wed, 29 May 2019 20:03:32 +0100
Message-Id: <20190529190332.29753-8-kristina.martsenko@arm.com>
In-Reply-To: <20190529190332.29753-1-kristina.martsenko@arm.com>
References: <20190529190332.29753-1-kristina.martsenko@arm.com>

Compile all non-leaf functions with two ptrauth instructions: PACIASP in
the prologue to sign the return address, and AUTIASP in the epilogue to
authenticate the return address (from the stack). If authentication
fails, the return will cause an instruction abort to be taken, followed
by an oops and killing the task.

This should help protect the kernel against attacks using
return-oriented programming.

The new instructions are in the HINT encoding space, so on a system
without ptrauth they execute as NOPs.

CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
guests, but also automatically builds the kernel with ptrauth
instructions if the compiler supports it. If there is no compiler
support, we do not warn that the kernel was built without ptrauth
instructions.

GCC 7 and 8 support the -msign-return-address option, while GCC 9
deprecates that option and replaces it with -mbranch-protection. Support
both options.

Signed-off-by: Kristina Martsenko
Reviewed-by: Kees Cook
---

Changes since RFC v1:
 - Fixed support for compilers without ptrauth
 - Added support for the new -mbranch-protection option
 - Switched from protecting all functions to only protecting non-leaf
   functions (for no good reason, I have not done e.g. gadget analysis)
 - Moved __no_ptrauth definition to this patch, depending on compiler
   support
 - Updated the Kconfig symbol description
 - Updated the commit message

 arch/arm64/Kconfig                    | 12 +++++++++++-
 arch/arm64/Makefile                   |  6 ++++++
 arch/arm64/include/asm/pointer_auth.h |  6 ++++++
 3 files changed, 23 insertions(+), 1 deletion(-)
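As background for why a tampered return address faults rather than silently
returning to the wrong place, here is a toy model of what the PACIASP/AUTIASP
pair does to the LR. The hash, the key value and the "poison" bit are
illustrative stand-ins; real hardware computes an implementation-defined
keyed MAC (typically QARMA) over the pointer and the SP using APIAKey:

#include <stdint.h>
#include <stdio.h>

/* Toy PAC field: bits 63:56 and 54:48, as for a 48-bit VA with TBI off. */
#define PAC_MASK	((0xffULL << 56) | (0x7fULL << 48))

static const uint64_t toy_key = 0x1122334455667788ULL;	/* APIAKey stand-in */

/* Stand-in for the architected MAC; real hardware uses a keyed cipher. */
static uint64_t toy_mac(uint64_t ptr, uint64_t modifier)
{
	uint64_t h = (ptr ^ modifier ^ toy_key) * 0x9e3779b97f4a7c15ULL;

	return h & PAC_MASK;
}

/* PACIASP: fold a PAC over (LR, SP) into the LR's unused upper bits. */
static uint64_t toy_paciasp(uint64_t lr, uint64_t sp)
{
	return (lr & ~PAC_MASK) | toy_mac(lr & ~PAC_MASK, sp);
}

/* AUTIASP: strip the PAC if it matches, otherwise poison the address. */
static uint64_t toy_autiasp(uint64_t signed_lr, uint64_t sp)
{
	uint64_t lr = signed_lr & ~PAC_MASK;

	if ((signed_lr & PAC_MASK) != toy_mac(lr, sp))
		lr |= 1ULL << 62;	/* non-canonical: the RET will fault */
	return lr;
}

int main(void)
{
	uint64_t lr = 0x0000000012345678ULL;	/* made-up return address */
	uint64_t sp = 0x0000fffff0000000ULL;	/* made-up stack pointer  */
	uint64_t s = toy_paciasp(lr, sp);

	printf("signed LR        : %#llx\n", (unsigned long long)s);
	printf("authenticated    : %#llx\n", (unsigned long long)toy_autiasp(s, sp));
	printf("after ROP tamper : %#llx\n",
	       (unsigned long long)toy_autiasp(s ^ 0x1000, sp));
	return 0;
}

Because both the signing and the check are driven purely by compiler-inserted
prologue/epilogue instructions, leaf functions whose LR never reaches the
stack can be left unprotected, which is what -msign-return-address=non-leaf
and -mbranch-protection=pac-ret select.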
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f4c1e9b30129..3ce93d88fae1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1295,11 +1295,15 @@ config ARM64_PTR_AUTH
 	  and other attacks.
 
 	  This option enables these instructions at EL0 (i.e. for userspace).
-
 	  Choosing this option will cause the kernel to initialise secret keys
 	  for each process at exec() time, with these keys being
 	  context-switched along with the process.
 
+	  If the compiler supports the -mbranch-protection or
+	  -msign-return-address flag (e.g. GCC 7 or later), then this option
+	  will also cause the kernel itself to be compiled with return address
+	  protection.
+
 	  The feature is detected at runtime. If the feature is not present in
 	  hardware it will not be advertised to userspace nor will it be
 	  enabled.
@@ -1308,6 +1312,12 @@ config ARM64_PTR_AUTH
 	  then the secondary CPU will be offlined. On such a system, this
 	  option should not be selected.
 
+config CC_HAS_BRANCH_PROT_PAC_RET
+	def_bool $(cc-option,-mbranch-protection=pac-ret)
+
+config CC_HAS_SIGN_RETURN_ADDRESS
+	def_bool $(cc-option,-msign-return-address=non-leaf)
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index b025304bde46..1dfbe755b531 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -66,6 +66,12 @@ stack_protector_prepare: prepare0
 					include/generated/asm-offsets.h))
 endif
 
+ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
+pac-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=non-leaf
+pac-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret
+KBUILD_CFLAGS += $(pac-flags-y)
+endif
+
 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
 KBUILD_CPPFLAGS	+= -mbig-endian
 CHECKFLAGS	+= -D__AARCH64EB__
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 5491c34b4dc3..3a83c40ffd8a 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -15,7 +15,13 @@
  * allows pointer authentication to be enabled/disabled within the function
  * (but leaves the function unprotected by pointer authentication).
  */
+#if defined(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET)
+#define __no_ptrauth	__attribute__((target("branch-protection=none")))
+#elif defined(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS)
+#define __no_ptrauth	__attribute__((target("sign-return-address=none")))
+#else
 #define __no_ptrauth
+#endif
 
 /*
  * Each key is a 128-bit quantity which is split across a pair of 64-bit