From patchwork Thu Oct 8 18:16:39 2020
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 11824307
From: Qais Yousef
To: Catalin Marinas, Will Deacon, Marc Zyngier, "Peter Zijlstra (Intel)"
Cc: linux-arch@vger.kernel.org, Greg Kroah-Hartman, Qais Yousef,
    Linus Torvalds, Morten Rasmussen, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 1/3] arm64: kvm: Handle Asymmetric AArch32 systems
Date: Thu, 8 Oct 2020 19:16:39 +0100
Message-Id: <20201008181641.32767-2-qais.yousef@arm.com>
In-Reply-To: <20201008181641.32767-1-qais.yousef@arm.com>
References: <20201008181641.32767-1-qais.yousef@arm.com>

On a system without uniform support for AArch32 at EL0, it is possible for
the guest to force AArch32 execution at EL0 and potentially cause an
illegal exception if it runs on the wrong core.

Add an extra check to catch the guest ever doing that and prevent it from
running again, treating the exit as ARM_EXCEPTION_IL. We try to catch this
misbehaviour as early as possible rather than rely on PSTATE.IL being
raised.

Tested on Juno by instrumenting the host to:

  * Fake asymmetric AArch32 support.
  * Comment out the hiding of ID registers from the guest.

Any attempt to run a 32-bit app in the guest then produces the following
error in qemu:

	# ./test
	error: kvm run failed Invalid argument
	 R00=ffff0fff R01=ffffffff R02=00000000 R03=00087968
	 R04=000874b8 R05=ffd70b24 R06=ffd70b2c R07=00000055
	 R08=00000000 R09=00000000 R10=00000000 R11=00000000
	 R12=0000001c R13=ffd6f974 R14=0001ff64 R15=ffff0fe0
	PSR=a0000010 N-C- A usr32

Signed-off-by: Qais Yousef
---
 arch/arm64/kvm/arm.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b588c3b5c2f0..22ff3373d855 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -644,6 +644,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	struct kvm_run *run = vcpu->run;
 	int ret;
 
+	if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
+		kvm_err("Illegal AArch32 mode at EL0, can't run.");
+		return -ENOEXEC;
+	}
+
 	if (unlikely(!kvm_vcpu_initialized(vcpu)))
 		return -ENOEXEC;
 
@@ -804,6 +809,17 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		preempt_enable();
 
+		/*
+		 * For asym aarch32 systems we present a 64bit only system to
+		 * the guest. But in case it managed somehow to escape that and
+		 * enter 32bit mode, catch that and prevent it from running
+		 * again.
+		 */
+		if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
+			kvm_err("Detected illegal AArch32 mode at EL0, exiting.");
+			ret = ARM_EXCEPTION_IL;
+		}
+
 		ret = handle_exit(vcpu, ret);
 	}

From patchwork Thu Oct 8 18:16:40 2020
X-Patchwork-Submitter: Qais Yousef
X-Patchwork-Id: 11824309
From: Qais Yousef
To: Catalin Marinas, Will Deacon, Marc Zyngier, "Peter Zijlstra (Intel)"
Cc: linux-arch@vger.kernel.org, Greg Kroah-Hartman, Qais Yousef,
    Linus Torvalds, Morten Rasmussen, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 2/3] arm64: Add support for asymmetric AArch32 EL0 configurations
Date: Thu, 8 Oct 2020 19:16:40 +0100
Message-Id: <20201008181641.32767-3-qais.yousef@arm.com>
In-Reply-To: <20201008181641.32767-1-qais.yousef@arm.com>
References: <20201008181641.32767-1-qais.yousef@arm.com>

When the CONFIG_ASYMMETRIC_AARCH32 option is enabled (EXPERT), the type of
the ARM64_HAS_32BIT_EL0 capability becomes WEAK_LOCAL_CPU_FEATURE. The
kernel will now return true for system_supports_32bit_el0() and, in
do_notify_resume(), check that 32-bit tasks are affined only to
AArch32-capable CPUs. If a task's affinity contains a non-capable CPU, the
task is sent SIGKILL. Likewise, if the last CPU supporting 32-bit is
offlined, the kernel will SIGKILL any scheduled 32-bit tasks (the
alternative would be to prevent offlining through a new .cpu_disable
feature entry).

In addition to relaxing the ARM64_HAS_32BIT_EL0 capability, this patch
factors out the 32-bit cpuinfo and feature setup into separate functions:
__cpuinfo_store_cpu_32bit() and init_cpu_32bit_features(). The cpuinfo of
the booting CPU (boot_cpu_data) is now updated on the first 32-bit capable
CPU even if it is a secondary one. The ID_AA64PFR0_EL0_64BIT_ONLY feature
is relaxed to FTR_NONSTRICT and FTR_HIGHER_SAFE when asymmetric AArch32
support is enabled. The compat_elf_hwcaps are only verified on the
AArch32-capable CPUs, so that hotplugging AArch64-only CPUs is still
allowed.

Make sure that KVM never sees an asymmetric 32-bit system. A guest could
still ignore the ID registers and force-run 32-bit code at EL0.
Co-developed-by: Qais Yousef
Signed-off-by: Catalin Marinas
Signed-off-by: Qais Yousef
---
 arch/arm64/Kconfig                   | 14 ++++++
 arch/arm64/include/asm/cpu.h         |  2 +
 arch/arm64/include/asm/cpucaps.h     |  3 +-
 arch/arm64/include/asm/cpufeature.h  | 20 +++++++-
 arch/arm64/include/asm/thread_info.h |  5 +-
 arch/arm64/kernel/cpufeature.c       | 66 +++++++++++++++-----------
 arch/arm64/kernel/cpuinfo.c          | 71 ++++++++++++++++++----------
 arch/arm64/kernel/process.c          | 17 +++++++
 arch/arm64/kernel/signal.c           | 18 +++++++
 arch/arm64/kvm/arm.c                 |  5 +-
 arch/arm64/kvm/guest.c               |  2 +-
 arch/arm64/kvm/sys_regs.c            | 14 +++++-
 12 files changed, 176 insertions(+), 61 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d232837cbee..591853504dc4 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1868,6 +1868,20 @@ config DMI
 
 endmenu
 
+config ASYMMETRIC_AARCH32
+	bool "Allow support for asymmetric AArch32 support"
+	depends on COMPAT && EXPERT
+	help
+	  Enable this option to allow support for asymmetric AArch32 EL0
+	  CPU configurations. Once the AArch32 EL0 support is detected
+	  on a CPU, the feature is made available to user space to allow
+	  the execution of 32-bit (compat) applications. If the affinity
+	  of the 32-bit application contains a non-AArch32 capable CPU
+	  or the last AArch32 capable CPU is offlined, the application
+	  will be killed.
+
+	  If unsure say N.
+
 config SYSVIPC_COMPAT
 	def_bool y
 	depends on COMPAT && SYSVIPC

diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
index 7faae6ff3ab4..c920fa45e502 100644
--- a/arch/arm64/include/asm/cpu.h
+++ b/arch/arm64/include/asm/cpu.h
@@ -15,6 +15,7 @@ struct cpuinfo_arm64 {
 	struct cpu	cpu;
 	struct kobject	kobj;
+	bool		aarch32_valid;
 	u32		reg_ctr;
 	u32		reg_cntfrq;
 	u32		reg_dczid;
@@ -65,6 +66,7 @@ void cpuinfo_store_cpu(void);
 void __init cpuinfo_store_boot_cpu(void);
 
 void __init init_cpu_features(struct cpuinfo_arm64 *info);
+void init_cpu_32bit_features(struct cpuinfo_arm64 *info);
 void update_cpu_features(int cpu, struct cpuinfo_arm64 *info,
 			 struct cpuinfo_arm64 *boot);

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 07b643a70710..d3b6a5dce456 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -64,7 +64,8 @@
 #define ARM64_BTI				54
 #define ARM64_HAS_ARMv8_4_TTL			55
 #define ARM64_HAS_TLB_RANGE			56
+#define ARM64_HAS_ASYM_32BIT_EL0		57
 
-#define ARM64_NCAPS				57
+#define ARM64_NCAPS				58
 
 #endif /* __ASM_CPUCAPS_H */

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 89b4f0142c28..fa2413715041 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -17,6 +17,7 @@
 #ifndef __ASSEMBLY__
 
 #include
+#include
 #include
 #include
@@ -582,9 +583,26 @@ static inline bool cpu_supports_mixed_endian_el0(void)
 	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
 }
 
-static inline bool system_supports_32bit_el0(void)
+static inline bool system_supports_sym_32bit_el0(void)
 {
 	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
+
+}
+
+static inline bool system_supports_asym_32bit_el0(void)
+{
+#ifdef CONFIG_ASYMMETRIC_AARCH32
+	return !cpus_have_const_cap(ARM64_HAS_32BIT_EL0) &&
+	       cpus_have_const_cap(ARM64_HAS_ASYM_32BIT_EL0);
+#else
+	return false;
+#endif
+}
+
+static inline bool system_supports_32bit_el0(void)
+{
+	return system_supports_sym_32bit_el0() ||
+	       system_supports_asym_32bit_el0();
 }
 
 static inline bool system_supports_4kb_granule(void)

diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 5e784e16ee89..312974ab2c85 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -67,6 +67,7 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define TIF_FOREIGN_FPSTATE	3	/* CPU's FP state is not current's */
 #define TIF_UPROBE		4	/* uprobe breakpoint or singlestep */
 #define TIF_FSCHECK		5	/* Check FS is USER_DS on return */
+#define TIF_CHECK_32BIT_AFFINITY 6	/* Check thread affinity for asymmetric AArch32 */
 #define TIF_SYSCALL_TRACE	8	/* syscall trace active */
 #define TIF_SYSCALL_AUDIT	9	/* syscall auditing */
 #define TIF_SYSCALL_TRACEPOINT	10	/* syscall tracepoint for ftrace */
@@ -95,11 +96,13 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define _TIF_FSCHECK		(1 << TIF_FSCHECK)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_32BIT		(1 << TIF_32BIT)
+#define _TIF_CHECK_32BIT_AFFINITY (1 << TIF_CHECK_32BIT_AFFINITY)
 #define _TIF_SVE		(1 << TIF_SVE)
 
 #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
 				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
-				 _TIF_UPROBE | _TIF_FSCHECK)
+				 _TIF_UPROBE | _TIF_FSCHECK | \
+				 _TIF_CHECK_32BIT_AFFINITY)
 
 #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6424584be01e..d46732113305 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -63,7 +63,6 @@
 #define pr_fmt(fmt) "CPU features: " fmt
 
 #include
-#include
 #include
 #include
 #include
@@ -753,7 +752,7 @@ static void __init sort_ftr_regs(void)
  * Any bits that are not covered by an arm64_ftr_bits entry are considered
  * RES0 for the system-wide value, and must strictly match.
  */
-static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
+static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
 {
 	u64 val = 0;
 	u64 strict_mask = ~0x0ULL;
@@ -835,30 +834,6 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 	init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
 	init_cpu_ftr_reg(SYS_ID_AA64ZFR0_EL1, info->reg_id_aa64zfr0);
 
-	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
-		init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
-		init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1);
-		init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
-		init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
-		init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
-		init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
-		init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
-		init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
-		init_cpu_ftr_reg(SYS_ID_ISAR6_EL1, info->reg_id_isar6);
-		init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
-		init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
-		init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
-		init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
-		init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, info->reg_id_mmfr4);
-		init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5);
-		init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
-		init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
-		init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2);
-		init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
-		init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
-		init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
-	}
-
 	if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) {
 		init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
 		sve_init_vq_map();
@@ -877,6 +852,31 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 	setup_boot_cpu_capabilities();
 }
 
+void init_cpu_32bit_features(struct cpuinfo_arm64 *info)
+{
+	init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
+	init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1);
+	init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
+	init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
+	init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
+	init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
+	init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
+	init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
+	init_cpu_ftr_reg(SYS_ID_ISAR6_EL1, info->reg_id_isar6);
+	init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
+	init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
+	init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
+	init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
+	init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, info->reg_id_mmfr4);
+	init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5);
+	init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
+	init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
+	init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2);
+	init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
+	init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
+	init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
+}
+
 static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
 {
 	const struct arm64_ftr_bits *ftrp;
@@ -1804,6 +1804,17 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
 		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
 	},
+#ifdef CONFIG_ASYMMETRIC_AARCH32
+	{
+		.capability = ARM64_HAS_ASYM_32BIT_EL0,
+		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64PFR0_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64PFR0_EL0_SHIFT,
+		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
+	},
+#endif
 #ifdef CONFIG_KVM
 	{
 		.desc = "32-bit EL1 Support",
@@ -2576,7 +2587,8 @@ static void verify_local_cpu_capabilities(void)
 
 	verify_local_elf_hwcaps(arm64_elf_hwcaps);
 
-	if (system_supports_32bit_el0())
+	if (system_supports_32bit_el0() &&
+	    this_cpu_has_cap(ARM64_HAS_32BIT_EL0))
 		verify_local_elf_hwcaps(compat_elf_hwcaps);
 
 	if (system_supports_sve())

diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index d0076c2159e6..b7f69cbbc088 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -362,32 +362,6 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1);
 	info->reg_id_aa64zfr0 = read_cpuid(ID_AA64ZFR0_EL1);
 
-	/* Update the 32bit ID registers only if AArch32 is implemented */
-	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
-		info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
-		info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1);
-		info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
-		info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
-		info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
-		info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1);
-		info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1);
-		info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1);
-		info->reg_id_isar6 = read_cpuid(ID_ISAR6_EL1);
-		info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1);
-		info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
-		info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
-		info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
-		info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1);
-		info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1);
-		info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
-		info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
-		info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1);
-
-		info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
-		info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
-		info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
-	}
-
 	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
 	    id_aa64pfr0_sve(info->reg_id_aa64pfr0))
 		info->reg_zcr = read_zcr_features();
@@ -395,10 +369,51 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	cpuinfo_detect_icache_policy(info);
 }
 
+static void __cpuinfo_store_cpu_32bit(struct cpuinfo_arm64 *info)
+{
+	info->aarch32_valid = true;
+
+	info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
+	info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1);
+	info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
+	info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
+	info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
+	info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1);
+	info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1);
+	info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1);
+	info->reg_id_isar6 = read_cpuid(ID_ISAR6_EL1);
+	info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1);
+	info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
+	info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
+	info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
+	info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1);
+	info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1);
+	info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
+	info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
+	info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1);
+
+	info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
+	info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
+	info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
+}
+
 void cpuinfo_store_cpu(void)
 {
 	struct cpuinfo_arm64 *info = this_cpu_ptr(&cpu_data);
 	__cpuinfo_store_cpu(info);
+	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
+		__cpuinfo_store_cpu_32bit(info);
+	/*
+	 * With asymmetric AArch32 support, populate the boot CPU information
+	 * on the first 32-bit capable secondary CPU if the primary one
+	 * skipped this step.
+	 */
+	if (IS_ENABLED(CONFIG_ASYMMETRIC_AARCH32) &&
+	    !boot_cpu_data.aarch32_valid &&
+	    id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
+		__cpuinfo_store_cpu_32bit(&boot_cpu_data);
+		init_cpu_32bit_features(&boot_cpu_data);
+	}
 
 	update_cpu_features(smp_processor_id(), info, &boot_cpu_data);
 }
 
@@ -406,7 +421,11 @@ void __init cpuinfo_store_boot_cpu(void)
 {
 	struct cpuinfo_arm64 *info = &per_cpu(cpu_data, 0);
 	__cpuinfo_store_cpu(info);
+	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
+		__cpuinfo_store_cpu_32bit(info);
 
 	boot_cpu_data = *info;
 	init_cpu_features(&boot_cpu_data);
+	if (id_aa64pfr0_32bit_el0(boot_cpu_data.reg_id_aa64pfr0))
+		init_cpu_32bit_features(&boot_cpu_data);
 }

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index f1804496b935..a2f9ffb2b173 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -513,6 +513,15 @@ static void entry_task_switch(struct task_struct *next)
 	__this_cpu_write(__entry_task, next);
 }
 
+static void aarch32_thread_switch(struct task_struct *next)
+{
+	struct thread_info *ti = task_thread_info(next);
+
+	if (IS_ENABLED(CONFIG_ASYMMETRIC_AARCH32) && is_compat_thread(ti) &&
+	    !this_cpu_has_cap(ARM64_HAS_32BIT_EL0))
+		set_ti_thread_flag(ti, TIF_CHECK_32BIT_AFFINITY);
+}
+
 /*
  * ARM erratum 1418040 handling, affecting the 32bit view of CNTVCT.
  * Assuming the virtual counter is enabled at the beginning of times:
@@ -562,6 +571,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	uao_thread_switch(next);
 	ssbs_thread_switch(next);
 	erratum_1418040_thread_switch(prev, next);
+	aarch32_thread_switch(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
@@ -620,6 +630,13 @@ void arch_setup_new_exec(void)
 	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
 
 	ptrauth_thread_init_user(current);
+
+	/*
+	 * If exec'ing a 32-bit task, force the asymmetric 32-bit feature
+	 * check as the task may not go through a switch_to() call.
+	 */
+	if (IS_ENABLED(CONFIG_ASYMMETRIC_AARCH32) && is_compat_task())
+		set_thread_flag(TIF_CHECK_32BIT_AFFINITY);
 }
 
 #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI

diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 3b4f31f35e45..cf94cc248fbe 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -8,6 +8,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -907,6 +908,17 @@ static void do_signal(struct pt_regs *regs)
 	restore_saved_sigmask();
 }
 
+static void check_aarch32_cpumask(void)
+{
+	/*
+	 * If the task moved to uncapable CPU, SIGKILL it.
+	 */
+	if (!this_cpu_has_cap(ARM64_HAS_32BIT_EL0)) {
+		pr_warn_once("CPU affinity contains CPUs that are not capable of running 32-bit tasks\n");
+		force_sig(SIGKILL);
+	}
+}
+
 asmlinkage void do_notify_resume(struct pt_regs *regs,
 				 unsigned long thread_flags)
 {
@@ -929,6 +941,12 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 		} else {
 			local_daif_restore(DAIF_PROCCTX);
 
+			if (IS_ENABLED(CONFIG_ASYMMETRIC_AARCH32) &&
+			    thread_flags & _TIF_CHECK_32BIT_AFFINITY) {
+				clear_thread_flag(TIF_CHECK_32BIT_AFFINITY);
+				check_aarch32_cpumask();
+			}
+
 			if (thread_flags & _TIF_UPROBE)
 				uprobe_notify_resume(regs);

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 22ff3373d855..17c6674a7fcd 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -644,7 +644,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	struct kvm_run *run = vcpu->run;
 	int ret;
 
-	if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
+	if (!system_supports_sym_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
 		kvm_err("Illegal AArch32 mode at EL0, can't run.");
 		return -ENOEXEC;
 	}
@@ -815,7 +815,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		 * enter 32bit mode, catch that and prevent it from running
 		 * again.
 		 */
-		if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
+		if (!system_supports_sym_32bit_el0() &&
+		    vcpu_mode_is_32bit(vcpu)) {
 			kvm_err("Detected illegal AArch32 mode at EL0, exiting.");
 			ret = ARM_EXCEPTION_IL;
 		}

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index dfb5218137ca..0f67b53eaf17 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -226,7 +226,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 		u64 mode = (*(u64 *)valp) & PSR_AA32_MODE_MASK;
 		switch (mode) {
 		case PSR_AA32_MODE_USR:
-			if (!system_supports_32bit_el0())
+			if (!system_supports_sym_32bit_el0())
 				return -EINVAL;
 			break;
 		case PSR_AA32_MODE_FIQ:

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 077293b5115f..0b9aaee1df59 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -670,7 +670,7 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	 */
 	val = ((pmcr & ~ARMV8_PMU_PMCR_MASK)
 	       | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E);
-	if (!system_supports_32bit_el0())
+	if (!system_supports_sym_32bit_el0())
 		val |= ARMV8_PMU_PMCR_LC;
 	__vcpu_sys_reg(vcpu, r->reg) = val;
 }
@@ -722,7 +722,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		val = __vcpu_sys_reg(vcpu, PMCR_EL0);
 		val &= ~ARMV8_PMU_PMCR_MASK;
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
-		if (!system_supports_32bit_el0())
+		if (!system_supports_sym_32bit_el0())
 			val |= ARMV8_PMU_PMCR_LC;
 		__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 		kvm_pmu_handle_pmcr(vcpu, val);
@@ -1131,6 +1131,16 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		if (!vcpu_has_sve(vcpu))
 			val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
 		val &= ~(0xfUL << ID_AA64PFR0_AMU_SHIFT);
+
+		if (!system_supports_sym_32bit_el0()) {
+			/*
+			 * We could be running on asym aarch32 system.
+			 * Override to present a aarch64 only system.
+ */ + val &= ~(0xfUL << ID_AA64PFR0_EL0_SHIFT); + val |= (ID_AA64PFR0_EL0_64BIT_ONLY << ID_AA64PFR0_EL0_SHIFT); + } + } else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) { val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) | (0xfUL << ID_AA64ISAR1_API_SHIFT) | From patchwork Thu Oct 8 18:16:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qais Yousef X-Patchwork-Id: 11824303 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BFA5214D5 for ; Thu, 8 Oct 2020 18:17:21 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7C1C0221FE for ; Thu, 8 Oct 2020 18:17:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="2vqB/z+F" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7C1C0221FE Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=arm.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:MIME-Version:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:References:In-Reply-To:Message-Id:Date:Subject:To: From:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=r8fmomix39HQFYajsDKFd8hvomh0ZzS5AAIOWsA2ZqA=; b=2vqB/z+FlWsQ3HT6XcC2SPTgWq ca632G3VcEWXYy3/njAuBd2TuZ6RzWUrDpPYBpFyTAujYbmEn6lJkcqlWSNslgqyY2YaJ2DPySyMj 
tKSOdK7v9EgSum0RRBstNWqS7DZudjLE4gMjyJmjgFjvLagBt2LUK8DYQoWqWWrKvCixSj09oS4uA jLp02tGEig+f1QGh/tsNZABHND6jfJdQrdcB7eV/mMZfSyqnqqt8E4Yf+zqn/B8vSwKexPwmzSEzM Neb5SF5bqEz/rGzKDgN5TImCmn4ZCm6QkyYqRFXWGFo87medLhwsI9g6osVtjWkYwxchhUVO2XZUA iZEaAlSA==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1kQaTU-0004dQ-GL; Thu, 08 Oct 2020 18:17:08 +0000 Received: from foss.arm.com ([217.140.110.172]) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1kQaTO-0004bl-CW for linux-arm-kernel@lists.infradead.org; Thu, 08 Oct 2020 18:17:03 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5FEFC1529; Thu, 8 Oct 2020 11:17:01 -0700 (PDT) Received: from e107158-lin.cambridge.arm.com (e107158-lin.cambridge.arm.com [10.1.195.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 110683F802; Thu, 8 Oct 2020 11:16:59 -0700 (PDT) From: Qais Yousef To: Catalin Marinas , Will Deacon , Marc Zyngier , "Peter Zijlstra (Intel)" Subject: [RFC PATCH 3/3] arm64: Handle AArch32 tasks running on non AArch32 cpu Date: Thu, 8 Oct 2020 19:16:41 +0100 Message-Id: <20201008181641.32767-4-qais.yousef@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201008181641.32767-1-qais.yousef@arm.com> References: <20201008181641.32767-1-qais.yousef@arm.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20201008_141702_541270_5E524970 X-CRM114-Status: GOOD ( 22.57 ) X-Spam-Score: -2.3 (--) X-Spam-Report: SpamAssassin version 3.4.4 on merlin.infradead.org summary: Content analysis details: (-2.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- -2.3 RCVD_IN_DNSWL_MED RBL: Sender listed at https://www.dnswl.org/, medium trust [217.140.110.172 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF 
Cc: linux-arch@vger.kernel.org, Greg Kroah-Hartman, Qais Yousef,
    Linus Torvalds, Morten Rasmussen, linux-arm-kernel@lists.infradead.org

On an asymmetric AArch32 system, if a task has invalid CPUs in its
affinity, we try to fix the cpumask silently and let it continue to
run. If the cpumask doesn't contain any valid CPU, we have no choice
but to send SIGKILL.

This patch can be omitted if user space can guarantee that the cpumask
of every AArch32 app contains only AArch32-capable CPUs.

The biggest challenge for user space managing the affinity is handling
apps that try to modify their own CPU affinity via sched_setaffinity().
Without this change they could trigger a SIGKILL if they unknowingly
affine themselves to the wrong set of CPUs. Only by confining all
32-bit apps to a cpuset can user space guarantee that they cannot
escape this affinity.

Signed-off-by: Qais Yousef

---
 arch/arm64/Kconfig                  |  8 ++++----
 arch/arm64/include/asm/cpufeature.h |  2 ++
 arch/arm64/kernel/cpufeature.c      | 11 +++++++++++
 arch/arm64/kernel/signal.c          | 25 ++++++++++++++++++++-----
 4 files changed, 37 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 591853504dc4..ad6d52dd8ac0 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1875,10 +1875,10 @@
 	  Enable this option to allow support for asymmetric AArch32 EL0
 	  CPU configurations. Once the AArch32 EL0 support is detected
 	  on a CPU, the feature is made available to user space to allow
-	  the execution of 32-bit (compat) applications. If the affinity
-	  of the 32-bit application contains a non-AArch32 capable CPU
-	  or the last AArch32 capable CPU is offlined, the application
-	  will be killed.
+	  the execution of 32-bit (compat) applications by migrating
+	  them to the capable CPUs. Offlining such CPUs leads to 32-bit
+	  applications being killed. Similarly if the affinity contains
+	  no 32-bit capable CPU.

 	  If unsure say N.

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index fa2413715041..57275e47cd3d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -376,6 +376,8 @@ cpucap_multi_entry_cap_matches(const struct arm64_cpu_capabilities *entry,
 	return false;
 }

+extern cpumask_t aarch32_el0_mask;
+
 extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
 extern struct static_key_false arm64_const_caps_ready;

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d46732113305..4c0858c13e6d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1721,6 +1721,16 @@ cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
 	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
 }

+#ifdef CONFIG_ASYMMETRIC_AARCH32
+cpumask_t aarch32_el0_mask;
+
+static void cpu_enable_aarch32_el0(struct arm64_cpu_capabilities const *cap)
+{
+	if (has_cpuid_feature(cap, SCOPE_LOCAL_CPU))
+		cpumask_set_cpu(smp_processor_id(), &aarch32_el0_mask);
+}
+#endif
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -1799,6 +1809,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.capability = ARM64_HAS_32BIT_EL0,
 		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
+		.cpu_enable = cpu_enable_aarch32_el0,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
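The cpufeature.c hunk above has each CPU that reports AArch32 EL0 support set its own bit in `aarch32_el0_mask` via the `cpu_enable` hook. That bookkeeping can be modelled in plain user-space C; the `uint64_t` stand-in for `cpumask_t` and the function names below are illustrative only, not the kernel API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Bit N set == CPU N can run AArch32 EL0 (models aarch32_el0_mask). */
static uint64_t aarch32_el0_mask;

/* Called once per CPU as it comes up, mirroring cpu_enable_aarch32_el0():
 * only capable CPUs add themselves to the mask. */
static void enable_aarch32_el0(int cpu, bool has_aarch32_el0)
{
	if (has_aarch32_el0)
		aarch32_el0_mask |= UINT64_C(1) << cpu;
}

/* True if every CPU in 'affinity' is AArch32-capable — the
 * cpumask_subset(current->cpus_ptr, &aarch32_el0_mask) fast path. */
static bool affinity_ok(uint64_t affinity)
{
	return (affinity & ~aarch32_el0_mask) == 0;
}
```

On a system where only CPUs 0 and 2 are 32-bit capable, `affinity_ok(0x5)` holds while `affinity_ok(0x7)` does not, which is exactly the case the signal-path fixup below has to handle.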
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index cf94cc248fbe..7e97f1589f33 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -908,13 +908,28 @@ static void do_signal(struct pt_regs *regs)
 	restore_saved_sigmask();
 }

-static void check_aarch32_cpumask(void)
+static void set_32bit_cpus_allowed(void)
 {
+	cpumask_var_t cpus_allowed;
+	int ret = 0;
+
+	if (cpumask_subset(current->cpus_ptr, &aarch32_el0_mask))
+		return;
+
 	/*
-	 * If the task moved to uncapable CPU, SIGKILL it.
+	 * On asym aarch32 systems, if the task has invalid cpus in its mask,
+	 * we try to fix it by removing the invalid ones.
 	 */
-	if (!this_cpu_has_cap(ARM64_HAS_32BIT_EL0)) {
-		pr_warn_once("CPU affinity contains CPUs that are not capable of running 32-bit tasks\n");
+	if (!alloc_cpumask_var(&cpus_allowed, GFP_ATOMIC)) {
+		ret = -ENOMEM;
+	} else {
+		cpumask_and(cpus_allowed, current->cpus_ptr, &aarch32_el0_mask);
+		ret = set_cpus_allowed_ptr(current, cpus_allowed);
+		free_cpumask_var(cpus_allowed);
+	}
+
+	if (ret) {
+		pr_warn_once("Failed to fixup affinity of running 32-bit task\n");
 		force_sig(SIGKILL);
 	}
 }
@@ -944,7 +959,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 	if (IS_ENABLED(CONFIG_ASYMMETRIC_AARCH32) &&
 	    thread_flags & _TIF_CHECK_32BIT_AFFINITY) {
 		clear_thread_flag(TIF_CHECK_32BIT_AFFINITY);
-		check_aarch32_cpumask();
+		set_32bit_cpus_allowed();
 	}

 	if (thread_flags & _TIF_UPROBE)
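The decision `set_32bit_cpus_allowed()` makes can be summarised as: intersect the task's affinity with the 32-bit-capable mask; if anything remains, restrict the task to it, otherwise the task must be killed. A minimal user-space model, again using `uint64_t` as a hypothetical stand-in for `cpumask_t`:

```c
#include <stdint.h>
#include <stdbool.h>

/* Model of the set_32bit_cpus_allowed() fixup: on success, store the
 * trimmed affinity in *fixed and return true; return false when no
 * 32-bit capable CPU remains (the kernel then sends SIGKILL).
 * Illustrative only — not the kernel implementation. */
static bool fixup_32bit_affinity(uint64_t affinity, uint64_t capable,
				 uint64_t *fixed)
{
	uint64_t allowed = affinity & capable; /* cpumask_and() step */

	if (allowed == 0)
		return false; /* empty intersection: SIGKILL path */

	*fixed = allowed; /* set_cpus_allowed_ptr() step */
	return true;
}
```

Note that the model omits the allocation-failure path: in the patch, a failed `alloc_cpumask_var()` or `set_cpus_allowed_ptr()` also funnels into the SIGKILL branch, which is why the warning reads "Failed to fixup affinity" rather than naming a specific cause.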