From patchwork Tue May 25 15:14:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279207 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB0BEC47084 for ; Tue, 25 May 2021 15:17:44 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 982876113B for ; Tue, 25 May 2021 15:17:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 982876113B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=d9/oMktCBKJOqBVWU3jh37WyW9Vgnbk6latxJdpqCgA=; b=cnN1xctp6L6hhP blVrb2p+Q3nm9rQItopFdi0ZfED1CLEOymRyDP/nBXO+RWLLl2+NQz8SX/SqD3a1g0LH4u4+RdqpU 7bxdUADHiksmMFm77CcFr5vhJBlkh4aZw7T7Vj++E4GZoFOlWPcigHrkO4gLR9LUQViWmkBY9jwYM +Xb/i6TwxrkIVCSInb0iMA04ERFazKUXfAjr27CF7yUiJvmIAj5/EL4NjiRa9+HtORIzZb3SbJPyZ Bj9UkYyrAY3gnE2mqD2PukQ99RTsuZD0X8zJQs2UBxPn8EwVvldZi3QHhXibbvRdO1YaLziEvOXoS ZPfmldfUM0tx4eKJGqaw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmQ-005td1-47; Tue, 25 May 2021 15:15:38 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYm8-005tUL-7f for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:23 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 80DAA61429; Tue, 25 May 2021 15:15:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955719; bh=CaGYuHn+SqIHxXU6UGDKeLwnmsQSpMp2vSwKl8cv+Hs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nvmR/T+9ImdYkkQM1ZIkxS6tIN/uW8GD9TCkPdOWhJ3aEGz7iodQEj0jl299SBLmu AqW+Sm2k4NLx6qCNcgNmpWR8xXGEiWmWahHD/NLhTJcvvbeBgNXy824r229OikQ0r2 U4QmE1KH1XElloVrc9LB9xRzwSJmAjfoqYCQuY+t2bpDDH6ou/T45Dr88vriiU9mUi DkFl+Q35Q8hfvU+7LCPbKN7VwLVgkYVrSfrjrU24kqwVHSLYWO4sAE8WE3z+mCAcM2 7re0y70/gZq2/v1xHXibaIn8Gd9y4aXemC5Ei6EIxutr8ZVk+gqHl21iPPxJsiguCJ ygvohKNFwCQCw== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , 
Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com, Valentin Schneider Subject: [PATCH v7 01/22] sched: Favour predetermined active CPU as migration destination Date: Tue, 25 May 2021 16:14:11 +0100 Message-Id: <20210525151432.16875-2-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081520_350229_D01889B3 X-CRM114-Status: GOOD ( 16.60 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Since commit 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()"), the migration stopper thread is left to determine the destination CPU of the running task being migrated, even though set_cpus_allowed_ptr() already identified a candidate target earlier on. Unfortunately, the stopper doesn't check whether or not the new destination CPU is active or not, so __migrate_task() can leave the task sitting on a CPU that is outside of its affinity mask, even if the CPU originally chosen by SCA is still active. For example, with CONFIG_CPUSET=n: $ taskset -pc 0-2 $PID # offline CPUs 3-4 $ taskset -pc 3-5 $PID Then $PID remains on its current CPU (one of 0-2) and does not get migrated to CPU 5. Rework 'struct migration_arg' so that an optional pointer to an affinity mask can be provided to the stopper, allowing us to respect the original choice of destination CPU when migrating. Note that there is still the potential to race with a concurrent CPU hot-unplug of the destination CPU if the caller does not hold the hotplug lock. 
Reported-by: Valentin Schneider Signed-off-by: Will Deacon --- kernel/sched/core.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 5226cc26a095..1702a60d178d 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1869,6 +1869,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf, struct migration_arg { struct task_struct *task; int dest_cpu; + const struct cpumask *dest_mask; struct set_affinity_pending *pending; }; @@ -1917,6 +1918,7 @@ static int migration_cpu_stop(void *data) struct set_affinity_pending *pending = arg->pending; struct task_struct *p = arg->task; int dest_cpu = arg->dest_cpu; + const struct cpumask *dest_mask = arg->dest_mask; struct rq *rq = this_rq(); bool complete = false; struct rq_flags rf; @@ -1956,12 +1958,8 @@ static int migration_cpu_stop(void *data) complete = true; } - if (dest_cpu < 0) { - if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) - goto out; - - dest_cpu = cpumask_any_distribute(&p->cpus_mask); - } + if (dest_mask && (cpumask_test_cpu(task_cpu(p), dest_mask))) + goto out; if (task_on_rq_queued(p)) rq = __migrate_task(rq, &rf, p, dest_cpu); @@ -2249,7 +2247,8 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag init_completion(&my_pending.done); my_pending.arg = (struct migration_arg) { .task = p, - .dest_cpu = -1, /* any */ + .dest_cpu = dest_cpu, + .dest_mask = &p->cpus_mask, .pending = &my_pending, }; From patchwork Tue May 25 15:14:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279209 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.6 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7B05C2B9F8 for ; Tue, 25 May 2021 15:18:11 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 840C461159 for ; Tue, 25 May 2021 15:18:11 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 840C461159 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=YoIaXQMXmc9GpGk6iCi/zDLTMhC6677oGGrXeLhxgEQ=; b=jnO+aZc3k2MF7q UrH0WuYbQeLEk/OItSX019oZwBriw0jR+GEn+32z4keSz0tAvgsm5rDrwokRYQ/st4M6DFN6DgQ+o 3kQ7WFFYoFHDd/v7JtwmT4XftkU6zltbyrxdYNBEZMbXguxA5r+QDTHlGK6usBV4Z1odQsZzXHleR 
mp3+ILyUtj2tW0eITXyZv/WTeTFUoThLsMNxFRelc6XGiDBrENCryFV0rSSnnlnM1Y78wI7Nd2HoZ 919+TEIPZ80om0D3hb+R1ZIe4fOGmuIc7GMQGSdjtb86gPqw25HK9bi+vrrZJDG9vi1mHMeEaoP3A 7nxR1RJGJCWbcHCbG+Gw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmg-005tkH-8g; Tue, 25 May 2021 15:15:54 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmC-005tW5-0B for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:26 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 59B7C6142D; Tue, 25 May 2021 15:15:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955723; bh=732cwF8Iz8DasP4Nf4vVgWosVv6a8JLz5raQQCtclVc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FzB+SqTZCs0OmZEqLFfkw0EGKZRmoBu0T2KlJ+aiUiQAkzSEb4YCUo24rWI0iBOYU Jdpy8G8kOHQhw4QJ0wLgwSY9kh1AK8Z6JJbNwrDlkX2SlguUp/0/mf0oxLyR6KaORv /pAoUPi6rEHagAOuuy3p4hPJ6GZokVjhJ1XccHtOVh4a1ywpptbDV0YBHhX8Nxvibh A76/KDGRUdWmlxoapq3ZhP2BO6tr/+8TjrEV0TngM+7HzDIkDzaZ0KQz8Sr5iCG3ud nLg2Xo5uharF/cH89mpFF6Nn4VkHmo0twT2ZyZjtPpwwQNEZJQkY1wXY9+4wcvYeLH TsPDQHpfyDSGA== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 02/22] arm64: cpuinfo: Split AArch32 registers out into a separate struct Date: Tue, 25 May 2021 16:14:12 +0100 Message-Id: <20210525151432.16875-3-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081524_119972_3A28AE2F X-CRM114-Status: GOOD ( 20.16 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In preparation for late initialisation of the "sanitised" AArch32 register state, move the AArch32 registers out of 'struct cpuinfo' and into their own struct definition. Reviewed-by: Catalin Marinas Signed-off-by: Will Deacon --- arch/arm64/include/asm/cpu.h | 44 +++++++++++---------- arch/arm64/kernel/cpufeature.c | 71 ++++++++++++++++++---------------- arch/arm64/kernel/cpuinfo.c | 53 +++++++++++++------------ 3 files changed, 89 insertions(+), 79 deletions(-) diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h index 7faae6ff3ab4..f4e01aa0f442 100644 --- a/arch/arm64/include/asm/cpu.h +++ b/arch/arm64/include/asm/cpu.h @@ -12,26 +12,7 @@ /* * Records attributes of an individual CPU. 
*/ -struct cpuinfo_arm64 { - struct cpu cpu; - struct kobject kobj; - u32 reg_ctr; - u32 reg_cntfrq; - u32 reg_dczid; - u32 reg_midr; - u32 reg_revidr; - - u64 reg_id_aa64dfr0; - u64 reg_id_aa64dfr1; - u64 reg_id_aa64isar0; - u64 reg_id_aa64isar1; - u64 reg_id_aa64mmfr0; - u64 reg_id_aa64mmfr1; - u64 reg_id_aa64mmfr2; - u64 reg_id_aa64pfr0; - u64 reg_id_aa64pfr1; - u64 reg_id_aa64zfr0; - +struct cpuinfo_32bit { u32 reg_id_dfr0; u32 reg_id_dfr1; u32 reg_id_isar0; @@ -54,6 +35,29 @@ struct cpuinfo_arm64 { u32 reg_mvfr0; u32 reg_mvfr1; u32 reg_mvfr2; +}; + +struct cpuinfo_arm64 { + struct cpu cpu; + struct kobject kobj; + u32 reg_ctr; + u32 reg_cntfrq; + u32 reg_dczid; + u32 reg_midr; + u32 reg_revidr; + + u64 reg_id_aa64dfr0; + u64 reg_id_aa64dfr1; + u64 reg_id_aa64isar0; + u64 reg_id_aa64isar1; + u64 reg_id_aa64mmfr0; + u64 reg_id_aa64mmfr1; + u64 reg_id_aa64mmfr2; + u64 reg_id_aa64pfr0; + u64 reg_id_aa64pfr1; + u64 reg_id_aa64zfr0; + + struct cpuinfo_32bit aarch32; /* pseudo-ZCR for recording maximum ZCR_EL1 LEN value: */ u64 reg_zcr; diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index efed2830d141..a4db25cd7122 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -863,6 +863,31 @@ static void __init init_cpu_hwcaps_indirect_list(void) static void __init setup_boot_cpu_capabilities(void); +static void __init init_32bit_cpu_features(struct cpuinfo_32bit *info) +{ + init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0); + init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1); + init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0); + init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1); + init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2); + init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3); + init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4); + init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5); + init_cpu_ftr_reg(SYS_ID_ISAR6_EL1, info->reg_id_isar6); + init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0); + init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1); + init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2); + init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3); + init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, info->reg_id_mmfr4); + init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5); + init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0); + init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1); + init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2); + init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0); + init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1); + init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2); +} + void __init init_cpu_features(struct cpuinfo_arm64 *info) { /* Before we start using the tables, make sure it is sorted */ @@ -882,29 +907,8 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info) init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1); init_cpu_ftr_reg(SYS_ID_AA64ZFR0_EL1, info->reg_id_aa64zfr0); - if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) { - init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0); - init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1); - init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0); - init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1); - init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2); - init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3); - init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4); - init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5); - init_cpu_ftr_reg(SYS_ID_ISAR6_EL1, info->reg_id_isar6); - 
init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0); - init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1); - init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2); - init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3); - init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, info->reg_id_mmfr4); - init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5); - init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0); - init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1); - init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2); - init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0); - init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1); - init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2); - } + if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) + init_32bit_cpu_features(&info->aarch32); if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) { init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr); @@ -975,20 +979,12 @@ static void relax_cpu_ftr_reg(u32 sys_id, int field) WARN_ON(!ftrp->width); } -static int update_32bit_cpu_features(int cpu, struct cpuinfo_arm64 *info, - struct cpuinfo_arm64 *boot) +static int update_32bit_cpu_features(int cpu, struct cpuinfo_32bit *info, + struct cpuinfo_32bit *boot) { int taint = 0; u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); - /* - * If we don't have AArch32 at all then skip the checks entirely - * as the register values may be UNKNOWN and we're not going to be - * using them for anything. - */ - if (!id_aa64pfr0_32bit_el0(pfr0)) - return taint; - /* * If we don't have AArch32 at EL1, then relax the strictness of * EL1-dependent register fields to avoid spurious sanity check fails. @@ -1135,10 +1131,17 @@ void update_cpu_features(int cpu, } /* + * If we don't have AArch32 at all then skip the checks entirely + * as the register values may be UNKNOWN and we're not going to be + * using them for anything. + * * This relies on a sanitised view of the AArch64 ID registers * (e.g. SYS_ID_AA64PFR0_EL1), so we call it last. */ - taint |= update_32bit_cpu_features(cpu, info, boot); + if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) { + taint |= update_32bit_cpu_features(cpu, &info->aarch32, + &boot->aarch32); + } /* * Mismatched CPU features are a recipe for disaster. 
Don't even diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c index 51fcf99d5351..264c119a6cae 100644 --- a/arch/arm64/kernel/cpuinfo.c +++ b/arch/arm64/kernel/cpuinfo.c @@ -344,6 +344,32 @@ static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info) pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu); } +static void __cpuinfo_store_cpu_32bit(struct cpuinfo_32bit *info) +{ + info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1); + info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1); + info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1); + info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1); + info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1); + info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1); + info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1); + info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1); + info->reg_id_isar6 = read_cpuid(ID_ISAR6_EL1); + info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1); + info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1); + info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1); + info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1); + info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1); + info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1); + info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1); + info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1); + info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1); + + info->reg_mvfr0 = read_cpuid(MVFR0_EL1); + info->reg_mvfr1 = read_cpuid(MVFR1_EL1); + info->reg_mvfr2 = read_cpuid(MVFR2_EL1); +} + static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) { info->reg_cntfrq = arch_timer_get_cntfrq(); @@ -371,31 +397,8 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1); info->reg_id_aa64zfr0 = read_cpuid(ID_AA64ZFR0_EL1); - /* Update the 32bit ID registers only if AArch32 is implemented */ - if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) { - info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1); - info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1); - info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1); - info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1); - info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1); - info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1); - info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1); - info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1); - info->reg_id_isar6 = read_cpuid(ID_ISAR6_EL1); - info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1); - info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1); - info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1); - info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1); - info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1); - info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1); - info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1); - info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1); - info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1); - - info->reg_mvfr0 = read_cpuid(MVFR0_EL1); - info->reg_mvfr1 = read_cpuid(MVFR1_EL1); - info->reg_mvfr2 = read_cpuid(MVFR2_EL1); - } + if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) + __cpuinfo_store_cpu_32bit(&info->aarch32); if (IS_ENABLED(CONFIG_ARM64_SVE) && id_aa64pfr0_sve(info->reg_id_aa64pfr0)) From patchwork Tue May 25 15:14:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279211 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, 
SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23304C2B9F8 for ; Tue, 25 May 2021 15:18:15 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id DBA2A6113B for ; Tue, 25 May 2021 15:18:14 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DBA2A6113B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=bYkNGNlU98B9TARc27JwySJwbAnNHWuGF8a93pfkm8g=; b=vuRbeHxvacm0R4 HmVifAET8nMP5gvcmohCZsNqwbQ8c+C5Zptw1ICkJnF/NYOpcnhpxccNh0an2G+sxZhY8QbU7mab/ CZQnyxIEx1z6kPOzvYwBWRxw4CV0AaReSE7Yucfxx9UmHuKuhQ9iE1tCsNYedSXtvZ7XB3o+f1Vhl 27WpT4MopeHmsi0lX+FAW9iiSqo3ENt6lg7UlM+Ke0i1kg5G56xGsTpKIYhGNGFSYI10GVV7mMPmD AzQDDmXlnW/cFNHAlY8djaIEvyxqUIQjba+gYQ22Y0HQcEWIqaq84UgNPnZw0uzYYvwDv7M10YBuo KRGFo6fhOtx/gk0vGtKw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYn1-005tuX-G5; Tue, 25 May 2021 15:16:15 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmF-005tXu-MU for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:29 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 1536C6141C; Tue, 25 May 2021 15:15:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955727; bh=D1ZUCC3t4rqEiio5pZbDL/BbJagMpdBEJf9mcoEZS24=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DsK5CODEnyJbLuxsaSlEHUYANoGva9QrhRJ5owW5psbdWAVPoVbYGRfm/aEzKaWu7 k+9QS8bqrpnJQxt8JrFKgnpFLCJwfJzmggPLbYyMrlhvtpTIVgswop+Gt+WGgQpe+P DukJKtZoijSbzbFOugW9gFjf2FlQ7KijyUvq0wf5hMFf0wZkSfLL4D/gj9HXWRXvqQ 0KR1MOh2r/Oyd+TggptGb9VwbkYMA1Sa3QFFesujMZ9xFSWq3JUDn2UDlzWsuZjgeb rGFrSGjN29klv48KVMxBChIJVQBfB7otQ2rycZayro7Y6nibqmhb5OS/drl3S684nV dqtWMGd9yQiKQ== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. 
Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 03/22] arm64: Allow mismatched 32-bit EL0 support Date: Tue, 25 May 2021 16:14:13 +0100 Message-Id: <20210525151432.16875-4-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081527_834145_52B3475F X-CRM114-Status: GOOD ( 22.70 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When confronted with a mixture of CPUs, some of which support 32-bit applications and others which don't, we quite sensibly treat the system as 64-bit only for userspace and prevent execve() of 32-bit binaries. Unfortunately, some crazy folks have decided to build systems like this with the intention of running 32-bit applications, so relax our sanitisation logic to continue to advertise 32-bit support to userspace on these systems and track the real 32-bit capable cores in a cpumask instead. For now, the default behaviour remains but will be tied to a command-line option in a later patch. Reviewed-by: Catalin Marinas Signed-off-by: Will Deacon --- arch/arm64/include/asm/cpufeature.h | 8 +- arch/arm64/kernel/cpufeature.c | 114 ++++++++++++++++++++++++---- arch/arm64/tools/cpucaps | 3 +- 3 files changed, 110 insertions(+), 15 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 338840c00e8e..603bf4160cd6 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -630,9 +630,15 @@ static inline bool cpu_supports_mixed_endian_el0(void) return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1)); } +const struct cpumask *system_32bit_el0_cpumask(void); +DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0); + static inline bool system_supports_32bit_el0(void) { - return cpus_have_const_cap(ARM64_HAS_32BIT_EL0); + u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); + + return static_branch_unlikely(&arm64_mismatched_32bit_el0) || + id_aa64pfr0_32bit_el0(pfr0); } static inline bool system_supports_4kb_granule(void) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index a4db25cd7122..4194a47de62d 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -107,6 +107,24 @@ DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE); bool arm64_use_ng_mappings = false; EXPORT_SYMBOL(arm64_use_ng_mappings); +/* + * Permit PER_LINUX32 and execve() of 32-bit binaries even if not all CPUs + * support it? + */ +static bool __read_mostly allow_mismatched_32bit_el0; + +/* + * Static branch enabled only if allow_mismatched_32bit_el0 is set and we have + * seen at least one CPU capable of 32-bit EL0. + */ +DEFINE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0); + +/* + * Mask of CPUs supporting 32-bit EL0. + * Only valid if arm64_mismatched_32bit_el0 is enabled. + */ +static cpumask_var_t cpu_32bit_el0_mask __cpumask_var_read_mostly; + /* * Flag to indicate if we have computed the system wide * capabilities based on the boot time active CPUs. 
This @@ -767,7 +785,7 @@ static void __init sort_ftr_regs(void) * Any bits that are not covered by an arm64_ftr_bits entry are considered * RES0 for the system-wide value, and must strictly match. */ -static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new) +static void init_cpu_ftr_reg(u32 sys_reg, u64 new) { u64 val = 0; u64 strict_mask = ~0x0ULL; @@ -863,7 +881,7 @@ static void __init init_cpu_hwcaps_indirect_list(void) static void __init setup_boot_cpu_capabilities(void); -static void __init init_32bit_cpu_features(struct cpuinfo_32bit *info) +static void init_32bit_cpu_features(struct cpuinfo_32bit *info) { init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0); init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1); @@ -979,6 +997,22 @@ static void relax_cpu_ftr_reg(u32 sys_id, int field) WARN_ON(!ftrp->width); } +static void update_mismatched_32bit_el0_cpu_features(struct cpuinfo_arm64 *info, + struct cpuinfo_arm64 *boot) +{ + static bool boot_cpu_32bit_regs_overridden = false; + + if (!allow_mismatched_32bit_el0 || boot_cpu_32bit_regs_overridden) + return; + + if (id_aa64pfr0_32bit_el0(boot->reg_id_aa64pfr0)) + return; + + boot->aarch32 = info->aarch32; + init_32bit_cpu_features(&boot->aarch32); + boot_cpu_32bit_regs_overridden = true; +} + static int update_32bit_cpu_features(int cpu, struct cpuinfo_32bit *info, struct cpuinfo_32bit *boot) { @@ -1139,6 +1173,7 @@ void update_cpu_features(int cpu, * (e.g. SYS_ID_AA64PFR0_EL1), so we call it last. */ if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) { + update_mismatched_32bit_el0_cpu_features(info, boot); taint |= update_32bit_cpu_features(cpu, &info->aarch32, &boot->aarch32); } @@ -1251,6 +1286,28 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope) return feature_matches(val, entry); } +const struct cpumask *system_32bit_el0_cpumask(void) +{ + if (!system_supports_32bit_el0()) + return cpu_none_mask; + + if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) + return cpu_32bit_el0_mask; + + return cpu_possible_mask; +} + +static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope) +{ + if (!has_cpuid_feature(entry, scope)) + return allow_mismatched_32bit_el0; + + if (scope == SCOPE_SYSTEM) + pr_info("detected: 32-bit EL0 Support\n"); + + return true; +} + static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope) { bool has_sre; @@ -1869,10 +1926,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .cpu_enable = cpu_copy_el2regs, }, { - .desc = "32-bit EL0 Support", - .capability = ARM64_HAS_32BIT_EL0, + .capability = ARM64_HAS_32BIT_EL0_DO_NOT_USE, .type = ARM64_CPUCAP_SYSTEM_FEATURE, - .matches = has_cpuid_feature, + .matches = has_32bit_el0, .sys_reg = SYS_ID_AA64PFR0_EL1, .sign = FTR_UNSIGNED, .field_pos = ID_AA64PFR0_EL0_SHIFT, @@ -2381,7 +2437,7 @@ static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = { {}, }; -static void __init cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap) +static void cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap) { switch (cap->hwcap_type) { case CAP_HWCAP: @@ -2426,7 +2482,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap) return rc; } -static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps) +static void setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps) { /* We support emulation of accesses to CPU ID feature registers */ cpu_set_named_feature(CPUID); @@ -2601,7 +2657,7 @@ static void 
check_early_cpu_features(void) } static void -verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps) +__verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps) { for (; caps->matches; caps++) @@ -2612,6 +2668,14 @@ verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps) } } +static void verify_local_elf_hwcaps(void) +{ + __verify_local_elf_hwcaps(arm64_elf_hwcaps); + + if (id_aa64pfr0_32bit_el0(read_cpuid(ID_AA64PFR0_EL1))) + __verify_local_elf_hwcaps(compat_elf_hwcaps); +} + static void verify_sve_features(void) { u64 safe_zcr = read_sanitised_ftr_reg(SYS_ZCR_EL1); @@ -2676,11 +2740,7 @@ static void verify_local_cpu_capabilities(void) * on all secondary CPUs. */ verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU); - - verify_local_elf_hwcaps(arm64_elf_hwcaps); - - if (system_supports_32bit_el0()) - verify_local_elf_hwcaps(compat_elf_hwcaps); + verify_local_elf_hwcaps(); if (system_supports_sve()) verify_sve_features(); @@ -2815,6 +2875,34 @@ void __init setup_cpu_features(void) ARCH_DMA_MINALIGN); } +static int enable_mismatched_32bit_el0(unsigned int cpu) +{ + struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu); + bool cpu_32bit = id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0); + + if (cpu_32bit) { + cpumask_set_cpu(cpu, cpu_32bit_el0_mask); + static_branch_enable_cpuslocked(&arm64_mismatched_32bit_el0); + setup_elf_hwcaps(compat_elf_hwcaps); + } + + return 0; +} + +static int __init init_32bit_el0_mask(void) +{ + if (!allow_mismatched_32bit_el0) + return 0; + + if (!zalloc_cpumask_var(&cpu_32bit_el0_mask, GFP_KERNEL)) + return -ENOMEM; + + return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, + "arm64/mismatched_32bit_el0:online", + enable_mismatched_32bit_el0, NULL); +} +subsys_initcall_sync(init_32bit_el0_mask); + static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap) { cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps index 21fbdda7086e..49305c2e6dfd 100644 --- a/arch/arm64/tools/cpucaps +++ b/arch/arm64/tools/cpucaps @@ -3,7 +3,8 @@ # Internal CPU capabilities constants, keep this list sorted BTI -HAS_32BIT_EL0 +# Unreliable: use system_supports_32bit_el0() instead. 
+HAS_32BIT_EL0_DO_NOT_USE HAS_32BIT_EL1 HAS_ADDRESS_AUTH HAS_ADDRESS_AUTH_ARCH From patchwork Tue May 25 15:14:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279213 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0EA67C2B9F8 for ; Tue, 25 May 2021 15:19:09 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id CB0796113B for ; Tue, 25 May 2021 15:19:08 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CB0796113B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=4StY/vWuIAQZh/x9dqp+sy4mPIOf+BCdDZYDFyrghDo=; b=aHcTNLvTDPL6Uo gwdv4Ag576EZjTmwT0EPoX0qQLz/looQ888WxeAOHyjknCpNrJaFsMR7O30RVJ87Hs5WvwrEj1PSC VPf2zPxE/URZ5ERx1HJhH8Kzi75gK5ATVrdDfmrHGSS3pDrXFKw6SF85mVUGLZRYiQo4KLqYN5tNk WDGmmVcQduJ0JKr/aQfPFyAYY9eRlsaIaW9Z8UBJ32q4lVkrCOLZhPs5CDR3PeXm5/HyrHsd5YfJg 0gysAn3L9ufeu8Ngmp56eoimsyIkJ6L4wf9CC+XsUT6tFoPLX1Tl62Kmpyh8XkE170/SgNd8vd3kz F7ORk/IK7Ygdk/xAh69Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYnk-005uKa-N6; Tue, 25 May 2021 15:17:01 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmJ-005tZl-CY for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:32 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id C40876142E; Tue, 25 May 2021 15:15:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955731; bh=iCW9NKyrbGkMxHcLgSLRB5Nf72sT8+JudVFeMeF95HU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lyBLnhcA8wo1hU2DqGOjfq0QJmNvr/isoo7LCQdyT3+fvqpDiKV0CynUr0L52SDsI 1xtfrE9AaE+vfZzJMEw0BGAaIgiLUW/xXg7ls7DG6VLhbY2tiihgWMcrS8DYusUEYv mO5WivCKYFBniNAE0BmZCtuo5xu+QuMB6fnim/ktxeHZaBmnpC5qmkxKi0TYTV6vUa GTOGs1mRQug8rupVTmbchSmrWltfufcqsJGC3vVbtvIoWWh9Ft41UNxIOwwXcbGQnV vtnfAkldabJW483EXJpHFEzT4br15Pb2hNMGT65ApBrbJMIyoZ07Jg4+b95ngGQgSJ uoc2TMNNF7adA== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren 
Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 04/22] KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support Date: Tue, 25 May 2021 16:14:14 +0100 Message-Id: <20210525151432.16875-5-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081531_476090_561C58CD X-CRM114-Status: GOOD ( 13.24 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org If a vCPU is caught running 32-bit code on a system with mismatched support at EL0, then we should kill it. Acked-by: Marc Zyngier Reviewed-by: Catalin Marinas Signed-off-by: Will Deacon --- arch/arm64/kvm/arm.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 1cb39c0803a4..5bdba97a7654 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -692,6 +692,15 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu) } } +static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu) +{ + if (likely(!vcpu_mode_is_32bit(vcpu))) + return false; + + return !system_supports_32bit_el0() || + static_branch_unlikely(&arm64_mismatched_32bit_el0); +} + /** * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code * @vcpu: The VCPU pointer @@ -875,7 +884,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) * with the asymmetric AArch32 case), return to userspace with * a fatal error. 
*/ - if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) { + if (vcpu_mode_is_bad_32bit(vcpu)) { /* * As we have caught the guest red-handed, decide that * it isn't fit for purpose anymore by making the vcpu From patchwork Tue May 25 15:14:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279215 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 26426C2B9F8 for ; Tue, 25 May 2021 15:19:27 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D55E961159 for ; Tue, 25 May 2021 15:19:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D55E961159 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=yLI7yzwDVWNv12fkqoZ5M0qTzattNCnqJAcqbRaw3W8=; b=RssXfgjPIrrQGz yZwKvR+qPkPEt9lDKNxlrMFFF/fPpUwQXgcaiPNh49pkFjuYgRCcOHioXqJ4/hQcRW8C9nhraZqP6 mYgYjpuWKogKg+C1bK3wsOswcfQv2s+1ZWwvtCr2vVRvYWdPaA26OWxI26z3bOP7YsCb9eg2LzQgm lOs/KC8qX9GgGrUyg71/mxvRF4KxDOMvVYyNvr4P/bLwXXvPA8DB+ixW64QGezzOJ3hj/Vja/TIkQ /GPhfggsTj5a6AFcIwgfF8eAMHz5+UiS09S0mrGIAEndiR0U5CG4llbr7hM7kCwr++jdsv+mh3aq2 vzaF11eFbpPR6XZKT0yQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYoP-005ufg-3o; Tue, 25 May 2021 15:17:41 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmN-005tbd-37 for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:36 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 7956D6141B; Tue, 25 May 2021 15:15:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955734; bh=69yrtpXUSeq6JELDEXRI7j6w/FRp267IP2K39DOOExs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lUkCtvJIN2SdlSsWMul1oJGOc3O4hWXlrEgs2cYL2BnOIyetp21Kco3P95TgWBJPT cy6/5H9Y5DTAZTQ0HWoMVx9DYn296zSuukspRzIXxv8KDnLiM1FeNmuvPJcam+Apu1 0ljv2P3SPfQwWwWMSvCZAWVOyAMQQc5Sv4vYiOcaGu67Tm0hcgEbeVo1vFYQrw9b45 +pI7pIBJYqUQlVwLzmcjohl0KEnno+zPqwjkMEkT5Wf8ivGsaGgBjgMIYDakSVb3tF omPaLy5NT6xBuFeza1EUOMHQ59zzfEuePjwZeaIbV/X0X64Tq7t7mV43P4S/hRx9Kp T1g0j1HZmK3MA== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, 
linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 05/22] arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs Date: Tue, 25 May 2021 16:14:15 +0100 Message-Id: <20210525151432.16875-6-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081535_218725_4614CCDD X-CRM114-Status: GOOD ( 17.54 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Scheduling a 32-bit application on a 64-bit-only CPU is a bad idea. Ensure that 32-bit applications always take the slow-path when returning to userspace on a system with mismatched support at EL0, so that we can avoid trying to run on a 64-bit-only CPU and force a SIGKILL instead. Reviewed-by: Catalin Marinas Signed-off-by: Will Deacon --- arch/arm64/kernel/process.c | 19 ++++++++++++++++++- arch/arm64/kernel/signal.c | 26 ++++++++++++++++++++++++++ 2 files changed, 44 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index b4bb67f17a2c..f4a91bf1ce0c 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -527,6 +527,15 @@ static void erratum_1418040_thread_switch(struct task_struct *prev, write_sysreg(val, cntkctl_el1); } +static void compat_thread_switch(struct task_struct *next) +{ + if (!is_compat_thread(task_thread_info(next))) + return; + + if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) + set_tsk_thread_flag(next, TIF_NOTIFY_RESUME); +} + static void update_sctlr_el1(u64 sctlr) { /* @@ -568,6 +577,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, ssbs_thread_switch(next); erratum_1418040_thread_switch(prev, next); ptrauth_thread_switch_user(next); + compat_thread_switch(next); /* * Complete any pending TLB or cache maintenance on this CPU in case @@ -633,8 +643,15 @@ unsigned long arch_align_stack(unsigned long sp) */ void arch_setup_new_exec(void) { - current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0; + unsigned long mmflags = 0; + + if (is_compat_task()) { + mmflags = MMCF_AARCH32; + if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) + set_tsk_thread_flag(current, TIF_NOTIFY_RESUME); + } + current->mm->context.flags = mmflags; ptrauth_thread_init_user(); mte_thread_init_user(); diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 6237486ff6bb..f8192f4ae0b8 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -911,6 +911,19 @@ static void do_signal(struct pt_regs *regs) restore_saved_sigmask(); } +static bool cpu_affinity_invalid(struct pt_regs *regs) +{ + if (!compat_user_mode(regs)) + return false; + + /* + * We're preemptible, but a reschedule will cause us to check the + * affinity again. 
+ */ + return !cpumask_test_cpu(raw_smp_processor_id(), + system_32bit_el0_cpumask()); +} + asmlinkage void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags) { @@ -938,6 +951,19 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, if (thread_flags & _TIF_NOTIFY_RESUME) { tracehook_notify_resume(regs); rseq_handle_notify_resume(NULL, regs); + + /* + * If we reschedule after checking the affinity + * then we must ensure that TIF_NOTIFY_RESUME + * is set so that we check the affinity again. + * Since tracehook_notify_resume() clears the + * flag, ensure that the compiler doesn't move + * it after the affinity check. + */ + barrier(); + + if (cpu_affinity_invalid(regs)) + force_sig(SIGKILL); } if (thread_flags & _TIF_FOREIGN_FPSTATE) From patchwork Tue May 25 15:14:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279217 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65F39C47084 for ; Tue, 25 May 2021 15:19:53 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 291E061159 for ; Tue, 25 May 2021 15:19:53 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 291E061159 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=N75cKEesyNdFWW0uItVWXXvrmZbiPdeEv6Kj0VmG+AU=; b=drmz9mk98IzVn4 QpdSC5athJ3we4EMvHrs6dC78XhAsrxM8DwzBuCdR0xr0EybKPv3dn+w9uO6vub2P7TQKIWp2RCDU HhauPfDcrdoKn8PxR4kXxyerNnLnGiWKbLYCYZQlsy/OYhNPv3dhUJxdJctxSrk4tD3EC6DgLfsO2 Wvk73Yh3CkT7NIfrJ9qgANmIsdUhY88boTpPr/MLVIfsJUo5W+BGa/cE1yMFfZXeszWKGS3fJT/Ok ZOFQBZ4Q809U75rNfL2GxVswRwR2R52lAi3vurQJXfxy1UEfFygIwW0qdTkFrTM2ofbjal4OTmBtV H4Z31l+DbcA0I+3aiPuw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYon-005uww-3C; Tue, 25 May 2021 15:18:05 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmQ-005tdR-Qz for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:40 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 3BCDC61249; Tue, 25 May 2021 15:15:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955738; 
bh=XluxWdEmv/04KQW4UclQeq22wYmZOXwCY0/76gkRfFs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=u0qe6u8mKs2drnaEFHz9TRk0KnRfE/g+NbHRMA33zr0FjSkL9mgjbbkZZzjMoe1Nh 7Fepz0j7zYJ++S9S72IwnVsUUExX77+5PKiQ1quH9WOwgCHTvnn/70FYRGm0/vDcni ygE6ujXPw737bTCEjUXGzb5QkNfBLdS04VrrNuIHjrOgCU96ZixVIwX2AVjfkV3E+W 2mjIUxZ+3mUElb7rB/ps9vcQn8MQGQEbuohh4E5nLlwld8rLae8/3IpdLvODq4v9xT Uq70F7wipF0LARQMErpLNlaYUDolDfAhs+JeHCr7z6TZYl1n5+QLes9c8eZOgJFAOL ofb+QI1s+VDsg== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 06/22] arm64: Advertise CPUs capable of running 32-bit applications in sysfs Date: Tue, 25 May 2021 16:14:16 +0100 Message-Id: <20210525151432.16875-7-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081538_948536_337CDDE6 X-CRM114-Status: GOOD ( 12.65 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Since 32-bit applications will be killed if they are caught trying to execute on a 64-bit-only CPU in a mismatched system, advertise the set of 32-bit capable CPUs to userspace in sysfs. Reviewed-by: Greg Kroah-Hartman Reviewed-by: Catalin Marinas Signed-off-by: Will Deacon --- .../ABI/testing/sysfs-devices-system-cpu | 9 +++++++++ arch/arm64/kernel/cpufeature.c | 19 +++++++++++++++++++ 2 files changed, 28 insertions(+) diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu index fe13baa53c59..899377b2715a 100644 --- a/Documentation/ABI/testing/sysfs-devices-system-cpu +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu @@ -494,6 +494,15 @@ Description: AArch64 CPU registers 'identification' directory exposes the CPU ID registers for identifying model and revision of the CPU. +What: /sys/devices/system/cpu/aarch32_el0 +Date: May 2021 +Contact: Linux ARM Kernel Mailing list +Description: Identifies the subset of CPUs in the system that can execute + AArch32 (32-bit ARM) applications. If present, the same format as + /sys/devices/system/cpu/{offline,online,possible,present} is used. + If absent, then all or none of the CPUs can execute AArch32 + applications and execve() will behave accordingly. 
+ What: /sys/devices/system/cpu/cpu#/cpu_capacity Date: December 2016 Contact: Linux kernel mailing list diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 4194a47de62d..959442f76ed7 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -67,6 +67,7 @@ #include #include #include +#include #include #include #include @@ -1297,6 +1298,24 @@ const struct cpumask *system_32bit_el0_cpumask(void) return cpu_possible_mask; } +static ssize_t aarch32_el0_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + const struct cpumask *mask = system_32bit_el0_cpumask(); + + return sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(mask)); +} +static const DEVICE_ATTR_RO(aarch32_el0); + +static int __init aarch32_el0_sysfs_init(void) +{ + if (!allow_mismatched_32bit_el0) + return 0; + + return device_create_file(cpu_subsys.dev_root, &dev_attr_aarch32_el0); +} +device_initcall(aarch32_el0_sysfs_init); + static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope) { if (!has_cpuid_feature(entry, scope)) From patchwork Tue May 25 15:14:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279219 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 74D53C2B9F8 for ; Tue, 25 May 2021 15:20:22 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3C46A61159 for ; Tue, 25 May 2021 15:20:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3C46A61159 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=QfYojPHxyJl6jw+ceg3nqqi9VABVL2+8YnUuq6tskjg=; b=SxHc3ez/o/T+Ti J3ZZx9zkX+Tp/W+kTk6ROdSwUKboYocuqUnXF78JGsDxZffGiIfL/ZoRwFK48DEhBSj67l3qZEAsS im9jIJPTjlAx6t8k6du7z9O1P8iDnBnCuohS3RFDXgV3/jrp/wZ3zPgsnVtxHbgxvrdu+G0tQ5ZME dMzITe8c2Cxlp8mf1guY+JLthkelEPUhi6e9X8eT8XKhZKS7pfATNXLKICyPY7DlD4Ygpd9lh+Za5 TsqioGthONUxQkSWkCCZcHL3U5NrXsE3iIKHugJP3GuFiEuTA5ljck1t1CObgO0lBhjZRnl/Hrubg f05Ke+93wsyOjG7thIFw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYpF-005vD2-0u; Tue, 25 May 2021 15:18:33 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 
#2 (Red Hat Linux)) id 1llYmU-005tev-Gp for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:44 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id E8B246141D; Tue, 25 May 2021 15:15:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955742; bh=QagufuVpOB4kXBKDhC/C+NjO+ztB6hqCnw2dJisJ2TM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pOWNl3Ngb0Mbd4T+OcanPvU4JIFpSJzcilJqEM/j4OfZnS8VQDZlC09dRG5Ry1mQW yfFPsp2eHau679tdJxMO1kOZhD2jIMXOk0oNRrni/tKFHZSuS7IJ+IbYHcVSrJ6k+U RKVnaQZ1r1bZ436wutDgRzdIrFNsFlXhUiji4dtp1p918kQtsXvCMUG08fsCOWhjH+ oRkATuClqNcqT4z36pzJYUdH6kJm6xvUprg+MfRIEN01XsJ1BP/D0Ghq9+qVcX9thl CNw9MN2XqWBdC8ujh6yd2TQQmb9QdzOHDCw6TEOO+VwDkTS/Rn5IoGzffqnJTtD01A QJMIj7bAzqy+Q== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 07/22] sched: Introduce task_cpu_possible_mask() to limit fallback rq selection Date: Tue, 25 May 2021 16:14:17 +0100 Message-Id: <20210525151432.16875-8-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081542_653308_4B69B389 X-CRM114-Status: GOOD ( 15.34 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Asymmetric systems may not offer the same level of userspace ISA support across all CPUs, meaning that some applications cannot be executed by some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do not feature support for 32-bit applications on both clusters. On such a system, we must take care not to migrate a task to an unsupported CPU when forcefully moving tasks in select_fallback_rq() in response to a CPU hot-unplug operation. Introduce a task_cpu_possible_mask() hook which, given a task argument, allows an architecture to return a cpumask of CPUs that are capable of executing that task. The default implementation returns the cpu_possible_mask, since sane machines do not suffer from per-cpu ISA limitations that affect scheduling. The new mask is used when selecting the fallback runqueue as a last resort before forcing a migration to the first active CPU. Reviewed-by: Quentin Perret Signed-off-by: Will Deacon --- include/linux/mmu_context.h | 11 +++++++++++ kernel/sched/core.c | 5 ++--- 2 files changed, 13 insertions(+), 3 deletions(-) diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h index 03dee12d2b61..1a599ba3524f 100644 --- a/include/linux/mmu_context.h +++ b/include/linux/mmu_context.h @@ -14,4 +14,15 @@ static inline void leave_mm(int cpu) { } #endif +/* + * CPUs that are capable of running task @p. By default, we assume a sane, + * homogeneous system. Must contain at least one active CPU. 
+ */ +#ifndef task_cpu_possible_mask +# define task_cpu_possible_mask(p) cpu_possible_mask +# define task_cpu_possible(cpu, p) true +#else +# define task_cpu_possible(cpu, p) cpumask_test_cpu((cpu), task_cpu_possible_mask(p)) +#endif + #endif diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 1702a60d178d..00ed51528c70 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1814,7 +1814,7 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu) /* Non kernel threads are not allowed during either online or offline. */ if (!(p->flags & PF_KTHREAD)) - return cpu_active(cpu); + return cpu_active(cpu) && task_cpu_possible(cpu, p); /* KTHREAD_IS_PER_CPU is always allowed. */ if (kthread_is_per_cpu(p)) @@ -2791,10 +2791,9 @@ static int select_fallback_rq(int cpu, struct task_struct *p) * * More yuck to audit. */ - do_set_cpus_allowed(p, cpu_possible_mask); + do_set_cpus_allowed(p, task_cpu_possible_mask(p)); state = fail; break; - case fail: BUG(); break; From patchwork Tue May 25 15:14:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279221 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3165BC2B9F8 for ; Tue, 25 May 2021 15:20:49 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id E54CE6113B for ; Tue, 25 May 2021 15:20:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E54CE6113B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=AD+uPfhrbxH+1eqcSVBtdwPyP652Tps5/XHuCxfxtKU=; b=1P9/iq+KMKXzu7 VVVGQj7Rt5P76o4OSSQYVW8XTVLY4heJPKUUxfcgKzVNz2a6ZaPNIV1FhT775UI9NnBfRGCRgf5V1 rAdcmUCCnSsL3HlYq8Hoe9SDPeQT2QMvGC8PxXjgk5Xl/ynVrqAiOqFUdRVk365QcxckaikCv3mk0 rATPSQwZNM/OCz07ajfTCt2fYuDMKFgpdJ1N01Qxfk8R7nzgTcdqwQEiLnaJUjYl+rfIFDny1eKmN QubDaJtfbsURdMtTHBjLojrpO+H0JBYJdPFL9OrHas3DkthhJ6autJoUMKQ8btNhJXZY/T+2LKtCp eeCCMmrEFkPUgaVjx6vw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYpr-005vaQ-HF; Tue, 25 May 2021 15:19:12 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmY-005tgu-Dn for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:47 
+0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id A7ADC6142F; Tue, 25 May 2021 15:15:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955746; bh=r0igYkdDa6pTkf5eR6dqCXpYwXk+rrFbiGdpzDOL3Z8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=I4V+2OfCK+A62N/gJBG8uc7LZTybgyIOpJB3/JHXiNBkKpqkdAxuXCZhkcTn++95A iacFXwLrhTL/85OaF/86ioTWnGRoMX61GVttCc2qjJInGLFFs1xolWfMm+kvmJn8xs b3lUWUsk+6CoY9z9aC7cHa5J9hhL46Q+FhHyg54l763ZLBJeDaDi1er3QH70hgmMia ECChJ78tYf790Uu6CT8p79SZ8EtJ+pgTbG2IPZ+CemhqF7XWyLn35BEE8TnMfNjx5K nxnjTHlNiLYteoDh0U9BdQd32VRDUfMzHB1Jf236uAPM66EUb5HcjMk73PPqdaVkKH jK0QtrPAWbcJA== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com, Li Zefan Subject: [PATCH v7 08/22] cpuset: Don't use the cpu_possible_mask as a last resort for cgroup v1 Date: Tue, 25 May 2021 16:14:18 +0100 Message-Id: <20210525151432.16875-9-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081546_535844_B8552778 X-CRM114-Status: GOOD ( 14.17 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org If the scheduler cannot find an allowed CPU for a task, cpuset_cpus_allowed_fallback() will widen the affinity to cpu_possible_mask if cgroup v1 is in use. In preparation for allowing architectures to provide their own fallback mask, just return early if we're either using cgroup v1 or we're using cgroup v2 with a mask that contains invalid CPUs. This will allow select_fallback_rq() to figure out the mask by itself. Cc: Li Zefan Cc: Tejun Heo Cc: Johannes Weiner Reviewed-by: Quentin Perret Signed-off-by: Will Deacon --- include/linux/cpuset.h | 1 + kernel/cgroup/cpuset.c | 12 ++++++++++-- 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h index 04c20de66afc..ed6ec677dd6b 100644 --- a/include/linux/cpuset.h +++ b/include/linux/cpuset.h @@ -15,6 +15,7 @@ #include #include #include +#include #include #ifdef CONFIG_CPUSETS diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index a945504c0ae7..8c799260a4a2 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -3322,9 +3322,17 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask) void cpuset_cpus_allowed_fallback(struct task_struct *tsk) { + const struct cpumask *cs_mask; + const struct cpumask *possible_mask = task_cpu_possible_mask(tsk); + rcu_read_lock(); - do_set_cpus_allowed(tsk, is_in_v2_mode() ? 
- task_cs(tsk)->cpus_allowed : cpu_possible_mask); + cs_mask = task_cs(tsk)->cpus_allowed; + + if (!is_in_v2_mode() || !cpumask_subset(cs_mask, possible_mask)) + goto unlock; /* select_fallback_rq will try harder */ + + do_set_cpus_allowed(tsk, cs_mask); +unlock: rcu_read_unlock(); /* From patchwork Tue May 25 15:14:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279223 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CE7FC2B9F8 for ; Tue, 25 May 2021 15:21:50 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id E641C61159 for ; Tue, 25 May 2021 15:21:49 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E641C61159 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=3xn5O/bPgoihFAo6Q9IxBeP94ghPNc4ufz6txpRMnTM=; b=aj9LZUeGd3+1d+ e1EIwoud3azD0VIaBXNAQB2x0jBi1oV7wy7JuIwonTlULbGF5m2LwlF81ksgqPmrpIrDfWEMcemml yYTIrpa55eJMoeIpds+H+kxDbpxsnU5bYdWEzEIRkgpxvfhkyCxJNEZAKvgw4RWR8nLgZyXr/7wvL suUygr9Any3reRqmG+ZP+Igh2GVpFzosfq5+jH3vAjBcCPOG10t2EBmmUDDt7r3wxkz5QSUIiCP8v 9XyzsWiDxMvuDJuxskZwsgioYdSWFJ9AdKc0MS8/nTCLXaMh2tsWoPiJrbMGswJ2Ozgc3Actvh0Wt lP1IYNtllsi8V68MSbdg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYqc-005w0q-Rq; Tue, 25 May 2021 15:19:59 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmc-005tj0-7f for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:52 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 82F906141C; Tue, 25 May 2021 15:15:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955749; bh=X1RfmCls++M6x8osvdHN0JcEJXAR60lgV32pFhaL6hM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=s1iIdvnuyCqXnHXv2Qcjp+CTEP7dB/SRsVcAE35JZ5PhW7tFs7//u1v1DhizdZppM BYFmhA0EaRtEhCH/5GlpVejqIoBsSahz5RzGxa2pctfzu66f1CHrfhPKQGvZgudiVY QNFMPAiPP768NIwN51R199lVBXDMqK4WrLjgbSGsdSUI1s48dYDLBXY6ngFyLQ89SI KjY1Q9prjcad6qmF7xubIxorno7ADGZQfgwhyXudSuaDabzF36AurglKImlPsWP9QZ POo9yVXmVO1NOTS5UQOr0rXYXaw5HTWTs1KKAPjKjNfBUAJcdiUnZza8eKerkfrrfB bosbdJ73azuzA== From: Will Deacon To: 
linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com, Li Zefan Subject: [PATCH v7 09/22] cpuset: Honour task_cpu_possible_mask() in guarantee_online_cpus() Date: Tue, 25 May 2021 16:14:19 +0100 Message-Id: <20210525151432.16875-10-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081550_364972_04D84780 X-CRM114-Status: GOOD ( 20.13 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Asymmetric systems may not offer the same level of userspace ISA support across all CPUs, meaning that some applications cannot be executed by some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do not feature support for 32-bit applications on both clusters. Modify guarantee_online_cpus() to take task_cpu_possible_mask() into account when trying to find a suitable set of online CPUs for a given task. This will avoid passing an invalid mask to set_cpus_allowed_ptr() during ->attach() and will subsequently allow the cpuset hierarchy to be taken into account when forcefully overriding the affinity mask for a task which requires migration to a compatible CPU. Cc: Li Zefan Cc: Tejun Heo Cc: Johannes Weiner Signed-off-by: Will Deacon --- include/linux/cpuset.h | 2 +- kernel/cgroup/cpuset.c | 41 ++++++++++++++++++++++++++--------------- 2 files changed, 27 insertions(+), 16 deletions(-) diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h index ed6ec677dd6b..414a8e694413 100644 --- a/include/linux/cpuset.h +++ b/include/linux/cpuset.h @@ -185,7 +185,7 @@ static inline void cpuset_read_unlock(void) { } static inline void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask) { - cpumask_copy(mask, cpu_possible_mask); + cpumask_copy(mask, task_cpu_possible_mask(p)); } static inline void cpuset_cpus_allowed_fallback(struct task_struct *p) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 8c799260a4a2..e0649c970f48 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -372,18 +372,29 @@ static inline bool is_in_v2_mode(void) } /* - * Return in pmask the portion of a cpusets's cpus_allowed that - * are online. If none are online, walk up the cpuset hierarchy - * until we find one that does have some online cpus. + * Return in pmask the portion of a task's cpusets's cpus_allowed that + * are online and are capable of running the task. If none are found, + * walk up the cpuset hierarchy until we find one that does have some + * appropriate cpus. * * One way or another, we guarantee to return some non-empty subset * of cpu_online_mask. * * Call with callback_lock or cpuset_mutex held. 
*/ -static void guarantee_online_cpus(struct cpuset *cs, struct cpumask *pmask) +static void guarantee_online_cpus(struct task_struct *tsk, + struct cpumask *pmask) { - while (!cpumask_intersects(cs->effective_cpus, cpu_online_mask)) { + const struct cpumask *possible_mask = task_cpu_possible_mask(tsk); + struct cpuset *cs; + + if (WARN_ON(!cpumask_and(pmask, possible_mask, cpu_online_mask))) + cpumask_copy(pmask, cpu_online_mask); + + rcu_read_lock(); + cs = task_cs(tsk); + + while (!cpumask_intersects(cs->effective_cpus, pmask)) { cs = parent_cs(cs); if (unlikely(!cs)) { /* @@ -393,11 +404,13 @@ static void guarantee_online_cpus(struct cpuset *cs, struct cpumask *pmask) * cpuset's effective_cpus is on its way to be * identical to cpu_online_mask. */ - cpumask_copy(pmask, cpu_online_mask); - return; + goto out_unlock; } } - cpumask_and(pmask, cs->effective_cpus, cpu_online_mask); + cpumask_and(pmask, pmask, cs->effective_cpus); + +out_unlock: + rcu_read_unlock(); } /* @@ -2199,15 +2212,13 @@ static void cpuset_attach(struct cgroup_taskset *tset) percpu_down_write(&cpuset_rwsem); - /* prepare for attach */ - if (cs == &top_cpuset) - cpumask_copy(cpus_attach, cpu_possible_mask); - else - guarantee_online_cpus(cs, cpus_attach); - guarantee_online_mems(cs, &cpuset_attach_nodemask_to); cgroup_taskset_for_each(task, css, tset) { + if (cs != &top_cpuset) + guarantee_online_cpus(task, cpus_attach); + else + cpumask_copy(cpus_attach, task_cpu_possible_mask(task)); /* * can_attach beforehand should guarantee that this doesn't * fail. TODO: have a better way to handle failure here @@ -3303,7 +3314,7 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask) spin_lock_irqsave(&callback_lock, flags); rcu_read_lock(); - guarantee_online_cpus(task_cs(tsk), pmask); + guarantee_online_cpus(tsk, pmask); rcu_read_unlock(); spin_unlock_irqrestore(&callback_lock, flags); } From patchwork Tue May 25 15:14:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279237 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DAA65C2B9F8 for ; Tue, 25 May 2021 15:22:19 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9E15961159 for ; Tue, 25 May 2021 15:22:19 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9E15961159 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: 
Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=7KSFAx6xtjs2y7SS9y5uEFNoBM+eukocIUjaMkw/yr8=; b=fv7nWmQrTklCDu idmtvX6OVMeODjT1Js06aWZ+yntTnprxl/TV/IWdvVfsnT6IwXt4sdXLoujCAinTK5JhLX6PzG6FB zYe63JeIOcnGTvK5iKw7z7yvUycPdcx6e85YrUQF72QljWQkcQ2Bo5/6Lg5I5Et/YPPsy0fxl8ksP b3Ed9/pSycsTX0yBT323ej8B7iPtESLYa1e3nw+PQX7rFRRqFlyi+lsQ0V/cjhkwhZMLBfvqIk8bV 9CUjVZXyhQUPAB3tc6KnO35C3t4p8/FY5Ia7xNYNP5GKQXi6p26XimcAbkYJBnMx2nrOQA9JISKEP WyskmgTFEHvGHkhIzHhg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYrG-005wLJ-Bt; Tue, 25 May 2021 15:20:38 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmf-005tkc-UZ for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:15:55 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 5C0B061429; Tue, 25 May 2021 15:15:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955753; bh=qGkVsA0G9EvUFdmfSvnTeaGukNiKFBrINcN26bFiPSE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pFTaybQbHLvBuCd7/j1WYqxd2rrK144oMKPNln6WRnzTnHq+wuE/OmmjuHj44bPD5 3zWaSegMAwu1m/cjoKUM6OBh1+35h/9nYLjhfk1nullfSmPmx0MY3sIQPdIDj0kr40 Nxcw7Bsy9dwhJ+Bbxx1nKVpRy3x/cQupXFOIkMaEoria4GfLqmzwO/PcHNg3Ua0HeG edGZX1BPw9/ED/bYANqOBHaBn2SDSzJzjD/Qq8gyF3i0G7fBe39OAIRrkyK6uEkhmo Um1OHw7OEvhBeKfEW0/QvRHateZPaBjV+ncrqVgHezbfPr2Jdse6tMysSyWSoRCe01 2tfEY3TU+9OBA== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 10/22] sched: Reject CPU affinity changes based on task_cpu_possible_mask() Date: Tue, 25 May 2021 16:14:20 +0100 Message-Id: <20210525151432.16875-11-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081554_048547_468D6455 X-CRM114-Status: GOOD ( 12.83 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Reject explicit requests to change the affinity mask of a task via set_cpus_allowed_ptr() if the requested mask is not a subset of the mask returned by task_cpu_possible_mask(). This ensures that the 'cpus_mask' for a given task cannot contain CPUs which are incapable of executing it, except in cases where the affinity is forced. 
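To illustrate the expected userspace-visible behaviour (a sketch only, not part of this patch): an explicit affinity request that names an incompatible CPU now fails instead of being accepted and later ignored. Whether a given CPU is actually excluded depends on the architecture's task_cpu_possible_mask(), which arm64 only wires up later in the series; the choice of CPU 4 below is an assumption made for the sake of the example.

#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(4, &mask);	/* assume CPU 4 cannot execute this task */

	/*
	 * With this patch, the kernel rejects the request with -EINVAL when
	 * CPU 4 is not in task_cpu_possible_mask() for the calling task,
	 * rather than leaving an unusable CPU in its cpus_mask.
	 */
	if (sched_setaffinity(0, sizeof(mask), &mask))
		printf("sched_setaffinity failed: %s\n", strerror(errno));
	else
		printf("affinity change accepted\n");

	return 0;
}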
Reviewed-by: Quentin Perret Signed-off-by: Will Deacon --- kernel/sched/core.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 00ed51528c70..8ca7854747f1 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -2346,6 +2346,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, u32 flags) { const struct cpumask *cpu_valid_mask = cpu_active_mask; + const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p); unsigned int dest_cpu; struct rq_flags rf; struct rq *rq; @@ -2366,6 +2367,9 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, * set_cpus_allowed_common() and actually reset p->cpus_ptr. */ cpu_valid_mask = cpu_online_mask; + } else if (!cpumask_subset(new_mask, cpu_allowed_mask)) { + ret = -EINVAL; + goto out; } /* From patchwork Tue May 25 15:14:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279239 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_NONE,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0CACC2B9F8 for ; Tue, 25 May 2021 15:23:26 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7DD0761159 for ; Tue, 25 May 2021 15:23:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7DD0761159 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=wNo3ffpXPgzTtiKoHZMuhGt+36LI8KqeCtvnO/6L83s=; b=rwF6RXxmsOZZB6 a1sjHqnb1vbl2YOvMuF4biC/6+jYmVfqak2XcTGKbg1AfPrWqsjhnpPR82OL6mJHae4fT6XnLReVo 2iFDdRa3+llA3+mG7D/XQuQfecS3CoaJEITNglUPL/n4TaO8u4qDPNCzfhKaQDJ4oQ/74LchYV4G6 9Ayq+vE3CtjAZTCIiohokzxMDwafXmNfluUkFw4nWkW8cJ/UbWwYAda/JC1SU5kBEERRlxN+n6ING IERwE5A9ZO2T6iU9JMJwDpQ/gdtfkZR8W2SqVTDweRpfm8kqipDnJhunaFDHMfpsXFEfzek0Ciht/ m9b13R7ubVNtqQVyHuxw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYs2-005wiK-LP; Tue, 25 May 2021 15:21:27 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmj-005tmf-L9 for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:00 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 11A7261434; Tue, 25 May 2021 15:15:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=kernel.org; s=k20201202; t=1621955757; bh=7S8FiuEyUSxwLWtrNEdNEEBikQ/xnj7VhrLn/rpdZ7g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tT1hvDV0Q64QQV2RwYIPi7s5h6v3u95FXiCfMLKY5Tu9B2D4fn6RejklJELt5LUPY q83Ki+wXcQT4nDKIE78sPC0by9bJTjbtEzY3n5t+1C/QSw/iW2qI7IgMWsn92VnHDI FoibsqwL8KUwrpEig69mufleLvpOQMbOP4wFvBMcSOoqg5LAInKuaROiGatEhOMvar tB/mdQOGfyopzSPzskYzT3VHscqJVVL5JDPE5kEBbUX7uOONtUTJXsXgpSetwlQ4tc Ub1UWdz8JHAtqtSwfQwrzJb/SSj3KHV70zcYM5lxLyHyQWR8sBrS7N4J0Th69DF9kk cNvWNd5juU7sQ== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 11/22] sched: Introduce task_struct::user_cpus_ptr to track requested affinity Date: Tue, 25 May 2021 16:14:21 +0100 Message-Id: <20210525151432.16875-12-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081557_855327_02C5A202 X-CRM114-Status: GOOD ( 16.06 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In preparation for saving and restoring the user-requested CPU affinity mask of a task, add a new cpumask_t pointer to 'struct task_struct'. If the pointer is non-NULL, then the mask is copied across fork() and freed on task exit. 
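As a toy model of the lifecycle described above (plain userspace C, not kernel code; all names below are stand-ins): the pointer starts out NULL, a fork duplicates any saved mask so the child remembers the original request too, and the copy is freed when the task goes away.

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for task_struct::cpus_mask and ::user_cpus_ptr. */
struct toy_task {
	unsigned long cpus_mask;	/* effective affinity */
	unsigned long *user_mask;	/* saved user request, NULL if none */
};

/* Mirrors the shape of dup_user_cpus_ptr(): copy the saved mask, if any. */
static int toy_dup_user_mask(struct toy_task *dst, const struct toy_task *src)
{
	if (!src->user_mask)
		return 0;

	dst->user_mask = malloc(sizeof(*dst->user_mask));
	if (!dst->user_mask)
		return -1;

	*dst->user_mask = *src->user_mask;
	return 0;
}

/* Mirrors release_user_cpus_ptr(): free the copy when the task is freed. */
static void toy_release_user_mask(struct toy_task *t)
{
	free(t->user_mask);
	t->user_mask = NULL;
}

int main(void)
{
	struct toy_task parent = { .cpus_mask = 0x3 };	/* clamped to CPUs 0-1 */
	struct toy_task child = { .cpus_mask = parent.cpus_mask };

	/* Pretend the kernel saved the parent's original 0xf request. */
	parent.user_mask = malloc(sizeof(*parent.user_mask));
	if (!parent.user_mask)
		return 1;
	*parent.user_mask = 0xf;

	if (toy_dup_user_mask(&child, &parent))	/* "fork()" */
		return 1;

	printf("child remembers request %#lx\n", *child.user_mask);

	toy_release_user_mask(&child);		/* "exit()" */
	toy_release_user_mask(&parent);
	return 0;
}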
Signed-off-by: Will Deacon --- include/linux/sched.h | 13 +++++++++++++ init/init_task.c | 1 + kernel/fork.c | 2 ++ kernel/sched/core.c | 20 ++++++++++++++++++++ 4 files changed, 36 insertions(+) diff --git a/include/linux/sched.h b/include/linux/sched.h index d2c881384517..db32d4f7e5b3 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -730,6 +730,7 @@ struct task_struct { unsigned int policy; int nr_cpus_allowed; const cpumask_t *cpus_ptr; + cpumask_t *user_cpus_ptr; cpumask_t cpus_mask; void *migration_pending; #ifdef CONFIG_SMP @@ -1688,6 +1689,8 @@ extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_ #ifdef CONFIG_SMP extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask); extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask); +extern int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src, int node); +extern void release_user_cpus_ptr(struct task_struct *p); #else static inline void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) { @@ -1698,6 +1701,16 @@ static inline int set_cpus_allowed_ptr(struct task_struct *p, const struct cpuma return -EINVAL; return 0; } +static inline int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src, int node) +{ + if (src->user_cpus_ptr) + return -EINVAL; + return 0; +} +static inline void release_user_cpus_ptr(struct task_struct *p) +{ + WARN_ON(p->user_cpus_ptr); +} #endif extern int yield_to(struct task_struct *p, bool preempt); diff --git a/init/init_task.c b/init/init_task.c index 8b08c2e19cbb..158c2b1689e1 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -80,6 +80,7 @@ struct task_struct init_task .normal_prio = MAX_PRIO - 20, .policy = SCHED_NORMAL, .cpus_ptr = &init_task.cpus_mask, + .user_cpus_ptr = NULL, .cpus_mask = CPU_MASK_ALL, .nr_cpus_allowed= NR_CPUS, .mm = NULL, diff --git a/kernel/fork.c b/kernel/fork.c index dc06afd725cb..d3710e7f1686 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -446,6 +446,7 @@ void put_task_stack(struct task_struct *tsk) void free_task(struct task_struct *tsk) { + release_user_cpus_ptr(tsk); scs_release(tsk); #ifndef CONFIG_THREAD_INFO_IN_TASK @@ -918,6 +919,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) #endif if (orig->cpus_ptr == &orig->cpus_mask) tsk->cpus_ptr = &tsk->cpus_mask; + dup_user_cpus_ptr(tsk, orig, node); /* * One for the user space visible state that goes away when reaped. diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 8ca7854747f1..9349b8ecbcf9 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -2124,6 +2124,26 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) __do_set_cpus_allowed(p, new_mask, 0); } +int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src, + int node) +{ + if (!src->user_cpus_ptr) + return 0; + + dst->user_cpus_ptr = kmalloc_node(cpumask_size(), GFP_KERNEL, node); + if (!dst->user_cpus_ptr) + return -ENOMEM; + + cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr); + return 0; +} + +void release_user_cpus_ptr(struct task_struct *p) +{ + kfree(p->user_cpus_ptr); + p->user_cpus_ptr = NULL; +} + /* * This function is wildly self concurrent; here be dragons. 
* From patchwork Tue May 25 15:14:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279241 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B85B9C2B9F8 for ; Tue, 25 May 2021 15:25:39 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7F9BB613AD for ; Tue, 25 May 2021 15:25:39 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7F9BB613AD Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=gx+G9reQLbUdTk+mkydTlMRiDxA/NC/IVojdd4RTlTM=; b=cGCq1PKreiGQWi 0emEbtrj7XK9z8jenEbz5sQWd605A0dtF8c12aB0ZC9kJRIaucAn0cPK3i+FcipaGQB4SNkydgeCx +kqvyWkGMcFCr1IZYxu8yuhQqdqQRbiDw1d6+vVK4JU0a2LfHcfTg+/VAXDYj12uWW0NrA98TU2df JVXLxvxj5AEom360PvG1k1t0wzMVgbD10OEiWlq0nk1TBAa4BhIabC2qGCEnUjWAH4j7IjlWp7H0A ggMAp4zQMkHSz3FHVM0nJDJkurCpl69VCxrz9AAAJ+HI0lrdnln//o5ZbVFqA1nfrZy6P1rOFFagU gyCEzYSJwwbX+zF9XFpQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYt6-005xBr-Bt; Tue, 25 May 2021 15:22:33 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmn-005ton-DO for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:04 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id BA4466142F; Tue, 25 May 2021 15:15:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955761; bh=EvkTo2K1PcCIoQ8/XfG+DFYRwxrDfvbuzCOjvHhGVVA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qB7Rb9GQYCJT7DcWUyR6ChOA8Xjy3y9N0mX6afK36FmQkI0DmFdHmDAWHbPN71RNo GV73kWJHQwapNF/07xw+4SlB6qj7ndZ4xIBLMevlKW6ILJCyw63fWvAY2lwGKSq+KU lV4O5vmV/gYreyg++dxUdv20YWg2qYBtzWu2PPmajOYkVwQqObJx0IHj5vU6LXxBbY hIynBXOh6Lt9CIwPBSJxVQX+4Sy89kH76WK75B4LW2AL9LIJ54PX7QcpdKbrDCePBT JtyGI2hrgqg7ZfOfU/aX6ARJSO5lLCwZUH69BdO5kAvo2uI/qtvnJFA7FEVlQCsGnJ HetFLjMhr+q4w== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , 
Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 12/22] sched: Split the guts of sched_setaffinity() into a helper function Date: Tue, 25 May 2021 16:14:22 +0100 Message-Id: <20210525151432.16875-13-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081601_521119_9F15A906 X-CRM114-Status: GOOD ( 16.92 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In preparation for replaying user affinity requests using a saved mask, split sched_setaffinity() up so that the initial task lookup and security checks are only performed when the request is coming directly from userspace. Signed-off-by: Will Deacon --- kernel/sched/core.c | 105 ++++++++++++++++++++++++-------------------- 1 file changed, 57 insertions(+), 48 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 9349b8ecbcf9..0b7faca947a9 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6784,53 +6784,22 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr, return retval; } -long sched_setaffinity(pid_t pid, const struct cpumask *in_mask) +static int +__sched_setaffinity(struct task_struct *p, const struct cpumask *mask) { - cpumask_var_t cpus_allowed, new_mask; - struct task_struct *p; int retval; + cpumask_var_t cpus_allowed, new_mask; - rcu_read_lock(); - - p = find_process_by_pid(pid); - if (!p) { - rcu_read_unlock(); - return -ESRCH; - } - - /* Prevent p going away */ - get_task_struct(p); - rcu_read_unlock(); + if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) + return -ENOMEM; - if (p->flags & PF_NO_SETAFFINITY) { - retval = -EINVAL; - goto out_put_task; - } - if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) { - retval = -ENOMEM; - goto out_put_task; - } if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) { retval = -ENOMEM; goto out_free_cpus_allowed; } - retval = -EPERM; - if (!check_same_owner(p)) { - rcu_read_lock(); - if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) { - rcu_read_unlock(); - goto out_free_new_mask; - } - rcu_read_unlock(); - } - - retval = security_task_setscheduler(p); - if (retval) - goto out_free_new_mask; - cpuset_cpus_allowed(p, cpus_allowed); - cpumask_and(new_mask, in_mask, cpus_allowed); + cpumask_and(new_mask, mask, cpus_allowed); /* * Since bandwidth control happens on root_domain basis, @@ -6851,23 +6820,63 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask) #endif again: retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK); + if (retval) + goto out_free_new_mask; - if (!retval) { - cpuset_cpus_allowed(p, cpus_allowed); - if (!cpumask_subset(new_mask, cpus_allowed)) { - /* - * We must have raced with a concurrent cpuset - * update. Just reset the cpus_allowed to the - * cpuset's cpus_allowed - */ - cpumask_copy(new_mask, cpus_allowed); - goto again; - } + cpuset_cpus_allowed(p, cpus_allowed); + if (!cpumask_subset(new_mask, cpus_allowed)) { + /* + * We must have raced with a concurrent cpuset update. + * Just reset the cpumask to the cpuset's cpus_allowed. 
+ */ + cpumask_copy(new_mask, cpus_allowed); + goto again; } + out_free_new_mask: free_cpumask_var(new_mask); out_free_cpus_allowed: free_cpumask_var(cpus_allowed); + return retval; +} + +long sched_setaffinity(pid_t pid, const struct cpumask *in_mask) +{ + struct task_struct *p; + int retval; + + rcu_read_lock(); + + p = find_process_by_pid(pid); + if (!p) { + rcu_read_unlock(); + return -ESRCH; + } + + /* Prevent p going away */ + get_task_struct(p); + rcu_read_unlock(); + + if (p->flags & PF_NO_SETAFFINITY) { + retval = -EINVAL; + goto out_put_task; + } + + if (!check_same_owner(p)) { + rcu_read_lock(); + if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) { + rcu_read_unlock(); + retval = -EPERM; + goto out_put_task; + } + rcu_read_unlock(); + } + + retval = security_task_setscheduler(p); + if (retval) + goto out_put_task; + + retval = __sched_setaffinity(p, in_mask); out_put_task: put_task_struct(p); return retval; From patchwork Tue May 25 15:14:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279281 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B56F6C4707F for ; Tue, 25 May 2021 15:27:32 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7DD09613F1 for ; Tue, 25 May 2021 15:27:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7DD09613F1 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=tw6riNpXUlsQXDvuLF5/4x4ErD0qmNilo3NsiVFhjos=; b=KmzTHh8pWiAumr 1sTwapBkeG51pO9A9zMdKGmJi2GUjzIWjeop8+BhPAoP62NALE4Xf9cFO/BrxizDA+RiiGbR0mXP2 eU82hnd8D4Z/fwd3betJc/zToDffJV2qrd3No39rjRljtNtiumKq6ArzvtlLQhHq14incql6e4SXD qQCUK2bh/g0aMcyzDEcN2BLr5X6RY/gWrkZM0LbNMV9oWIG+rr4EjySiIvmeZO8MeHSO98YkoEtpM wTfvUDkAcTMcvUcj1Tl9Xh48RpAizBaTUCuFCgC1Z+69xUJRfnwo+xj9t2QdjnDlISS9apHm6Mpth Nve2jcQB655nVWoDISHA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYuq-005y46-C2; Tue, 25 May 2021 15:24:21 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmr-005tqn-0P for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:07 +0000 Received: by mail.kernel.org (Postfix) 
with ESMTPSA id 70F976141B; Tue, 25 May 2021 15:16:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955764; bh=vjGbxepguKiUXN3zhU6rxtrlzr33c/K90Gk5U+u4xsw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=cGOSIu284mUOQFYkduVw4k7+/JEibs89xXyCrLC2+Pvaq4d0/uBiKJ20EtPp1bPTP QtAphSeh+IdX2FA4xzzAOhA3SBDr6WLiVo3b/8zlkYf7e5jFk/bZ+zam4hydIh/den HKnatb9P8w1f2nUdMEArGXAcEXa+edlRM2oeWqHudu8iF5XxAAMOOnLu63XcZc0dRp YKcG4uWxb1p86hjF5hNqMwIdWBW3Rcr2gzU7NVgJi2kPEbMadEqnI789gQ0P1SzV9H R21/vKvKnkhIv5327JqiVaMLVAxYdnU0TuDQMUAfWf4tbDpA1YVq5a9HsEmrT8YaU1 haD+HplaL9sSw== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 13/22] sched: Allow task CPU affinity to be restricted on asymmetric systems Date: Tue, 25 May 2021 16:14:23 +0100 Message-Id: <20210525151432.16875-14-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081605_141529_1CCFBDB8 X-CRM114-Status: GOOD ( 33.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Asymmetric systems may not offer the same level of userspace ISA support across all CPUs, meaning that some applications cannot be executed by some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do not feature support for 32-bit applications on both clusters. Although userspace can carefully manage the affinity masks for such tasks, one place where it is particularly problematic is execve() because the CPU on which the execve() is occurring may be incompatible with the new application image. In such a situation, it is desirable to restrict the affinity mask of the task and ensure that the new image is entered on a compatible CPU. From userspace's point of view, this looks the same as if the incompatible CPUs have been hotplugged off in the task's affinity mask. Similarly, if a subsequent execve() reverts to a compatible image, then the old affinity is restored if it is still valid. In preparation for restricting the affinity mask for compat tasks on arm64 systems without uniform support for 32-bit applications, introduce {force,relax}_compatible_cpus_allowed_ptr(), which respectively restrict and restore the affinity mask for a task based on the compatible CPUs. 
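As a rough sketch of how an architecture might pair the two new helpers around execve() (the hook below is hypothetical; the real arm64 wiring arrives later in the series): clamp the affinity when the incoming image cannot run everywhere the task is currently allowed, otherwise restore any mask that an earlier incompatible exec took away.

#include <linux/cpumask.h>
#include <linux/mmu_context.h>
#include <linux/sched.h>

/* Hypothetical arch-side hook; the name and call site are assumptions. */
static void hypothetical_adjust_exec_affinity(struct task_struct *p)
{
	/*
	 * If the new image cannot run on every CPU in the task's current
	 * affinity mask, restrict the mask now, saving the old request in
	 * p->user_cpus_ptr; otherwise try to restore the saved request.
	 */
	if (!cpumask_subset(p->cpus_ptr, task_cpu_possible_mask(p)))
		force_compatible_cpus_allowed_ptr(p);
	else
		relax_compatible_cpus_allowed_ptr(p);
}

Note that force_compatible_cpus_allowed_ptr() falls back to the cpuset hierarchy if the restricted intersection turns out to be empty, as the hunks below show.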
Reviewed-by: Quentin Perret Signed-off-by: Will Deacon --- include/linux/sched.h | 2 + kernel/sched/core.c | 173 ++++++++++++++++++++++++++++++++++++++---- kernel/sched/sched.h | 1 + 3 files changed, 160 insertions(+), 16 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index db32d4f7e5b3..91a6cfeae242 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1691,6 +1691,8 @@ extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask); extern int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src, int node); extern void release_user_cpus_ptr(struct task_struct *p); +extern void force_compatible_cpus_allowed_ptr(struct task_struct *p); +extern void relax_compatible_cpus_allowed_ptr(struct task_struct *p); #else static inline void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) { diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 0b7faca947a9..998ed1dbfd4f 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -2353,26 +2353,21 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag } /* - * Change a given task's CPU affinity. Migrate the thread to a - * proper CPU and schedule it away if the CPU it's executing on - * is removed from the allowed bitmask. - * - * NOTE: the caller must have a valid reference to the task, the - * task must not exit() & deallocate itself prematurely. The - * call is not atomic; no spinlocks may be held. + * Called with both p->pi_lock and rq->lock held; drops both before returning. */ -static int __set_cpus_allowed_ptr(struct task_struct *p, - const struct cpumask *new_mask, - u32 flags) +static int __set_cpus_allowed_ptr_locked(struct task_struct *p, + const struct cpumask *new_mask, + u32 flags, + struct rq *rq, + struct rq_flags *rf) + __releases(rq->lock) + __releases(p->pi_lock) { const struct cpumask *cpu_valid_mask = cpu_active_mask; const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p); unsigned int dest_cpu; - struct rq_flags rf; - struct rq *rq; int ret = 0; - rq = task_rq_lock(p, &rf); update_rq_clock(rq); if (p->flags & PF_KTHREAD || is_migration_disabled(p)) { @@ -2426,20 +2421,166 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, __do_set_cpus_allowed(p, new_mask, flags); - return affine_move_task(rq, p, &rf, dest_cpu, flags); + if (flags & SCA_USER) + release_user_cpus_ptr(p); + + return affine_move_task(rq, p, rf, dest_cpu, flags); out: - task_rq_unlock(rq, p, &rf); + task_rq_unlock(rq, p, rf); return ret; } +/* + * Change a given task's CPU affinity. Migrate the thread to a + * proper CPU and schedule it away if the CPU it's executing on + * is removed from the allowed bitmask. + * + * NOTE: the caller must have a valid reference to the task, the + * task must not exit() & deallocate itself prematurely. The + * call is not atomic; no spinlocks may be held. 
+ */ +static int __set_cpus_allowed_ptr(struct task_struct *p, + const struct cpumask *new_mask, u32 flags) +{ + struct rq_flags rf; + struct rq *rq; + + rq = task_rq_lock(p, &rf); + return __set_cpus_allowed_ptr_locked(p, new_mask, flags, rq, &rf); +} + int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask) { return __set_cpus_allowed_ptr(p, new_mask, 0); } EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr); +/* + * Change a given task's CPU affinity to the intersection of its current + * affinity mask and @subset_mask, writing the resulting mask to @new_mask + * and pointing @p->user_cpus_ptr to a copy of the old mask. + * If the resulting mask is empty, leave the affinity unchanged and return + * -EINVAL. + */ +static int restrict_cpus_allowed_ptr(struct task_struct *p, + struct cpumask *new_mask, + const struct cpumask *subset_mask) +{ + struct rq_flags rf; + struct rq *rq; + int err; + struct cpumask *user_mask = NULL; + + if (!p->user_cpus_ptr) + user_mask = kmalloc(cpumask_size(), GFP_KERNEL); + + rq = task_rq_lock(p, &rf); + + /* + * Forcefully restricting the affinity of a deadline task is + * likely to cause problems, so fail and noisily override the + * mask entirely. + */ + if (task_has_dl_policy(p) && dl_bandwidth_enabled()) { + err = -EPERM; + goto err_unlock; + } + + if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) { + err = -EINVAL; + goto err_unlock; + } + + /* + * We're about to butcher the task affinity, so keep track of what + * the user asked for in case we're able to restore it later on. + */ + if (user_mask) { + cpumask_copy(user_mask, p->cpus_ptr); + p->user_cpus_ptr = user_mask; + } + + return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf); + +err_unlock: + task_rq_unlock(rq, p, &rf); + kfree(user_mask); + return err; +} + +/* + * Restrict the CPU affinity of task @p so that it is a subset of + * task_cpu_possible_mask() and point @p->user_cpu_ptr to a copy of the + * old affinity mask. If the resulting mask is empty, we warn and walk + * up the cpuset hierarchy until we find a suitable mask. + */ +void force_compatible_cpus_allowed_ptr(struct task_struct *p) +{ + cpumask_var_t new_mask; + const struct cpumask *override_mask = task_cpu_possible_mask(p); + + alloc_cpumask_var(&new_mask, GFP_KERNEL); + + /* + * __migrate_task() can fail silently in the face of concurrent + * offlining of the chosen destination CPU, so take the hotplug + * lock to ensure that the migration succeeds. + */ + cpus_read_lock(); + if (!cpumask_available(new_mask)) + goto out_set_mask; + + if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask)) + goto out_free_mask; + + /* + * We failed to find a valid subset of the affinity mask for the + * task, so override it based on its cpuset hierarchy. + */ + cpuset_cpus_allowed(p, new_mask); + override_mask = new_mask; + +out_set_mask: + if (printk_ratelimit()) { + printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n", + task_pid_nr(p), p->comm, + cpumask_pr_args(override_mask)); + } + + WARN_ON(set_cpus_allowed_ptr(p, override_mask)); +out_free_mask: + cpus_read_unlock(); + free_cpumask_var(new_mask); +} + +static int +__sched_setaffinity(struct task_struct *p, const struct cpumask *mask); + +/* + * Restore the affinity of a task @p which was previously restricted by a + * call to force_compatible_cpus_allowed_ptr(). This will clear (and free) + * @p->user_cpus_ptr. 
+ */ +void relax_compatible_cpus_allowed_ptr(struct task_struct *p) +{ + unsigned long flags; + struct cpumask *mask = p->user_cpus_ptr; + + /* + * Try to restore the old affinity mask. If this fails, then + * we free the mask explicitly to avoid it being inherited across + * a subsequent fork(). + */ + if (!mask || !__sched_setaffinity(p, mask)) + return; + + raw_spin_lock_irqsave(&p->pi_lock, flags); + release_user_cpus_ptr(p); + raw_spin_unlock_irqrestore(&p->pi_lock, flags); +} + void set_task_cpu(struct task_struct *p, unsigned int new_cpu) { #ifdef CONFIG_SCHED_DEBUG @@ -6819,7 +6960,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask) } #endif again: - retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK); + retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | SCA_USER); if (retval) goto out_free_new_mask; diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index a189bec13729..29c35b51411b 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -1956,6 +1956,7 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq); #define SCA_CHECK 0x01 #define SCA_MIGRATE_DISABLE 0x02 #define SCA_MIGRATE_ENABLE 0x04 +#define SCA_USER 0x08 #ifdef CONFIG_SMP From patchwork Tue May 25 15:14:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279283 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4522BC2B9F8 for ; Tue, 25 May 2021 15:28:12 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0BC2661260 for ; Tue, 25 May 2021 15:28:12 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0BC2661260 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=2o+KRi2mqM3Aw3Z1ysDDB46RpdCSj1xvg4kXStb5w6U=; b=mCwfl2Tlpc+WEZ Nq/pVNnVzAAofZg1FrIsEE3oPjqoUy4ZNTPeQYSLFRTUY7qob02uTgWpwGXZZ+0fcDG1gk8DAQYfP wlajxiONYkykymTHzFMmHxNm/tb4qFnGmGHTU1xcKG5PQ8u9B+q42mA3c9Bniqcs57hJd47TVE6Xv F2HoOnrscT0+1C+MZZP4JD9TdOP2aD0iD3s5GDdCQ7mXVX74oVFdADqdxxFd4Ksfye5kpbsGRoooq 5o6ucnvRa9fykFILHB9f0uQboVyApVobf7HZRZjWn06BoeYnDiQDgqc0o3DIkjLgIztwoXGdEYlOi G6uL+MwPdO+Fll5RnJnw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYwB-005yek-7W; Tue, 
25 May 2021 15:25:45 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmu-005tsY-N8 for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:10 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 2519C6143A; Tue, 25 May 2021 15:16:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955768; bh=YbjV1Pm1usgwdcOO9+IaxC2SS5ZVjGCuAEGiFNBbt6k=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=oS9TScc6s92VkbZ5RKwyB1tFAVi1DmDHC5wFUe/lfaSXE96vGZHRnPjwzcEv+F+WR 0zdcJZJdfvRPuh+TcrTjwkfrhxfboAnnind4HqE59g8R7dww1B2YJ4fxTxY4V8u0qR zTTUUzF17oYJbFBTelpIPWa95Ngi+G1T/EheMo+k6e52kCrzwyOx9xstA12zKeQxTk g1GTMJ/qlUg7+wM2mPGPKghy28cddMpAfcYSVFY6+gMoyOTNnOFNNpUfOHITtcGMWE BvoctnRh7yDUlyq0hWNBf9Sle3BtGTGh3REnOZQoS4KKqdQ7CTO6cymMXGdv20/knh PrTfBsD8Xcl7A== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 14/22] sched: Introduce task_cpus_dl_admissible() to check proposed affinity Date: Tue, 25 May 2021 16:14:24 +0100 Message-Id: <20210525151432.16875-15-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081608_847624_B760D7F7 X-CRM114-Status: GOOD ( 15.25 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In preparation for restricting the affinity of a task during execve() on arm64, introduce a new task_cpus_dl_admissible() helper function to give an indication as to whether the restricted mask is admissible for a deadline task. 
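A minimal sketch of how a caller might consult the new helper before shrinking a task's affinity (the wrapper below is hypothetical; only task_cpus_dl_admissible() and set_cpus_allowed_ptr() come from the kernel): refuse with -EBUSY, as __sched_setaffinity() now does, rather than break deadline bandwidth accounting on the root domain.

#include <linux/cpumask.h>
#include <linux/sched.h>

/* Hypothetical caller, shown only to demonstrate the intended usage. */
static int hypothetical_try_restrict_affinity(struct task_struct *p,
					       const struct cpumask *new_mask)
{
	/* A -deadline task's new mask must still cover its root_domain span. */
	if (!task_cpus_dl_admissible(p, new_mask))
		return -EBUSY;

	return set_cpus_allowed_ptr(p, new_mask);
}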
Signed-off-by: Will Deacon --- include/linux/sched.h | 6 ++++++ kernel/sched/core.c | 44 +++++++++++++++++++++++++++---------------- 2 files changed, 34 insertions(+), 16 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 91a6cfeae242..9b17d8cfa6ef 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1691,6 +1691,7 @@ extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask); extern int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src, int node); extern void release_user_cpus_ptr(struct task_struct *p); +extern bool task_cpus_dl_admissible(struct task_struct *p, const struct cpumask *mask); extern void force_compatible_cpus_allowed_ptr(struct task_struct *p); extern void relax_compatible_cpus_allowed_ptr(struct task_struct *p); #else @@ -1713,6 +1714,11 @@ static inline void release_user_cpus_ptr(struct task_struct *p) { WARN_ON(p->user_cpus_ptr); } + +static inline bool task_cpus_dl_admissible(struct task_struct *p, const struct cpumask *mask) +{ + return true; +} #endif extern int yield_to(struct task_struct *p, bool preempt); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 998ed1dbfd4f..42e2aecf087c 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6925,6 +6925,31 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr, return retval; } +#ifdef CONFIG_SMP +bool task_cpus_dl_admissible(struct task_struct *p, const struct cpumask *mask) +{ + bool ret; + + /* + * If the task isn't a deadline task or admission control is + * disabled then we don't care about affinity changes. + */ + if (!task_has_dl_policy(p) || !dl_bandwidth_enabled()) + return true; + + /* + * Since bandwidth control happens on root_domain basis, + * if admission test is enabled, we only admit -deadline + * tasks allowed to run on all the CPUs in the task's + * root_domain. + */ + rcu_read_lock(); + ret = cpumask_subset(task_rq(p)->rd->span, mask); + rcu_read_unlock(); + return ret; +} +#endif + static int __sched_setaffinity(struct task_struct *p, const struct cpumask *mask) { @@ -6942,23 +6967,10 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask) cpuset_cpus_allowed(p, cpus_allowed); cpumask_and(new_mask, mask, cpus_allowed); - /* - * Since bandwidth control happens on root_domain basis, - * if admission test is enabled, we only admit -deadline - * tasks allowed to run on all the CPUs in the task's - * root_domain. 
- */ -#ifdef CONFIG_SMP - if (task_has_dl_policy(p) && dl_bandwidth_enabled()) { - rcu_read_lock(); - if (!cpumask_subset(task_rq(p)->rd->span, new_mask)) { - retval = -EBUSY; - rcu_read_unlock(); - goto out_free_new_mask; - } - rcu_read_unlock(); + if (!task_cpus_dl_admissible(p, new_mask)) { + retval = -EBUSY; + goto out_free_new_mask; } -#endif again: retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | SCA_USER); if (retval) From patchwork Tue May 25 15:14:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279285 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21395C2B9F8 for ; Tue, 25 May 2021 15:29:37 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id E525E61260 for ; Tue, 25 May 2021 15:29:36 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E525E61260 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=fzfv0DtpLS6zk/aV5nOdEpi6bBS6Kl1awEe/E3Lqc6A=; b=drXrqu/tepLL1T maf56UFUhiQ43uUmnPpFwixcCNNX+1Pn1OJNXJGK45o1W6znvECK7lMV5H1PZJp7AW727pDjD9Nk9 1K3yh4AEB27NxrwcToRze7O4MU8YUXz72c+2lRbY3PZJk0ADnFCNIbmK91rvoGBfexA0R1Id91uY5 o6q/z0sKr+ZuaQ4FYCKJGfwXgD+/BVK3iIc1qs9TbCTHBhgimexYYnRUXUhbmiWvBBnW3q3x+k/15 FOwhLPxDL3poqaaeD/RclVD4/ghOi7ttJz44JIaC2mr8ogG/QTac7ylEXU8k7nAUlnhb0lV4/075j XUkiKuYDR/K/2qe6RDsg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYy2-005zXk-QX; Tue, 25 May 2021 15:27:39 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYmy-005tua-Fk for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:14 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id CD71161429; Tue, 25 May 2021 15:16:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955772; bh=88QGO6GFkFrMZ/mZqmBjqTxJcB37zYt9m+f1u0tTDnk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=esDQMvGCE7znUlKphB1UaWc0uh205v2ZgAcri43giJdHCWz6R2J7C8OEZZ2CSABIB WupYShsYBNvJQ70LzZOzUPzSrlKp1eY8CUK/t7c1D7HMgxrEVdz7A6OSbeGoEdGFhp JLvTSScuGbwFweldZwbbqnl5ZgkFpg0WQMqp9StPTulQ6Wa5X+wWFnudjr5kzd5Kkr 
G9br2puupvhiJR6PiO4SWz1EnnVBGrRHGtPp5omRX55qeuxGz+jPa+XS2qj8Wiu2x3 s2guRCvTLgEiu0ylOt/rvOTQpDN/zUfBOqRQQFCtZ90iaHyW4u8RmTJYiyM69EtPk+ hFDrTktYzflKQ== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 15/22] freezer: Add frozen_or_skipped() helper function Date: Tue, 25 May 2021 16:14:25 +0100 Message-Id: <20210525151432.16875-16-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081612_600678_F744A070 X-CRM114-Status: GOOD ( 13.96 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Occasionally it is necessary to see if a task is either frozen or sleeping in the PF_FREEZER_SKIP state. In preparation for adding additional users of this check, introduce a frozen_or_skipped() helper function and convert the hung task detector over to using it. Signed-off-by: Will Deacon --- include/linux/freezer.h | 6 ++++++ kernel/hung_task.c | 4 ++-- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/include/linux/freezer.h b/include/linux/freezer.h index 0621c5f86c39..b9e1e4200101 100644 --- a/include/linux/freezer.h +++ b/include/linux/freezer.h @@ -27,6 +27,11 @@ static inline bool frozen(struct task_struct *p) return p->flags & PF_FROZEN; } +static inline bool frozen_or_skipped(struct task_struct *p) +{ + return p->flags & (PF_FROZEN | PF_FREEZER_SKIP); +} + extern bool freezing_slow_path(struct task_struct *p); /* @@ -270,6 +275,7 @@ static inline int freezable_schedule_hrtimeout_range(ktime_t *expires, #else /* !CONFIG_FREEZER */ static inline bool frozen(struct task_struct *p) { return false; } +static inline bool frozen_or_skipped(struct task_struct *p) { return false; } static inline bool freezing(struct task_struct *p) { return false; } static inline void __thaw_task(struct task_struct *t) {} diff --git a/kernel/hung_task.c b/kernel/hung_task.c index 396ebaebea3f..d2d4c4159b23 100644 --- a/kernel/hung_task.c +++ b/kernel/hung_task.c @@ -92,8 +92,8 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout) * Ensure the task is not frozen. * Also, skip vfork and any other user process that freezer should skip. 
*/ - if (unlikely(t->flags & (PF_FROZEN | PF_FREEZER_SKIP))) - return; + if (unlikely(frozen_or_skipped(t))) + return; /* * When a freshly created task is scheduled once, changes its state to From patchwork Tue May 25 15:14:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279287 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EEF32C2B9F8 for ; Tue, 25 May 2021 15:31:28 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id AE82C613A9 for ; Tue, 25 May 2021 15:31:28 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AE82C613A9 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=VgLEna1HjQHGyferY5M0e4dyiE+pOf7wghUAAt+/s6w=; b=GVIK69D4TNlGBR gk8hu39Oc+9LT+x4GZPENJDXzJTl5J0qKSb4MOHdMgDgW+MJ717Eju6gSuMMnpFJygARR3o6JLFnz G8JXdqX+BrAfdD7RTefNNi+Q+3yjtuOscY5Ksqp04YkYRgYxxOSwaq4qL7LuLv8SUsmbwaScjU11Y KOs95bZ3rPLnBFuYthwrO7naG9oGojWQ6ITS7nW+mFWK8Ob8tf8eZOvorBputBFhIcINjruGWhHj0 BP95AGKhltSKolsoH8dKcYCFlNsG7kJxeNeZGn43tgreCT2GGO9+sm6BGAIrLhjnogpK39JM8Sf09 wldbv3f+5g1MqULsOtyQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYzh-0060O0-7J; Tue, 25 May 2021 15:29:22 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYn2-005twk-9y for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:18 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 82DA961438; Tue, 25 May 2021 15:16:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955775; bh=9UU66YoFU+MWE+7w60LmGA05+Or0mzIHMNsG2tNk4ns=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dlgUzyKC5gdxYik6ckOtyZdo4wMazJjJMhXbLCAv5Wb6iDO8HuLImvRa49/rWpjdq CIyHNVMBPrJvm2CwfaM09OvMd+bCWAtdcFd0mvmOT8cxCvuuCSLY88zTy79g8qbkKi AP4WzYGTLOzJ79AFR8qcglc6lCIBivBAESmfVpdPzd+lSbxj/Dexmh1F8v/R9a0IU/ w3k/9zVEVNyHEsYrzgSAZ2vDf8ZotlgNwlrPAOW7hcSoIk10ydJci0SR5A2bhb4WEo 2p9LRqht6LJjJ3w0/1xiAwUuqZoVdSCG2k0Wa383AHvG9w7sorKn8QAsJ9d86n08V2 PncpM42zvfU5Q== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon 
, Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 16/22] sched: Defer wakeup in ttwu() for unschedulable frozen tasks Date: Tue, 25 May 2021 16:14:26 +0100 Message-Id: <20210525151432.16875-17-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081616_452171_A92881E2 X-CRM114-Status: GOOD ( 21.46 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Asymmetric systems may not offer the same level of userspace ISA support across all CPUs, meaning that some applications cannot be executed by some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do not feature support for 32-bit applications on both clusters. Although we take care to prevent explicit hot-unplug of all 32-bit capable CPUs on such a system, this is required when suspending on some SoCs where the firmware mandates that the suspend/resume operation is handled by CPU 0, which may not be capable of running 32-bit tasks. Consequently, there is a window on the resume path where no 32-bit capable CPUs are available for scheduling and waking up a 32-bit task will result in a scheduler BUG() due to failure of select_fallback_rq(): | kernel BUG at kernel/sched/core.c:2858! | Internal error: Oops - BUG: 0 [#1] PREEMPT SMP | ... | Call trace: | select_fallback_rq+0x4b0/0x4e4 | try_to_wake_up.llvm.4388853297126348405+0x460/0x5b0 | default_wake_function+0x1c/0x30 | autoremove_wake_function+0x1c/0x60 | __wake_up_common.llvm.11763074518265335900+0x100/0x1b8 | __wake_up+0x78/0xc4 | ep_poll_callback+0x20c/0x3fc Prevent wakeups of unschedulable frozen tasks in ttwu() and instead defer the wakeup to __thaw_tasks(), which runs only once all the secondary CPUs are back online. Signed-off-by: Will Deacon --- kernel/freezer.c | 10 +++++++++- kernel/sched/core.c | 13 +++++++++++++ 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/kernel/freezer.c b/kernel/freezer.c index dc520f01f99d..8f3d950c2a87 100644 --- a/kernel/freezer.c +++ b/kernel/freezer.c @@ -11,6 +11,7 @@ #include #include #include +#include /* total number of freezing conditions in effect */ atomic_t system_freezing_cnt = ATOMIC_INIT(0); @@ -146,9 +147,16 @@ bool freeze_task(struct task_struct *p) void __thaw_task(struct task_struct *p) { unsigned long flags; + const struct cpumask *mask = task_cpu_possible_mask(p); spin_lock_irqsave(&freezer_lock, flags); - if (frozen(p)) + /* + * Wake up frozen tasks. On asymmetric systems where tasks cannot + * run on all CPUs, ttwu() may have deferred a wakeup generated + * before thaw_secondary_cpus() had completed so we generate + * additional wakeups here for tasks in the PF_FREEZER_SKIP state. 
+ */ + if (frozen(p) || (frozen_or_skipped(p) && mask != cpu_possible_mask)) wake_up_process(p); spin_unlock_irqrestore(&freezer_lock, flags); } diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 42e2aecf087c..6cb9677d635a 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -3529,6 +3529,19 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) if (!(p->state & state)) goto unlock; +#ifdef CONFIG_FREEZER + /* + * If we're going to wake up a thread which may be frozen, then + * we can only do so if we have an active CPU which is capable of + * running it. This may not be the case when resuming from suspend, + * as the secondary CPUs may not yet be back online. See __thaw_task() + * for the actual wakeup. + */ + if (unlikely(frozen_or_skipped(p)) && + !cpumask_intersects(cpu_active_mask, task_cpu_possible_mask(p))) + goto unlock; +#endif + trace_sched_waking(p); /* We're going to change ->state: */ From patchwork Tue May 25 15:14:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279303 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6477BC2B9F8 for ; Tue, 25 May 2021 15:33:03 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 2D26B61260 for ; Tue, 25 May 2021 15:33:03 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2D26B61260 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=IpK3T8L7Zzx6gJEEHDGZlxbwg5N3HtTfZdCwqCDUyPo=; b=O15FBpvuEfLj5y RXK40NbBiyCK4DfH7mGlEu31I5HGgiBJwxDqy/KiElj27Xy6+Vhlbc2s3bezPjmx6nque/GFLp5ky kCpeyOAQILQ71YnBNkw4O2Ny5aYkkWqJdY+3f+eaz0YWhRodBImqU9UK8WXd1anumgOZ/2wcj8udn TmzLa0svowN7LfSpnyjzUDKK1AqvXVx7uQei6gbx15kR9iw/HUNTeXwacRuLGVBGKz/47ReZEj7iY dhbbrQsR6f3JovvSMx8TbAxvGaxFcgnEvenNXPHC8hD+gPWurzysCqCcf2tEB17rIsEEo+Q3v3zGX gontKBQh+GR4cor0SSlg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llZ1L-00614R-2i; Tue, 25 May 2021 15:31:03 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYn5-005tz0-TK for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:21 +0000 Received: by mail.kernel.org 
(Postfix) with ESMTPSA id 4AE7761436; Tue, 25 May 2021 15:16:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955779; bh=2Tu/2JiXMKcPPPKmdOfa8s5PZpHuaTBt8RxTqRmrXV8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QjROUIYpRep4wk4NrxWkdXd9BKzpYBLggho3ynNEsiA7QOFn+2BsNSDSt8Bc5rRgn U5zTEKARsOYWhGzW3bhHU7WspRpp+kK2IqA6p1yJsw702ImoI2d3jJMdmkNxOl77yc IAhB3lCf++kFf7063e3+g3HG9eDGc+D0UV020MhLWq8f7c+Yu19aVGWdOqux6/eRYO MyGaQg3+xLJ0KdKBO1acQVNRCnI8XDemC3ZWXStjSygTMMpY0TZeSUq9W4yA/26Uh/ Y/7oQAGTeOv+uz2swF+3qxWd6Pzww7VjOofJqK2qbl12lu+P4s/VoAfe3LinlsRGFM WqWT2wKoXmPVw== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 17/22] arm64: Implement task_cpu_possible_mask() Date: Tue, 25 May 2021 16:14:27 +0100 Message-Id: <20210525151432.16875-18-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081620_014182_9DDF241E X-CRM114-Status: UNSURE ( 9.75 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Provide an implementation of task_cpu_possible_mask() so that we can prevent 64-bit-only cores being added to the 'cpus_mask' for compat tasks on systems with mismatched 32-bit support at EL0, Reviewed-by: Catalin Marinas Signed-off-by: Will Deacon --- arch/arm64/include/asm/mmu_context.h | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index d3cef9133539..bb9b7510f334 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -231,6 +231,19 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next, update_saved_ttbr0(tsk, next); } +static inline const struct cpumask * +task_cpu_possible_mask(struct task_struct *p) +{ + if (!static_branch_unlikely(&arm64_mismatched_32bit_el0)) + return cpu_possible_mask; + + if (!is_compat_thread(task_thread_info(p))) + return cpu_possible_mask; + + return system_32bit_el0_cpumask(); +} +#define task_cpu_possible_mask task_cpu_possible_mask + void verify_cpu_asid_bits(void); void post_ttbr_update_workaround(void); From patchwork Tue May 25 15:14:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279305 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no 
version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7DF1CC2B9F8 for ; Tue, 25 May 2021 15:34:39 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3850760FDA for ; Tue, 25 May 2021 15:34:39 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3850760FDA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=67II0sIAF2Jv8bRtS/LjLSKSoh5PWQDpFS3dxv1sl/A=; b=o7INGCdURp49su B7J2eUAwQvG/Y2y2QLxCL4CtJ+AMoDGetk1V0nuMy759c8+Jsz/BirFR3UPpxUZ32Z5fL/cdHZf0P dzEWJWs5+bG7uXCNNKvPb3WGhKV8FKf7SghOBd4nMPXJ527wKialQFE1kqGkbZXu5wqTgcF1nUtss g8BllZybPngcxFaR/xc/N+u2QSO1n+7Vj+PSKZaAhisGBSxTcmcoSgMo8O/UxkxmVFZeggX0xsw4i ssPH+GWweZMvO2R7p1GtsK4d8yJmEgNCxVa2hCSA5ar8KEJvGw3PoGZj8tA+0h8d4h73ZptR9nyN9 3p/WOR7rRi8L06zyNCOg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llZ2j-0061cj-OT; Tue, 25 May 2021 15:32:30 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYn9-005u1G-K6 for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:25 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 0140F61430; Tue, 25 May 2021 15:16:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955783; bh=Rpqpc0SYApg4oyVbkj8g4oWZvRg7aFPkP29VROgy2Do=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nfYGvdyHQIUSHl/IkXr5tQbmPkNQ3Xln1tuLhKfhdksH2JPvtJMOMyK6suHC6vQAl tKEJfxn95j9At/7/oenmNdOtacD9gHKDY089TYgHdKlEWRNvxZ67m7CQOOkQ5vbLLQ AfmL4k6If26rsAMDMJ1DyL/irDtkCYY1PsIGk+/ZJ2exGBlLKUeX4TkzacNMlwH7dY i5RUG/r2OMuITgtGGLayvO7AQ9xbKA/7d1KX1BiiueaJPWgAv3BjCJS6bzYBPK2GRy LVOpVFTo5nMACd31WD1uP3J8Z+oa+MqkjlvFQpB8iE4yZGsFL/CMd55NCHUb8rEkOP HlgE/rbLgyQzw== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. 
Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 18/22] arm64: exec: Adjust affinity for compat tasks with mismatched 32-bit EL0 Date: Tue, 25 May 2021 16:14:28 +0100 Message-Id: <20210525151432.16875-19-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081623_852919_66DBA9CA X-CRM114-Status: GOOD ( 17.96 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When exec'ing a 32-bit task on a system with mismatched support for 32-bit EL0, try to ensure that it starts life on a CPU that can actually run it. Similarly, when exec'ing a 64-bit task on such a system, try to restore the old affinity mask if it was previously restricted. Reviewed-by: Quentin Perret Signed-off-by: Will Deacon --- arch/arm64/include/asm/elf.h | 6 ++---- arch/arm64/kernel/process.c | 39 +++++++++++++++++++++++++++++++++++- 2 files changed, 40 insertions(+), 5 deletions(-) diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h index 8d1c8dcb87fd..97932fbf973d 100644 --- a/arch/arm64/include/asm/elf.h +++ b/arch/arm64/include/asm/elf.h @@ -213,10 +213,8 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG]; /* AArch32 EABI. */ #define EF_ARM_EABI_MASK 0xff000000 -#define compat_elf_check_arch(x) (system_supports_32bit_el0() && \ - ((x)->e_machine == EM_ARM) && \ - ((x)->e_flags & EF_ARM_EABI_MASK)) - +int compat_elf_check_arch(const struct elf32_hdr *); +#define compat_elf_check_arch compat_elf_check_arch #define compat_start_thread compat_start_thread /* * Unlike the native SET_PERSONALITY macro, the compat version maintains diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index f4a91bf1ce0c..3aea06fdd1f9 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -638,6 +639,28 @@ unsigned long arch_align_stack(unsigned long sp) return sp & ~0xf; } +#ifdef CONFIG_COMPAT +int compat_elf_check_arch(const struct elf32_hdr *hdr) +{ + if (!system_supports_32bit_el0()) + return false; + + if ((hdr)->e_machine != EM_ARM) + return false; + + if (!((hdr)->e_flags & EF_ARM_EABI_MASK)) + return false; + + /* + * Prevent execve() of a 32-bit program from a deadline task + * if the restricted affinity mask would be inadmissible on an + * asymmetric system. + */ + return !static_branch_unlikely(&arm64_mismatched_32bit_el0) || + task_cpus_dl_admissible(current, system_32bit_el0_cpumask()); +} +#endif + /* * Called from setup_new_exec() after (COMPAT_)SET_PERSONALITY. */ @@ -647,8 +670,22 @@ void arch_setup_new_exec(void) if (is_compat_task()) { mmflags = MMCF_AARCH32; - if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) + + /* + * Restrict the CPU affinity mask for a 32-bit task so that + * it contains only 32-bit-capable CPUs. + * + * From the perspective of the task, this looks similar to + * what would happen if the 64-bit-only CPUs were hot-unplugged + * at the point of execve(), although we try a bit harder to + * honour the cpuset hierarchy. 
+ */ + if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) { + force_compatible_cpus_allowed_ptr(current); set_tsk_thread_flag(current, TIF_NOTIFY_RESUME); + } + } else if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) { + relax_compatible_cpus_allowed_ptr(current); } current->mm->context.flags = mmflags; From patchwork Tue May 25 15:14:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279307 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 13E8BC2B9F8 for ; Tue, 25 May 2021 15:36:23 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D3BC860FDA for ; Tue, 25 May 2021 15:36:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D3BC860FDA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=7i/1PtY6vI1lSlQXJo7EMjTZQMoDSwtPMsTf6KZjs3Q=; b=DzuSNi4L6svn5e wqV/xo5mPtb3FJpuHpansfLHe2fDdCwZdAfPBMwGObpxqxTnZq97psNsB3QxlPDht3LSENY3HrnjU t4BXOd5zVrfgQXEctCsN1u2SDE1QmnKg3hoV1fxQ9bSer43MeU51scXc1vUTm/SZ9vw5C9Yhl+Vzo 57N0DXqvOMkx1essBNE4FWU+nYVPxGH0o1PvAiRKs/5fX9GeNe7dZ1jMfMiBVZrJStXZ/jjx5rgzQ SydsdepGQch0y16mLJExEO34dF5CC0zIGJH5TOL+I6vm27k0ldqvyd5CE/Q4iwNEDowy1SydVufIr QH5g1VKM8Id9nhnCfhUA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llZ4T-0062LP-7m; Tue, 25 May 2021 15:34:18 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYnD-005u3q-Ar for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:28 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id AEB816141C; Tue, 25 May 2021 15:16:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955787; bh=nSPy9SuPb1FO/mDztTPLt4/d9CdHbLFYERw4tXoz0Kw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BJ+pNO4rqX8DQBMEkPtuN0vy2sWPDrc/M+7cHmrCj4P4mgUDhRtJDPrtQgdXQ/Nqu ZFD/JFAFCDoH2CkInfTtjAREMB+wvekyyDjT6grP2FTu5u8eoMq8dxdlxSovV3WnXS m5zbN/RqIE0lTwos1bBeKb/k6kwZqMawypoFd1QT9/N74SEYZtKBSyGYisyP3qE82N m9gCDuNZK1rFJemOea+/Zzm39lM7j9Gbi1xdp4I7EV0ESxz7rBDZPNaqG5cvdXNRMT W1rM61F7T2R7Rk3MoA75ZH7MK+T0EWQyUm1lQs9uj8K/Y+Fi+AMuOI+nmJz+li7Fhr pKsFDcXgooLyQ== From: 
Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 19/22] arm64: Prevent offlining first CPU with 32-bit EL0 on mismatched system Date: Tue, 25 May 2021 16:14:29 +0100 Message-Id: <20210525151432.16875-20-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081627_485905_6119BFBE X-CRM114-Status: GOOD ( 15.41 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org If we want to support 32-bit applications, then when we identify a CPU with mismatched 32-bit EL0 support we must ensure that we will always have an active 32-bit CPU available to us from then on. This is important for the scheduler, because is_cpu_allowed() will be constrained to 32-bit CPUs for compat tasks and forced migration due to a hotplug event will hang if no 32-bit CPUs are available. On detecting a mismatch, prevent offlining of either the mismatching CPU if it is 32-bit capable, or find the first active 32-bit capable CPU otherwise. Reviewed-by: Catalin Marinas Signed-off-by: Will Deacon --- arch/arm64/kernel/cpufeature.c | 20 +++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 959442f76ed7..72efdc611b14 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2896,15 +2896,33 @@ void __init setup_cpu_features(void) static int enable_mismatched_32bit_el0(unsigned int cpu) { + static int lucky_winner = -1; + struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu); bool cpu_32bit = id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0); if (cpu_32bit) { cpumask_set_cpu(cpu, cpu_32bit_el0_mask); static_branch_enable_cpuslocked(&arm64_mismatched_32bit_el0); - setup_elf_hwcaps(compat_elf_hwcaps); } + if (cpumask_test_cpu(0, cpu_32bit_el0_mask) == cpu_32bit) + return 0; + + if (lucky_winner >= 0) + return 0; + + /* + * We've detected a mismatch. We need to keep one of our CPUs with + * 32-bit EL0 online so that is_cpu_allowed() doesn't end up rejecting + * every CPU in the system for a 32-bit task. + */ + lucky_winner = cpu_32bit ? 
cpu : cpumask_any_and(cpu_32bit_el0_mask, + cpu_active_mask); + get_cpu_device(lucky_winner)->offline_disabled = true; + setup_elf_hwcaps(compat_elf_hwcaps); + pr_info("Asymmetric 32-bit EL0 support detected on CPU %u; CPU hot-unplug disabled on CPU %u\n", + cpu, lucky_winner); return 0; } From patchwork Tue May 25 15:14:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279331 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2B20C2B9F8 for ; Tue, 25 May 2021 15:38:05 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id A0F9061360 for ; Tue, 25 May 2021 15:38:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A0F9061360 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=kI1FIPlUYinp2lRaz9x6a2/dBkIT29yQ07d+sx83g5I=; b=GwyuosghQQFcJ+ TVSK8JU4/0pTrcQD+6XA+jN62brp8VYwLFN1M6f8gVHwE4piRqPgjs+z1aO0akUnqnDz9Xc3v8Bz4 0rZxWrsJCbd2xzxto1SQkprmdTbnS88kzClG2o3ASMdIH5WiA0byI0DL1OAHI3rzEeJ91RIqCLQ24 mB0FS/BfbMS/87Q8ycbFX7/7NMhSWonaf9dQ8OlSUPZLFexEWrQDbZG3+DfetO0NuPooCvMeIqtG5 vQjD6pcUCdrq6kRL6j6jKt934KDE8b/Te9MbayV8uNsAvIM7SJx2o9gHGV7je1bocWxrX1zfnFiLe b7skGHlV7gyrFk8ZRSFw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llZ5u-0062zQ-HQ; Tue, 25 May 2021 15:35:47 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYnH-005u69-3f for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:32 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 72E906141D; Tue, 25 May 2021 15:16:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955790; bh=+/Imnr+Lo2MHpB6ujnUBakmFLkmdTTKNQf8pvMFeGqo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=CMsDJnKhwUnXSsyJXLrzpp4HjoVRD3S63OQWe/qSC1xxp0FfgSlshU7VAV4lB9tlt a2vKC55LaHtqUTG6HPX6i/f5m0GoX3SN0/oQLTYycrccS+Fupt6spRFqHE+Pt2EbjX FJq2oAMDuRRmqfNNAVfjZQ3z3HbiBJvKzlYQByFCIufApdE3+dSHaEGEfhZ9Fs4rvL uQCF2tZy0dn4c8gKG04jjh1j0A/J4NEjcGRrqVTBdO8zPVKJY72c55iJLAlH+ItyGp mbJFmQPr363ErKG6rJp88QUrdbCb+P7oZvK5ih329JPQZLYse5jjOQk5mcQyB2Kf1s KpMZm+8xwHQkA== From: Will Deacon To: 
linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 20/22] arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0 Date: Tue, 25 May 2021 16:14:30 +0100 Message-Id: <20210525151432.16875-21-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081631_220931_628A1910 X-CRM114-Status: GOOD ( 12.50 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Allow systems with mismatched 32-bit support at EL0 to run 32-bit applications based on a new kernel parameter. Reviewed-by: Catalin Marinas Signed-off-by: Will Deacon --- Documentation/admin-guide/kernel-parameters.txt | 8 ++++++++ arch/arm64/kernel/cpufeature.c | 7 +++++++ 2 files changed, 15 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index cb89dbdedc46..a2e453919bb6 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -287,6 +287,14 @@ do not want to use tracing_snapshot_alloc() as it needs to be done where GFP_KERNEL allocations are allowed. + allow_mismatched_32bit_el0 [ARM64] + Allow execve() of 32-bit applications and setting of the + PER_LINUX32 personality on systems where only a strict + subset of the CPUs support 32-bit EL0. When this + parameter is present, the set of CPUs supporting 32-bit + EL0 is indicated by /sys/devices/system/cpu/aarch32_el0 + and hot-unplug operations may be restricted. + amd_iommu= [HW,X86-64] Pass parameters to the AMD IOMMU driver in the system. 
Possible values are: diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 72efdc611b14..f2c97baa050f 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1298,6 +1298,13 @@ const struct cpumask *system_32bit_el0_cpumask(void) return cpu_possible_mask; } +static int __init parse_32bit_el0_param(char *str) +{ + allow_mismatched_32bit_el0 = true; + return 0; +} +early_param("allow_mismatched_32bit_el0", parse_32bit_el0_param); + static ssize_t aarch32_el0_show(struct device *dev, struct device_attribute *attr, char *buf) { From patchwork Tue May 25 15:14:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279333 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 20D0CC2B9F8 for ; Tue, 25 May 2021 15:39:20 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id CB00C61360 for ; Tue, 25 May 2021 15:39:19 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CB00C61360 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=PKvZCGYSdt5wZKR7ghHHsojJ6XnDSOiFbD5WaueGWcQ=; b=sHl2ZXBvSFTrJJ hjkP9dqlhHuQtJxq6DzdwsPF7A9XRifRTKvW5o33pX93uNPjidoUgK9K3oy4FWZUiQ6ugwzsxMgDn 79SSM8w7O5VHGVKaxmOCVk+wu8W24HVYM7MqGkdXN2FdIRkF3FEI9bTuP8L2Knt91g/irtFhP3q9M 4u6TnM8ckUbm4onqDTwMtDGbpH3bqvjzFGzufx2+Q0SoX0dvneq9IBzMmnozjDXqRzfisv+QD/aFz kjwxskQviMQkxFu6GCQiEMkTMb9B8QKwY1UMt1gtN3O/gpbnd0v6NXlp15ZUXb3zpZd5+G9AwL7nw sqrZjIvvXs3ynQ/ro4Fg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llZ7L-0063ZA-IW; Tue, 25 May 2021 15:37:16 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYnK-005u8Q-Qh for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:36 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 2FD336142B; Tue, 25 May 2021 15:16:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955794; bh=bITebhW029VSLPA5n/z6FJvPD2urFmyE8hsU+mJVOVU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NFe/S5qSCKyPxIL6d+o3UyCg4SICwcsHmQsKHD2Q5Ah5A0UzUyo5ii4sfiX5zR2t5 
sRUYfyNuYNOpH0hY1db9fQvooPTVro4DGHvQYoDVhN6MSJd+f03CjLBbPnEjaha8cZ 1SaIs/1XGsb4GCDuIUDx1xqZ9AyKp7ggFAPhQa7aVmeBvya2ILPKfu/ySArHdAliyN ELu86MGYl5sxJFGMknUHycNOMbj4WHMQ7vG3xxajHrZzNORQknoFTSYGDhWDE7hkyQ 5aYjFJrytdaz6RrLL07H5omuz6luMNwfyZvd7jbOPqTMRNtXuVwZx88B6J/ZwSwiB3 Jt1/7D64SAssA== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 21/22] arm64: Remove logic to kill 32-bit tasks on 64-bit-only cores Date: Tue, 25 May 2021 16:14:31 +0100 Message-Id: <20210525151432.16875-22-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081634_964228_08179A51 X-CRM114-Status: GOOD ( 15.02 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The scheduler now knows enough about these braindead systems to place 32-bit tasks accordingly, so throw out the safety checks and allow the ret-to-user path to avoid do_notify_resume() if there is nothing to do. Reviewed-by: Catalin Marinas Signed-off-by: Will Deacon --- arch/arm64/kernel/process.c | 14 +------------- arch/arm64/kernel/signal.c | 26 -------------------------- 2 files changed, 1 insertion(+), 39 deletions(-) diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 3aea06fdd1f9..5581c376644e 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -528,15 +528,6 @@ static void erratum_1418040_thread_switch(struct task_struct *prev, write_sysreg(val, cntkctl_el1); } -static void compat_thread_switch(struct task_struct *next) -{ - if (!is_compat_thread(task_thread_info(next))) - return; - - if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) - set_tsk_thread_flag(next, TIF_NOTIFY_RESUME); -} - static void update_sctlr_el1(u64 sctlr) { /* @@ -578,7 +569,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, ssbs_thread_switch(next); erratum_1418040_thread_switch(prev, next); ptrauth_thread_switch_user(next); - compat_thread_switch(next); /* * Complete any pending TLB or cache maintenance on this CPU in case @@ -680,10 +670,8 @@ void arch_setup_new_exec(void) * at the point of execve(), although we try a bit harder to * honour the cpuset hierarchy. 
*/ - if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) { + if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) force_compatible_cpus_allowed_ptr(current); - set_tsk_thread_flag(current, TIF_NOTIFY_RESUME); - } } else if (static_branch_unlikely(&arm64_mismatched_32bit_el0)) { relax_compatible_cpus_allowed_ptr(current); } diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index f8192f4ae0b8..6237486ff6bb 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -911,19 +911,6 @@ static void do_signal(struct pt_regs *regs) restore_saved_sigmask(); } -static bool cpu_affinity_invalid(struct pt_regs *regs) -{ - if (!compat_user_mode(regs)) - return false; - - /* - * We're preemptible, but a reschedule will cause us to check the - * affinity again. - */ - return !cpumask_test_cpu(raw_smp_processor_id(), - system_32bit_el0_cpumask()); -} - asmlinkage void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags) { @@ -951,19 +938,6 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, if (thread_flags & _TIF_NOTIFY_RESUME) { tracehook_notify_resume(regs); rseq_handle_notify_resume(NULL, regs); - - /* - * If we reschedule after checking the affinity - * then we must ensure that TIF_NOTIFY_RESUME - * is set so that we check the affinity again. - * Since tracehook_notify_resume() clears the - * flag, ensure that the compiler doesn't move - * it after the affinity check. - */ - barrier(); - - if (cpu_affinity_invalid(regs)) - force_sig(SIGKILL); } if (thread_flags & _TIF_FOREIGN_FPSTATE) From patchwork Tue May 25 15:14:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Will Deacon X-Patchwork-Id: 12279335 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BE6A3C4707F for ; Tue, 25 May 2021 15:40:26 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 852C2613E1 for ; Tue, 25 May 2021 15:40:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 852C2613E1 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=WS1vXKXEYSHTsJHmT3xukSl93EL0NmlIPEF52GFyK7E=; b=b2Q7o58lDZdOSg 9+VXwYp9oEh07S+Nm7gZ5M2eWUzep9A5tXWB0hCDIqqiL9S/OOEbJZJJS+c0HWD//T1RHZWMGQtkR 
bY7PvABllwVXWzJWInyeBLl13htzzHNhVGtoqv90uzfzmkNxqDn4PuC0bzJakAc2vAkYa3roZ/HOF 9Sucd7fuoKGLvgS9aNmIZ06azPsKVfkORcKUwkQGjh/oY2zWajyov3xDyxOrR/OovrwR2gPSMtu8O mhZ1m+a5ppyqiCsmkO39bDtyGjok8VmlF/EKOnLRHA9xUo6vPRgpQLVf3QVQOSfzpbFm6EmF12yu3 MWYQchftplEqkyNIgTWw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1llZ8V-00642k-6h; Tue, 25 May 2021 15:38:28 +0000 Received: from mail.kernel.org ([198.145.29.99]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1llYnO-005uAj-It for linux-arm-kernel@lists.infradead.org; Tue, 25 May 2021 15:16:40 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id E73116113B; Tue, 25 May 2021 15:16:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1621955798; bh=AlwYAcazRrNIST5jJniLnXVWfIkd1LdUJ0qaHNTvRH4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mo8gxvZ2+lJDPBR1ECaEqmucaTbqcV9ClehY7CqtPjbQwkakdfFjJpP5II97uDWXZ SvvDY6kFti1aAp8uyiuPehj4EQLIak0sSELyJLyDVOr+0Lks+szjfk57ZKKAOrD1rB af+hvBqYpTuRf0Gjwoq2/W+/ma6eZQ3xCJo+M6Hh+2iNOpC/AW8TRyaDef/Q2DPjh/ +N65QafCcQKmF21f49wQp3TFTizmAAVlf6wB0/GfM1Wo4/uYbjqVNNehgyZqU7p8Xc lgqvK/TZ25/yhGwUqbQuyqmf0vCXu4jtLJBHOjpYAO1be7m2SY/G/xZ8+JacuH9TAX uRtv/V1eynfwg== From: Will Deacon To: linux-arm-kernel@lists.infradead.org Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon , Catalin Marinas , Marc Zyngier , Greg Kroah-Hartman , Peter Zijlstra , Morten Rasmussen , Qais Yousef , Suren Baghdasaryan , Quentin Perret , Tejun Heo , Johannes Weiner , Ingo Molnar , Juri Lelli , Vincent Guittot , "Rafael J. Wysocki" , Dietmar Eggemann , Daniel Bristot de Oliveira , kernel-team@android.com Subject: [PATCH v7 22/22] Documentation: arm64: describe asymmetric 32-bit support Date: Tue, 25 May 2021 16:14:32 +0100 Message-Id: <20210525151432.16875-23-will@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20210525151432.16875-1-will@kernel.org> References: <20210525151432.16875-1-will@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210525_081638_729223_5DA27948 X-CRM114-Status: GOOD ( 31.39 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Document support for running 32-bit tasks on asymmetric 32-bit systems and its impact on the user ABI when enabled. Signed-off-by: Will Deacon --- .../admin-guide/kernel-parameters.txt | 3 + Documentation/arm64/asymmetric-32bit.rst | 154 ++++++++++++++++++ Documentation/arm64/index.rst | 1 + 3 files changed, 158 insertions(+) create mode 100644 Documentation/arm64/asymmetric-32bit.rst diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index a2e453919bb6..5a1dc7e628a5 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -295,6 +295,9 @@ EL0 is indicated by /sys/devices/system/cpu/aarch32_el0 and hot-unplug operations may be restricted. + See Documentation/arm64/asymmetric-32bit.rst for more + information. + amd_iommu= [HW,X86-64] Pass parameters to the AMD IOMMU driver in the system. 
Possible values are: diff --git a/Documentation/arm64/asymmetric-32bit.rst b/Documentation/arm64/asymmetric-32bit.rst new file mode 100644 index 000000000000..a70a2b97e60b --- /dev/null +++ b/Documentation/arm64/asymmetric-32bit.rst @@ -0,0 +1,154 @@ +====================== +Asymmetric 32-bit SoCs +====================== + +Author: Will Deacon + +This document describes the impact of asymmetric 32-bit SoCs on the +execution of 32-bit (``AArch32``) applications. + +Date: 2021-05-17 + +Introduction +============ + +Some Armv9 SoCs suffer from a big.LITTLE misfeature where only a subset +of the CPUs are capable of executing 32-bit user applications. On such +a system, Linux by default treats the asymmetry as a "mismatch" and +disables support for both the ``PER_LINUX32`` personality and +``execve(2)`` of 32-bit ELF binaries, with the latter returning +``-ENOEXEC``. If the mismatch is detected during late onlining of a +64-bit-only CPU, then the onlining operation fails and the new CPU is +unavailable for scheduling. + +Surprisingly, these SoCs have been produced with the intention of +running legacy 32-bit binaries. Unsurprisingly, that doesn't work very +well with the default behaviour of Linux. + +It seems inevitable that future SoCs will drop 32-bit support +altogether, so if you're stuck in the unenviable position of needing to +run 32-bit code on one of these transitionary platforms then you would +be wise to consider alternatives such as recompilation, emulation or +retirement. If neither of those options are practical, then read on. + +Enabling kernel support +======================= + +Since the kernel support is not completely transparent to userspace, +allowing 32-bit tasks to run on an asymmetric 32-bit system requires an +explicit "opt-in" and can be enabled by passing the +``allow_mismatched_32bit_el0`` parameter on the kernel command-line. + +For the remainder of this document we will refer to an *asymmetric +system* to mean an asymmetric 32-bit SoC running Linux with this kernel +command-line option enabled. + +Userspace impact +================ + +32-bit tasks running on an asymmetric system behave in mostly the same +way as on a homogeneous system, with a few key differences relating to +CPU affinity. + +sysfs +----- + +The subset of CPUs capable of running 32-bit tasks is described in +``/sys/devices/system/cpu/aarch32_el0`` and is documented further in +``Documentation/ABI/testing/sysfs-devices-system-cpu``. + +**Note:** CPUs are advertised by this file as they are detected and so +late-onlining of 32-bit-capable CPUs can result in the file contents +being modified by the kernel at runtime. Once advertised, CPUs are never +removed from the file. + +``execve(2)`` +------------- + +On a homogeneous system, the CPU affinity of a task is preserved across +``execve(2)``. This is not always possible on an asymmetric system, +specifically when the new program being executed is 32-bit yet the +affinity mask contains 64-bit-only CPUs. In this situation, the kernel +determines the new affinity mask as follows: + + 1. If the 32-bit-capable subset of the affinity mask is not empty, + then the affinity is restricted to that subset and the old affinity + mask is saved. This saved mask is inherited over ``fork(2)`` and + preserved across ``execve(2)`` of 32-bit programs. + + **Note:** This step does not apply to ``SCHED_DEADLINE`` tasks. + See `SCHED_DEADLINE`_. + + 2. 
Otherwise, the cpuset hierarchy of the task is walked until an + ancestor is found containing at least one 32-bit-capable CPU. The + affinity of the task is then changed to match the 32-bit-capable + subset of the cpuset determined by the walk. + + 3. On failure (i.e. out of memory), the affinity is changed to the set + of all 32-bit-capable CPUs of which the kernel is aware. + +A subsequent ``execve(2)`` of a 64-bit program by the 32-bit task will +invalidate the affinity mask saved in (1) and attempt to restore the CPU +affinity of the task using the saved mask if it was previously valid. +This restoration may fail due to intervening changes to the deadline +policy or cpuset hierarchy, in which case the ``execve(2)`` continues +with the affinity unchanged. + +Calls to ``sched_setaffinity(2)`` for a 32-bit task will consider only +the 32-bit-capable CPUs of the requested affinity mask. On success, the +affinity for the task is updated and any saved mask from a prior +``execve(2)`` is invalidated. + +``SCHED_DEADLINE`` +------------------ + +Explicit admission of a 32-bit deadline task to the default root domain +(e.g. by calling ``sched_setattr(2)``) is rejected on an asymmetric +32-bit system unless admission control is disabled by writing -1 to +``/proc/sys/kernel/sched_rt_runtime_us``. + +``execve(2)`` of a 32-bit program from a 64-bit deadline task will +return ``-ENOEXEC`` if the root domain for the task contains any +64-bit-only CPUs and admission control is enabled. Concurrent offlining +of 32-bit-capable CPUs may still necessitate the procedure described in +`execve(2)`_, in which case step (1) is skipped and a warning is +emitted on the console. + +**Note:** It is recommended that a set of 32-bit-capable CPUs are placed +into a separate root domain if ``SCHED_DEADLINE`` is to be used with +32-bit tasks on an asymmetric system. Failure to do so is likely to +result in missed deadlines. + +Cpusets +------- + +The affinity of a 32-bit task on an asymmetric system may include CPUs +that are not explicitly allowed by the cpuset to which it is attached. +This can occur as a result of the following two situations: + + - A 64-bit task attached to a cpuset which allows only 64-bit CPUs + executes a 32-bit program. + + - All of the 32-bit-capable CPUs allowed by a cpuset containing a + 32-bit task are offlined. + +In both of these cases, the new affinity is calculated according to step +(2) of the process described in `execve(2)`_ and the cpuset hierarchy is +unchanged irrespective of the cgroup version. + +CPU hotplug +----------- + +On an asymmetric system, the first detected 32-bit-capable CPU is +prevented from being offlined by userspace and any such attempt will +return ``-EPERM``. Note that suspend is still permitted even if the +primary CPU (i.e. CPU 0) is 64-bit-only. + +KVM +--- + +Although KVM will not advertise 32-bit EL0 support to any vCPUs on an +asymmetric system, a broken guest at EL1 could still attempt to execute +32-bit code at EL0. In this case, an exit from a vCPU thread in 32-bit +mode will return to host userspace with an ``exit_reason`` of +``KVM_EXIT_FAIL_ENTRY``. diff --git a/Documentation/arm64/index.rst b/Documentation/arm64/index.rst index 97d65ba12a35..4f840bac083e 100644 --- a/Documentation/arm64/index.rst +++ b/Documentation/arm64/index.rst @@ -10,6 +10,7 @@ ARM64 Architecture acpi_object_usage amu arm-acpi + asymmetric-32bit booting cpu-feature-registers elf_hwcaps