From patchwork Mon Oct 28 16:54:29 2024
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13853751
Date: Mon, 28 Oct 2024 17:54:29 +0100
From: Frederic Weisbecker
To: Will Deacon
Cc: LKML, Peter Zijlstra, Vincent Guittot, Thomas Gleixner, Michal Hocko,
    Vlastimil Babka, "Paul E. McKenney", Neeraj Upadhyay, Joel Fernandes,
    Boqun Feng, Zqiang, Uladzislau Rezki, rcu@vger.kernel.org
Subject: [PATCH 1/2] arm64: Keep first mismatched 32bits el0 capable CPU
 online through its callbacks
References: <20240926224910.11106-1-frederic@kernel.org>
 <20240926224910.11106-12-frederic@kernel.org>
 <20241008105434.GA9243@willie-the-truck>
 <20241028162514.GA2709@willie-the-truck>

The first mismatched 32-bit EL0 capable CPU is designated as the last
resort CPU for compat 32-bit tasks. As such, this CPU is forbidden to
go offline.

However this restriction is applied to the device object of the CPU,
which is not easy to revert later if needed, because other components
may also have forbidden the target from going offline and those are
not tracked.

But the task CPU possible mask is going to be made aware of
housekeeping CPUs. In that context, a better 32-bit EL0 last resort
CPU may be found later on boot. When that happens, the old fallback
can be made offlineable again.

To make this possible and more flexible, drive the offlineable
decision from the CPU hotplug callbacks themselves.

Signed-off-by: Frederic Weisbecker
---
 arch/arm64/kernel/cpufeature.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 718728a85430..53ee8ce38d5b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3591,15 +3591,15 @@ void __init setup_user_features(void)
 	minsigstksz_setup();
 }
 
-static int enable_mismatched_32bit_el0(unsigned int cpu)
-{
-	/*
-	 * The first 32-bit-capable CPU we detected and so can no longer
-	 * be offlined by userspace. -1 indicates we haven't yet onlined
-	 * a 32-bit-capable CPU.
-	 */
-	static int lucky_winner = -1;
+/*
+ * The first 32-bit-capable CPU we detected and so can no longer
+ * be offlined by userspace. -1 indicates we haven't yet onlined
+ * a 32-bit-capable CPU.
+ */
+static int cpu_32bit_unofflineable = -1;
 
+static int mismatched_32bit_el0_online(unsigned int cpu)
+{
 	struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu);
 	bool cpu_32bit = id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0);
 
@@ -3611,7 +3611,7 @@ static int enable_mismatched_32bit_el0(unsigned int cpu)
 	if (cpumask_test_cpu(0, cpu_32bit_el0_mask) == cpu_32bit)
 		return 0;
 
-	if (lucky_winner >= 0)
+	if (cpu_32bit_unofflineable < 0)
 		return 0;
 
 	/*
@@ -3619,16 +3619,20 @@ static int enable_mismatched_32bit_el0(unsigned int cpu)
 	 * 32-bit EL0 online so that is_cpu_allowed() doesn't end up rejecting
 	 * every CPU in the system for a 32-bit task.
 	 */
-	lucky_winner = cpu_32bit ? cpu : cpumask_any_and(cpu_32bit_el0_mask,
-							 cpu_active_mask);
-	get_cpu_device(lucky_winner)->offline_disabled = true;
+	cpu_32bit_unofflineable = cpu_32bit ? cpu : cpumask_any_and(cpu_32bit_el0_mask,
+								    cpu_active_mask);
 	setup_elf_hwcaps(compat_elf_hwcaps);
 	elf_hwcap_fixup();
 	pr_info("Asymmetric 32-bit EL0 support detected on CPU %u; CPU hot-unplug disabled on CPU %u\n",
-		cpu, lucky_winner);
+		cpu, cpu_32bit_unofflineable);
 	return 0;
 }
 
+static int mismatched_32bit_el0_offline(unsigned int cpu)
+{
+	return cpu == cpu_32bit_unofflineable ? -EBUSY : 0;
+}
+
 static int __init init_32bit_el0_mask(void)
 {
 	if (!allow_mismatched_32bit_el0)
@@ -3639,7 +3643,7 @@ static int __init init_32bit_el0_mask(void)
 
 	return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
 				 "arm64/mismatched_32bit_el0:online",
-				 enable_mismatched_32bit_el0, NULL);
+				 mismatched_32bit_el0_online, mismatched_32bit_el0_offline);
 }
 subsys_initcall_sync(init_32bit_el0_mask);
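
The veto mechanism this patch switches to is generic CPU hotplug
behaviour: a non-zero return from a state's teardown callback aborts the
unplug. A minimal sketch of that pattern, outside of the arm64 specifics
(the demo_* names, the protected_cpu variable and the module wrapping
are illustrative assumptions, not part of the patch):

	#include <linux/cpu.h>
	#include <linux/cpuhotplug.h>
	#include <linux/errno.h>
	#include <linux/module.h>

	/* Hypothetical CPU that must stay online; -1 means none chosen yet. */
	static int protected_cpu = -1;

	static int demo_online(unsigned int cpu)
	{
		/* Designate the first CPU that comes online as unofflineable. */
		if (protected_cpu < 0)
			protected_cpu = cpu;
		return 0;
	}

	static int demo_offline(unsigned int cpu)
	{
		/* A non-zero return from the teardown callback aborts the unplug. */
		return cpu == protected_cpu ? -EBUSY : 0;
	}

	static int __init demo_init(void)
	{
		/* Dynamic AP state: demo_online() runs on each CPU as it comes up. */
		return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "demo:online",
					 demo_online, demo_offline);
	}
	module_init(demo_init);
	MODULE_LICENSE("GPL");

Unlike the device object's offline_disabled flag, the decision is
re-evaluated on every unplug attempt, which is what allows a later,
better fallback CPU to release the old one.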

From patchwork Mon Oct 28 16:56:32 2024
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13853752
Date: Mon, 28 Oct 2024 17:56:32 +0100
From: Frederic Weisbecker
To: Will Deacon
Cc: LKML, Peter Zijlstra, Vincent Guittot, Thomas Gleixner, Michal Hocko,
    Vlastimil Babka, "Paul E. McKenney", Neeraj Upadhyay, Joel Fernandes,
    Boqun Feng, Zqiang, Uladzislau Rezki, rcu@vger.kernel.org
Subject: [PATCH 2/2] sched,arm64: Handle CPU isolation on last resort
 fallback rq selection
References: <20240926224910.11106-1-frederic@kernel.org>
 <20240926224910.11106-12-frederic@kernel.org>
 <20241008105434.GA9243@willie-the-truck>
 <20241028162514.GA2709@willie-the-truck>

When a kthread or any other task has an affinity mask that is fully
offline or unallowed, the scheduler reaffines the task to all possible
CPUs as a last resort.

This default decision doesn't mix very well with nohz_full CPUs, which
are part of the possible cpumask but don't want to be disturbed by
unbound kthreads or even detached pinned user tasks.

Make the fallback affinity setting aware of nohz_full. ARM64 is a
special case: its last resort 32-bit EL0 capable CPU can be updated as
housekeeping CPUs appear during boot.

Suggested-by: Michal Hocko
Signed-off-by: Frederic Weisbecker
---
 arch/arm64/include/asm/cpufeature.h  |  1 +
 arch/arm64/include/asm/mmu_context.h |  2 ++
 arch/arm64/kernel/cpufeature.c       | 47 +++++++++++++++++++++++-----
 include/linux/mmu_context.h          |  1 +
 kernel/sched/core.c                  |  2 +-
 5 files changed, 45 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 3d261cc123c1..992d782f2899 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -663,6 +663,7 @@ static inline bool supports_clearbhb(int scope)
 }
 
 const struct cpumask *system_32bit_el0_cpumask(void);
+const struct cpumask *fallback_32bit_el0_cpumask(void);
 DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
 
 static inline bool system_supports_32bit_el0(void)
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 7c09d47e09cb..8d481e16271b 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -282,6 +282,8 @@ task_cpu_possible_mask(struct task_struct *p)
 }
 #define task_cpu_possible_mask	task_cpu_possible_mask
 
+const struct cpumask *task_cpu_fallback_mask(struct task_struct *p);
+
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 53ee8ce38d5b..4eabe0f02cc8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -75,6 +75,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/sched/isolation.h>
 #include <...>
 #include <...>
 
@@ -133,6 +134,7 @@ DEFINE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
  * Only valid if arm64_mismatched_32bit_el0 is enabled.
  */
 static cpumask_var_t cpu_32bit_el0_mask __cpumask_var_read_mostly;
+static cpumask_var_t fallback_32bit_el0_mask __cpumask_var_read_mostly;
 
 void dump_cpu_features(void)
 {
@@ -1618,6 +1620,23 @@ const struct cpumask *system_32bit_el0_cpumask(void)
 	return cpu_possible_mask;
 }
 
+const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
+{
+	if (!static_branch_unlikely(&arm64_mismatched_32bit_el0))
+		return housekeeping_cpumask(HK_TYPE_TICK);
+
+	if (!is_compat_thread(task_thread_info(p)))
+		return housekeeping_cpumask(HK_TYPE_TICK);
+
+	if (!system_supports_32bit_el0())
+		return cpu_none_mask;
+
+	if (!cpumask_empty(fallback_32bit_el0_mask))
+		return fallback_32bit_el0_mask;
+	else
+		return cpu_32bit_el0_mask;
+}
+
 static int __init parse_32bit_el0_param(char *str)
 {
 	allow_mismatched_32bit_el0 = true;
@@ -3605,22 +3624,33 @@ static int mismatched_32bit_el0_online(unsigned int cpu)
 
 	if (cpu_32bit) {
 		cpumask_set_cpu(cpu, cpu_32bit_el0_mask);
+		if (housekeeping_cpu(cpu, HK_TYPE_TICK))
+			cpumask_set_cpu(cpu, fallback_32bit_el0_mask);
 		static_branch_enable_cpuslocked(&arm64_mismatched_32bit_el0);
 	}
 
-	if (cpumask_test_cpu(0, cpu_32bit_el0_mask) == cpu_32bit)
+	if (cpu_32bit_unofflineable >= 0) {
+		if (!housekeeping_cpu(cpu_32bit_unofflineable, HK_TYPE_TICK) &&
+		    cpu_32bit && housekeeping_cpu(cpu, HK_TYPE_TICK)) {
+			cpu_32bit_unofflineable = cpu;
+			pr_info("Asymmetric 32-bit EL0 support detected on housekeeping CPU %u;"
+				"CPU hot-unplug disabled on CPU %u\n", cpu, cpu);
+		}
 		return 0;
+	}
 
-	if (cpu_32bit_unofflineable < 0)
+	if (cpumask_test_cpu(0, cpu_32bit_el0_mask) == cpu_32bit)
 		return 0;
 
 	/*
-	 * We've detected a mismatch. We need to keep one of our CPUs with
-	 * 32-bit EL0 online so that is_cpu_allowed() doesn't end up rejecting
-	 * every CPU in the system for a 32-bit task.
+	 * We've detected a mismatch. We need to keep one of our CPUs, preferrably
+	 * housekeeping, with 32-bit EL0 online so that is_cpu_allowed() doesn't end up
+	 * rejecting every CPU in the system for a 32-bit task.
 	 */
-	cpu_32bit_unofflineable = cpu_32bit ? cpu : cpumask_any_and(cpu_32bit_el0_mask,
-								    cpu_active_mask);
+	cpu_32bit_unofflineable = cpumask_any_and(fallback_32bit_el0_mask, cpu_active_mask);
+	if (cpu_32bit_unofflineable >= nr_cpu_ids)
+		cpu_32bit_unofflineable = cpumask_any_and(cpu_32bit_el0_mask, cpu_active_mask);
+
 	setup_elf_hwcaps(compat_elf_hwcaps);
 	elf_hwcap_fixup();
 	pr_info("Asymmetric 32-bit EL0 support detected on CPU %u; CPU hot-unplug disabled on CPU %u\n",
@@ -3641,6 +3671,9 @@ static int __init init_32bit_el0_mask(void)
 	if (!zalloc_cpumask_var(&cpu_32bit_el0_mask, GFP_KERNEL))
 		return -ENOMEM;
 
+	if (!zalloc_cpumask_var(&fallback_32bit_el0_mask, GFP_KERNEL))
+		return -ENOMEM;
+
 	return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
 				 "arm64/mismatched_32bit_el0:online",
 				 mismatched_32bit_el0_online, mismatched_32bit_el0_offline);
diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
index bbaec80c78c5..ac01dc4eb2ce 100644
--- a/include/linux/mmu_context.h
+++ b/include/linux/mmu_context.h
@@ -24,6 +24,7 @@ static inline void leave_mm(void) { }
 #ifndef task_cpu_possible_mask
 # define task_cpu_possible_mask(p)	cpu_possible_mask
 # define task_cpu_possible(cpu, p)	true
+# define task_cpu_fallback_mask(p)	housekeeping_cpumask(HK_TYPE_TICK)
 #else
 # define task_cpu_possible(cpu, p)	cpumask_test_cpu((cpu), task_cpu_possible_mask(p))
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index aeb595514461..1edce360f1a6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3489,7 +3489,7 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
 	 *
 	 * More yuck to audit.
 	 */
-	do_set_cpus_allowed(p, task_cpu_possible_mask(p));
+	do_set_cpus_allowed(p, task_cpu_fallback_mask(p));
 	state = fail;
 	break;
 
 case fail:
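
For context, housekeeping_cpumask() is the general way for kernel code
to stay off nohz_full CPUs, which is what the fallback path above now
relies on. A hedged sketch of the same idea from a subsystem's point of
view (the demo_* helpers are assumptions for illustration, not kernel
functions):

	#include <linux/cpumask.h>
	#include <linux/sched/isolation.h>
	#include <linux/workqueue.h>

	/* Pick a CPU that keeps the scheduler tick, i.e. not a nohz_full CPU. */
	static int demo_pick_housekeeping_cpu(void)
	{
		/*
		 * housekeeping_cpumask() degrades to cpu_possible_mask when
		 * no CPU isolation is configured on the kernel command line.
		 */
		return cpumask_any_and(housekeeping_cpumask(HK_TYPE_TICK),
				       cpu_online_mask);
	}

	static void demo_queue_work(struct work_struct *work)
	{
		int cpu = demo_pick_housekeeping_cpu();

		/* cpumask_any_and() returns >= nr_cpu_ids when the mask is empty. */
		if (cpu < nr_cpu_ids)
			queue_work_on(cpu, system_wq, work);
	}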

From patchwork Thu Sep 26 22:48:59 2024
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13813712
From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Will Deacon, Peter Zijlstra, Vincent Guittot,
    Thomas Gleixner, Michal Hocko, Vlastimil Babka, "Paul E. McKenney",
    Neeraj Upadhyay, Joel Fernandes, Boqun Feng, Zqiang, Uladzislau Rezki,
    rcu@vger.kernel.org
Subject: [PATCH 11/20] sched: Handle CPU isolation on last resort fallback
 rq selection
Date: Fri, 27 Sep 2024 00:48:59 +0200
Message-ID: <20240926224910.11106-12-frederic@kernel.org>
In-Reply-To: <20240926224910.11106-1-frederic@kernel.org>
References: <20240926224910.11106-1-frederic@kernel.org>

When a kthread or any other task has an affinity mask that is fully
offline or unallowed, the scheduler reaffines the task to all possible
CPUs as a last resort.

This default decision doesn't mix very well with nohz_full CPUs that
are part of the possible cpumask but don't want to be disturbed by
unbound kthreads or even detached pinned user tasks.

Make the fallback affinity setting aware of nohz_full. This applies to
all architectures supporting nohz_full except arm32. However, that
architecture, which overrides the task possible mask, is unlikely to be
willing to integrate new development.

Suggested-by: Michal Hocko
Signed-off-by: Frederic Weisbecker
---
 kernel/sched/core.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 43e453ab7e20..d4b759c1cbf1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3421,6 +3421,21 @@ void kick_process(struct task_struct *p)
 }
 EXPORT_SYMBOL_GPL(kick_process);
 
+static const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
+{
+	const struct cpumask *mask;
+
+	mask = task_cpu_possible_mask(p);
+	/*
+	 * Architectures that override the task possible mask
+	 * must handle CPU isolation.
+	 */
+	if (mask != cpu_possible_mask)
+		return mask;
+	else
+		return housekeeping_cpumask(HK_TYPE_TICK);
+}
+
 /*
  * ->cpus_ptr is protected by both rq->lock and p->pi_lock
  *
@@ -3489,7 +3504,7 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
 	 *
 	 * More yuck to audit.
	 */
-	do_set_cpus_allowed(p, task_cpu_possible_mask(p));
+	do_set_cpus_allowed(p, task_cpu_fallback_mask(p));
 	state = fail;
 	break;
 
 case fail:

From patchwork Thu Sep 26 22:49:00 2024
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13813713
From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Andrew Morton, Kees Cook, Peter Zijlstra,
    Thomas Gleixner, Michal Hocko, Vlastimil Babka, linux-mm@kvack.org,
    "Paul E. McKenney", Neeraj Upadhyay, Joel Fernandes, Boqun Feng,
    Zqiang, rcu@vger.kernel.org, Uladzislau Rezki
Subject: [PATCH 12/20] kthread: Make sure kthread hasn't started while
 binding it
Date: Fri, 27 Sep 2024 00:49:00 +0200
Message-ID: <20240926224910.11106-13-frederic@kernel.org>
In-Reply-To: <20240926224910.11106-1-frederic@kernel.org>
References: <20240926224910.11106-1-frederic@kernel.org>

Make sure the kthread is sleeping in the schedule_preempt_disabled()
call before calling its handler when kthread_bind[_mask]() is called
on it. This provides a sanity check verifying that the task is not
randomly blocked later at some point within its function handler, in
which case it could just be concurrently awakened, leaving the call to
do_set_cpus_allowed() without any effect until the next voluntary
sleep.

Rely on the wake-up ordering to ensure that the newly introduced
"started" field returns the expected value:

    TASK A                                   TASK B
    ------                                   ------
    READ kthread->started
    wake_up_process(B)
       rq_lock()
       ...
       rq_unlock() // RELEASE
                                             schedule()
                                                rq_lock() // ACQUIRE
                                                // schedule task B
                                                rq_unlock()
                                             WRITE kthread->started

Similarly, writing kthread->started before subsequent voluntary sleeps
will be visible after calling wait_task_inactive() in
__kthread_bind_mask(), reporting potential misuse of the API.

Upcoming patches will make further use of this facility.

Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
 kernel/kthread.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index db4ceb0f503c..1527a522cdd3 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -53,6 +53,7 @@ struct kthread_create_info
 struct kthread {
 	unsigned long flags;
 	unsigned int cpu;
+	int started;
 	int result;
 	int (*threadfn)(void *);
 	void *data;
@@ -382,6 +383,8 @@ static int kthread(void *_create)
 	schedule_preempt_disabled();
 	preempt_enable();
 
+	self->started = 1;
+
 	ret = -EINTR;
 	if (!test_bit(KTHREAD_SHOULD_STOP, &self->flags)) {
 		cgroup_kthread_ready();
@@ -540,7 +543,9 @@ static void __kthread_bind(struct task_struct *p, unsigned int cpu, unsigned int
 
 void kthread_bind_mask(struct task_struct *p, const struct cpumask *mask)
 {
+	struct kthread *kthread = to_kthread(p);
 	__kthread_bind_mask(p, mask, TASK_UNINTERRUPTIBLE);
+	WARN_ON_ONCE(kthread->started);
 }
 
 /**
@@ -554,7 +559,9 @@ void kthread_bind_mask(struct task_struct *p, const struct cpumask *mask)
  */
 void kthread_bind(struct task_struct *p, unsigned int cpu)
 {
+	struct kthread *kthread = to_kthread(p);
 	__kthread_bind(p, cpu, TASK_UNINTERRUPTIBLE);
+	WARN_ON_ONCE(kthread->started);
 }
 EXPORT_SYMBOL(kthread_bind);
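
The rule this check enforces is the long-standing kthread contract:
binding must happen after kthread_create() but before the first
wake-up. A minimal sketch of the correct calling sequence (the thread
function and names are illustrative):

	#include <linux/kthread.h>

	static int demo_threadfn(void *data)
	{
		set_current_state(TASK_INTERRUPTIBLE);
		while (!kthread_should_stop()) {
			schedule();
			set_current_state(TASK_INTERRUPTIBLE);
		}
		__set_current_state(TASK_RUNNING);
		return 0;
	}

	static struct task_struct *demo_spawn_bound(unsigned int cpu)
	{
		struct task_struct *t;

		t = kthread_create(demo_threadfn, NULL, "demo/%u", cpu);
		if (IS_ERR(t))
			return t;

		/*
		 * Bind before the first wake-up: with this patch, a bind
		 * issued once the thread has started past its initial sleep
		 * triggers a WARN_ON_ONCE() instead of silently doing
		 * nothing until the next voluntary sleep.
		 */
		kthread_bind(t, cpu);
		wake_up_process(t);
		return t;
	}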

From patchwork Thu Sep 26 22:49:01 2024
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13813714
From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Andrew Morton, Kees Cook, Peter Zijlstra,
    Thomas Gleixner, Michal Hocko, Vlastimil Babka, linux-mm@kvack.org,
    "Paul E. McKenney", Neeraj Upadhyay, Joel Fernandes, Boqun Feng,
    Uladzislau Rezki, Zqiang, rcu@vger.kernel.org
Subject: [PATCH 13/20] kthread: Default affine kthread to its preferred
 NUMA node
Date: Fri, 27 Sep 2024 00:49:01 +0200
Message-ID: <20240926224910.11106-14-frederic@kernel.org>
In-Reply-To: <20240926224910.11106-1-frederic@kernel.org>
References: <20240926224910.11106-1-frederic@kernel.org>

Kthreads attached to a preferred NUMA node for their task structure
allocation can also be assumed to run preferably within that same
node.

A more precise affinity is usually notified by calling
kthread_create_on_cpu() or kthread_bind[_mask]() before the first
wakeup.

For the others, a default affinity to the node is desired and sometimes
implemented with more or less success when it comes to dealing with
hotplug events and nohz_full / CPU isolation interactions:

- kcompactd is affine to its node and handles hotplug but not CPU
  isolation
- kswapd is affine to its node and ignores hotplug and CPU isolation
- A bunch of drivers create their kthreads on a specific node and
  don't take care of affining any further.

Handle that default node affinity preference at the generic level
instead, provided a kthread is created on an actual node and doesn't
apply any specific affinity such as a given CPU or a custom cpumask to
bind to before its first wake-up.

This generic handling is aware of CPU hotplug events and CPU isolation
such that:

* When a housekeeping CPU goes up that is part of the node of a given
  kthread, the related task is re-affined to that own node if it was
  previously running on the default last resort online housekeeping
  set from other nodes.

* When a housekeeping CPU goes down while it was part of the node of a
  kthread, the running task is migrated (or the sleeping task is woken
  up) automatically by the scheduler to other housekeepers within the
  same node or, as a last resort, to all housekeepers from other
  nodes.

Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
 include/linux/cpuhotplug.h |   1 +
 kernel/kthread.c           | 106 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 106 insertions(+), 1 deletion(-)

diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 2361ed4d2b15..228f27150a93 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -239,6 +239,7 @@ enum cpuhp_state {
 	CPUHP_AP_WORKQUEUE_ONLINE,
 	CPUHP_AP_RANDOM_ONLINE,
 	CPUHP_AP_RCUTREE_ONLINE,
+	CPUHP_AP_KTHREADS_ONLINE,
 	CPUHP_AP_BASE_CACHEINFO_ONLINE,
 	CPUHP_AP_ONLINE_DYN,
 	CPUHP_AP_ONLINE_DYN_END		= CPUHP_AP_ONLINE_DYN + 40,
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 1527a522cdd3..736276d313c2 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -35,6 +35,9 @@ static DEFINE_SPINLOCK(kthread_create_lock);
 static LIST_HEAD(kthread_create_list);
 struct task_struct *kthreadd_task;
 
+static LIST_HEAD(kthreads_hotplug);
+static DEFINE_MUTEX(kthreads_hotplug_lock);
+
 struct kthread_create_info
 {
 	/* Information passed to kthread() from kthreadd. */
@@ -53,6 +56,7 @@ struct kthread_create_info
 struct kthread {
 	unsigned long flags;
 	unsigned int cpu;
+	unsigned int node;
 	int started;
 	int result;
 	int (*threadfn)(void *);
@@ -64,6 +68,8 @@ struct kthread {
 #endif
 	/* To store the full name if task comm is truncated. */
 	char *full_name;
+	struct task_struct *task;
+	struct list_head hotplug_node;
 };
 
 enum KTHREAD_BITS {
@@ -122,8 +128,11 @@ bool set_kthread_struct(struct task_struct *p)
 
 	init_completion(&kthread->exited);
 	init_completion(&kthread->parked);
+	INIT_LIST_HEAD(&kthread->hotplug_node);
 	p->vfork_done = &kthread->exited;
 
+	kthread->task = p;
+	kthread->node = tsk_fork_get_node(current);
 	p->worker_private = kthread;
 	return true;
 }
@@ -314,6 +323,11 @@ void __noreturn kthread_exit(long result)
 {
 	struct kthread *kthread = to_kthread(current);
 	kthread->result = result;
+	if (!list_empty(&kthread->hotplug_node)) {
+		mutex_lock(&kthreads_hotplug_lock);
+		list_del(&kthread->hotplug_node);
+		mutex_unlock(&kthreads_hotplug_lock);
+	}
 	do_exit(0);
 }
 EXPORT_SYMBOL(kthread_exit);
@@ -339,6 +353,48 @@ void __noreturn kthread_complete_and_exit(struct completion *comp, long code)
 }
 EXPORT_SYMBOL(kthread_complete_and_exit);
 
+static void kthread_fetch_affinity(struct kthread *kthread, struct cpumask *cpumask)
+{
+	cpumask_and(cpumask, cpumask_of_node(kthread->node),
+		    housekeeping_cpumask(HK_TYPE_KTHREAD));
+
+	if (cpumask_empty(cpumask))
+		cpumask_copy(cpumask, housekeeping_cpumask(HK_TYPE_KTHREAD));
+}
+
+static void kthread_affine_node(void)
+{
+	struct kthread *kthread = to_kthread(current);
+	cpumask_var_t affinity;
+
+	WARN_ON_ONCE(kthread_is_per_cpu(current));
+
+	if (kthread->node == NUMA_NO_NODE) {
+		housekeeping_affine(current, HK_TYPE_RCU);
+	} else {
+		if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
+			WARN_ON_ONCE(1);
+			return;
+		}
+
+		mutex_lock(&kthreads_hotplug_lock);
+		WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
+		list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
+		/*
+		 * The node cpumask is racy when read from kthread() but:
+		 * - a racing CPU going down will either fail on the subsequent
+		 *   call to set_cpus_allowed_ptr() or be migrated to housekeepers
+		 *   afterwards by the scheduler.
+		 * - a racing CPU going up will be handled by kthreads_online_cpu()
+		 */
+		kthread_fetch_affinity(kthread, affinity);
+		set_cpus_allowed_ptr(current, affinity);
+		mutex_unlock(&kthreads_hotplug_lock);
+
+		free_cpumask_var(affinity);
+	}
+}
+
 static int kthread(void *_create)
 {
 	static const struct sched_param param = { .sched_priority = 0 };
@@ -369,7 +425,6 @@ static int kthread(void *_create)
 	 * back to default in case they have been changed.
 	 */
 	sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
-	set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_KTHREAD));
 
 	/* OK, tell user we're spawned, wait for stop or wakeup */
 	__set_current_state(TASK_UNINTERRUPTIBLE);
@@ -385,6 +440,9 @@ static int kthread(void *_create)
 
 	self->started = 1;
 
+	if (!(current->flags & PF_NO_SETAFFINITY))
+		kthread_affine_node();
+
 	ret = -EINTR;
 	if (!test_bit(KTHREAD_SHOULD_STOP, &self->flags)) {
 		cgroup_kthread_ready();
@@ -779,6 +837,52 @@ int kthreadd(void *unused)
 	return 0;
 }
 
+/*
+ * Re-affine kthreads according to their preferences
+ * and the newly online CPU. The CPU down part is handled
+ * by select_fallback_rq() which default re-affines to
+ * housekeepers in case the preferred affinity doesn't
+ * apply anymore.
+ */
+static int kthreads_online_cpu(unsigned int cpu)
+{
+	cpumask_var_t affinity;
+	struct kthread *k;
+	int ret;
+
+	guard(mutex)(&kthreads_hotplug_lock);
+
+	if (list_empty(&kthreads_hotplug))
+		return 0;
+
+	if (!zalloc_cpumask_var(&affinity, GFP_KERNEL))
+		return -ENOMEM;
+
+	ret = 0;
+
+	list_for_each_entry(k, &kthreads_hotplug, hotplug_node) {
+		if (WARN_ON_ONCE((k->task->flags & PF_NO_SETAFFINITY) ||
				 kthread_is_per_cpu(k->task) ||
				 k->node == NUMA_NO_NODE)) {
+			ret = -EINVAL;
+			continue;
+		}
+		kthread_fetch_affinity(k, affinity);
+		set_cpus_allowed_ptr(k->task, affinity);
+	}
+
+	free_cpumask_var(affinity);
+
+	return ret;
+}
+
+static int kthreads_init(void)
+{
+	return cpuhp_setup_state(CPUHP_AP_KTHREADS_ONLINE, "kthreads:online",
+				 kthreads_online_cpu, NULL);
+}
+early_initcall(kthreads_init);
+
 void __kthread_init_worker(struct kthread_worker *worker,
 				const char *name,
 				struct lock_class_key *key)
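
With this in place, a driver only has to create its kthread on the
right node to inherit a hotplug-aware, isolation-aware node affinity.
A hedged sketch (the demo_* names are illustrative):

	#include <linux/err.h>
	#include <linux/kthread.h>

	/*
	 * The thread's task_struct is allocated on @node and, after this
	 * patch, the thread is also affined by default to the housekeeping
	 * CPUs of @node, with hotplug handled by kthreads_online_cpu().
	 */
	static struct task_struct *demo_spawn_on_node(int (*fn)(void *),
						      void *data, int node)
	{
		struct task_struct *t;

		t = kthread_create_on_node(fn, data, node, "demo_node/%d", node);
		if (!IS_ERR(t))
			wake_up_process(t);
		return t;
	}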

From patchwork Thu Sep 26 22:49:04 2024
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13813715
From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Andrew Morton, Kees Cook, Peter Zijlstra,
    Thomas Gleixner, Michal Hocko, Vlastimil Babka, linux-mm@kvack.org,
    "Paul E. McKenney", Neeraj Upadhyay, Joel Fernandes, Boqun Feng,
    Uladzislau Rezki, Zqiang, rcu@vger.kernel.org
Subject: [PATCH 16/20] kthread: Implement preferred affinity
Date: Fri, 27 Sep 2024 00:49:04 +0200
Message-ID: <20240926224910.11106-17-frederic@kernel.org>
In-Reply-To: <20240926224910.11106-1-frederic@kernel.org>
References: <20240926224910.11106-1-frederic@kernel.org>

Affining kthreads follows one of four existing patterns:

1) Per-CPU kthreads must stay affine to a single CPU and never execute
   relevant code on any other CPU. This is currently handled by smpboot
   code which takes care of CPU-hotplug operations.

2) Kthreads that _have_ to be affine to a specific set of CPUs and
   can't run anywhere else. The affinity is set through
   kthread_bind_mask() and the subsystem takes care by itself to handle
   CPU-hotplug operations.

3) Kthreads that prefer to be affine to a specific NUMA node. That
   preferred affinity is applied by default when an actual node ID is
   passed on kthread creation, provided the kthread is not per-CPU and
   no call to kthread_bind_mask() has been issued before the first
   wake-up.

4) Similar to the previous point but kthreads have a preferred affinity
   different than a node. It is set manually like any other task and
   CPU-hotplug is supposed to be handled by the relevant subsystem so
   that the task is properly reaffined whenever a given CPU from the
   preferred affinity comes up. Also care must be taken so that the
   preferred affinity doesn't cross housekeeping cpumask boundaries.

Provide a function to handle the last usecase, mostly reusing the
current node default affinity infrastructure. kthread_affine_preferred()
is introduced, to be used just like kthread_bind_mask(), right after
kthread creation and before the first wake up.

The kthread is then affined right away to the cpumask passed through
the API if it has online housekeeping CPUs. Otherwise it will be
affined to all online housekeeping CPUs as a last resort.

As with node affinity, it is aware of CPU hotplug events such that:

* When a housekeeping CPU goes up that is part of the preferred
  affinity of a given kthread, the related task is re-affined to that
  preferred affinity if it was previously running on the default last
  resort online housekeeping set.

* When a housekeeping CPU goes down while it was part of the preferred
  affinity of a kthread, the running task is migrated (or the sleeping
  task is woken up) automatically by the scheduler to other
  housekeepers within the preferred affinity or, as a last resort, to
  all housekeepers from other nodes.

Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
 include/linux/kthread.h |  1 +
 kernel/kthread.c        | 68 ++++++++++++++++++++++++++++++++++++-----
 2 files changed, 62 insertions(+), 7 deletions(-)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index b11f53c1ba2e..30209bdf83a2 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -85,6 +85,7 @@ kthread_run_on_cpu(int (*threadfn)(void *data), void *data,
 void free_kthread_struct(struct task_struct *k);
 void kthread_bind(struct task_struct *k, unsigned int cpu);
 void kthread_bind_mask(struct task_struct *k, const struct cpumask *mask);
+int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask);
 int kthread_stop(struct task_struct *k);
 int kthread_stop_put(struct task_struct *k);
 bool kthread_should_stop(void);
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 736276d313c2..91037533afda 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -70,6 +70,7 @@ struct kthread {
 	char *full_name;
 	struct task_struct *task;
 	struct list_head hotplug_node;
+	struct cpumask *preferred_affinity;
 };
 
 enum KTHREAD_BITS {
@@ -327,6 +328,11 @@ void __noreturn kthread_exit(long result)
 		mutex_lock(&kthreads_hotplug_lock);
 		list_del(&kthread->hotplug_node);
 		mutex_unlock(&kthreads_hotplug_lock);
+
+		if (kthread->preferred_affinity) {
+			kfree(kthread->preferred_affinity);
+			kthread->preferred_affinity = NULL;
+		}
 	}
 	do_exit(0);
 }
@@ -355,9 +361,17 @@ EXPORT_SYMBOL(kthread_complete_and_exit);
 
 static void kthread_fetch_affinity(struct kthread *kthread, struct cpumask *cpumask)
 {
-	cpumask_and(cpumask, cpumask_of_node(kthread->node),
-		    housekeeping_cpumask(HK_TYPE_KTHREAD));
+	const struct cpumask *pref;
 
+	if (kthread->preferred_affinity) {
+		pref = kthread->preferred_affinity;
+	} else {
+		if (WARN_ON_ONCE(kthread->node == NUMA_NO_NODE))
+			return;
+		pref = cpumask_of_node(kthread->node);
+	}
+
+	cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_KTHREAD));
 	if (cpumask_empty(cpumask))
 		cpumask_copy(cpumask, housekeeping_cpumask(HK_TYPE_KTHREAD));
 }
@@ -440,7 +454,7 @@ static int kthread(void *_create)
 
 	self->started = 1;
 
-	if (!(current->flags & PF_NO_SETAFFINITY))
+	if (!(current->flags & PF_NO_SETAFFINITY) && !self->preferred_affinity)
 		kthread_affine_node();
 
 	ret = -EINTR;
@@ -837,12 +851,53 @@ int kthreadd(void *unused)
 	return 0;
 }
 
+int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
+{
+	struct kthread *kthread = to_kthread(p);
+	cpumask_var_t affinity;
+	unsigned long flags;
+	int ret;
+
+	if (!wait_task_inactive(p, TASK_UNINTERRUPTIBLE) || kthread->started) {
+		WARN_ON(1);
+		return -EINVAL;
+	}
+
+	WARN_ON_ONCE(kthread->preferred_affinity);
+
+	if (!zalloc_cpumask_var(&affinity, GFP_KERNEL))
+		return -ENOMEM;
+
+	kthread->preferred_affinity = kzalloc(sizeof(struct cpumask), GFP_KERNEL);
+	if (!kthread->preferred_affinity) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	mutex_lock(&kthreads_hotplug_lock);
+	cpumask_copy(kthread->preferred_affinity, mask);
+	WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
+	list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
+	kthread_fetch_affinity(kthread, affinity);
+
+	/* It's safe because the task is inactive. */
+	raw_spin_lock_irqsave(&p->pi_lock, flags);
+	do_set_cpus_allowed(p, affinity);
+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+
+	mutex_unlock(&kthreads_hotplug_lock);
+out:
+	free_cpumask_var(affinity);
+
+	return 0;
+}
+
 /*
  * Re-affine kthreads according to their preferences
  * and the newly online CPU. The CPU down part is handled
  * by select_fallback_rq() which default re-affines to
- * housekeepers in case the preferred affinity doesn't
- * apply anymore.
+ * housekeepers from other nodes in case the preferred
+ * affinity doesn't apply anymore.
  */
 static int kthreads_online_cpu(unsigned int cpu)
 {
@@ -862,8 +917,7 @@ static int kthreads_online_cpu(unsigned int cpu)
 
 	list_for_each_entry(k, &kthreads_hotplug, hotplug_node) {
 		if (WARN_ON_ONCE((k->task->flags & PF_NO_SETAFFINITY) ||
-				 kthread_is_per_cpu(k->task) ||
-				 k->node == NUMA_NO_NODE)) {
+				 kthread_is_per_cpu(k->task))) {
 			ret = -EINVAL;
 			continue;
 		}
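
Usage mirrors kthread_bind_mask(): call it between creation and the
first wake-up. A hedged caller-side sketch (the demo_* names are
illustrative):

	#include <linux/cpumask.h>
	#include <linux/err.h>
	#include <linux/kthread.h>

	static struct task_struct *demo_spawn_preferred(int (*fn)(void *),
							void *data,
							const struct cpumask *pref)
	{
		struct task_struct *t;

		t = kthread_create(fn, data, "demo_pref");
		if (IS_ERR(t))
			return t;

		/*
		 * Record the preferred affinity before the first wake-up. The
		 * kthread core then applies pref & housekeeping, falls back
		 * to all housekeeping CPUs if that intersection is empty, and
		 * re-applies the preference on CPU hotplug events.
		 */
		if (kthread_affine_preferred(t, pref)) {
			kthread_stop(t);
			return ERR_PTR(-EINVAL);
		}

		wake_up_process(t);
		return t;
	}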

From patchwork Thu Sep 26 22:49:05 2024
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13813716
From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, "Paul E. McKenney", Uladzislau Rezki,
    Neeraj Upadhyay, Joel Fernandes, Boqun Feng, Zqiang,
    rcu@vger.kernel.org, Andrew Morton, Peter Zijlstra, Thomas Gleixner,
    Michal Hocko, Vlastimil Babka
Subject: [PATCH 17/20] rcu: Use kthread preferred affinity for RCU boost
Date: Fri, 27 Sep 2024 00:49:05 +0200
Message-ID: <20240926224910.11106-18-frederic@kernel.org>
In-Reply-To: <20240926224910.11106-1-frederic@kernel.org>
References: <20240926224910.11106-1-frederic@kernel.org>

Now that kthreads have an infrastructure to handle preferred affinity
against CPU hotplug and housekeeping cpumask, convert RCU boost to use
it instead of handling all the constraints by itself.

Acked-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
 kernel/rcu/tree.c        | 27 +++++++++++++++++++--------
 kernel/rcu/tree_plugin.h | 11 ++---------
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index a60616e69b66..c1e9f0818d51 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -149,7 +149,6 @@ static int rcu_scheduler_fully_active __read_mostly;
 static void rcu_report_qs_rnp(unsigned long mask, struct rcu_node *rnp,
 			      unsigned long gps, unsigned long flags);
-static struct task_struct *rcu_boost_task(struct rcu_node *rnp);
 static void invoke_rcu_core(void);
 static void rcu_report_exp_rdp(struct rcu_data *rdp);
 static void sync_sched_exp_online_cleanup(int cpu);
@@ -5007,6 +5006,22 @@ int rcutree_prepare_cpu(unsigned int cpu)
 	return 0;
 }
 
+static void rcu_thread_affine_rnp(struct task_struct *t, struct rcu_node *rnp)
+{
+	cpumask_var_t affinity;
+	int cpu;
+
+	if (!zalloc_cpumask_var(&affinity, GFP_KERNEL))
+		return;
+
+	for_each_leaf_node_possible_cpu(rnp, cpu)
+		cpumask_set_cpu(cpu, affinity);
+
+	kthread_affine_preferred(t, affinity);
+
+	free_cpumask_var(affinity);
+}
+
 /*
  * Update kthreads affinity during CPU-hotplug changes.
  *
@@ -5026,19 +5041,18 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoingcpu)
 	unsigned long mask;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
-	struct task_struct *task_boost, *task_exp;
+	struct task_struct *task_exp;
 
 	rdp = per_cpu_ptr(&rcu_data, cpu);
 	rnp = rdp->mynode;
 
-	task_boost = rcu_boost_task(rnp);
 	task_exp = rcu_exp_par_gp_task(rnp);
 
 	/*
-	 * If CPU is the boot one, those tasks are created later from early
+	 * If CPU is the boot one, this task is created later from early
 	 * initcall since kthreadd must be created first.
	 */
-	if (!task_boost && !task_exp)
+	if (!task_exp)
 		return;
 
 	if (!zalloc_cpumask_var(&cm, GFP_KERNEL))
@@ -5060,9 +5074,6 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoingcpu)
 	if (task_exp)
 		set_cpus_allowed_ptr(task_exp, cm);
 
-	if (task_boost)
-		set_cpus_allowed_ptr(task_boost, cm);
-
 	mutex_unlock(&rnp->kthread_mutex);
 
 	free_cpumask_var(cm);
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 1c7cbd145d5e..223f3a02351e 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1217,16 +1217,13 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp)
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	rnp->boost_kthread_task = t;
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+
 	sp.sched_priority = kthread_prio;
 	sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
+	rcu_thread_affine_rnp(t, rnp);
 	wake_up_process(t); /* get to TASK_INTERRUPTIBLE quickly. */
 }
 
-static struct task_struct *rcu_boost_task(struct rcu_node *rnp)
-{
-	return READ_ONCE(rnp->boost_kthread_task);
-}
-
 #else /* #ifdef CONFIG_RCU_BOOST */
 
 static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
@@ -1243,10 +1240,6 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp)
 {
 }
 
-static struct task_struct *rcu_boost_task(struct rcu_node *rnp)
-{
-	return NULL;
-}
 #endif /* #else #ifdef CONFIG_RCU_BOOST */
 
 /*
McKenney" , Uladzislau Rezki , Neeraj Upadhyay , Joel Fernandes , Boqun Feng , Zqiang , rcu@vger.kernel.org, Andrew Morton , Peter Zijlstra , Thomas Gleixner , Michal Hocko , Vlastimil Babka Subject: [PATCH 18/20] kthread: Unify kthread_create_on_cpu() and kthread_create_worker_on_cpu() automatic format Date: Fri, 27 Sep 2024 00:49:06 +0200 Message-ID: <20240926224910.11106-19-frederic@kernel.org> X-Mailer: git-send-email 2.46.0 In-Reply-To: <20240926224910.11106-1-frederic@kernel.org> References: <20240926224910.11106-1-frederic@kernel.org> Precedence: bulk X-Mailing-List: rcu@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 kthread_create_on_cpu() uses the CPU argument as an implicit and unique printf argument to add to the format whereas kthread_create_worker_on_cpu() still relies on explicitly passing the printf arguments. This difference in behaviour is error prone and doesn't help standardizing per-CPU kthread names. Unify the behaviours and convert kthread_create_worker_on_cpu() to use the printf behaviour of kthread_create_on_cpu(). Signed-off-by: Frederic Weisbecker --- fs/erofs/zdata.c | 2 +- include/linux/kthread.h | 21 +++++++++++---- kernel/kthread.c | 59 ++++++++++++++++++++++++----------------- 3 files changed, 52 insertions(+), 30 deletions(-) diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c index 8936790618c6..050aaa016ec8 100644 --- a/fs/erofs/zdata.c +++ b/fs/erofs/zdata.c @@ -318,7 +318,7 @@ static void erofs_destroy_percpu_workers(void) static struct kthread_worker *erofs_init_percpu_worker(int cpu) { struct kthread_worker *worker = - kthread_create_worker_on_cpu(cpu, 0, "erofs_worker/%u", cpu); + kthread_create_worker_on_cpu(cpu, 0, "erofs_worker/%u"); if (IS_ERR(worker)) return worker; diff --git a/include/linux/kthread.h b/include/linux/kthread.h index 30209bdf83a2..0c66e7c1092a 100644 --- a/include/linux/kthread.h +++ b/include/linux/kthread.h @@ -187,13 +187,24 @@ extern void __kthread_init_worker(struct kthread_worker *worker, int kthread_worker_fn(void *worker_ptr); -__printf(2, 3) +__printf(3, 4) +struct kthread_worker *kthread_create_worker_on_node(unsigned int flags, + int node, + const char namefmt[], ...); + +#define kthread_create_worker(flags, namefmt, ...) 
\ +({ \ + struct kthread_worker *__kw \ + = kthread_create_worker_on_node(flags, NUMA_NO_NODE, \ + namefmt, ## __VA_ARGS__); \ + if (!IS_ERR(__kw)) \ + wake_up_process(__kw->task); \ + __kw; \ +}) + struct kthread_worker * -kthread_create_worker(unsigned int flags, const char namefmt[], ...); - -__printf(3, 4) struct kthread_worker * kthread_create_worker_on_cpu(int cpu, unsigned int flags, - const char namefmt[], ...); + const char namefmt[]); bool kthread_queue_work(struct kthread_worker *worker, struct kthread_work *work); diff --git a/kernel/kthread.c b/kernel/kthread.c index 91037533afda..7eb93c248c59 100644 --- a/kernel/kthread.c +++ b/kernel/kthread.c @@ -1028,12 +1028,11 @@ int kthread_worker_fn(void *worker_ptr) EXPORT_SYMBOL_GPL(kthread_worker_fn); static __printf(3, 0) struct kthread_worker * -__kthread_create_worker(int cpu, unsigned int flags, - const char namefmt[], va_list args) +__kthread_create_worker_on_node(unsigned int flags, int node, + const char namefmt[], va_list args) { struct kthread_worker *worker; struct task_struct *task; - int node = NUMA_NO_NODE; worker = kzalloc(sizeof(*worker), GFP_KERNEL); if (!worker) @@ -1041,20 +1040,14 @@ __kthread_create_worker(int cpu, unsigned int flags, kthread_init_worker(worker); - if (cpu >= 0) - node = cpu_to_node(cpu); - task = __kthread_create_on_node(kthread_worker_fn, worker, - node, namefmt, args); + node, namefmt, args); if (IS_ERR(task)) goto fail_task; - if (cpu >= 0) - kthread_bind(task, cpu); - worker->flags = flags; worker->task = task; - wake_up_process(task); + return worker; fail_task: @@ -1065,6 +1058,7 @@ __kthread_create_worker(int cpu, unsigned int flags, /** * kthread_create_worker - create a kthread worker * @flags: flags modifying the default behavior of the worker + * @node: task structure for the thread is allocated on this node * @namefmt: printf-style name for the kthread worker (task). * * Returns a pointer to the allocated worker on success, ERR_PTR(-ENOMEM) @@ -1072,25 +1066,49 @@ __kthread_create_worker(int cpu, unsigned int flags, * when the caller was killed by a fatal signal. */ struct kthread_worker * -kthread_create_worker(unsigned int flags, const char namefmt[], ...) +kthread_create_worker_on_node(unsigned int flags, int node, const char namefmt[], ...) { struct kthread_worker *worker; va_list args; va_start(args, namefmt); - worker = __kthread_create_worker(-1, flags, namefmt, args); + worker = __kthread_create_worker_on_node(flags, node, namefmt, args); va_end(args); + if (worker) + wake_up_process(worker->task); + + return worker; +} +EXPORT_SYMBOL(kthread_create_worker_on_node); + +static __printf(3, 4) struct kthread_worker * +__kthread_create_worker_on_cpu(int cpu, unsigned int flags, + const char namefmt[], ...) +{ + struct kthread_worker *worker; + va_list args; + + va_start(args, namefmt); + worker = __kthread_create_worker_on_node(flags, cpu_to_node(cpu), + namefmt, args); + va_end(args); + + if (worker) { + kthread_bind(worker->task, cpu); + wake_up_process(worker->task); + } + return worker; } -EXPORT_SYMBOL(kthread_create_worker); /** * kthread_create_worker_on_cpu - create a kthread worker and bind it * to a given CPU and the associated NUMA node. * @cpu: CPU number * @flags: flags modifying the default behavior of the worker - * @namefmt: printf-style name for the kthread worker (task). + * @namefmt: printf-style name for the thread. Format is restricted + * to "name.*%u". Code fills in cpu number. 
* * Use a valid CPU number if you want to bind the kthread worker * to the given CPU and the associated NUMA node. @@ -1122,16 +1140,9 @@ EXPORT_SYMBOL(kthread_create_worker); */ struct kthread_worker * kthread_create_worker_on_cpu(int cpu, unsigned int flags, - const char namefmt[], ...) + const char namefmt[]) { - struct kthread_worker *worker; - va_list args; - - va_start(args, namefmt); - worker = __kthread_create_worker(cpu, flags, namefmt, args); - va_end(args); - - return worker; + return __kthread_create_worker_on_cpu(cpu, flags, namefmt, cpu); } EXPORT_SYMBOL(kthread_create_worker_on_cpu); From patchwork Thu Sep 26 22:49:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Frederic Weisbecker X-Patchwork-Id: 13813718 From: Frederic Weisbecker To: LKML Cc: Frederic Weisbecker , "Paul E. 
McKenney" , Uladzislau Rezki , Neeraj Upadhyay , Joel Fernandes , Boqun Feng , Zqiang , rcu@vger.kernel.org, Andrew Morton , Peter Zijlstra , Thomas Gleixner , Michal Hocko , Vlastimil Babka Subject: [PATCH 19/20] treewide: Introduce kthread_run_worker[_on_cpu]() Date: Fri, 27 Sep 2024 00:49:07 +0200 Message-ID: <20240926224910.11106-20-frederic@kernel.org> X-Mailer: git-send-email 2.46.0 In-Reply-To: <20240926224910.11106-1-frederic@kernel.org> References: <20240926224910.11106-1-frederic@kernel.org> Precedence: bulk X-Mailing-List: rcu@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 kthread_create() creates a kthread without running it yet. kthread_run() creates a kthread and runs it. On the other hand, kthread_create_worker() creates a kthread worker and runs it. This difference in behaviours is confusing. Also there is no way to create a kthread worker and affine it using kthread_bind_mask() or kthread_affine_preferred() before starting it. Consolidate the behaviours and introduce kthread_run_worker[_on_cpu]() that behaves just like kthread_run(). kthread_create_worker[_on_cpu]() will now only create a kthread worker without starting it. Signed-off-by: Frederic Weisbecker Acked-by: Paul E. McKenney --- arch/x86/kvm/i8254.c | 2 +- crypto/crypto_engine.c | 2 +- drivers/cpufreq/cppc_cpufreq.c | 2 +- drivers/gpu/drm/drm_vblank_work.c | 2 +- .../drm/i915/gem/selftests/i915_gem_context.c | 2 +- drivers/gpu/drm/i915/gt/selftest_execlists.c | 2 +- drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 2 +- drivers/gpu/drm/i915/gt/selftest_slpc.c | 2 +- drivers/gpu/drm/i915/selftests/i915_request.c | 8 ++-- drivers/gpu/drm/msm/disp/msm_disp_snapshot.c | 2 +- drivers/gpu/drm/msm/msm_atomic.c | 2 +- drivers/gpu/drm/msm/msm_gpu.c | 2 +- drivers/gpu/drm/msm/msm_kms.c | 2 +- .../platform/chips-media/wave5/wave5-vpu.c | 2 +- drivers/net/dsa/mv88e6xxx/chip.c | 2 +- drivers/net/ethernet/intel/ice/ice_dpll.c | 2 +- drivers/net/ethernet/intel/ice/ice_gnss.c | 2 +- drivers/net/ethernet/intel/ice/ice_ptp.c | 2 +- drivers/platform/chrome/cros_ec_spi.c | 2 +- drivers/ptp/ptp_clock.c | 2 +- drivers/spi/spi.c | 2 +- drivers/usb/typec/tcpm/tcpm.c | 2 +- drivers/vdpa/vdpa_sim/vdpa_sim.c | 2 +- drivers/watchdog/watchdog_dev.c | 2 +- fs/erofs/zdata.c | 2 +- include/linux/kthread.h | 48 ++++++++++++++++--- kernel/kthread.c | 31 +++--------- kernel/rcu/tree.c | 4 +- kernel/sched/ext.c | 2 +- kernel/workqueue.c | 2 +- net/dsa/tag_ksz.c | 2 +- net/dsa/tag_ocelot_8021q.c | 2 +- net/dsa/tag_sja1105.c | 2 +- 33 files changed, 83 insertions(+), 66 deletions(-) diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c index cd57a517d04a..d7ab8780ab9e 100644 --- a/arch/x86/kvm/i8254.c +++ b/arch/x86/kvm/i8254.c @@ -681,7 +681,7 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags) pid_nr = pid_vnr(pid); put_pid(pid); - pit->worker = kthread_create_worker(0, "kvm-pit/%d", pid_nr); + pit->worker = kthread_run_worker(0, "kvm-pit/%d", pid_nr); if (IS_ERR(pit->worker)) goto fail_kthread; diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c index e60a0eb628e8..c7c16da5e649 100644 --- a/crypto/crypto_engine.c +++ b/crypto/crypto_engine.c @@ -517,7 +517,7 @@ struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev, crypto_init_queue(&engine->queue, qlen); spin_lock_init(&engine->queue_lock); - engine->kworker = kthread_create_worker(0, "%s", engine->name); + engine->kworker = kthread_run_worker(0, "%s", engine->name); if (IS_ERR(engine->kworker)) { dev_err(dev, "failed to create 
crypto request pump task\n"); return NULL; diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c index 1a5ad184d28f..9b91cba133c9 100644 --- a/drivers/cpufreq/cppc_cpufreq.c +++ b/drivers/cpufreq/cppc_cpufreq.c @@ -241,7 +241,7 @@ static void __init cppc_freq_invariance_init(void) if (fie_disabled) return; - kworker_fie = kthread_create_worker(0, "cppc_fie"); + kworker_fie = kthread_run_worker(0, "cppc_fie"); if (IS_ERR(kworker_fie)) { pr_warn("%s: failed to create kworker_fie: %ld\n", __func__, PTR_ERR(kworker_fie)); diff --git a/drivers/gpu/drm/drm_vblank_work.c b/drivers/gpu/drm/drm_vblank_work.c index 1752ffb44e1d..9cc71120246f 100644 --- a/drivers/gpu/drm/drm_vblank_work.c +++ b/drivers/gpu/drm/drm_vblank_work.c @@ -277,7 +277,7 @@ int drm_vblank_worker_init(struct drm_vblank_crtc *vblank) INIT_LIST_HEAD(&vblank->pending_work); init_waitqueue_head(&vblank->work_wait_queue); - worker = kthread_create_worker(0, "card%d-crtc%d", + worker = kthread_run_worker(0, "card%d-crtc%d", vblank->dev->primary->index, vblank->pipe); if (IS_ERR(worker)) diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c index 89d4dc8b60c6..eb0158e43417 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c @@ -369,7 +369,7 @@ static int live_parallel_switch(void *arg) if (!data[n].ce[0]) continue; - worker = kthread_create_worker(0, "igt/parallel:%s", + worker = kthread_run_worker(0, "igt/parallel:%s", data[n].ce[0]->engine->name); if (IS_ERR(worker)) { err = PTR_ERR(worker); diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c index 222ca7c44951..81c31396eceb 100644 --- a/drivers/gpu/drm/i915/gt/selftest_execlists.c +++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c @@ -3574,7 +3574,7 @@ static int smoke_crescendo(struct preempt_smoke *smoke, unsigned int flags) arg[id].batch = NULL; arg[id].count = 0; - worker[id] = kthread_create_worker(0, "igt/smoke:%d", id); + worker[id] = kthread_run_worker(0, "igt/smoke:%d", id); if (IS_ERR(worker[id])) { err = PTR_ERR(worker[id]); break; diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c index 9ce8ff1c04fe..9d3aeb237295 100644 --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c @@ -1025,7 +1025,7 @@ static int __igt_reset_engines(struct intel_gt *gt, threads[tmp].engine = other; threads[tmp].flags = flags; - worker = kthread_create_worker(0, "igt/%s", + worker = kthread_run_worker(0, "igt/%s", other->name); if (IS_ERR(worker)) { err = PTR_ERR(worker); diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.c b/drivers/gpu/drm/i915/gt/selftest_slpc.c index 4ecc4ae74a54..e218b229681f 100644 --- a/drivers/gpu/drm/i915/gt/selftest_slpc.c +++ b/drivers/gpu/drm/i915/gt/selftest_slpc.c @@ -489,7 +489,7 @@ static int live_slpc_tile_interaction(void *arg) return -ENOMEM; for_each_gt(gt, i915, i) { - threads[i].worker = kthread_create_worker(0, "igt/slpc_parallel:%d", gt->info.id); + threads[i].worker = kthread_run_worker(0, "igt/slpc_parallel:%d", gt->info.id); if (IS_ERR(threads[i].worker)) { ret = PTR_ERR(threads[i].worker); diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c index acae30a04a94..88870844b5bd 100644 --- a/drivers/gpu/drm/i915/selftests/i915_request.c +++ b/drivers/gpu/drm/i915/selftests/i915_request.c 
@@ -492,7 +492,7 @@ static int mock_breadcrumbs_smoketest(void *arg) for (n = 0; n < ncpus; n++) { struct kthread_worker *worker; - worker = kthread_create_worker(0, "igt/%d", n); + worker = kthread_run_worker(0, "igt/%d", n); if (IS_ERR(worker)) { ret = PTR_ERR(worker); ncpus = n; @@ -1645,7 +1645,7 @@ static int live_parallel_engines(void *arg) for_each_uabi_engine(engine, i915) { struct kthread_worker *worker; - worker = kthread_create_worker(0, "igt/parallel:%s", + worker = kthread_run_worker(0, "igt/parallel:%s", engine->name); if (IS_ERR(worker)) { err = PTR_ERR(worker); @@ -1806,7 +1806,7 @@ static int live_breadcrumbs_smoketest(void *arg) unsigned int i = idx * ncpus + n; struct kthread_worker *worker; - worker = kthread_create_worker(0, "igt/%d.%d", idx, n); + worker = kthread_run_worker(0, "igt/%d.%d", idx, n); if (IS_ERR(worker)) { ret = PTR_ERR(worker); goto out_flush; @@ -3219,7 +3219,7 @@ static int perf_parallel_engines(void *arg) memset(&engines[idx].p, 0, sizeof(engines[idx].p)); - worker = kthread_create_worker(0, "igt:%s", + worker = kthread_run_worker(0, "igt:%s", engine->name); if (IS_ERR(worker)) { err = PTR_ERR(worker); diff --git a/drivers/gpu/drm/msm/disp/msm_disp_snapshot.c b/drivers/gpu/drm/msm/disp/msm_disp_snapshot.c index e75b97127c0d..2be00b11e557 100644 --- a/drivers/gpu/drm/msm/disp/msm_disp_snapshot.c +++ b/drivers/gpu/drm/msm/disp/msm_disp_snapshot.c @@ -109,7 +109,7 @@ int msm_disp_snapshot_init(struct drm_device *drm_dev) mutex_init(&kms->dump_mutex); - kms->dump_worker = kthread_create_worker(0, "%s", "disp_snapshot"); + kms->dump_worker = kthread_run_worker(0, "%s", "disp_snapshot"); if (IS_ERR(kms->dump_worker)) DRM_ERROR("failed to create disp state task\n"); diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/msm_atomic.c index 9c45d641b521..a7a2384044ff 100644 --- a/drivers/gpu/drm/msm/msm_atomic.c +++ b/drivers/gpu/drm/msm/msm_atomic.c @@ -115,7 +115,7 @@ int msm_atomic_init_pending_timer(struct msm_pending_timer *timer, timer->kms = kms; timer->crtc_idx = crtc_idx; - timer->worker = kthread_create_worker(0, "atomic-worker-%d", crtc_idx); + timer->worker = kthread_run_worker(0, "atomic-worker-%d", crtc_idx); if (IS_ERR(timer->worker)) { int ret = PTR_ERR(timer->worker); timer->worker = NULL; diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index a274b8466423..15f74e9dfc9e 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -859,7 +859,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, gpu->funcs = funcs; gpu->name = name; - gpu->worker = kthread_create_worker(0, "gpu-worker"); + gpu->worker = kthread_run_worker(0, "gpu-worker"); if (IS_ERR(gpu->worker)) { ret = PTR_ERR(gpu->worker); gpu->worker = NULL; diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index af6a6fcb1173..8db9f3afb8ac 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -269,7 +269,7 @@ int msm_drm_kms_init(struct device *dev, const struct drm_driver *drv) /* initialize event thread */ ev_thread = &priv->event_thread[drm_crtc_index(crtc)]; ev_thread->dev = ddev; - ev_thread->worker = kthread_create_worker(0, "crtc_event:%d", crtc->base.id); + ev_thread->worker = kthread_run_worker(0, "crtc_event:%d", crtc->base.id); if (IS_ERR(ev_thread->worker)) { ret = PTR_ERR(ev_thread->worker); DRM_DEV_ERROR(dev, "failed to create crtc_event kthread\n"); diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpu.c 
b/drivers/media/platform/chips-media/wave5/wave5-vpu.c index 7273254ecb03..c49f5ed461cf 100644 --- a/drivers/media/platform/chips-media/wave5/wave5-vpu.c +++ b/drivers/media/platform/chips-media/wave5/wave5-vpu.c @@ -231,7 +231,7 @@ static int wave5_vpu_probe(struct platform_device *pdev) dev_err(&pdev->dev, "failed to get irq resource, falling back to polling\n"); hrtimer_init(&dev->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED); dev->hrtimer.function = &wave5_vpu_timer_callback; - dev->worker = kthread_create_worker(0, "vpu_irq_thread"); + dev->worker = kthread_run_worker(0, "vpu_irq_thread"); if (IS_ERR(dev->worker)) { dev_err(&pdev->dev, "failed to create vpu irq worker\n"); ret = PTR_ERR(dev->worker); diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c index 5b4e2ce5470d..a5908e2ff2cf 100644 --- a/drivers/net/dsa/mv88e6xxx/chip.c +++ b/drivers/net/dsa/mv88e6xxx/chip.c @@ -393,7 +393,7 @@ static int mv88e6xxx_irq_poll_setup(struct mv88e6xxx_chip *chip) kthread_init_delayed_work(&chip->irq_poll_work, mv88e6xxx_irq_poll); - chip->kworker = kthread_create_worker(0, "%s", dev_name(chip->dev)); + chip->kworker = kthread_run_worker(0, "%s", dev_name(chip->dev)); if (IS_ERR(chip->kworker)) return PTR_ERR(chip->kworker); diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index cd95705d1e7f..1f11a24387f3 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -2050,7 +2050,7 @@ static int ice_dpll_init_worker(struct ice_pf *pf) struct kthread_worker *kworker; kthread_init_delayed_work(&d->work, ice_dpll_periodic_work); - kworker = kthread_create_worker(0, "ice-dplls-%s", + kworker = kthread_run_worker(0, "ice-dplls-%s", dev_name(ice_pf_to_dev(pf))); if (IS_ERR(kworker)) return PTR_ERR(kworker); diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.c b/drivers/net/ethernet/intel/ice/ice_gnss.c index c8ea1af51ad3..fcd1f808b696 100644 --- a/drivers/net/ethernet/intel/ice/ice_gnss.c +++ b/drivers/net/ethernet/intel/ice/ice_gnss.c @@ -182,7 +182,7 @@ static struct gnss_serial *ice_gnss_struct_init(struct ice_pf *pf) pf->gnss_serial = gnss; kthread_init_delayed_work(&gnss->read_work, ice_gnss_read); - kworker = kthread_create_worker(0, "ice-gnss-%s", dev_name(dev)); + kworker = kthread_run_worker(0, "ice-gnss-%s", dev_name(dev)); if (IS_ERR(kworker)) { kfree(gnss); return NULL; diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c index ef2e858f49bb..cd7da48bdf91 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -3185,7 +3185,7 @@ static int ice_ptp_init_work(struct ice_pf *pf, struct ice_ptp *ptp) /* Allocate a kworker for handling work required for the ports * connected to the PTP hardware clock. 
*/ - kworker = kthread_create_worker(0, "ice-ptp-%s", + kworker = kthread_run_worker(0, "ice-ptp-%s", dev_name(ice_pf_to_dev(pf))); if (IS_ERR(kworker)) return PTR_ERR(kworker); diff --git a/drivers/platform/chrome/cros_ec_spi.c b/drivers/platform/chrome/cros_ec_spi.c index 86a3d32a7763..08f566cc1480 100644 --- a/drivers/platform/chrome/cros_ec_spi.c +++ b/drivers/platform/chrome/cros_ec_spi.c @@ -715,7 +715,7 @@ static int cros_ec_spi_devm_high_pri_alloc(struct device *dev, int err; ec_spi->high_pri_worker = - kthread_create_worker(0, "cros_ec_spi_high_pri"); + kthread_run_worker(0, "cros_ec_spi_high_pri"); if (IS_ERR(ec_spi->high_pri_worker)) { err = PTR_ERR(ec_spi->high_pri_worker); diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c index c56cd0f63909..89a4420972e7 100644 --- a/drivers/ptp/ptp_clock.c +++ b/drivers/ptp/ptp_clock.c @@ -295,7 +295,7 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info, if (ptp->info->do_aux_work) { kthread_init_delayed_work(&ptp->aux_work, ptp_aux_kworker); - ptp->kworker = kthread_create_worker(0, "ptp%d", ptp->index); + ptp->kworker = kthread_run_worker(0, "ptp%d", ptp->index); if (IS_ERR(ptp->kworker)) { err = PTR_ERR(ptp->kworker); pr_err("failed to create ptp aux_worker %d\n", err); diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c index c1dad30a4528..f2f4b6ee25d4 100644 --- a/drivers/spi/spi.c +++ b/drivers/spi/spi.c @@ -2053,7 +2053,7 @@ static int spi_init_queue(struct spi_controller *ctlr) ctlr->busy = false; ctlr->queue_empty = true; - ctlr->kworker = kthread_create_worker(0, dev_name(&ctlr->dev)); + ctlr->kworker = kthread_run_worker(0, dev_name(&ctlr->dev)); if (IS_ERR(ctlr->kworker)) { dev_err(&ctlr->dev, "failed to create message pump kworker\n"); return PTR_ERR(ctlr->kworker); diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c index fc619478200f..66ae934ad196 100644 --- a/drivers/usb/typec/tcpm/tcpm.c +++ b/drivers/usb/typec/tcpm/tcpm.c @@ -7577,7 +7577,7 @@ struct tcpm_port *tcpm_register_port(struct device *dev, struct tcpc_dev *tcpc) mutex_init(&port->lock); mutex_init(&port->swap_lock); - port->wq = kthread_create_worker(0, dev_name(dev)); + port->wq = kthread_run_worker(0, dev_name(dev)); if (IS_ERR(port->wq)) return ERR_CAST(port->wq); sched_set_fifo(port->wq->task); diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index 8ffea8430f95..c204fc8e471a 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -229,7 +229,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, dev = &vdpasim->vdpa.dev; kthread_init_work(&vdpasim->work, vdpasim_work_fn); - vdpasim->worker = kthread_create_worker(0, "vDPA sim worker: %s", + vdpasim->worker = kthread_run_worker(0, "vDPA sim worker: %s", dev_attr->name); if (IS_ERR(vdpasim->worker)) goto err_iommu; diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c index 4190cb800cc4..19698d87dc57 100644 --- a/drivers/watchdog/watchdog_dev.c +++ b/drivers/watchdog/watchdog_dev.c @@ -1229,7 +1229,7 @@ int __init watchdog_dev_init(void) { int err; - watchdog_kworker = kthread_create_worker(0, "watchdogd"); + watchdog_kworker = kthread_run_worker(0, "watchdogd"); if (IS_ERR(watchdog_kworker)) { pr_err("Failed to create watchdog kworker\n"); return PTR_ERR(watchdog_kworker); diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c index 050aaa016ec8..bf6b4d8cb283 100644 --- a/fs/erofs/zdata.c +++ b/fs/erofs/zdata.c @@ -318,7 +318,7 @@ static void 
erofs_destroy_percpu_workers(void) static struct kthread_worker *erofs_init_percpu_worker(int cpu) { struct kthread_worker *worker = - kthread_create_worker_on_cpu(cpu, 0, "erofs_worker/%u"); + kthread_run_worker_on_cpu(cpu, 0, "erofs_worker/%u"); if (IS_ERR(worker)) return worker; diff --git a/include/linux/kthread.h b/include/linux/kthread.h index 0c66e7c1092a..8d27403888ce 100644 --- a/include/linux/kthread.h +++ b/include/linux/kthread.h @@ -193,19 +193,53 @@ struct kthread_worker *kthread_create_worker_on_node(unsigned int flags, const char namefmt[], ...); #define kthread_create_worker(flags, namefmt, ...) \ -({ \ - struct kthread_worker *__kw \ - = kthread_create_worker_on_node(flags, NUMA_NO_NODE, \ - namefmt, ## __VA_ARGS__); \ - if (!IS_ERR(__kw)) \ - wake_up_process(__kw->task); \ - __kw; \ + kthread_create_worker_on_node(flags, NUMA_NO_NODE, namefmt, ## __VA_ARGS__); + +/** + * kthread_run_worker - create and wake a kthread worker. + * @flags: flags modifying the default behavior of the worker + * @namefmt: printf-style name for the thread. + * + * Description: Convenient wrapper for kthread_create_worker() followed by + * wake_up_process(). Returns the kthread_worker or ERR_PTR(-ENOMEM). + */ +#define kthread_run_worker(flags, namefmt, ...) \ +({ \ + struct kthread_worker *__kw \ + = kthread_create_worker(flags, namefmt, ## __VA_ARGS__); \ + if (!IS_ERR(__kw)) \ + wake_up_process(__kw->task); \ + __kw; \ }) struct kthread_worker * kthread_create_worker_on_cpu(int cpu, unsigned int flags, const char namefmt[]); +/** + * kthread_run_worker_on_cpu - create and wake a cpu bound kthread worker. + * @cpu: CPU number + * @flags: flags modifying the default behavior of the worker + * @namefmt: printf-style name for the thread. Format is restricted + * to "name.*%u". Code fills in cpu number. + * + * Description: Convenient wrapper for kthread_create_worker_on_cpu() + * followed by wake_up_process(). Returns the kthread_worker or + * ERR_PTR(-ENOMEM). + */ +static inline struct kthread_worker * +kthread_run_worker_on_cpu(int cpu, unsigned int flags, + const char namefmt[]) +{ + struct kthread_worker *kw; + + kw = kthread_create_worker_on_cpu(cpu, flags, namefmt); + if (!IS_ERR(kw)) + wake_up_process(kw->task); + + return kw; +} + bool kthread_queue_work(struct kthread_worker *worker, struct kthread_work *work); diff --git a/kernel/kthread.c b/kernel/kthread.c index 7eb93c248c59..d9fee08e9a66 100644 --- a/kernel/kthread.c +++ b/kernel/kthread.c @@ -1075,33 +1075,10 @@ kthread_create_worker_on_node(unsigned int flags, int node, const char namefmt[] worker = __kthread_create_worker_on_node(flags, node, namefmt, args); va_end(args); - if (worker) - wake_up_process(worker->task); - return worker; } EXPORT_SYMBOL(kthread_create_worker_on_node); -static __printf(3, 4) struct kthread_worker * -__kthread_create_worker_on_cpu(int cpu, unsigned int flags, - const char namefmt[], ...) -{ - struct kthread_worker *worker; - va_list args; - - va_start(args, namefmt); - worker = __kthread_create_worker_on_node(flags, cpu_to_node(cpu), - namefmt, args); - va_end(args); - - if (worker) { - kthread_bind(worker->task, cpu); - wake_up_process(worker->task); - } - - return worker; -} - /** * kthread_create_worker_on_cpu - create a kthread worker and bind it * to a given CPU and the associated NUMA node. 
@@ -1142,7 +1119,13 @@ struct kthread_worker * kthread_create_worker_on_cpu(int cpu, unsigned int flags, const char namefmt[]) { - return __kthread_create_worker_on_cpu(cpu, flags, namefmt, cpu); + struct kthread_worker *worker; + + worker = kthread_create_worker_on_node(flags, cpu_to_node(cpu), namefmt, cpu); + if (worker) + kthread_bind(worker->task, cpu); + + return worker; } EXPORT_SYMBOL(kthread_create_worker_on_cpu); diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index c1e9f0818d51..a44228b0949a 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -4902,7 +4902,7 @@ static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) if (rnp->exp_kworker) return; - kworker = kthread_create_worker(0, name, rnp_index); + kworker = kthread_run_worker(0, name, rnp_index); if (IS_ERR_OR_NULL(kworker)) { pr_err("Failed to create par gp kworker on %d/%d\n", rnp->grplo, rnp->grphi); @@ -4929,7 +4929,7 @@ static void __init rcu_start_exp_gp_kworker(void) const char *name = "rcu_exp_gp_kthread_worker"; struct sched_param param = { .sched_priority = kthread_prio }; - rcu_exp_gp_kworker = kthread_create_worker(0, name); + rcu_exp_gp_kworker = kthread_run_worker(0, name); if (IS_ERR_OR_NULL(rcu_exp_gp_kworker)) { pr_err("Failed to create %s!\n", name); rcu_exp_gp_kworker = NULL; diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c index c09e3dc38c34..4835fa4d9326 100644 --- a/kernel/sched/ext.c +++ b/kernel/sched/ext.c @@ -4885,7 +4885,7 @@ static struct kthread_worker *scx_create_rt_helper(const char *name) { struct kthread_worker *helper; - helper = kthread_create_worker(0, name); + helper = kthread_run_worker(0, name); if (helper) sched_set_fifo(helper->task); return helper; diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 9949ffad8df0..f5c7447ae1de 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -7814,7 +7814,7 @@ static void __init wq_cpu_intensive_thresh_init(void) unsigned long thresh; unsigned long bogo; - pwq_release_worker = kthread_create_worker(0, "pool_workqueue_release"); + pwq_release_worker = kthread_run_worker(0, "pool_workqueue_release"); BUG_ON(IS_ERR(pwq_release_worker)); /* if the user set it to a specific value, keep it */ diff --git a/net/dsa/tag_ksz.c b/net/dsa/tag_ksz.c index 281bbac5539d..c33d4bf17929 100644 --- a/net/dsa/tag_ksz.c +++ b/net/dsa/tag_ksz.c @@ -66,7 +66,7 @@ static int ksz_connect(struct dsa_switch *ds) if (!priv) return -ENOMEM; - xmit_worker = kthread_create_worker(0, "dsa%d:%d_xmit", + xmit_worker = kthread_run_worker(0, "dsa%d:%d_xmit", ds->dst->index, ds->index); if (IS_ERR(xmit_worker)) { ret = PTR_ERR(xmit_worker); diff --git a/net/dsa/tag_ocelot_8021q.c b/net/dsa/tag_ocelot_8021q.c index 8e8b1bef6af6..6ce0bc166792 100644 --- a/net/dsa/tag_ocelot_8021q.c +++ b/net/dsa/tag_ocelot_8021q.c @@ -110,7 +110,7 @@ static int ocelot_connect(struct dsa_switch *ds) if (!priv) return -ENOMEM; - priv->xmit_worker = kthread_create_worker(0, "felix_xmit"); + priv->xmit_worker = kthread_run_worker(0, "felix_xmit"); if (IS_ERR(priv->xmit_worker)) { err = PTR_ERR(priv->xmit_worker); kfree(priv); diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c index 3e902af7eea6..02adec693811 100644 --- a/net/dsa/tag_sja1105.c +++ b/net/dsa/tag_sja1105.c @@ -707,7 +707,7 @@ static int sja1105_connect(struct dsa_switch *ds) spin_lock_init(&priv->meta_lock); - xmit_worker = kthread_create_worker(0, "dsa%d:%d_xmit", + xmit_worker = kthread_run_worker(0, "dsa%d:%d_xmit", ds->dst->index, ds->index); if (IS_ERR(xmit_worker)) { err = PTR_ERR(xmit_worker); 
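To recap the API split this patch creates: kthread_create_worker[_on_cpu]() now returns a worker whose task is still sleeping, so callers may bind or affine it before it ever runs, while kthread_run_worker[_on_cpu]() keeps the old create-and-run shortcut. A minimal usage sketch (function and worker names are illustrative; kthread_affine_preferred() is the helper introduced earlier in this series):

#include <linux/kthread.h>

static struct kthread_worker *example_spawn(const struct cpumask *pref)
{
	struct kthread_worker *kw;

	/* Create only: the worker task exists but has not run yet */
	kw = kthread_create_worker(0, "example_worker");
	if (IS_ERR(kw))
		return kw;

	/* The window that did not exist before: affine before first wakeup */
	kthread_affine_preferred(kw->task, pref);
	wake_up_process(kw->task);

	return kw;
}

Callers that don't need that window simply switch to kthread_run_worker(), which is what the treewide conversion above does.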
From patchwork Thu Sep 26 22:49:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Frederic Weisbecker X-Patchwork-Id: 13813719 From: Frederic Weisbecker To: LKML Cc: Frederic Weisbecker , "Paul E. McKenney" , Uladzislau Rezki , Neeraj Upadhyay , Joel Fernandes , Boqun Feng , Zqiang , rcu@vger.kernel.org, Andrew Morton , Peter Zijlstra , Thomas Gleixner , Michal Hocko , Vlastimil Babka Subject: [PATCH 20/20] rcu: Use kthread preferred affinity for RCU exp kworkers Date: Fri, 27 Sep 2024 00:49:08 +0200 Message-ID: <20240926224910.11106-21-frederic@kernel.org> X-Mailer: git-send-email 2.46.0 In-Reply-To: <20240926224910.11106-1-frederic@kernel.org> References: <20240926224910.11106-1-frederic@kernel.org> Precedence: bulk X-Mailing-List: rcu@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Now that kthreads have an infrastructure to handle preferred affinity against CPU hotplug and the housekeeping cpumask, convert the RCU exp workers to use it instead of handling all the constraints themselves. Acked-by: Paul E. 
McKenney Signed-off-by: Frederic Weisbecker --- kernel/rcu/tree.c | 105 +++++++++------------------------------------- 1 file changed, 19 insertions(+), 86 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index a44228b0949a..d377a162c56c 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -4890,6 +4890,22 @@ rcu_boot_init_percpu_data(int cpu) rcu_boot_init_nocb_percpu_data(rdp); } +static void rcu_thread_affine_rnp(struct task_struct *t, struct rcu_node *rnp) +{ + cpumask_var_t affinity; + int cpu; + + if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) + return; + + for_each_leaf_node_possible_cpu(rnp, cpu) + cpumask_set_cpu(cpu, affinity); + + kthread_affine_preferred(t, affinity); + + free_cpumask_var(affinity); +} + struct kthread_worker *rcu_exp_gp_kworker; static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) @@ -4902,7 +4918,7 @@ static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) if (rnp->exp_kworker) return; - kworker = kthread_run_worker(0, name, rnp_index); + kworker = kthread_create_worker(0, name, rnp_index); if (IS_ERR_OR_NULL(kworker)) { pr_err("Failed to create par gp kworker on %d/%d\n", rnp->grplo, rnp->grphi); @@ -4912,16 +4928,9 @@ static void rcu_spawn_exp_par_gp_kworker(struct rcu_node *rnp) if (IS_ENABLED(CONFIG_RCU_EXP_KTHREAD)) sched_setscheduler_nocheck(kworker->task, SCHED_FIFO, ¶m); -} -static struct task_struct *rcu_exp_par_gp_task(struct rcu_node *rnp) -{ - struct kthread_worker *kworker = READ_ONCE(rnp->exp_kworker); - - if (!kworker) - return NULL; - - return kworker->task; + rcu_thread_affine_rnp(kworker->task, rnp); + wake_up_process(kworker->task); } static void __init rcu_start_exp_gp_kworker(void) @@ -5006,79 +5015,6 @@ int rcutree_prepare_cpu(unsigned int cpu) return 0; } -static void rcu_thread_affine_rnp(struct task_struct *t, struct rcu_node *rnp) -{ - cpumask_var_t affinity; - int cpu; - - if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) - return; - - for_each_leaf_node_possible_cpu(rnp, cpu) - cpumask_set_cpu(cpu, affinity); - - kthread_affine_preferred(t, affinity); - - free_cpumask_var(affinity); -} - -/* - * Update kthreads affinity during CPU-hotplug changes. - * - * Set the per-rcu_node kthread's affinity to cover all CPUs that are - * served by the rcu_node in question. The CPU hotplug lock is still - * held, so the value of rnp->qsmaskinit will be stable. - * - * We don't include outgoingcpu in the affinity set, use -1 if there is - * no outgoing CPU. If there are no CPUs left in the affinity set, - * this function allows the kthread to execute on any CPU. - * - * Any future concurrent calls are serialized via ->kthread_mutex. - */ -static void rcutree_affinity_setting(unsigned int cpu, int outgoingcpu) -{ - cpumask_var_t cm; - unsigned long mask; - struct rcu_data *rdp; - struct rcu_node *rnp; - struct task_struct *task_exp; - - rdp = per_cpu_ptr(&rcu_data, cpu); - rnp = rdp->mynode; - - task_exp = rcu_exp_par_gp_task(rnp); - - /* - * If CPU is the boot one, this task is created later from early - * initcall since kthreadd must be created first. 
- */ - if (!task_exp) - return; - - if (!zalloc_cpumask_var(&cm, GFP_KERNEL)) - return; - - mutex_lock(&rnp->kthread_mutex); - mask = rcu_rnp_online_cpus(rnp); - for_each_leaf_node_possible_cpu(rnp, cpu) - if ((mask & leaf_node_cpu_bit(rnp, cpu)) && - cpu != outgoingcpu) - cpumask_set_cpu(cpu, cm); - cpumask_and(cm, cm, housekeeping_cpumask(HK_TYPE_RCU)); - if (cpumask_empty(cm)) { - cpumask_copy(cm, housekeeping_cpumask(HK_TYPE_RCU)); - if (outgoingcpu >= 0) - cpumask_clear_cpu(outgoingcpu, cm); - } - - if (task_exp) - set_cpus_allowed_ptr(task_exp, cm); - - mutex_unlock(&rnp->kthread_mutex); - - free_cpumask_var(cm); -} - /* * Has the specified (known valid) CPU ever been fully online? */ @@ -5107,7 +5043,6 @@ int rcutree_online_cpu(unsigned int cpu) if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE) return 0; /* Too early in boot for scheduler work. */ sync_sched_exp_online_cleanup(cpu); - rcutree_affinity_setting(cpu, -1); // Stop-machine done, so allow nohz_full to disable tick. tick_dep_clear(TICK_DEP_BIT_RCU); @@ -5324,8 +5259,6 @@ int rcutree_offline_cpu(unsigned int cpu) rnp->ffmask &= ~rdp->grpmask; raw_spin_unlock_irqrestore_rcu_node(rnp, flags); - rcutree_affinity_setting(cpu, cpu); - // nohz_full CPUs need the tick for stop-machine to work quickly tick_dep_set(TICK_DEP_BIT_RCU); return 0;
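Taken together with the previous patch, the conversion pattern condenses to the sketch below (the wrapper name is illustrative; the body mirrors rcu_spawn_exp_par_gp_kworker() plus rcu_thread_affine_rnp() above): create the worker sleeping, declare its preferred cpumask once, and let the kthread core re-apply that mask across CPU hotplug and housekeeping changes, which is what allows the manual rcutree_affinity_setting() juggling to be deleted.

#include <linux/kthread.h>

static struct kthread_worker *
example_spawn_rnp_worker(struct rcu_node *rnp, const char *name, int rnp_index)
{
	struct kthread_worker *kworker;
	cpumask_var_t affinity;
	int cpu;

	/* Created sleeping: the preferred affinity applies from the start */
	kworker = kthread_create_worker(0, name, rnp_index);
	if (IS_ERR_OR_NULL(kworker))
		return kworker;

	if (zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
		/* Prefer the CPUs served by this rcu_node... */
		for_each_leaf_node_possible_cpu(rnp, cpu)
			cpumask_set_cpu(cpu, affinity);
		/*
		 * ...and let the kthread core re-apply the mask against
		 * hotplug and housekeeping constraints from now on.
		 */
		kthread_affine_preferred(kworker->task, affinity);
		free_cpumask_var(affinity);
	}

	wake_up_process(kworker->task);
	return kworker;
}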