From patchwork Thu Oct 29 22:18:07 2020
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 11867737
Message-Id: <20201029222650.648971542@linutronix.de>
Date: Thu, 29 Oct 2020 23:18:07 +0100
From: Thomas Gleixner
To: LKML
Cc: linux-arch@vger.kernel.org, Linus Torvalds, Peter Zijlstra,
    Paul McKenney, David Airlie, Daniel Vetter, Ard Biesheuvel,
    Herbert Xu, Christoph Hellwig, Sebastian Andrzej Siewior,
    Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
    Andrew Morton, linux-mm@kvack.org, x86@kernel.org, Vineet Gupta,
    linux-snps-arc@lists.infradead.org, Russell King, Arnd Bergmann,
    linux-arm-kernel@lists.infradead.org, Guo Ren,
    linux-csky@vger.kernel.org, Michal Simek, Thomas Bogendoerfer,
    linux-mips@vger.kernel.org, Nick Hu, Greentime Hu, Vincent Chen,
    Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
    linuxppc-dev@lists.ozlabs.org, "David S. Miller",
    sparclinux@vger.kernel.org, Chris Zankel, Max Filippov,
    linux-xtensa@linux-xtensa.org
Subject: [patch V2 01/18] sched: Make migrate_disable/enable() independent of RT
References: <20201029221806.189523375@linutronix.de>
MIME-Version: 1.0
Content-transfer-encoding: 8-bit

Now that the scheduler can deal with migrate disable properly, there is no
compelling reason to make it available only for RT. There are quite a few
code paths which needlessly disable preemption in order to prevent
migration, and some constructs like kmap_atomic() enforce it implicitly.
Making it available independent of RT allows a preemptible variant of
kmap_atomic() to be provided and makes the code more consistent in general.

FIXME: Rework the comment in preempt.h

Signed-off-by: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
---
 include/linux/preempt.h |   38 +++-----------------------------------
 include/linux/sched.h   |    2 +-
 kernel/sched/core.c     |   12 ++----------
 kernel/sched/sched.h    |    2 +-
 lib/smp_processor_id.c  |    2 +-
 5 files changed, 8 insertions(+), 48 deletions(-)
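For illustration only, not part of the patch: once migrate_disable() exists
on all SMP configurations, a code path which merely needs to stay on its
current CPU no longer has to disable preemption for it. A minimal sketch;
stay_on_cpu() and do_slow_work() are made-up names, while
migrate_disable()/migrate_enable() and smp_processor_id() are the
interfaces this series touches:

static void stay_on_cpu(void)
{
	int cpu;

	migrate_disable();		/* task cannot move to another CPU */
	cpu = smp_processor_id();	/* stable, see lib/smp_processor_id.c */
	do_slow_work(cpu);		/* preemption and sleeping still work */
	migrate_enable();		/* migration is allowed again */
}

Unlike preempt_disable(), this provides CPU stability only: other tasks can
still run on the same CPU in between, so it is no substitute for exclusion
around shared per-CPU state.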
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -322,7 +322,7 @@ static inline void preempt_notifier_init
 #endif
 
-#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT)
+#ifdef CONFIG_SMP
 
 /*
  * Migrate-Disable and why it is undesired.
@@ -382,43 +382,11 @@ static inline void preempt_notifier_init
 extern void migrate_disable(void);
 extern void migrate_enable(void);
 
-#elif defined(CONFIG_PREEMPT_RT)
+#else
 
 static inline void migrate_disable(void) { }
 static inline void migrate_enable(void) { }
 
-#else /* !CONFIG_PREEMPT_RT */
-
-/**
- * migrate_disable - Prevent migration of the current task
- *
- * Maps to preempt_disable() which also disables preemption. Use
- * migrate_disable() to annotate that the intent is to prevent migration,
- * but not necessarily preemption.
- *
- * Can be invoked nested like preempt_disable() and needs the corresponding
- * number of migrate_enable() invocations.
- */
-static __always_inline void migrate_disable(void)
-{
-	preempt_disable();
-}
-
-/**
- * migrate_enable - Allow migration of the current task
- *
- * Counterpart to migrate_disable().
- *
- * As migrate_disable() can be invoked nested, only the outermost invocation
- * reenables migration.
- *
- * Currently mapped to preempt_enable().
- */
-static __always_inline void migrate_enable(void)
-{
-	preempt_enable();
-}
-
-#endif /* CONFIG_SMP && CONFIG_PREEMPT_RT */
+#endif /* CONFIG_SMP */
 
 #endif /* __LINUX_PREEMPT_H */
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -715,7 +715,7 @@ struct task_struct {
 	const cpumask_t			*cpus_ptr;
 	cpumask_t			cpus_mask;
 	void				*migration_pending;
-#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT)
+#ifdef CONFIG_SMP
 	unsigned short			migration_disabled;
 #endif
 	unsigned short			migration_flags;
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1696,8 +1696,6 @@ void check_preempt_curr(struct rq *rq, s
 
 #ifdef CONFIG_SMP
 
-#ifdef CONFIG_PREEMPT_RT
-
 static void
 __do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32 flags);
 
@@ -1772,8 +1770,6 @@ static inline bool rq_has_pinned_tasks(s
 	return rq->nr_pinned;
 }
 
-#endif
-
 /*
  * Per-CPU kthreads are allowed to run on !active && online CPUs, see
  * __set_cpus_allowed_ptr() and select_fallback_rq().
@@ -2841,7 +2837,7 @@ void sched_set_stop_task(int cpu, struct
 	}
 }
 
-#else
+#else /* CONFIG_SMP */
 
 static inline int __set_cpus_allowed_ptr(struct task_struct *p,
 					 const struct cpumask *new_mask,
@@ -2850,10 +2846,6 @@ static inline int __set_cpus_allowed_ptr
 	return set_cpus_allowed_ptr(p, new_mask);
 }
 
-#endif /* CONFIG_SMP */
-
-#if !defined(CONFIG_SMP) || !defined(CONFIG_PREEMPT_RT)
-
 static inline void migrate_disable_switch(struct rq *rq, struct task_struct *p) { }
 
 static inline bool rq_has_pinned_tasks(struct rq *rq)
@@ -2861,7 +2853,7 @@ static inline bool rq_has_pinned_tasks(s
 	return false;
 }
 
-#endif
+#endif /* !CONFIG_SMP */
 
 static void ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1056,7 +1056,7 @@ struct rq {
 	struct cpuidle_state	*idle_state;
 #endif
 
-#if defined(CONFIG_PREEMPT_RT) && defined(CONFIG_SMP)
+#ifdef CONFIG_SMP
 	unsigned int		nr_pinned;
 #endif
 	unsigned int		push_busy;
--- a/lib/smp_processor_id.c
+++ b/lib/smp_processor_id.c
@@ -26,7 +26,7 @@ unsigned int check_preemption_disabled(c
 	if (current->nr_cpus_allowed == 1)
 		goto out;
 
-#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT)
+#ifdef CONFIG_SMP
 	if (current->migration_disabled)
 		goto out;
 #endif
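A second illustrative sketch: as the kernel-doc removed above notes,
migrate_disable() nests like preempt_disable(), and only the outermost
migrate_enable() permits migration again. The counter values refer to
task_struct::migration_disabled from the include/linux/sched.h hunk and
assume CONFIG_SMP=y:

	migrate_disable();	/* migration_disabled == 1: pinned */
	migrate_disable();	/* migration_disabled == 2: nested call */

	/* still preemptible here, but guaranteed to stay on this CPU */

	migrate_enable();	/* migration_disabled == 1: still pinned */
	migrate_enable();	/* migration_disabled == 0: may migrate */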