From patchwork Sun Jun 23 13:38:45 2013
From: "Srivatsa S. Bhat"
Subject: [PATCH 04/45] CPU hotplug: Add infrastructure to check lacking hotplug synchronization
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
 paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
 akpm@linux-foundation.org, namhyung@kernel.org, walken@google.com,
 vincent.guittot@linaro.org, laijs@cn.fujitsu.com
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
 xiaoguangrong@linux.vnet.ibm.com, sbw@mit.edu, fweisbec@gmail.com,
 zhong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
 srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
 linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 Rusty Russell, Alex Shi, KOSAKI Motohiro, Tejun Heo, Thomas Gleixner,
 Andrew Morton, Yasuaki Ishimatsu, "Rafael J. Wysocki"
Wysocki" Date: Sun, 23 Jun 2013 19:08:45 +0530 Message-ID: <20130623133841.19094.69631.stgit@srivatsabhat.in.ibm.com> In-Reply-To: <20130623133642.19094.16038.stgit@srivatsabhat.in.ibm.com> References: <20130623133642.19094.16038.stgit@srivatsabhat.in.ibm.com> User-Agent: StGIT/0.14.3 MIME-Version: 1.0 X-Content-Scanned: Fidelis XPS MAILER x-cbid: 13062313-6102-0000-0000-000003BE6A2B Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Spam-Status: No, score=-8.0 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_HI, RP_MATCHES_RCVD, UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mail.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Add a debugging infrastructure to warn if an atomic hotplug reader has not invoked get_online_cpus_atomic() before traversing/accessing the cpu_online_mask. Encapsulate these checks under a new debug config option DEBUG_HOTPLUG_CPU. This debugging infrastructure proves useful in the tree-wide conversion of atomic hotplug readers from preempt_disable() to the new APIs, and help us catch the places we missed, much before we actually get rid of stop_machine(). We can perhaps remove the debugging checks later on. Cc: Rusty Russell Cc: Alex Shi Cc: KOSAKI Motohiro Cc: Tejun Heo Cc: Thomas Gleixner Cc: Andrew Morton Cc: Yasuaki Ishimatsu Cc: "Rafael J. Wysocki" Signed-off-by: Srivatsa S. Bhat --- include/linux/cpumask.h | 12 ++++++++ kernel/cpu.c | 75 +++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 87 insertions(+) -- To unsubscribe from this list: send the line "unsubscribe linux-pm" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h index d08e4d2..9197ca4 100644 --- a/include/linux/cpumask.h +++ b/include/linux/cpumask.h @@ -101,6 +101,18 @@ extern const struct cpumask *const cpu_active_mask; #define cpu_active(cpu) ((cpu) == 0) #endif +#ifdef CONFIG_DEBUG_HOTPLUG_CPU +extern void check_hotplug_safe_cpumask(const struct cpumask *mask); +extern void check_hotplug_safe_cpu(unsigned int cpu, + const struct cpumask *mask); +#else +static inline void check_hotplug_safe_cpumask(const struct cpumask *mask) { } +static inline void check_hotplug_safe_cpu(unsigned int cpu, + const struct cpumask *mask) +{ +} +#endif + /* verify cpu argument to cpumask_* operators */ static inline unsigned int cpumask_check(unsigned int cpu) { diff --git a/kernel/cpu.c b/kernel/cpu.c index 860f51a..e90d9d7 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -63,6 +63,72 @@ static struct { .refcount = 0, }; +#ifdef CONFIG_DEBUG_HOTPLUG_CPU + +static DEFINE_PER_CPU(unsigned long, atomic_reader_refcnt); + +static int current_is_hotplug_safe(const struct cpumask *mask) +{ + + /* If we are not dealing with cpu_online_mask, don't complain. */ + if (mask != cpu_online_mask) + return 1; + + /* If this is the task doing hotplug, don't complain. */ + if (unlikely(current == cpu_hotplug.active_writer)) + return 1; + + /* If we are in early boot, don't complain. */ + if (system_state != SYSTEM_RUNNING) + return 1; + + /* + * Check if the current task is in atomic context and it has + * invoked get_online_cpus_atomic() to synchronize with + * CPU Hotplug. 
+	 */
+	if (preempt_count() || irqs_disabled())
+		return this_cpu_read(atomic_reader_refcnt);
+	else
+		return 1; /* No checks for non-atomic contexts for now */
+}
+
+static inline void warn_hotplug_unsafe(void)
+{
+	WARN_ONCE(1, "Must use get/put_online_cpus_atomic() to synchronize"
+		  " with CPU hotplug\n");
+}
+
+/*
+ * Check if the task (executing in atomic context) has the required protection
+ * against CPU hotplug, while accessing the specified cpumask.
+ */
+void check_hotplug_safe_cpumask(const struct cpumask *mask)
+{
+	if (!current_is_hotplug_safe(mask))
+		warn_hotplug_unsafe();
+}
+EXPORT_SYMBOL_GPL(check_hotplug_safe_cpumask);
+
+/*
+ * Similar to check_hotplug_safe_cpumask(), except that we don't complain
+ * if the task (executing in atomic context) is testing whether the CPU it
+ * is executing on is online or not.
+ *
+ * (A task executing with preemption disabled on a CPU, automatically prevents
+ * offlining that CPU, irrespective of the actual implementation of CPU
+ * offline. So we don't enforce holding of get_online_cpus_atomic() for that
+ * case).
+ */
+void check_hotplug_safe_cpu(unsigned int cpu, const struct cpumask *mask)
+{
+	if (!current_is_hotplug_safe(mask) && cpu != smp_processor_id())
+		warn_hotplug_unsafe();
+}
+EXPORT_SYMBOL_GPL(check_hotplug_safe_cpu);
+
+#endif
+
 void get_online_cpus(void)
 {
 	might_sleep();
@@ -189,13 +255,22 @@ unsigned int get_online_cpus_atomic(void)
 	 * from going offline.
 	 */
 	preempt_disable();
+
+#ifdef CONFIG_DEBUG_HOTPLUG_CPU
+	this_cpu_inc(atomic_reader_refcnt);
+#endif
 	return smp_processor_id();
 }
 EXPORT_SYMBOL_GPL(get_online_cpus_atomic);
 
 void put_online_cpus_atomic(void)
 {
+
+#ifdef CONFIG_DEBUG_HOTPLUG_CPU
+	this_cpu_dec(atomic_reader_refcnt);
+#endif
 	preempt_enable();
+
 }
 EXPORT_SYMBOL_GPL(put_online_cpus_atomic);
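
For context, here is a minimal, hypothetical sketch (not part of the patch) of
how an atomic hotplug reader is expected to use these APIs once converted away
from a bare preempt_disable(). The example_atomic_reader() function and its
explicit check_hotplug_safe_cpumask() call are illustrative assumptions only;
this patch just adds the helpers, presumably so that later patches in the
series can invoke them from code that walks cpu_online_mask.

#include <linux/kernel.h>
#include <linux/cpu.h>		/* get/put_online_cpus_atomic(), from earlier patches in this series */
#include <linux/cpumask.h>	/* check_hotplug_safe_cpumask(), added by this patch */

/* Hypothetical atomic reader, for illustration only. */
static void example_atomic_reader(void)
{
	unsigned int cpu, this_cpu;

	/*
	 * Pin CPUs against offline for this atomic section.
	 * get_online_cpus_atomic() disables preemption and, with
	 * CONFIG_DEBUG_HOTPLUG_CPU, increments the per-cpu
	 * atomic_reader_refcnt, so current_is_hotplug_safe() reports
	 * this section as properly synchronized.
	 */
	this_cpu = get_online_cpus_atomic();

	/*
	 * Explicit check, shown here for illustration: if we had skipped
	 * get_online_cpus_atomic() while in atomic context, this would
	 * fire the WARN_ONCE("Must use get/put_online_cpus_atomic() to
	 * synchronize with CPU hotplug").
	 */
	check_hotplug_safe_cpumask(cpu_online_mask);

	for_each_online_cpu(cpu)
		pr_info("CPU %u observed online from CPU %u\n", cpu, this_cpu);

	put_online_cpus_atomic();
}

Note also that check_hotplug_safe_cpu(cpu, cpu_online_mask) deliberately stays
silent when cpu == smp_processor_id(): running with preemption disabled already
keeps the local CPU from being taken offline, so a task merely testing whether
its own CPU is online needs no extra protection.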