From: "Srivatsa S. Bhat"
Subject: [PATCH v5 35/45] m32r: Use get/put_online_cpus_atomic() to prevent CPU offline
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
	paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
	akpm@linux-foundation.org, namhyung@kernel.org
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
	xiaoguangrong@linux.vnet.ibm.com, rjw@sisk.pl, sbw@mit.edu,
	fweisbec@gmail.com, linux@arm.linux.org.uk, nikunj@linux.vnet.ibm.com,
	srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Tue, 22 Jan 2013 13:12:40 +0530
Message-ID: <20130122074234.13822.22312.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130122073210.13822.50434.stgit@srivatsabhat.in.ibm.com>
References: <20130122073210.13822.50434.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on preempt_disable() or local_irq_disable() to prevent CPUs
from going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these functions are invoked from atomic context.
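For reference, the conversion pattern applied throughout this patch is
sketched below (a minimal illustration, not code from this patch;
do_something() is a hypothetical per-CPU operation):

	int cpu;

	/* Old: safe only as long as CPU offline uses stop_machine() */
	preempt_disable();
	for_each_online_cpu(cpu)
		do_something(cpu);
	preempt_enable();

	/* New: take an explicit read-side reference against CPU offline */
	get_online_cpus_atomic();
	for_each_online_cpu(cpu)
		do_something(cpu);
	put_online_cpus_atomic();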
Cc: Hirokazu Takata
Cc: linux-m32r@ml.linux-m32r.org
Cc: linux-m32r-ja@ml.linux-m32r.org
Signed-off-by: Srivatsa S. Bhat
---

 arch/m32r/kernel/smp.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/m32r/kernel/smp.c b/arch/m32r/kernel/smp.c
index ce7aea3..0dad4d7 100644
--- a/arch/m32r/kernel/smp.c
+++ b/arch/m32r/kernel/smp.c
@@ -151,7 +151,7 @@ void smp_flush_cache_all(void)
 	cpumask_t cpumask;
 	unsigned long *mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpumask, cpu_online_mask);
 	cpumask_clear_cpu(smp_processor_id(), &cpumask);
 	spin_lock(&flushcache_lock);
@@ -162,7 +162,7 @@ void smp_flush_cache_all(void)
 	while (flushcache_cpumask)
 		mb();
 	spin_unlock(&flushcache_lock);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void smp_flush_cache_all_interrupt(void)
@@ -250,7 +250,7 @@ void smp_flush_tlb_mm(struct mm_struct *mm)
 	unsigned long *mmc;
 	unsigned long flags;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpu_id = smp_processor_id();
 	mmc = &mm->context[cpu_id];
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
@@ -268,7 +268,7 @@ void smp_flush_tlb_mm(struct mm_struct *mm)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, NULL, FLUSH_ALL);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /*==========================================================================*
@@ -715,10 +715,12 @@ static void send_IPI_allbutself(int ipi_num, int try)
 {
 	cpumask_t cpumask;
 
+	get_online_cpus_atomic();
 	cpumask_copy(&cpumask, cpu_online_mask);
 	cpumask_clear_cpu(smp_processor_id(), &cpumask);
 
 	send_IPI_mask(&cpumask, ipi_num, try);
+	put_online_cpus_atomic();
 }
 
 /*==========================================================================*
@@ -750,6 +752,7 @@ static void send_IPI_mask(const struct cpumask *cpumask, int ipi_num, int try)
 	if (num_cpus <= 1)	/* NO MP */
 		return;
 
+	get_online_cpus_atomic();
 	cpumask_and(&tmp, cpumask, cpu_online_mask);
 	BUG_ON(!cpumask_equal(cpumask, &tmp));
 
@@ -760,6 +763,7 @@ static void send_IPI_mask(const struct cpumask *cpumask, int ipi_num, int try)
 	}
 
 	send_IPI_mask_phys(&physid_mask, ipi_num, try);
+	put_online_cpus_atomic();
 }
 
 /*==========================================================================*