From patchwork Tue Jan 22 07:44:13 2013
X-Patchwork-Submitter: "Srivatsa S. Bhat"
X-Patchwork-Id: 2016481
From: "Srivatsa S. Bhat"
Subject: [PATCH v5 40/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
	paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
	akpm@linux-foundation.org, namhyung@kernel.org
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
	xiaoguangrong@linux.vnet.ibm.com, rjw@sisk.pl, sbw@mit.edu,
	fweisbec@gmail.com, linux@arm.linux.org.uk, nikunj@linux.vnet.ibm.com,
	srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Tue, 22 Jan 2013 13:14:13 +0530
Message-ID: <20130122074409.13822.91528.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130122073210.13822.50434.stgit@srivatsabhat.in.ibm.com>
References: <20130122073210.13822.50434.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3
X-Mailing-List: linux-pm@vger.kernel.org

Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while they are referenced from atomic context.

Cc: Paul Mundt
Cc: linux-sh@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat
---

 arch/sh/kernel/smp.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 2062aa8..232fabe 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -357,7 +357,7 @@ static void flush_tlb_mm_ipi(void *mm)
  */
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
@@ -369,7 +369,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -390,7 +390,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd;
 
@@ -405,7 +405,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 				cpu_context(i, mm) = 0;
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -433,7 +433,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
 	    (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd;
@@ -448,7 +448,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 				cpu_context(i, vma->vm_mm) = 0;
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)
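
For reference, every hunk above follows the same conversion. The sketch
below is illustrative only and is not part of the patch: it assumes the
get/put_online_cpus_atomic() APIs introduced earlier in this series, and
the wrapper function and its name are invented for the example.

/*
 * Illustrative sketch (not from the patch): the conversion pattern.
 * Assumes get/put_online_cpus_atomic() from earlier in this series;
 * the section they bracket disables preemption and also pins the set
 * of online CPUs, so the IPIs sent by smp_call_function() cannot race
 * with a CPU going offline -- something bare preempt_disable() will no
 * longer guarantee once stop_machine() is removed from the offline path.
 */
#include <linux/cpu.h>
#include <linux/smp.h>

static void example_cross_cpu_call(smp_call_func_t func, void *info)
{
	get_online_cpus_atomic();	/* was: preempt_disable() */

	/* CPUs observed online here stay online until the put below. */
	smp_call_function(func, info, 1);

	put_online_cpus_atomic();	/* was: preempt_enable() */
}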