From patchwork Tue Jun 25 20:34:00 2013
X-Patchwork-Submitter: "Srivatsa S. Bhat"
X-Patchwork-Id: 2780641
From: "Srivatsa S. Bhat"
Subject: [PATCH v2 45/45] tile: Use get/put_online_cpus_atomic() to prevent CPU offline
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
    paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
    akpm@linux-foundation.org, namhyung@kernel.org, walken@google.com,
    vincent.guittot@linaro.org, laijs@cn.fujitsu.com
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
    xiaoguangrong@linux.vnet.ibm.com, sbw@mit.edu, fweisbec@gmail.com,
    zhong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
    srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
    linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Chris Metcalf, "Srivatsa S. Bhat"
Bhat" Date: Wed, 26 Jun 2013 02:04:00 +0530 Message-ID: <20130625203400.16593.86066.stgit@srivatsabhat.in.ibm.com> In-Reply-To: <20130625202452.16593.22810.stgit@srivatsabhat.in.ibm.com> References: <20130625202452.16593.22810.stgit@srivatsabhat.in.ibm.com> User-Agent: StGIT/0.14.3 MIME-Version: 1.0 X-TM-AS-MML: No X-Content-Scanned: Fidelis XPS MAILER x-cbid: 13062520-4790-0000-0000-000008F8DEF5 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Spam-Status: No, score=-8.2 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_HI, RP_MATCHES_RCVD, UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mail.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Once stop_machine() is gone from the CPU offline path, we won't be able to depend on disabling preemption to prevent CPUs from going offline from under us. Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going offline, while invoking from atomic context. Cc: Chris Metcalf Signed-off-by: Srivatsa S. Bhat --- arch/tile/kernel/module.c | 3 +++ arch/tile/kernel/tlb.c | 15 +++++++++++++++ arch/tile/mm/homecache.c | 3 +++ 3 files changed, 21 insertions(+) -- To unsubscribe from this list: send the line "unsubscribe linux-pm" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html diff --git a/arch/tile/kernel/module.c b/arch/tile/kernel/module.c index 4918d91..db7d858 100644 --- a/arch/tile/kernel/module.c +++ b/arch/tile/kernel/module.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -79,8 +80,10 @@ void module_free(struct module *mod, void *module_region) vfree(module_region); /* Globally flush the L1 icache. */ + get_online_cpus_atomic(); flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask, 0, 0, 0, NULL, NULL, 0); + put_online_cpus_atomic(); /* * FIXME: If module_region == mod->module_init, trim exception diff --git a/arch/tile/kernel/tlb.c b/arch/tile/kernel/tlb.c index 3fd54d5..a32b9dd 100644 --- a/arch/tile/kernel/tlb.c +++ b/arch/tile/kernel/tlb.c @@ -14,6 +14,7 @@ */ #include +#include #include #include #include @@ -35,6 +36,8 @@ void flush_tlb_mm(struct mm_struct *mm) { HV_Remote_ASID asids[NR_CPUS]; int i = 0, cpu; + + get_online_cpus_atomic(); for_each_cpu(cpu, mm_cpumask(mm)) { HV_Remote_ASID *asid = &asids[i++]; asid->y = cpu / smp_topology.width; @@ -43,6 +46,7 @@ void flush_tlb_mm(struct mm_struct *mm) } flush_remote(0, HV_FLUSH_EVICT_L1I, mm_cpumask(mm), 0, 0, 0, NULL, asids, i); + put_online_cpus_atomic(); } void flush_tlb_current_task(void) @@ -55,8 +59,11 @@ void flush_tlb_page_mm(struct vm_area_struct *vma, struct mm_struct *mm, { unsigned long size = vma_kernel_pagesize(vma); int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0; + + get_online_cpus_atomic(); flush_remote(0, cache, mm_cpumask(mm), va, size, size, mm_cpumask(mm), NULL, 0); + put_online_cpus_atomic(); } void flush_tlb_page(struct vm_area_struct *vma, unsigned long va) @@ -71,13 +78,18 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long size = vma_kernel_pagesize(vma); struct mm_struct *mm = vma->vm_mm; int cache = (vma->vm_flags & VM_EXEC) ? 
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm), start, end - start, size,
 		     mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_all(void)
 {
 	int i;
+
+	get_online_cpus_atomic();
 	for (i = 0; ; ++i) {
 		HV_VirtAddrRange r = hv_inquire_virtual(i);
 		if (r.size == 0)
@@ -89,10 +101,13 @@ void flush_tlb_all(void)
 			     r.start, r.size, HPAGE_SIZE, cpu_online_mask,
 			     NULL, 0);
 	}
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     start, end - start, PAGE_SIZE, cpu_online_mask, NULL, 0);
+	put_online_cpus_atomic();
 }
diff --git a/arch/tile/mm/homecache.c b/arch/tile/mm/homecache.c
index 1ae9119..7ff5bf0 100644
--- a/arch/tile/mm/homecache.c
+++ b/arch/tile/mm/homecache.c
@@ -397,9 +397,12 @@ void homecache_change_page_home(struct page *page, int order, int home)
 	BUG_ON(page_count(page) > 1);
 	BUG_ON(page_mapcount(page) != 0);
 	kva = (unsigned long) page_address(page);
+
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L2, &cpu_cacheable_map,
 		     kva, pages * PAGE_SIZE, PAGE_SIZE, cpu_online_mask,
 		     NULL, 0);
+	put_online_cpus_atomic();
 
 	for (i = 0; i < pages; ++i, kva += PAGE_SIZE) {
 		pte_t *ptep = virt_to_pte(NULL, kva);
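
For readers following along, below is a minimal sketch (not part of the
patch) of the usage pattern being applied above. It assumes the
get/put_online_cpus_atomic() primitives introduced earlier in this
series; frob_one_cpu() is a hypothetical stand-in for any cross-CPU
operation (an IPI, or a remote cache/TLB flush such as flush_remote())
issued from atomic context:

#include <linux/cpu.h>
#include <linux/cpumask.h>

/* Hypothetical per-CPU operation; stands in for e.g. a remote flush. */
static void frob_one_cpu(unsigned int cpu)
{
	/* ... issue an IPI or hypervisor flush targeting 'cpu' ... */
}

static void frob_all_online_cpus(void)
{
	unsigned int cpu;

	/*
	 * Blocks CPU offline without sleeping, so it is usable from
	 * atomic context (unlike get_online_cpus(), which can sleep).
	 */
	get_online_cpus_atomic();

	for_each_online_cpu(cpu)
		frob_one_cpu(cpu);

	/* Allow CPU offline to proceed again. */
	put_online_cpus_atomic();
}

The point of the pattern is that once stop_machine() is removed from
the offline path, merely disabling preemption no longer pins the set of
online CPUs, so each such section must take the reader side of the new
hotplug synchronization explicitly.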