From patchwork Sun Jun 23 13:48:11 2013
X-Patchwork-Submitter: "Srivatsa S. Bhat"
X-Patchwork-Id: 2767931
From: "Srivatsa S. Bhat"
Subject: [PATCH 45/45] tile: Use get/put_online_cpus_atomic() to prevent CPU offline
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
    paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
    akpm@linux-foundation.org, namhyung@kernel.org, walken@google.com,
    vincent.guittot@linaro.org, laijs@cn.fujitsu.com
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
    xiaoguangrong@linux.vnet.ibm.com, sbw@mit.edu, fweisbec@gmail.com,
    zhong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
    srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
    linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Chris Metcalf
Date: Sun, 23 Jun 2013 19:18:11 +0530
Message-ID: <20130623134807.19094.82081.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130623133642.19094.16038.stgit@srivatsabhat.in.ibm.com>
References: <20130623133642.19094.16038.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3
X-Mailing-List: linux-pm@vger.kernel.org
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking them from atomic context.

Cc: Chris Metcalf
Signed-off-by: Srivatsa S. Bhat
---

 arch/tile/kernel/module.c |    3 +++
 arch/tile/kernel/tlb.c    |   15 +++++++++++++++
 arch/tile/mm/homecache.c  |    3 +++
 3 files changed, 21 insertions(+)

diff --git a/arch/tile/kernel/module.c b/arch/tile/kernel/module.c
index 4918d91..db7d858 100644
--- a/arch/tile/kernel/module.c
+++ b/arch/tile/kernel/module.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include <linux/cpu.h>
 #include
 #include
 #include
@@ -79,8 +80,10 @@ void module_free(struct module *mod, void *module_region)
 	vfree(module_region);
 
 	/* Globally flush the L1 icache. */
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     0, 0, 0, NULL, NULL, 0);
+	put_online_cpus_atomic();
 
 	/*
 	 * FIXME: If module_region == mod->module_init, trim exception
diff --git a/arch/tile/kernel/tlb.c b/arch/tile/kernel/tlb.c
index 3fd54d5..a32b9dd 100644
--- a/arch/tile/kernel/tlb.c
+++ b/arch/tile/kernel/tlb.c
@@ -14,6 +14,7 @@
  */
 
 #include
+#include <linux/cpu.h>
 #include
 #include
 #include
@@ -35,6 +36,8 @@ void flush_tlb_mm(struct mm_struct *mm)
 {
 	HV_Remote_ASID asids[NR_CPUS];
 	int i = 0, cpu;
+
+	get_online_cpus_atomic();
 	for_each_cpu(cpu, mm_cpumask(mm)) {
 		HV_Remote_ASID *asid = &asids[i++];
 		asid->y = cpu / smp_topology.width;
@@ -43,6 +46,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	flush_remote(0, HV_FLUSH_EVICT_L1I, mm_cpumask(mm),
 		     0, 0, 0, NULL, asids, i);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_current_task(void)
@@ -55,8 +59,11 @@ void flush_tlb_page_mm(struct vm_area_struct *vma, struct mm_struct *mm,
 {
 	unsigned long size = vma_kernel_pagesize(vma);
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm),
 		     va, size, size, mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
@@ -71,13 +78,18 @@ void flush_tlb_range(struct vm_area_struct *vma,
 	unsigned long size = vma_kernel_pagesize(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm), start, end - start, size,
 		     mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_all(void)
 {
 	int i;
+
+	get_online_cpus_atomic();
 	for (i = 0; ; ++i) {
 		HV_VirtAddrRange r = hv_inquire_virtual(i);
 		if (r.size == 0)
@@ -89,10 +101,13 @@ void flush_tlb_all(void)
 			     r.start, r.size, HPAGE_SIZE, cpu_online_mask,
 			     NULL, 0);
 	}
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     start, end - start, PAGE_SIZE, cpu_online_mask,
 		     NULL, 0);
+	put_online_cpus_atomic();
 }
diff --git a/arch/tile/mm/homecache.c b/arch/tile/mm/homecache.c
index 1ae9119..7ff5bf0 100644
--- a/arch/tile/mm/homecache.c
+++ b/arch/tile/mm/homecache.c
@@ -397,9 +397,12 @@ void homecache_change_page_home(struct page *page, int order, int home)
 	BUG_ON(page_count(page) > 1);
 	BUG_ON(page_mapcount(page) != 0);
 	kva = (unsigned long) page_address(page);
+
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L2, &cpu_cacheable_map,
 		     kva, pages * PAGE_SIZE, PAGE_SIZE, cpu_online_mask,
 		     NULL, 0);
+	put_online_cpus_atomic();
 
 	for (i = 0; i < pages; ++i, kva += PAGE_SIZE) {
 		pte_t *ptep = virt_to_pte(NULL, kva);