From patchwork Tue Jan 22 07:40:13 2013
From: "Srivatsa S. Bhat"
Subject: [PATCH v5 27/45] KVM: Use get/put_online_cpus_atomic() to prevent CPU offline from atomic context
Date: Tue, 22 Jan 2013 13:10:13 +0530
Message-ID: <20130122074009.13822.3090.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130122073210.13822.50434.stgit@srivatsabhat.in.ibm.com>
References: <20130122073210.13822.50434.stgit@srivatsabhat.in.ibm.com>
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
 paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
 akpm@linux-foundation.org, namhyung@kernel.org
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
 xiaoguangrong@linux.vnet.ibm.com, rjw@sisk.pl, sbw@mit.edu,
 fweisbec@gmail.com, linux@arm.linux.org.uk, nikunj@linux.vnet.ibm.com,
 srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org

Once stop_machine() is gone from the CPU offline path, we won't be able to
depend on preempt_disable() or local_irq_disable() to prevent CPUs from
going offline from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we are in atomic context.

Cc: Marcelo Tosatti
Cc: Gleb Natapov
Cc: kvm@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat
---
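
Every hunk below is the same mechanical conversion. As a rough, illustrative
sketch of the intended pattern (not taken from this patch: the example_*
helpers are made up for illustration, and get/put_online_cpus_atomic() are
the primitives introduced earlier in this series):

#include <linux/cpu.h>
#include <linux/smp.h>

static void example_ipi_func(void *info)
{
	/* Runs on the remote CPU, in interrupt context. */
}

static void example_kick_cpu(int cpu)
{
	/*
	 * get_online_cpus_atomic() replaces the old reliance on
	 * preempt_disable()/get_cpu() for hotplug safety: CPUs cannot
	 * go offline until the matching put_online_cpus_atomic().
	 */
	get_online_cpus_atomic();

	if (cpu != smp_processor_id() && cpu_online(cpu))
		smp_call_function_single(cpu, example_ipi_func, NULL, 1);

	put_online_cpus_atomic();
}

The only subtlety is that the local CPU id must now be read explicitly with
smp_processor_id(), since get_cpu() is no longer used to fetch it.
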
 virt/kvm/kvm_main.c |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1cd693a..47f9c30 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -174,7 +174,8 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
 
 	zalloc_cpumask_var(&cpus, GFP_ATOMIC);
 
-	me = get_cpu();
+	get_online_cpus_atomic();
+	me = smp_processor_id();
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		kvm_make_request(req, vcpu);
 		cpu = vcpu->cpu;
@@ -192,7 +193,7 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
 		smp_call_function_many(cpus, ack_flush, NULL, 1);
 	else
 		called = false;
-	put_cpu();
+	put_online_cpus_atomic();
 	free_cpumask_var(cpus);
 	return called;
 }
@@ -1621,11 +1622,12 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 		++vcpu->stat.halt_wakeup;
 	}
 
-	me = get_cpu();
+	get_online_cpus_atomic();
+	me = smp_processor_id();
 	if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
 		if (kvm_arch_vcpu_should_kick(vcpu))
 			smp_send_reschedule(cpu);
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 #endif /* !CONFIG_S390 */
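
For context on why get_cpu()/put_cpu() are no longer sufficient here: they
merely wrap preemption disabling around reading the CPU id (definitions from
include/linux/smp.h, abridged):

	#define get_cpu()	({ preempt_disable(); smp_processor_id(); })
	#define put_cpu()	preempt_enable()

With stop_machine() removed from the offline path, a preempt-disabled section
no longer guarantees that the CPUs recorded in 'cpus' (or the kick target)
stay online until the IPI is sent, so the hunks above switch to the new
hotplug read-side protection and fetch the local CPU id with a plain
smp_processor_id().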