From patchwork Tue Dec 11 14:05:32 2012
X-Patchwork-Submitter: "Srivatsa S. Bhat"
X-Patchwork-Id: 1862541
From: "Srivatsa S. Bhat"
Subject: [RFC PATCH v4 8/9] kvm, vmx: Add atomic synchronization with CPU Hotplug
To: tglx@linutronix.de, peterz@infradead.org, paulmck@linux.vnet.ibm.com,
    rusty@rustcorp.com.au, mingo@kernel.org, akpm@linux-foundation.org,
    namhyung@kernel.org, vincent.guittot@linaro.org, tj@kernel.org,
    oleg@redhat.com
Cc: sbw@mit.edu, amit.kucheria@linaro.org, rostedt@goodmis.org, rjw@sisk.pl,
    srivatsa.bhat@linux.vnet.ibm.com, wangyun@linux.vnet.ibm.com,
    xiaoguangrong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
    linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Tue, 11 Dec 2012 19:35:32 +0530
Message-ID: <20121211140528.23621.52490.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20121211140314.23621.64088.stgit@srivatsabhat.in.ibm.com>
References: <20121211140314.23621.64088.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3
X-Mailing-List: linux-pm@vger.kernel.org

preempt_disable() will no longer help prevent CPUs from going offline once
stop_machine() is removed from the CPU offline path. So use
get/put_online_cpus_atomic() in vmx_vcpu_load() to prevent CPUs from going
offline while clearing the VMCS that was last loaded on another CPU.

Reported-by: Michael Wang
Debugged-by: Xiao Guangrong
Signed-off-by: Srivatsa S. Bhat
---
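Note for reviewers (not part of the patch): below is a minimal sketch of the
pattern this change applies, assuming the get/put_online_cpus_atomic()
reader-side APIs that this series proposes. loaded_vmcs_clear() ultimately
does its work via a cross-CPU function call to the CPU that last had the VMCS
loaded, so that CPU must be kept from going offline in the meantime. The
function and variable names in the sketch are made up purely for illustration.

#include <linux/cpu.h>
#include <linux/smp.h>

/* Hypothetical callback; runs on the target CPU via an IPI. */
static void example_clear_state(void *info)
{
	/* ... per-CPU cleanup that must run on that CPU ... */
}

static void example_remote_clear(int target_cpu)
{
	/*
	 * Reader-side hotplug critical section: keeps CPUs online
	 * without relying on preempt_disable()/stop_machine().
	 */
	get_online_cpus_atomic();

	if (cpu_online(target_cpu))
		smp_call_function_single(target_cpu, example_clear_state,
					 NULL, 1);

	put_online_cpus_atomic();
}

The hunk below wraps loaded_vmcs_clear() in exactly this kind of critical
section.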
 arch/x86/kvm/vmx.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index f858159..d8a4cf1 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1519,10 +1519,14 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
 
-	if (!vmm_exclusive)
+	if (!vmm_exclusive) {
 		kvm_cpu_vmxon(phys_addr);
-	else if (vmx->loaded_vmcs->cpu != cpu)
+	} else if (vmx->loaded_vmcs->cpu != cpu) {
+		/* Prevent any CPU from going offline */
+		get_online_cpus_atomic();
 		loaded_vmcs_clear(vmx->loaded_vmcs);
+		put_online_cpus_atomic();
+	}
 
 	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
 		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;