From patchwork Wed Mar 29 05:11:53 2017
X-Patchwork-Submitter: Chao Gao
X-Patchwork-Id: 9651453
From: Chao Gao <chao.gao@intel.com>
To: xen-devel@lists.xen.org
Cc: Kevin Tian, Feng Wu, Jun Nakajima, George Dunlap, Andrew Cooper,
    Dario Faggioli, Jan Beulich, Chao Gao
Date: Wed, 29 Mar 2017 13:11:53 +0800
Message-Id: <1490764315-7162-5-git-send-email-chao.gao@intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1490764315-7162-1-git-send-email-chao.gao@intel.com>
References: <1490764315-7162-1-git-send-email-chao.gao@intel.com>
Subject: [Xen-devel] [PATCH v11 4/6] VMX: Fixup PI descriptor when cpu is offline

From: Feng Wu

When a CPU goes offline, we need to move all the vCPUs in its blocking
list to another online CPU; this patch handles that.

Signed-off-by: Feng Wu
Signed-off-by: Chao Gao
Reviewed-by: Jan Beulich
Acked-by: Kevin Tian
---
 xen/arch/x86/hvm/vmx/vmcs.c       |  1 +
 xen/arch/x86/hvm/vmx/vmx.c        | 70 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmx.h |  1 +
 3 files changed, 72 insertions(+)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 934674c..99c77b9 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -591,6 +591,7 @@ void vmx_cpu_dead(unsigned int cpu)
     vmx_free_vmcs(per_cpu(vmxon_region, cpu));
     per_cpu(vmxon_region, cpu) = 0;
     nvmx_cpu_dead(cpu);
+    vmx_pi_desc_fixup(cpu);
 }
 
 int vmx_cpu_up(void)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d201956..25f9ec9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -199,6 +199,76 @@ static void vmx_pi_do_resume(struct vcpu *v)
     vmx_pi_unblock_vcpu(v);
 }
 
+void vmx_pi_desc_fixup(unsigned int cpu)
+{
+    unsigned int new_cpu, dest;
+    unsigned long flags;
+    struct arch_vmx_struct *vmx, *tmp;
+    spinlock_t *new_lock, *old_lock = &per_cpu(vmx_pi_blocking, cpu).lock;
+    struct list_head *blocked_vcpus = &per_cpu(vmx_pi_blocking, cpu).list;
+
+    if ( !iommu_intpost )
+        return;
+
+    /*
+     * We are in the context of CPU_DEAD or CPU_UP_CANCELED notification,
+     * and it is impossible for a second CPU to go down in parallel. So
+     * we can safely acquire the old cpu's lock and then acquire the
+     * new_cpu's lock after that.
+     */
+    spin_lock_irqsave(old_lock, flags);
+
+    list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocking.list)
+    {
+        /*
+         * Suppress notification or we may miss an interrupt when the
+         * target cpu is dying.
+         */
+        pi_set_sn(&vmx->pi_desc);
+
+        /*
+         * Check whether a notification is pending before doing the
+         * movement; if that is the case, we need to wake it up directly
+         * rather than moving it to the new cpu's list.
+         */
+        if ( pi_test_on(&vmx->pi_desc) )
+        {
+            list_del(&vmx->pi_blocking.list);
+            vmx->pi_blocking.lock = NULL;
+            vcpu_unblock(container_of(vmx, struct vcpu, arch.hvm_vmx));
+        }
+        else
+        {
+            /*
+             * We need to find an online cpu as the NDST of the PI
+             * descriptor; it doesn't matter whether it is within the
+             * cpupool of the domain or not. As long as it is online, the
+             * vCPU will be woken up once the notification event arrives.
+             */
+            new_cpu = cpumask_any(&cpu_online_map);
+            new_lock = &per_cpu(vmx_pi_blocking, new_cpu).lock;
+
+            spin_lock(new_lock);
+
+            ASSERT(vmx->pi_blocking.lock == old_lock);
+
+            dest = cpu_physical_id(new_cpu);
+            write_atomic(&vmx->pi_desc.ndst,
+                         x2apic_enabled ? dest : MASK_INSR(dest, PI_xAPIC_NDST_MASK));
+
+            list_move(&vmx->pi_blocking.list,
+                      &per_cpu(vmx_pi_blocking, new_cpu).list);
+            vmx->pi_blocking.lock = new_lock;
+
+            spin_unlock(new_lock);
+        }
+
+        pi_clear_sn(&vmx->pi_desc);
+    }
+
+    spin_unlock_irqrestore(old_lock, flags);
+}
+
 /*
  * To handle posted interrupts correctly, we need to set the following
  * state:
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 2b781ab..5ead57c 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -597,6 +597,7 @@ void free_p2m_hap_data(struct p2m_domain *p2m);
 void p2m_init_hap_data(struct p2m_domain *p2m);
 
 void vmx_pi_per_cpu_init(unsigned int cpu);
+void vmx_pi_desc_fixup(unsigned int cpu);
 
 void vmx_pi_hooks_assign(struct domain *d);
 void vmx_pi_hooks_deassign(struct domain *d);
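
For readers less familiar with the posted-interrupt blocking lists, the
standalone sketch below shows the same migration pattern the patch applies
in vmx_pi_desc_fixup(): drain the dying CPU's list, wake any entry whose
notification is already pending, and re-home the rest on an online CPU's
list. It is a minimal illustration only, not Xen code: all types and
helpers in it (blocked_entry, cpu_state, wake_entry, pick_online_cpu,
fixup_blocking_list) are hypothetical stand-ins, and it deliberately omits
the notification suppression (pi_set_sn/pi_clear_sn), the NDST update and
the per-CPU locking that the real function performs.

/*
 * Minimal standalone sketch (not Xen code): drain a dying CPU's blocking
 * list.  Entries with a pending notification are woken directly; the rest
 * are moved to an online CPU's list.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define SKETCH_NR_CPUS 4

struct blocked_entry {
    int id;                     /* stand-in for the blocked vCPU */
    bool notification_pending;  /* stand-in for pi_test_on() */
    struct blocked_entry *next;
};

struct cpu_state {
    bool online;
    struct blocked_entry *blocking_list;  /* per-CPU blocking list */
};

static struct cpu_state cpus[SKETCH_NR_CPUS];

/* Stand-in for vcpu_unblock(): the entry no longer belongs on any list. */
static void wake_entry(struct blocked_entry *e)
{
    printf("entry %d: notification pending, waking it directly\n", e->id);
    free(e);
}

/* Pick any CPU that is still online; the sketch assumes one always exists. */
static int pick_online_cpu(void)
{
    for ( int i = 0; i < SKETCH_NR_CPUS; i++ )
        if ( cpus[i].online )
            return i;
    abort();
}

/* Rough analogue of vmx_pi_desc_fixup(), called once 'dying' went offline. */
static void fixup_blocking_list(int dying)
{
    struct blocked_entry *e = cpus[dying].blocking_list;

    cpus[dying].blocking_list = NULL;

    while ( e != NULL )
    {
        struct blocked_entry *next = e->next;

        if ( e->notification_pending )
            wake_entry(e);
        else
        {
            int target = pick_online_cpu();

            /* Re-home the entry on the new CPU's blocking list. */
            e->next = cpus[target].blocking_list;
            cpus[target].blocking_list = e;
            printf("entry %d: moved to CPU %d's blocking list\n", e->id, target);
        }
        e = next;
    }
}

int main(void)
{
    cpus[0].online = true;   /* CPU 0 stays online */
    cpus[1].online = false;  /* CPU 1 is the one going down */

    /* Two blocked entries on CPU 1: one with a pending notification. */
    struct blocked_entry *a = calloc(1, sizeof(*a));
    struct blocked_entry *b = calloc(1, sizeof(*b));
    a->id = 1; a->notification_pending = true;  a->next = b;
    b->id = 2; b->notification_pending = false; b->next = NULL;
    cpus[1].blocking_list = a;

    fixup_blocking_list(1);
    return 0;
}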