From patchwork Mon Feb 27 01:45:48 2017
X-Patchwork-Submitter: Chao Gao
X-Patchwork-Id: 9592847
From: Chao Gao <chao.gao@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 27 Feb 2017 09:45:48 +0800
Message-Id: <1488159949-15011-8-git-send-email-chao.gao@intel.com>
In-Reply-To: <1488159949-15011-1-git-send-email-chao.gao@intel.com>
References: <1488159949-15011-1-git-send-email-chao.gao@intel.com>
Cc: Kevin Tian, Feng Wu, Jun Nakajima, George Dunlap, Andrew Cooper,
    Dario Faggioli, Jan Beulich, Chao Gao
Subject: [Xen-devel] [PATCH v9 7/8] VMX: Fixup PI descriptor when cpu is offline

From: Feng Wu

When a CPU goes offline, all the vCPUs on its blocking list need to be
moved to another online CPU; this patch handles that.

Signed-off-by: Feng Wu
Signed-off-by: Chao Gao
Reviewed-by: Jan Beulich
Acked-by: Kevin Tian
---
v7:
 - Pass unsigned int to vmx_pi_desc_fixup()

v6:
 - Carefully suppress 'SN' to avoid missing a notification event while
   moving the vCPU to the new list

v5:
 - Add comments explaining why the ABBA deadlock scenario cannot occur here

v4:
 - Remove the pointless check, since we are in machine stop context and no
   other CPUs go down in parallel

 xen/arch/x86/hvm/vmx/vmcs.c       |  1 +
 xen/arch/x86/hvm/vmx/vmx.c        | 70 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmx.h |  1 +
 3 files changed, 72 insertions(+)
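For readers outside the Xen tree, the following standalone sketch models what
the new vmx_pi_desc_fixup() in the diff below does: walk the dying CPU's
blocking list with notifications suppressed ('SN' set), wake any vCPU whose
'ON' bit shows a pending notification, and retarget the rest at an online CPU.
It is an illustration only, not Xen code: blocked_vcpu, blocking_list and
fixup() are invented names, the real locking and PI-descriptor layout are
omitted, and it compiles as plain C11.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified PI descriptor: only the bits the fixup logic cares about. */
struct pi_desc {
    atomic_bool on;    /* ON: a notification is outstanding */
    atomic_bool sn;    /* SN: notifications are suppressed */
    atomic_uint ndst;  /* NDST: CPU that receives notifications */
};

/* One blocked vCPU, queued on a per-CPU blocking list. */
struct blocked_vcpu {
    struct pi_desc desc;
    struct blocked_vcpu *next;
    int id;
};

struct blocking_list {
    struct blocked_vcpu *head;
};

static void list_push(struct blocking_list *l, struct blocked_vcpu *v)
{
    v->next = l->head;
    l->head = v;
}

/*
 * Model of the fixup: walk the dead CPU's list; entries with ON set are
 * woken directly, the rest are retargeted to new_dest and moved.
 */
static void fixup(struct blocking_list *dead, struct blocking_list *alive,
                  unsigned int new_dest)
{
    struct blocked_vcpu *v, *next;

    for ( v = dead->head; v; v = next )
    {
        next = v->next;

        /* Suppress notifications while the entry is in transit. */
        atomic_store(&v->desc.sn, true);

        if ( atomic_load(&v->desc.on) )
            printf("vcpu %d: notification pending, wake it directly\n", v->id);
        else
        {
            atomic_store(&v->desc.ndst, new_dest);
            list_push(alive, v);
            printf("vcpu %d: moved, NDST -> %u\n", v->id, new_dest);
        }

        atomic_store(&v->desc.sn, false);
    }
    dead->head = NULL;
}

int main(void)
{
    struct blocking_list dead = { 0 }, alive = { 0 };
    struct blocked_vcpu a = { .id = 0 }, b = { .id = 1 };

    atomic_store(&b.desc.on, true);   /* b already has a pending event */
    list_push(&dead, &a);
    list_push(&dead, &b);

    fixup(&dead, &alive, 2 /* hypothetical new destination id */);
    return 0;
}

Setting SN before inspecting ON mirrors the ordering in the patch; it closes
the window in which a notification could still be delivered to the dying CPU
while its list entry is in transit.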
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 7905d3e..7e3e093 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -578,6 +578,7 @@ void vmx_cpu_dead(unsigned int cpu)
     vmx_free_vmcs(per_cpu(vmxon_region, cpu));
     per_cpu(vmxon_region, cpu) = 0;
     nvmx_cpu_dead(cpu);
+    vmx_pi_desc_fixup(cpu);
 }
 
 int vmx_cpu_up(void)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index e03786b..b8a385b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -200,6 +200,76 @@ static void vmx_pi_do_resume(struct vcpu *v)
     vmx_pi_unblock_vcpu(v);
 }
 
+void vmx_pi_desc_fixup(unsigned int cpu)
+{
+    unsigned int new_cpu, dest;
+    unsigned long flags;
+    struct arch_vmx_struct *vmx, *tmp;
+    spinlock_t *new_lock, *old_lock = &per_cpu(vmx_pi_blocking, cpu).lock;
+    struct list_head *blocked_vcpus = &per_cpu(vmx_pi_blocking, cpu).list;
+
+    if ( !iommu_intpost )
+        return;
+
+    /*
+     * We are in the context of CPU_DEAD or CPU_UP_CANCELED notification,
+     * and it is impossible for a second CPU to go down in parallel. So we
+     * can safely acquire the old cpu's lock and then acquire the new_cpu's
+     * lock after that.
+     */
+    spin_lock_irqsave(old_lock, flags);
+
+    list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocking.list)
+    {
+        /*
+         * Suppress notification, or we may miss an interrupt while the
+         * target cpu is dying.
+         */
+        pi_set_sn(&vmx->pi_desc);
+
+        /*
+         * Check whether a notification is pending before doing the
+         * move; if that is the case, wake the vCPU up directly rather
+         * than moving it to the new cpu's list.
+         */
+        if ( pi_test_on(&vmx->pi_desc) )
+        {
+            list_del(&vmx->pi_blocking.list);
+            vmx->pi_blocking.lock = NULL;
+            vcpu_unblock(container_of(vmx, struct vcpu, arch.hvm_vmx));
+        }
+        else
+        {
+            /*
+             * We need to find an online cpu as the NDST of the PI descriptor;
+             * it doesn't matter whether it is within the domain's cpupool or
+             * not. As long as it is online, the vCPU will be woken up once
+             * the notification event arrives.
+             */
+            new_cpu = cpumask_any(&cpu_online_map);
+            new_lock = &per_cpu(vmx_pi_blocking, new_cpu).lock;
+
+            spin_lock(new_lock);
+
+            ASSERT(vmx->pi_blocking.lock == old_lock);
+
+            dest = cpu_physical_id(new_cpu);
+            write_atomic(&vmx->pi_desc.ndst,
+                         x2apic_enabled ? dest : MASK_INSR(dest, PI_xAPIC_NDST_MASK));
+
+            list_move(&vmx->pi_blocking.list,
+                      &per_cpu(vmx_pi_blocking, new_cpu).list);
+            vmx->pi_blocking.lock = new_lock;
+
+            spin_unlock(new_lock);
+        }
+
+        pi_clear_sn(&vmx->pi_desc);
+    }
+
+    spin_unlock_irqrestore(old_lock, flags);
+}
+
 /*
  * To handle posted interrupts correctly, we need to set the following
  * state:
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 00e6f0d..70155fb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -597,6 +597,7 @@ void free_p2m_hap_data(struct p2m_domain *p2m);
 void p2m_init_hap_data(struct p2m_domain *p2m);
 
 void vmx_pi_per_cpu_init(unsigned int cpu);
+void vmx_pi_desc_fixup(unsigned int cpu);
 
 void vmx_pi_hooks_assign(struct domain *d);
 void vmx_pi_hooks_deassign(struct domain *d);
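A note on the write_atomic() of pi_desc.ndst above: the destination ID is
encoded differently per APIC mode. In x2APIC mode the full 32-bit ID is
written, while in xAPIC mode the 8-bit APIC ID is shifted into the NDST byte
of the field, which is what MASK_INSR(dest, PI_xAPIC_NDST_MASK) does. A
minimal sketch of that encoding, assuming the mask covers bits 8-15 (0xff00);
XAPIC_NDST_MASK below is a local stand-in, not the real Xen constant:

#include <stdint.h>
#include <stdio.h>

/* Local stand-in for Xen's PI_xAPIC_NDST_MASK (assumed to be 0xff00). */
#define XAPIC_NDST_MASK 0xff00u

/* Encode a destination APIC ID into the PI descriptor's NDST field. */
static uint32_t encode_ndst(uint32_t dest, int x2apic_enabled)
{
    /* x2APIC: full 32-bit ID; xAPIC: 8-bit ID placed in bits 8-15. */
    return x2apic_enabled ? dest : (dest << 8) & XAPIC_NDST_MASK;
}

int main(void)
{
    printf("xAPIC  dest 5 -> %#x\n", (unsigned)encode_ndst(5, 0)); /* 0x500 */
    printf("x2APIC dest 5 -> %#x\n", (unsigned)encode_ndst(5, 1)); /* 0x5 */
    return 0;
}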