From patchwork Tue Oct 11 00:57:48 2016
X-Patchwork-Submitter: "Wu, Feng" <feng.wu@intel.com>
X-Patchwork-Id: 9370049
From: Feng Wu <feng.wu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue, 11 Oct 2016 08:57:48 +0800
Message-Id: <1476147473-30970-3-git-send-email-feng.wu@intel.com>
In-Reply-To: <1476147473-30970-1-git-send-email-feng.wu@intel.com>
References: <1476147473-30970-1-git-send-email-feng.wu@intel.com>
Cc: kevin.tian@intel.com, Feng Wu <feng.wu@intel.com>,
 george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
 dario.faggioli@citrix.com, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v5 2/7] VMX: Properly handle PI when all the assigned devices are removed

This patch handles some corner cases that arise when the last assigned
device is removed from the domain. In that case we must carefully handle
the PI descriptors and the per-cpu blocking lists, to make sure that:
- all PI descriptors are in the right state the next time a device is
  assigned to the domain again;
- no vCPU of the domain remains on any per-cpu blocking list.

Here we call vmx_pi_list_remove() to remove each vCPU from the blocking
list it is on, if any. However, vmx_vcpu_block() may be running at the
same time, in which case the vCPU could incorrectly be re-added to a
blocking list while the last device is being detached from the domain.
Since this situation can only occur when detaching the last device from
the domain, which is not a frequent operation, we simply pause the domain
around the cleanup, which is a clean and maintainable solution.

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
v5:
- Remove a no-op wrapper

 xen/arch/x86/hvm/vmx/vmx.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 623d5bc..d210516 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -163,14 +163,12 @@ static void vmx_pi_switch_to(struct vcpu *v)
     pi_clear_sn(pi_desc);
 }
 
-static void vmx_pi_do_resume(struct vcpu *v)
+static void vmx_pi_list_remove(struct vcpu *v)
 {
     unsigned long flags;
     spinlock_t *pi_blocking_list_lock;
     struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
 
-    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
-
     /*
      * Set 'NV' field back to posted_intr_vector, so the
      * Posted-Interrupts can be delivered to the vCPU when
@@ -178,12 +176,12 @@ static void vmx_pi_do_resume(struct vcpu *v)
      */
     write_atomic(&pi_desc->nv, posted_intr_vector);
 
-    /* The vCPU is not on any blocking list. */
     pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
 
     /* Prevent the compiler from eliminating the local variable.*/
     smp_rmb();
 
+    /* The vCPU is not on any blocking list. */
     if ( pi_blocking_list_lock == NULL )
         return;
 
@@ -203,6 +201,13 @@ static void vmx_pi_do_resume(struct vcpu *v)
     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
 }
 
+static void vmx_pi_do_resume(struct vcpu *v)
+{
+    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
+
+    vmx_pi_list_remove(v);
+}
+
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_assign(struct domain *d)
 {
@@ -220,14 +225,29 @@ void vmx_pi_hooks_assign(struct domain *d)
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_deassign(struct domain *d)
 {
+    struct vcpu *v;
+
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
     ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
 
+    /*
+     * Pausing the domain makes sure the vCPUs are not
+     * running, and hence not calling the hooks, while we
+     * deassign the PI hooks and remove the vCPUs from the
+     * blocking lists.
+     */
+    domain_pause(d);
+
     d->arch.hvm_domain.vmx.vcpu_block = NULL;
     d->arch.hvm_domain.vmx.pi_do_resume = NULL;
     d->arch.hvm_domain.vmx.pi_switch_from = NULL;
+
+    for_each_vcpu ( d, v )
+        vmx_pi_list_remove(v);
+
+    domain_unpause(d);
 }
 
 static int vmx_domain_initialise(struct domain *d)
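
For anyone who wants to see the race in isolation, below is a minimal
standalone model of the problem being fixed (plain C with pthreads, NOT
Xen code: the mutex stands in for domain_pause()/domain_unpause(), and
every name in it is illustrative rather than a real Xen API). One thread
plays a vCPU that may invoke the block hook at any time; the other plays
vmx_pi_hooks_deassign().

/*
 * Standalone model of the race closed by this patch (not Xen code).
 * Holding pause_lock models "the vCPU is running (not paused)"; the
 * deassign side takes the same lock, modelling domain_pause().
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t pause_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models d->arch.hvm_domain.vmx.vcpu_block and a per-cpu list entry. */
static void (*vcpu_block_hook)(void);
static bool on_blocking_list;

static void block_hook(void)
{
    on_blocking_list = true;    /* vmx_vcpu_block(): add vCPU to list */
}

static void *vcpu_thread(void *arg)
{
    (void)arg;
    for ( int i = 0; i < 1000000; i++ )
    {
        pthread_mutex_lock(&pause_lock);    /* vCPU is "running" */
        if ( vcpu_block_hook )
            vcpu_block_hook();              /* may (re-)add to the list */
        pthread_mutex_unlock(&pause_lock);
    }
    return NULL;
}

static void *deassign_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&pause_lock);        /* domain_pause(d)          */
    vcpu_block_hook = NULL;                 /* clear the hooks ...      */
    on_blocking_list = false;               /* ... then drain the list  */
    pthread_mutex_unlock(&pause_lock);      /* domain_unpause(d)        */
    return NULL;
}

int main(void)
{
    pthread_t v, d;

    vcpu_block_hook = block_hook;
    pthread_create(&v, NULL, vcpu_thread, NULL);
    pthread_create(&d, NULL, deassign_thread, NULL);
    pthread_join(v, NULL);
    pthread_join(d, NULL);

    /*
     * Because the hook is cleared and the list drained while the "vCPU"
     * is excluded, the list can never be repopulated afterwards.
     */
    printf("on_blocking_list after deassign: %d\n", on_blocking_list);
    return 0;
}

If you remove the locking from vcpu_thread() (i.e. drop the "pause"), the
hook can run concurrently with the drain, and on_blocking_list may read
true after deassign completes: the stale vCPU-left-on-a-blocking-list
state the commit message describes.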