From patchwork Mon Feb 27 01:45:44 2017
X-Patchwork-Submitter: Chao Gao
X-Patchwork-Id: 9592835
From: Chao Gao <chao.gao@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 27 Feb 2017 09:45:44 +0800
Message-Id: <1488159949-15011-4-git-send-email-chao.gao@intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1488159949-15011-1-git-send-email-chao.gao@intel.com>
References: <1488159949-15011-1-git-send-email-chao.gao@intel.com>
Cc: Kevin Tian, Feng Wu, Jun Nakajima, George Dunlap, Andrew Cooper,
 Dario Faggioli, Jan Beulich, Chao Gao
Subject: [Xen-devel] [PATCH v9 3/8] VMX: Properly handle pi when all the
 assigned devices are removed

From: Feng Wu

This patch handles some corner cases when the last assigned device is
removed from the domain. In this case we should carefully handle the PI
descriptors and the per-cpu blocking list, to make sure:
- all the PI descriptors are in the right state for the next time a
  device is assigned to the domain;
- no vCPU of the domain remains on a per-cpu blocking list.

Here we call vmx_pi_unblock_vcpu() to remove the vCPU from the blocking
list if it is on one. However, this can race with a concurrent
vmx_vcpu_block(), which might incorrectly add the vCPU back to the
blocking list while the last device is being detached from the domain.
Since this situation can only occur when detaching the last device from
the domain, and that is not a frequent operation, we use domain_pause()
around the teardown, which is considered a clean and maintainable
solution.

Signed-off-by: Feng Wu
Signed-off-by: Chao Gao
Reviewed-by: Jan Beulich
Acked-by: Kevin Tian
---
v9:
- Based on [v8 2/7]. Add an assertion before the domain pause.

v7:
- Prevent the domain from pausing itself.

v6:
- Comment changes.
- Rename vmx_pi_list_remove() to vmx_pi_unblock_vcpu().

v5:
- Remove a no-op wrapper.

v4:
- Rename some functions:
    vmx_pi_remove_vcpu_from_blocking_list() -> vmx_pi_list_remove()
    vmx_pi_blocking_cleanup() -> vmx_pi_list_cleanup()
- Remove the check in vmx_pi_list_cleanup().
- Comment adjustments.
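[ Aside for reviewers, not part of the patch: the pattern used here -
pause the domain, tear down the hooks, drain the blocking list, unpause -
can be modelled in plain user-space C to show why the pause is what makes
the teardown race-free. All names in the sketch below are hypothetical
(block_hook stands in for pi_ops.vcpu_block, add_to_blocking_list for the
per-cpu blocking list insertion), and a pthreads rwlock crudely plays the
role of domain_pause()/domain_unpause(); the real domain_pause() of
course quiesces vCPUs through the scheduler, not through a lock. Build
with -pthread. ]

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-in for d->arch.hvm_domain.pi_ops.vcpu_block. */
static void (*block_hook)(int id);

/*
 * Crude model of domain_pause()/domain_unpause(): vCPU threads hold the
 * lock for read while they may still invoke the hook; the deassign path
 * takes it for write, which waits until no vCPU is inside that region.
 */
static pthread_rwlock_t pause_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Hypothetical stand-in for the per-cpu blocking list. */
static _Atomic long parked;

static void add_to_blocking_list(int id)
{
    (void)id;
    parked++;                  /* "insert the vCPU into the list" */
}

static void *vcpu_thread(void *arg)
{
    int id = (int)(long)arg;

    for ( int i = 0; i < 100000; i++ )
    {
        void (*hook)(int);

        pthread_rwlock_rdlock(&pause_lock);  /* vCPU is "running"      */
        hook = block_hook;
        if ( hook )
            hook(id);                        /* like vmx_vcpu_block()  */
        pthread_rwlock_unlock(&pause_lock);
    }
    return NULL;
}

static void hooks_deassign(void)
{
    /*
     * Like domain_pause(): returns only once every vCPU has left the
     * region in which it may still call the old hook.
     */
    pthread_rwlock_wrlock(&pause_lock);

    block_hook = NULL;         /* clear the hooks ...                  */
    parked = 0;                /* ... and drain the list, race-free    */

    pthread_rwlock_unlock(&pause_lock);      /* domain_unpause()       */
}

int main(void)
{
    pthread_t t[4];

    block_hook = add_to_blocking_list;
    for ( long i = 0; i < 4; i++ )
        pthread_create(&t[i], NULL, vcpu_thread, (void *)i);

    hooks_deassign();

    for ( int i = 0; i < 4; i++ )
        pthread_join(t[i], NULL);

    /* Always prints 0: nothing can re-add itself after the drain. */
    printf("left parked after deassign: %ld\n", parked);
    return 0;
}

Taking the write lock succeeds only once no vcpu_thread is inside its
read-side section; analogously, once domain_pause() returns, no vCPU can
be midway through vmx_vcpu_block() re-adding itself to a list the
deassign path is about to drain.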
 xen/arch/x86/hvm/vmx/vmx.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 5fa16dd..a7a70e7 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -155,14 +155,12 @@ static void vmx_pi_switch_to(struct vcpu *v)
     pi_clear_sn(pi_desc);
 }
 
-static void vmx_pi_do_resume(struct vcpu *v)
+static void vmx_pi_unblock_vcpu(struct vcpu *v)
 {
     unsigned long flags;
     spinlock_t *pi_blocking_list_lock;
     struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
 
-    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
-
     /*
      * Set 'NV' field back to posted_intr_vector, so the
      * Posted-Interrupts can be delivered to the vCPU when
@@ -170,12 +168,12 @@ static void vmx_pi_do_resume(struct vcpu *v)
      */
     write_atomic(&pi_desc->nv, posted_intr_vector);
 
-    /* The vCPU is not on any blocking list. */
     pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
 
     /* Prevent the compiler from eliminating the local variable.*/
     smp_rmb();
 
+    /* The vCPU is not on any blocking list. */
     if ( pi_blocking_list_lock == NULL )
         return;
 
@@ -195,6 +193,13 @@ static void vmx_pi_do_resume(struct vcpu *v)
     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
 }
 
+static void vmx_pi_do_resume(struct vcpu *v)
+{
+    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
+
+    vmx_pi_unblock_vcpu(v);
+}
+
 /*
  * To handle posted interrupts correctly, we need to set the following
  * state:
@@ -255,12 +260,23 @@ void vmx_pi_hooks_assign(struct domain *d)
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_deassign(struct domain *d)
 {
+    struct vcpu *v;
+
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
     ASSERT(d->arch.hvm_domain.pi_ops.vcpu_block);
 
     /*
+     * Pausing the domain can make sure the vCPUs are not
+     * running and hence not calling the hooks simultaneously
+     * when deassigning the PI hooks and removing the vCPU
+     * from the blocking list.
+     */
+    ASSERT(current->domain != d);
+    domain_pause(d);
+
+    /*
      * Note that we don't set 'd->arch.hvm_domain.pi_ops.switch_to' to NULL
      * here. If we deassign the hooks while the vCPU is runnable in the
      * runqueue with 'SN' set, all the future notification event will be
@@ -270,6 +286,11 @@ void vmx_pi_hooks_deassign(struct domain *d)
     d->arch.hvm_domain.pi_ops.vcpu_block = NULL;
     d->arch.hvm_domain.pi_ops.switch_from = NULL;
     d->arch.hvm_domain.pi_ops.do_resume = NULL;
+
+    for_each_vcpu ( d, v )
+        vmx_pi_unblock_vcpu(v);
+
+    domain_unpause(d);
 }
 
 static int vmx_domain_initialise(struct domain *d)