From patchwork Fri Oct 28 02:37:34 2016
X-Patchwork-Submitter: "Wu, Feng" <feng.wu@intel.com>
X-Patchwork-Id: 9400997
From: Feng Wu <feng.wu@intel.com>
To: xen-devel@lists.xen.org
Date: Fri, 28 Oct 2016 10:37:34 +0800
Message-Id: <1477622259-3476-3-git-send-email-feng.wu@intel.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1477622259-3476-1-git-send-email-feng.wu@intel.com>
References: <1477622259-3476-1-git-send-email-feng.wu@intel.com>
Cc: kevin.tian@intel.com, Feng Wu <feng.wu@intel.com>, george.dunlap@eu.citrix.com,
    andrew.cooper3@citrix.com, dario.faggioli@citrix.com, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v6 2/7] VMX: Properly handle pi when all the assigned devices are removed

This patch handles some corner cases when the last assigned device is
removed from the domain. In that case we need to handle the PI
descriptors and the per-cpu blocking list carefully, to make sure that:
- all the PI descriptors are in the right state the next time a device
  is assigned to the domain again;
- no vCPU of the domain remains on any per-cpu blocking list.

Here we call vmx_pi_unblock_vcpu() to remove the vCPU from the blocking
list if it is on one. However, vmx_vcpu_block() could be running at the
same time, in which case the vCPU might incorrectly be added back to the
blocking list while the last device is being detached from the domain.
Since this situation can only occur when detaching the last device from
the domain, which is not a frequent operation, we use domain_pause()
before doing so, which is considered a clean and maintainable solution.

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
v6:
- Comment changes
- Rename vmx_pi_list_remove() to vmx_pi_unblock_vcpu()

v5:
- Remove a no-op wrapper

v4:
- Rename some functions:
  vmx_pi_remove_vcpu_from_blocking_list() -> vmx_pi_list_remove()
  vmx_pi_blocking_cleanup() -> vmx_pi_list_cleanup()
- Remove the check in vmx_pi_list_cleanup()
- Comment adjustments

 xen/arch/x86/hvm/vmx/vmx.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index faaa987..508be7c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -158,14 +158,12 @@ static void vmx_pi_switch_to(struct vcpu *v)
     pi_clear_sn(pi_desc);
 }
 
-static void vmx_pi_do_resume(struct vcpu *v)
+static void vmx_pi_unblock_vcpu(struct vcpu *v)
 {
     unsigned long flags;
     spinlock_t *pi_blocking_list_lock;
     struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
 
-    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
-
     /*
      * Set 'NV' field back to posted_intr_vector, so the
      * Posted-Interrupts can be delivered to the vCPU when
@@ -173,12 +171,12 @@ static void vmx_pi_do_resume(struct vcpu *v)
      */
     write_atomic(&pi_desc->nv, posted_intr_vector);
 
-    /* The vCPU is not on any blocking list. */
     pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
 
     /* Prevent the compiler from eliminating the local variable.*/
     smp_rmb();
 
+    /* The vCPU is not on any blocking list. */
     if ( pi_blocking_list_lock == NULL )
         return;
 
@@ -198,6 +196,13 @@ static void vmx_pi_do_resume(struct vcpu *v)
     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
 }
 
+static void vmx_pi_do_resume(struct vcpu *v)
+{
+    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
+
+    vmx_pi_unblock_vcpu(v);
+}
+
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_assign(struct domain *d)
 {
@@ -215,11 +220,21 @@ void vmx_pi_hooks_assign(struct domain *d)
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_deassign(struct domain *d)
 {
+    struct vcpu *v;
+
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
     ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
 
+    /*
+     * Pausing the domain can make sure the vCPU is not
+     * running and hence not calling the hooks simultaneously
+     * when deassigning the PI hooks and removing the vCPU
+     * from the blocking list.
+     */
+    domain_pause(d);
+
     d->arch.hvm_domain.vmx.vcpu_block = NULL;
     d->arch.hvm_domain.vmx.pi_do_resume = NULL;
     d->arch.hvm_domain.vmx.pi_switch_from = NULL;
@@ -229,6 +244,11 @@ void vmx_pi_hooks_deassign(struct domain *d)
      * is in the process of getting assigned and "from" hook is NULL. However,
      * it is not straightforward to find a clear solution, so just leave it here.
      */
+
+    for_each_vcpu ( d, v )
+        vmx_pi_unblock_vcpu(v);
+
+    domain_unpause(d);
 }
 
 static int vmx_domain_initialise(struct domain *d)
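
For readers less familiar with this path, here is a standalone sketch of the
ordering the commit message relies on. It is not part of the patch and not Xen
code: every toy_* type and function below is invented purely for illustration,
assuming only a C99 compiler. It models the idea that pausing the domain first
guarantees no vCPU can run the block hook, so the hooks can be cleared and the
blocking list drained without the race, and only then is the domain unpaused.

/*
 * Toy model of the vmx_pi_hooks_deassign() ordering.  Everything named
 * toy_* is made up for this sketch and is not Xen code.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct toy_vcpu {
    bool on_blocking_list;          /* stands in for pi_blocking.lock != NULL */
    struct toy_vcpu *next;
};

struct toy_domain {
    bool paused;                    /* stands in for domain_pause()/unpause() */
    void (*vcpu_block)(struct toy_vcpu *);  /* stands in for the vcpu_block hook */
    struct toy_vcpu *vcpus;
};

/* Stands in for vmx_vcpu_block(): a running vCPU queues itself on the list. */
static void toy_vcpu_block(struct toy_vcpu *v)
{
    v->on_blocking_list = true;
}

/* Stands in for vmx_pi_unblock_vcpu(): take the vCPU off the blocking list. */
static void toy_pi_unblock_vcpu(struct toy_vcpu *v)
{
    v->on_blocking_list = false;
}

static void toy_domain_pause(struct toy_domain *d)   { d->paused = true; }
static void toy_domain_unpause(struct toy_domain *d) { d->paused = false; }

/*
 * Stands in for vmx_pi_hooks_deassign(): pausing first means no vCPU is
 * running, so nothing can call the block hook and re-add itself to the
 * list between clearing the hooks and draining the list.
 */
static void toy_pi_hooks_deassign(struct toy_domain *d)
{
    struct toy_vcpu *v;

    toy_domain_pause(d);

    d->vcpu_block = NULL;

    for ( v = d->vcpus; v != NULL; v = v->next )
        toy_pi_unblock_vcpu(v);

    toy_domain_unpause(d);
}

int main(void)
{
    struct toy_vcpu v = { .on_blocking_list = true, .next = NULL };
    struct toy_domain d = { .paused = false, .vcpu_block = toy_vcpu_block,
                            .vcpus = &v };

    toy_pi_hooks_deassign(&d);

    printf("vCPU still on a blocking list: %s\n",
           v.on_blocking_list ? "yes" : "no");
    return 0;
}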