From patchwork Wed Sep 21 02:37:46 2016
X-Patchwork-Submitter: "Wu, Feng" <feng.wu@intel.com>
X-Patchwork-Id: 9342715
From: Feng Wu <feng.wu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 21 Sep 2016 10:37:46 +0800
Message-Id: <1474425470-3629-3-git-send-email-feng.wu@intel.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1474425470-3629-1-git-send-email-feng.wu@intel.com>
References: <1474425470-3629-1-git-send-email-feng.wu@intel.com>
Cc: kevin.tian@intel.com, Feng Wu <feng.wu@intel.com>,
 george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
 dario.faggioli@citrix.com, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v4 2/6] VMX: Properly handle PI when all the
 assigned devices are removed

This patch handles the corner cases that arise when the last assigned
device is removed from the domain. In that case we must carefully handle
the PI descriptors and the per-cpu blocking lists, to make sure that:
- every PI descriptor is in the right state for the next time a device
  is assigned to the domain;
- no vCPU of the domain is left on any per-cpu blocking list.

Basically, we pause the domain before zapping the PI hooks and removing
the vCPUs from the blocking lists, then unpause it afterwards.

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
v4:
- Rename some functions:
  vmx_pi_remove_vcpu_from_blocking_list() -> vmx_pi_list_remove()
  vmx_pi_blocking_cleanup() -> vmx_pi_list_cleanup()
- Remove the check in vmx_pi_list_cleanup()
- Comment adjustments
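Note: the diff below leaves out the middle of vmx_pi_list_remove() (the
unchanged remainder of the old vmx_pi_do_resume() body), which takes the
per-cpu list lock and re-checks the lock pointer before deleting the list
entry. Roughly, as a simplified sketch reconstructed from the surrounding
context (not part of this patch):

    spin_lock_irqsave(pi_blocking_list_lock, flags);

    /*
     * The wakeup path may have removed the vCPU from the list and
     * cleared v->arch.hvm_vmx.pi_blocking.lock between our read of
     * the lock pointer and taking the lock, so re-check it here
     * before touching the list entry.
     */
    if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
    {
        list_del(&v->arch.hvm_vmx.pi_blocking.list);
        v->arch.hvm_vmx.pi_blocking.lock = NULL;
    }

    spin_unlock_irqrestore(pi_blocking_list_lock, flags);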
 xen/arch/x86/hvm/vmx/vmx.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 355936a..7305f40 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -158,14 +158,12 @@ static void vmx_pi_switch_to(struct vcpu *v)
     pi_clear_sn(pi_desc);
 }
 
-static void vmx_pi_do_resume(struct vcpu *v)
+static void vmx_pi_list_remove(struct vcpu *v)
 {
     unsigned long flags;
     spinlock_t *pi_blocking_list_lock;
     struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
 
-    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
-
     /*
      * Set 'NV' field back to posted_intr_vector, so the
      * Posted-Interrupts can be delivered to the vCPU when
@@ -173,12 +171,12 @@ static void vmx_pi_do_resume(struct vcpu *v)
      */
     write_atomic(&pi_desc->nv, posted_intr_vector);
 
-    /* The vCPU is not on any blocking list. */
     pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
 
     /* Prevent the compiler from eliminating the local variable.*/
     smp_rmb();
 
+    /* The vCPU is not on any blocking list. */
    if ( pi_blocking_list_lock == NULL )
         return;
 
@@ -198,6 +196,18 @@ static void vmx_pi_do_resume(struct vcpu *v)
     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
 }
 
+static void vmx_pi_do_resume(struct vcpu *v)
+{
+    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
+
+    vmx_pi_list_remove(v);
+}
+
+static void vmx_pi_list_cleanup(struct vcpu *v)
+{
+    vmx_pi_list_remove(v);
+}
+
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_assign(struct domain *d)
 {
@@ -215,13 +225,28 @@ void vmx_pi_hooks_assign(struct domain *d)
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_deassign(struct domain *d)
 {
+    struct vcpu *v;
+
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
     ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
 
+    /*
+     * Pausing the domain makes sure no vCPU of it is running,
+     * and hence none can be executing the hooks, while we are
+     * deassigning the PI hooks and removing the vCPUs from the
+     * blocking lists.
+     */
+    domain_pause(d);
+
     d->arch.hvm_domain.vmx.vcpu_block = NULL;
     d->arch.hvm_domain.vmx.pi_do_resume = NULL;
+
+    for_each_vcpu ( d, v )
+        vmx_pi_list_cleanup(v);
+
+    domain_unpause(d);
 }
 
 static int vmx_domain_initialise(struct domain *d)
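To see why the pause/unpause bracket is needed: without it, clearing the
hooks could race with a vCPU of the domain blocking on another pCPU at
the same time. An illustrative interleaving (hook dispatch shown
abstractly; a sketch of the hazard, not actual code):

    /* pCPU0: vmx_pi_hooks_deassign(d)    pCPU1: vCPU v of d blocks     */
    /*                                    hook caller sees              */
    /*                                    d->...vcpu_block != NULL      */
    d->arch.hvm_domain.vmx.vcpu_block = NULL;
    d->arch.hvm_domain.vmx.pi_do_resume = NULL;
    for_each_vcpu ( d, v )
        vmx_pi_list_cleanup(v);           /* lists still empty here     */
    /*                                    vmx_vcpu_block(v) now puts v  */
    /*                                    on a per-cpu blocking list    */

which would leave v on a blocking list with the hooks already gone.
domain_pause() closes this window because it returns only after every
vCPU of the domain has been descheduled, so no vCPU can be inside the
hooks while they are being torn down.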