From patchwork Wed Nov 27 12:00:46 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11263897
From: Paul Durrant
Date: Wed, 27 Nov 2019 12:00:46 +0000
Message-ID: <20191127120046.1246-1-pdurrant@amazon.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed
Cc: Kevin Tian, Jun Nakajima, Wei Liu, Andrew Cooper, Julien Grall,
    Jan Beulich, Paul Durrant, Roger Pau Monné

From: Julien Grall

A guest will set up a shared page with the hypervisor for each vCPU via
XENPMU_init. The page will then get mapped in the hypervisor and only be
released when XENPMU_finish is called.

This means that if the guest fails to invoke XENPMU_finish, e.g. if it is
destroyed rather than cleanly shut down, the page will stay mapped in the
hypervisor. One of the consequences is that the domain can never be fully
destroyed, as a page reference is still held.

As Xen should never rely on the guest to correctly clean up any
allocation in the hypervisor, we should also unmap such pages during
domain destruction if any are left. We can re-use the same logic as in
pvpmu_finish(); to avoid duplication, move the logic into a new function
that can also be called from vpmu_destroy().

NOTE: The call to vpmu_destroy() must also be moved from
      arch_vcpu_destroy() into domain_relinquish_resources() such that
      the reference on the mapped page does not prevent domain_destroy()
      (which calls arch_vcpu_destroy()) from being called. Also, whilst
      it appears that vpmu_arch_destroy() is idempotent, it is by no
      means obvious. Hence move manipulation of the
      VPMU_CONTEXT_ALLOCATED flag out of implementation-specific code
      and make sure it is cleared at the end of vpmu_arch_destroy().
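For reference, the guest-side sequence that creates the mapping described
above is sketched below. This is only an illustrative sketch, not part of
the patch itself: HYPERVISOR_xenpmu_op, XENPMU_init/XENPMU_finish,
XENPMU_VER_MAJ/MIN and struct xen_pmu_params come from the public PMU
interface (xen/include/public/pmu.h) as used by PV kernels such as Linux,
whereas guest_alloc_page() and guest_page_to_frame() are hypothetical
stand-ins for the guest's own helpers.

/* Illustrative guest-side sketch only -- not part of this patch. */
#include <xen/interface/xenpmu.h>  /* XENPMU_init/finish, struct xen_pmu_params */

static int register_pmu_page(unsigned int vcpu)
{
    struct xen_pmu_params xp = {};
    void *page = guest_alloc_page();      /* hypothetical: one shared page per vCPU */

    if ( !page )
        return -ENOMEM;

    xp.version.maj = XENPMU_VER_MAJ;
    xp.version.min = XENPMU_VER_MIN;
    xp.vcpu = vcpu;
    xp.val = guest_page_to_frame(page);   /* hypothetical: frame number of that page */

    /* Xen maps the page globally and takes a page reference here... */
    return HYPERVISOR_xenpmu_op(XENPMU_init, &xp);
}

static void unregister_pmu_page(unsigned int vcpu)
{
    struct xen_pmu_params xp = {
        .version.maj = XENPMU_VER_MAJ,
        .version.min = XENPMU_VER_MIN,
        .vcpu = vcpu,
    };

    /*
     * ...and only drops that mapping and reference here. If the guest
     * never gets this far (e.g. it is destroyed), the page previously
     * stayed mapped in the hypervisor; this patch makes domain
     * destruction clean it up instead.
     */
    HYPERVISOR_xenpmu_op(XENPMU_finish, &xp);
}

If XENPMU_finish is never issued, only pvpmu_finish() would previously
have torn the mapping down, which is the gap the domain.c hunk below
closes.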
Signed-off-by: Julien Grall
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: Jun Nakajima
Cc: Kevin Tian

v2:
 - Re-word commit comment slightly
 - Re-enforce idempotency of vpmu_arch_destroy()
 - Move invocation of vpmu_destroy() earlier in domain_relinquish_resources()
---
 xen/arch/x86/cpu/vpmu.c       | 49 +++++++++++++++++++++--------------
 xen/arch/x86/cpu/vpmu_amd.c   |  1 -
 xen/arch/x86/cpu/vpmu_intel.c |  2 --
 xen/arch/x86/domain.c         | 10 ++++---
 4 files changed, 35 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index f397183ec3..08742a5e22 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -479,6 +479,8 @@ static int vpmu_arch_initialise(struct vcpu *v)
 
     if ( ret )
         printk(XENLOG_G_WARNING "VPMU: Initialization failed for %pv\n", v);
+    else
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
 
     return ret;
 }
@@ -576,11 +578,36 @@ static void vpmu_arch_destroy(struct vcpu *v)
 
         vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
     }
+
+    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
 }
 
-void vpmu_destroy(struct vcpu *v)
+static void vpmu_cleanup(struct vcpu *v)
 {
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    mfn_t mfn;
+    void *xenpmu_data;
+
+    spin_lock(&vpmu->vpmu_lock);
+
     vpmu_arch_destroy(v);
 
+    xenpmu_data = vpmu->xenpmu_data;
+    vpmu->xenpmu_data = NULL;
+
+    spin_unlock(&vpmu->vpmu_lock);
+
+    if ( xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(xenpmu_data);
+        ASSERT(mfn_valid(mfn));
+        unmap_domain_page_global(xenpmu_data);
+        put_page_and_type(mfn_to_page(mfn));
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    vpmu_cleanup(v);
     put_vpmu(v);
 }
@@ -639,9 +666,6 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
 static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
 {
     struct vcpu *v;
-    struct vpmu_struct *vpmu;
-    mfn_t mfn;
-    void *xenpmu_data;
 
     if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) )
         return;
@@ -650,22 +674,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
     if ( v != current )
         vcpu_pause(v);
 
-    vpmu = vcpu_vpmu(v);
-    spin_lock(&vpmu->vpmu_lock);
-
-    vpmu_arch_destroy(v);
-    xenpmu_data = vpmu->xenpmu_data;
-    vpmu->xenpmu_data = NULL;
-
-    spin_unlock(&vpmu->vpmu_lock);
-
-    if ( xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(xenpmu_data);
-        ASSERT(mfn_valid(mfn));
-        unmap_domain_page_global(xenpmu_data);
-        put_page_and_type(mfn_to_page(mfn));
-    }
+    vpmu_cleanup(v);
 
     if ( v != current )
         vcpu_unpause(v);
diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 3c6799b42c..8ca26f1e3a 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -534,7 +534,6 @@ int svm_vpmu_initialise(struct vcpu *v)
 
     vpmu->arch_vpmu_ops = &amd_vpmu_ops;
 
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
     return 0;
 }
 
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 6e27f6ec8e..a92d882597 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -483,8 +483,6 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
         memcpy(&vpmu->xenpmu_data->pmu.c.intel, core2_vpmu_cxt, regs_off);
     }
 
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-
     return 1;
 
 out_err:
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index f1dd86e12e..f5c0c378ef 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -454,9 +454,6 @@ void arch_vcpu_destroy(struct vcpu *v)
     xfree(v->arch.msrs);
     v->arch.msrs = NULL;
 
-    if ( !is_idle_domain(v->domain) )
-        vpmu_destroy(v);
-
     if ( is_hvm_vcpu(v) )
         hvm_vcpu_destroy(v);
     else
@@ -2136,12 +2133,17 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(vcpu_pagetables):
 
-        /* Drop the in-use references to page-table bases. */
+        /*
+         * Drop the in-use references to page-table bases and clean
+         * up vPMU instances.
+         */
         for_each_vcpu ( d, v )
         {
             ret = vcpu_destroy_pagetables(v);
             if ( ret )
                 return ret;
+
+            vpmu_destroy(v);
         }
 
     if ( altp2m_active(d) )