From patchwork Thu Nov 28 09:38:28 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11265653
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Thu, 28 Nov 2019 09:38:28 +0000
Message-ID: <20191128093828.8462-1-pdurrant@amazon.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH v3] xen/x86: vpmu: Unmap per-vCPU PMU page when
 the domain is destroyed
Cc: Kevin Tian, Jun Nakajima, Wei Liu, Andrew Cooper, Julien Grall,
 Jan Beulich, Paul Durrant, Roger Pau Monné

From: Julien Grall

A guest will set up a shared page with the hypervisor for each vCPU via
XENPMU_init. The page will then get mapped in the hypervisor and only
released when XENPMU_finish is called.

This means that if the guest fails to invoke XENPMU_finish, e.g. if it is
destroyed rather than cleanly shut down, the page will stay mapped in the
hypervisor. One of the consequences is that the domain can never be fully
destroyed, as a page reference is still held.

As Xen should never rely on the guest to correctly clean up any allocation
in the hypervisor, we should also unmap such pages during domain
destruction if there are any left. We can re-use the same logic as in
pvpmu_finish(). To avoid duplication, move the logic into a new function
that can also be called from vpmu_destroy().

NOTE: - The call to vpmu_destroy() must also be moved from
        arch_vcpu_destroy() into domain_relinquish_resources() such that
        the reference on the mapped page does not prevent domain_destroy()
        (which calls arch_vcpu_destroy()) from being called.
      - Whilst it appears that vpmu_arch_destroy() is idempotent, this is
        by no means obvious. Hence make sure the VPMU_CONTEXT_ALLOCATED
        flag is cleared at the end of vpmu_arch_destroy().
      - This is not an XSA because vPMU is not security supported
        (see XSA-163).
Signed-off-by: Julien Grall
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: Jun Nakajima
Cc: Kevin Tian

v2:
 - Re-word commit comment slightly
 - Re-enforce idempotency of vpmu_arch_destroy()
 - Move invocation of vpmu_destroy() earlier in
   domain_relinquish_resources()
---
 xen/arch/x86/cpu/vpmu.c | 47 +++++++++++++++++++++++------------------
 xen/arch/x86/domain.c   | 10 +++++----
 2 files changed, 33 insertions(+), 24 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index f397183ec3..792953e7cb 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -576,11 +576,36 @@ static void vpmu_arch_destroy(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
     }
+
+    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
 }
 
-void vpmu_destroy(struct vcpu *v)
+static void vpmu_cleanup(struct vcpu *v)
 {
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    mfn_t mfn;
+    void *xenpmu_data;
+
+    spin_lock(&vpmu->vpmu_lock);
+    vpmu_arch_destroy(v);
+    xenpmu_data = vpmu->xenpmu_data;
+    vpmu->xenpmu_data = NULL;
+
+    spin_unlock(&vpmu->vpmu_lock);
+
+    if ( xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(xenpmu_data);
+        ASSERT(mfn_valid(mfn));
+        unmap_domain_page_global(xenpmu_data);
+        put_page_and_type(mfn_to_page(mfn));
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    vpmu_cleanup(v);
     put_vpmu(v);
 }
 
@@ -639,9 +664,6 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
 static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
 {
     struct vcpu *v;
-    struct vpmu_struct *vpmu;
-    mfn_t mfn;
-    void *xenpmu_data;
 
     if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) )
         return;
@@ -650,22 +672,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
     if ( v != current )
         vcpu_pause(v);
 
-    vpmu = vcpu_vpmu(v);
-    spin_lock(&vpmu->vpmu_lock);
-
-    vpmu_arch_destroy(v);
-    xenpmu_data = vpmu->xenpmu_data;
-    vpmu->xenpmu_data = NULL;
-
-    spin_unlock(&vpmu->vpmu_lock);
-
-    if ( xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(xenpmu_data);
-        ASSERT(mfn_valid(mfn));
-        unmap_domain_page_global(xenpmu_data);
-        put_page_and_type(mfn_to_page(mfn));
-    }
+    vpmu_cleanup(v);
 
     if ( v != current )
         vcpu_unpause(v);

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index f1dd86e12e..f5c0c378ef 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -454,9 +454,6 @@ void arch_vcpu_destroy(struct vcpu *v)
     xfree(v->arch.msrs);
     v->arch.msrs = NULL;
 
-    if ( !is_idle_domain(v->domain) )
-        vpmu_destroy(v);
-
     if ( is_hvm_vcpu(v) )
         hvm_vcpu_destroy(v);
     else
@@ -2136,12 +2133,17 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(vcpu_pagetables):
 
-        /* Drop the in-use references to page-table bases. */
+        /*
+         * Drop the in-use references to page-table bases and clean
+         * up vPMU instances.
+         */
         for_each_vcpu ( d, v )
         {
             ret = vcpu_destroy_pagetables(v);
             if ( ret )
                 return ret;
+
+            vpmu_destroy(v);
         }
 
         if ( altp2m_active(d) )
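
[Editorial note, not part of the patch.] For readers unfamiliar with the
interface the commit message refers to: XENPMU_init hands a per-vCPU page
from the guest to Xen, which maps it globally and holds a page reference
until XENPMU_finish. The sketch below is a minimal illustration of that
guest-side protocol, loosely modelled on the Linux PV PMU driver
(arch/x86/xen/pmu.c); the helper names (example_vpmu_register/unregister)
and the exact header paths are assumptions for illustration only, not code
from this series.

/*
 * Illustrative sketch only (assumed guest-side usage, modelled on the
 * Linux PV PMU driver). Not part of this patch.
 */
#include <linux/gfp.h>
#include <xen/interface/xenpmu.h>   /* struct xen_pmu_params, XENPMU_*   */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_xenpmu_op()            */
#include <asm/xen/page.h>           /* virt_to_pfn(), pfn_to_mfn()       */

/* Register a per-vCPU shared PMU page with the hypervisor. */
static int example_vpmu_register(unsigned int cpu,
                                 struct xen_pmu_data *shared_page)
{
    struct xen_pmu_params xp = {
        .version.maj = XENPMU_VER_MAJ,
        .version.min = XENPMU_VER_MIN,
        .val = pfn_to_mfn(virt_to_pfn(shared_page)), /* MFN of the page */
        .vcpu = cpu,
    };

    /* Xen maps the page and takes a reference until XENPMU_finish. */
    return HYPERVISOR_xenpmu_op(XENPMU_init, &xp);
}

/* Ask the hypervisor to unmap the page and drop its reference. */
static void example_vpmu_unregister(unsigned int cpu)
{
    struct xen_pmu_params xp = {
        .version.maj = XENPMU_VER_MAJ,
        .version.min = XENPMU_VER_MIN,
        .vcpu = cpu,
    };

    (void)HYPERVISOR_xenpmu_op(XENPMU_finish, &xp);
}

If the guest is destroyed before it can issue XENPMU_finish, the
hypervisor-side mapping and page reference were previously leaked, which is
exactly the case the new vpmu_cleanup() path, invoked from
domain_relinquish_resources(), now handles.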