From patchwork Tue Nov 26 17:17:15 2019
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11262911
From: Paul Durrant <pdurrant@amazon.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 26 Nov 2019 17:17:15 +0000
Message-ID: <20191126171715.10881-1-pdurrant@amazon.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed
Cc: Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich, Paul Durrant,
 Roger Pau Monné

From: Julien Grall

A guest will set up a shared page with the hypervisor for each vCPU via
XENPMU_init. The page will then get mapped in the hypervisor and will only
be released when XENPMU_finish is called.

This means that if the guest is not shut down gracefully (e.g. it is
destroyed via xl destroy), the page will stay mapped in the hypervisor.
One consequence is that the domain can never be fully destroyed, as some
of its memory is still mapped.

As Xen should never rely on the guest to correctly clean up any
allocation in the hypervisor, we should also unmap such pages during
domain destruction if any are left. We can re-use the same logic as in
pvpmu_finish(); to avoid duplication, move that logic into a new function
that can also be called from vpmu_destroy().

NOTE: The call to vpmu_destroy() must also be moved from
      arch_vcpu_destroy() into domain_relinquish_resources() such that
      the mapped page does not prevent domain_destroy() (which calls
      arch_vcpu_destroy()) from being called.

Signed-off-by: Julien Grall
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
---
 xen/arch/x86/cpu/vpmu.c | 45 +++++++++++++++++++++++------------------
 xen/arch/x86/domain.c   |  6 +++---
 2 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index f397183ec3..9ae4ed48c8 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -578,9 +578,32 @@ static void vpmu_arch_destroy(struct vcpu *v)
     }
 }
 
-void vpmu_destroy(struct vcpu *v)
+static void vpmu_cleanup(struct vcpu *v)
 {
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    mfn_t mfn;
+    void *xenpmu_data;
+
+    spin_lock(&vpmu->vpmu_lock);
+    vpmu_arch_destroy(v);
+    xenpmu_data = vpmu->xenpmu_data;
+    vpmu->xenpmu_data = NULL;
+
+    spin_unlock(&vpmu->vpmu_lock);
+
+    if ( xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(xenpmu_data);
+        ASSERT(mfn_valid(mfn));
+        unmap_domain_page_global(xenpmu_data);
+        put_page_and_type(mfn_to_page(mfn));
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    vpmu_cleanup(v);
     put_vpmu(v);
 }
 
@@ -639,9 +662,6 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
 static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
 {
     struct vcpu *v;
-    struct vpmu_struct *vpmu;
-    mfn_t mfn;
-    void *xenpmu_data;
 
     if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) )
         return;
@@ -650,22 +670,7 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
     if ( v != current )
         vcpu_pause(v);
 
-    vpmu = vcpu_vpmu(v);
-    spin_lock(&vpmu->vpmu_lock);
-
-    vpmu_arch_destroy(v);
-    xenpmu_data = vpmu->xenpmu_data;
-    vpmu->xenpmu_data = NULL;
-
-    spin_unlock(&vpmu->vpmu_lock);
-
-    if ( xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(xenpmu_data);
-        ASSERT(mfn_valid(mfn));
-        unmap_domain_page_global(xenpmu_data);
-        put_page_and_type(mfn_to_page(mfn));
-    }
+    vpmu_cleanup(v);
 
     if ( v != current )
         vcpu_unpause(v);
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index f1dd86e12e..1d75b2e6c3 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -454,9 +454,6 @@ void arch_vcpu_destroy(struct vcpu *v)
     xfree(v->arch.msrs);
     v->arch.msrs = NULL;
 
-    if ( !is_idle_domain(v->domain) )
-        vpmu_destroy(v);
-
     if ( is_hvm_vcpu(v) )
         hvm_vcpu_destroy(v);
     else
@@ -2224,6 +2221,9 @@ int domain_relinquish_resources(struct domain *d)
     if ( is_hvm_domain(d) )
         hvm_domain_relinquish_resources(d);
 
+    for_each_vcpu ( d, v )
+        vpmu_destroy(v);
+
     return 0;
 }
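
[Editor's note] For context, the guest side of the lifecycle described in
the commit message looks roughly like the sketch below. It is a minimal
illustration modelled on Linux's PV vPMU code (arch/x86/xen/pmu.c); the
hypercall wrapper and page helpers used here (HYPERVISOR_xenpmu_op(),
get_zeroed_page(), virt_to_pfn(), pfn_to_mfn()) follow that implementation
and are assumptions in this context, not part of the patch. Only
xen_pmu_params, xen_pmu_data and the XENPMU_* constants come from the
public Xen interface (xen/include/public/pmu.h).

/*
 * Sketch of a PV guest's per-vCPU PMU page setup/teardown, loosely
 * following Linux's arch/x86/xen/pmu.c. Helper names are illustrative.
 */
static int guest_pmu_init(unsigned int cpu)
{
	struct xen_pmu_params xp = {
		.vcpu = cpu,
		.version.maj = XENPMU_VER_MAJ,
		.version.min = XENPMU_VER_MIN,
	};
	struct xen_pmu_data *xenpmu_data;

	/* One shared page per vCPU; Xen maps it globally in pvpmu_init(). */
	xenpmu_data = (struct xen_pmu_data *)get_zeroed_page(GFP_KERNEL);
	if (!xenpmu_data)
		return -ENOMEM;

	xp.val = pfn_to_mfn(virt_to_pfn(xenpmu_data)) << PAGE_SHIFT;

	/* Xen takes a type reference and a global mapping of the page here. */
	return HYPERVISOR_xenpmu_op(XENPMU_init, &xp);
}

static void guest_pmu_finish(unsigned int cpu)
{
	struct xen_pmu_params xp = {
		.vcpu = cpu,
		.version.maj = XENPMU_VER_MAJ,
		.version.min = XENPMU_VER_MIN,
	};

	/* Asks Xen to drop its mapping and type reference for this vCPU. */
	HYPERVISOR_xenpmu_op(XENPMU_finish, &xp);
}

A guest killed with "xl destroy" never reaches guest_pmu_finish(), so the
hypervisor-side mapping taken at XENPMU_init time used to be leaked; the
patch above releases it from domain_relinquish_resources() instead.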