From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Paul Durrant, Keir Fraser, Jan Beulich
Date: Tue, 29 Mar 2016 11:47:28 +0100
Message-ID: <1459248448-13511-1-git-send-email-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH] x86/hvm/viridian: fix APIC assist page leak

Commit a6f2cdb6 "keep APIC assist page mapped..." introduced a page leak
because it relied on viridian_vcpu_deinit() always being called to release
the page mapping. This does not happen in the case of a normal domain
shutdown.

This patch fixes the problem by introducing a new function,
viridian_domain_deinit(), which iterates over the domain's vCPUs and
releases any page mappings that are still present.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Keir Fraser
Cc: Jan Beulich
Cc: Andrew Cooper
Reviewed-by: Jan Beulich
---
 xen/arch/x86/hvm/hvm.c             |  2 ++
 xen/arch/x86/hvm/viridian.c        | 16 ++++++++++++++++
 xen/include/asm-x86/hvm/viridian.h |  1 +
 3 files changed, 19 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 80d59ff..611470e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1726,6 +1726,8 @@ void hvm_domain_relinquish_resources(struct domain *d)
     if ( hvm_funcs.nhvm_domain_relinquish_resources )
         hvm_funcs.nhvm_domain_relinquish_resources(d);
 
+    viridian_domain_deinit(d);
+
     hvm_destroy_all_ioreq_servers(d);
 
     msixtbl_pt_cleanup(d);
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index dceed2c..5c76c1a 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -251,6 +251,14 @@ static void initialize_apic_assist(struct vcpu *v)
 
     if ( viridian_feature_mask(v->domain) & HVMPV_apic_assist )
     {
+        /*
+         * If we overwrite an existing address here then something has
+         * gone wrong and a domain page will leak. Instead crash the
+         * domain to make the problem obvious.
+         */
+        if ( v->arch.hvm_vcpu.viridian.apic_assist.va )
+            domain_crash(d);
+
         v->arch.hvm_vcpu.viridian.apic_assist.va = va;
         return;
     }
@@ -608,6 +616,14 @@ void viridian_vcpu_deinit(struct vcpu *v)
     teardown_apic_assist(v);
 }
 
+void viridian_domain_deinit(struct domain *d)
+{
+    struct vcpu *v;
+
+    for_each_vcpu ( d, v )
+        teardown_apic_assist(v);
+}
+
 static DEFINE_PER_CPU(cpumask_t, ipi_cpumask);
 
 int viridian_hypercall(struct cpu_user_regs *regs)
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index 7f281b2..bdbccd5 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -122,6 +122,7 @@ void viridian_time_ref_count_freeze(struct domain *d);
 void viridian_time_ref_count_thaw(struct domain *d);
 
 void viridian_vcpu_deinit(struct vcpu *v);
+void viridian_domain_deinit(struct domain *d);
 
 void viridian_start_apic_assist(struct vcpu *v, int vector);
 int viridian_complete_apic_assist(struct vcpu *v);
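
Note for readers following the diff: the new viridian_domain_deinit() does its work
by reusing teardown_apic_assist(), the existing static helper in
xen/arch/x86/hvm/viridian.c that actually drops the global mapping and the page
reference taken when the APIC assist page was set up. That helper is not touched
by this patch; the sketch below is only a rough reconstruction of what it does
(the mfn_to_page()/domain_page_map_to_mfn() details are an assumption about the
tree contents, not part of the diff):

/*
 * Rough sketch (not part of this patch) of the existing helper that
 * viridian_domain_deinit() calls; the exact body lives in
 * xen/arch/x86/hvm/viridian.c and may differ slightly.
 */
static void teardown_apic_assist(struct vcpu *v)
{
    void *va = v->arch.hvm_vcpu.viridian.apic_assist.va;
    struct page_info *page;

    /* Nothing mapped for this vCPU: nothing to release. */
    if ( !va )
        return;

    v->arch.hvm_vcpu.viridian.apic_assist.va = NULL;

    /* Recover the page backing the global mapping... */
    page = mfn_to_page(domain_page_map_to_mfn(va));

    /* ...then drop the mapping and the reference taken at setup. */
    unmap_domain_page_global(va);
    put_page_and_type(page);
}

Because the helper is a no-op whenever apic_assist.va is already NULL, calling
viridian_domain_deinit() from hvm_domain_relinquish_resources() is safe even for
vCPUs whose mapping was already released via viridian_vcpu_deinit().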