From patchwork Tue Feb 9 15:28:15 2021
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 12078449
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org, Julien Grall, Jan Beulich, Paul Durrant
Subject: [for-4.15][PATCH v2 4/5] xen/iommu: x86: Don't leak the IOMMU page-tables
Date: Tue, 9 Feb 2021 15:28:15 +0000
Message-Id: <20210209152816.15792-5-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210209152816.15792-1-julien@xen.org>
List-Id: Xen developer discussion
References: <20210209152816.15792-1-julien@xen.org>

From: Julien Grall

The new IOMMU page-table allocator will release the pages when
relinquishing the domain resources. However, this is not sufficient
when the domain is dying, because nothing prevents page-tables from
being allocated in the meantime.

iommu_alloc_pgtable() now checks whether the domain is dying before
adding the page to the list. We rely on &hd->arch.pgtables.lock to
synchronize d->is_dying.

Take the opportunity to check in arch_iommu_domain_destroy() that all
the page tables have been freed.

Signed-off-by: Julien Grall
Reviewed-by: Paul Durrant

---

There is one more bug that will be solved in the next patch, as I felt
each needed its own long explanation.

Changes in v2:
    - Rework the approach
    - Move the patch earlier in the series
---
 xen/drivers/passthrough/x86/iommu.c | 36 ++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cea1032b3d02..82d770107a47 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    /*
+     * There should be no page-tables left allocated by the time the
+     * domain is destroyed. Note that arch_iommu_domain_destroy() is
+     * called unconditionally, so pgtables may be uninitialized.
+     */
+    ASSERT(dom_iommu(d)->platform_ops == NULL ||
+           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -267,6 +274,12 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    /* After this barrier no new page allocations can occur. */
+    spin_barrier(&hd->arch.pgtables.lock);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
@@ -284,6 +297,7 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     unsigned int memflags = 0;
     struct page_info *pg;
     void *p;
+    bool alive = false;
 
 #ifdef CONFIG_NUMA
     if ( hd->node != NUMA_NO_NODE )
@@ -303,9 +317,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     unmap_domain_page(p);
 
     spin_lock(&hd->arch.pgtables.lock);
-    page_list_add(pg, &hd->arch.pgtables.list);
+    /*
+     * The IOMMU page-tables are freed when relinquishing the domain, but
+     * nothing prevents allocations from happening afterwards. There is
+     * no valid reason to continue to update the IOMMU page-tables while
+     * the domain is dying.
+     *
+     * So prevent page-table allocation when the domain is dying.
+     *
+     * We rely on &hd->arch.pgtables.lock to synchronize d->is_dying.
+     */
+    if ( likely(!d->is_dying) )
+    {
+        alive = true;
+        page_list_add(pg, &hd->arch.pgtables.list);
+    }
     spin_unlock(&hd->arch.pgtables.lock);
 
+    if ( unlikely(!alive) )
+    {
+        free_domheap_page(pg);
+        pg = NULL;
+    }
+
     return pg;
 }
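
[Editor's note] For reviewers who want the locking argument in isolation, below is a
minimal, self-contained sketch of the pattern the patch relies on. It is not Xen code:
a pthread mutex stands in for hd->arch.pgtables.lock, an acquire/release pair stands in
for spin_barrier(), and all demo_* names are hypothetical.

/*
 * Standalone sketch (not Xen code) of the synchronization pattern used
 * by the patch.  A pthread mutex stands in for hd->arch.pgtables.lock,
 * and the demo_* names are made up for illustration only.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct demo_pg { struct demo_pg *next; };

static pthread_mutex_t pgtables_lock = PTHREAD_MUTEX_INITIALIZER;
static struct demo_pg *pgtables_list;   /* stands in for hd->arch.pgtables.list */
static bool domain_dying;               /* stands in for d->is_dying */

/*
 * Allocation path: the page is only published on the list while the
 * domain is alive; otherwise it is freed immediately instead of leaked.
 */
struct demo_pg *demo_alloc_pgtable(void)
{
    struct demo_pg *pg = malloc(sizeof(*pg));
    bool alive = false;

    if ( !pg )
        return NULL;

    pthread_mutex_lock(&pgtables_lock);
    if ( !domain_dying )                /* checked under the lock */
    {
        alive = true;
        pg->next = pgtables_list;       /* page_list_add() in the patch */
        pgtables_list = pg;
    }
    pthread_mutex_unlock(&pgtables_lock);

    if ( !alive )
    {
        free(pg);                       /* free_domheap_page() in the patch */
        pg = NULL;
    }
    return pg;
}

/*
 * Teardown path.  In Xen, d->is_dying is set in domain_kill() without
 * holding this lock; iommu_free_pgtables() therefore issues
 * spin_barrier() so that any allocator which sampled is_dying before it
 * was set has finished inserting its page.  The acquire/release pair
 * below plays the role of that barrier in this sketch.
 */
void demo_free_pgtables(void)
{
    pthread_mutex_lock(&pgtables_lock);
    pthread_mutex_unlock(&pgtables_lock);

    /* No allocation can reach the list any more; drain it. */
    while ( pgtables_list )
    {
        struct demo_pg *pg = pgtables_list;

        pgtables_list = pg->next;
        free(pg);
    }
}

/*
 * Taking the lock to set the flag keeps this sketch race-free under the
 * C memory model; Xen itself sets d->is_dying elsewhere without this
 * lock and relies on the spin_barrier() in iommu_free_pgtables() instead.
 */
void demo_domain_kill(void)
{
    pthread_mutex_lock(&pgtables_lock);
    domain_dying = true;
    pthread_mutex_unlock(&pgtables_lock);

    demo_free_pgtables();
}

The point the commit message makes is that the is_dying check and the list insertion
happen under the same lock, so once the free path has passed the barrier and drained
the list, any later allocation frees its page immediately rather than adding it to a
list that will never be walked again.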