From patchwork Tue Dec 22 15:43:35 2020
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 11986991
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, Julien Grall, Jan Beulich, Paul Durrant
Subject: [PATCH for-4.15 1/4] xen/iommu: Check if the IOMMU was initialized before tearing down
Date: Tue, 22 Dec 2020 15:43:35 +0000
Message-Id: <20201222154338.9459-2-julien@xen.org>
In-Reply-To: <20201222154338.9459-1-julien@xen.org>
References: <20201222154338.9459-1-julien@xen.org>

From: Julien Grall

is_iommu_enabled() will return true even if the IOMMU has not been
initialized (e.g. the ops are not set yet).

In the case of an early failure in arch_domain_init(),
iommu_domain_destroy() will be called even though the IOMMU has not
been initialized. This results in dereferencing the ops pointer, which
is still NULL, and hence a host crash.

Fix the issue by checking that the ops have been set before accessing
them.

Note that we are assuming that arch_iommu_domain_init() cleans up
after itself if it fails part-way through.

Fixes: 71e617a6b8f6 ("use is_iommu_enabled() where appropriate...")
Signed-off-by: Julien Grall
---
 xen/drivers/passthrough/iommu.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 2358b6eb09f4..f976d5a0b0a5 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -226,7 +226,15 @@ static void iommu_teardown(struct domain *d)
 
 void iommu_domain_destroy(struct domain *d)
 {
-    if ( !is_iommu_enabled(d) )
+    struct domain_iommu *hd = dom_iommu(d);
+
+    /*
+     * In case of failure during the domain construction, it would be
+     * possible to reach this function with the IOMMU enabled but not
+     * yet initialized. We assume that hd->platform_ops will be
+     * non-NULL as soon as we start to initialize the IOMMU.
+     */
+    if ( !is_iommu_enabled(d) || !hd->platform_ops )
         return;
 
     iommu_teardown(d);
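
For context, the window this patch closes can be illustrated with the
following stand-alone C sketch. The types and names here are simplified
and hypothetical, not the actual Xen code: is_iommu_enabled() reflects a
flag set at domain creation, while platform_ops is only filled in once
the IOMMU driver has initialized, so an early failure can reach the
destroy path in between.

#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for Xen's types; only the fields relevant here. */
struct iommu_ops { void (*teardown)(void *domain); };

struct domain_iommu {
    bool enabled;                          /* what is_iommu_enabled() reports */
    const struct iommu_ops *platform_ops;  /* set only once the driver is up  */
};

static void iommu_domain_destroy_sketch(struct domain_iommu *hd, void *d)
{
    /*
     * The enabled flag is set at domain creation, before the IOMMU
     * driver has filled in platform_ops.  If domain construction fails
     * in that window, checking only the flag would dereference NULL.
     */
    if ( !hd->enabled || !hd->platform_ops )
        return;

    hd->platform_ops->teardown(d);
}

Checking the ops pointer rather than adding a separate "initialized"
flag keeps the guard local to the teardown path, which is the design
choice the patch takes.
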
From patchwork Tue Dec 22 15:43:36 2020
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 11986987
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, Julien Grall, Jan Beulich, Paul Durrant
Subject: [PATCH for-4.15 2/4] xen/iommu: x86: Free the IOMMU page-tables with the pgtables.lock held
Date: Tue, 22 Dec 2020 15:43:36 +0000
Message-Id: <20201222154338.9459-3-julien@xen.org>
In-Reply-To: <20201222154338.9459-1-julien@xen.org>
References: <20201222154338.9459-1-julien@xen.org>

From: Julien Grall

The pgtables.lock is meant to protect access to the page list
pgtables.list. However, iommu_free_pgtables() does not hold it. It was
presumably assumed that page-tables cannot be allocated while the
domain is dying.

Unfortunately, there is no guarantee that iommu_map() will not be
called while a domain is dying (it looks to be possible from
XEN_DOMCTL_memory_mapping). So memory could be allocated and the
page-tables freed concurrently.

Therefore, we need to hold the lock when freeing the page-tables.

There are more issues around the IOMMU page-table allocator. They will
be handled in follow-up patches.
Signed-off-by: Julien Grall
---
 xen/drivers/passthrough/x86/iommu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cea1032b3d02..779dbb5b98ba 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,13 +267,18 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    spin_lock(&hd->arch.pgtables.lock);
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
 
         if ( !(++done & 0xff) && general_preempt_check() )
+        {
+            spin_unlock(&hd->arch.pgtables.lock);
             return -ERESTART;
+        }
     }
+    spin_unlock(&hd->arch.pgtables.lock);
 
     return 0;
 }
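
As an aside for readers following along, the locking pattern introduced
above can be summarised with this stand-alone C sketch. The types are
hypothetical; a pthread mutex stands in for Xen's spin_lock() and
free() for free_domheap_page().

#include <pthread.h>
#include <stdlib.h>

/* A page on the to-be-freed list (stand-in for struct page_info). */
struct pg { struct pg *next; };

struct pgtables {
    pthread_mutex_t lock;   /* stands in for hd->arch.pgtables.lock */
    struct pg *head;        /* stands in for hd->arch.pgtables.list */
};

#define RESTART_HINT (-1)   /* stands in for -ERESTART */

/*
 * Pop pages one by one with the lock held, so a concurrent allocation
 * (adding to the list) can never observe a half-updated list.  Drop the
 * lock again before returning on a preemption request, mirroring the
 * general_preempt_check() handling in the patch.
 */
static int free_pgtables_sketch(struct pgtables *t, int (*preempt)(void))
{
    unsigned int done = 0;

    pthread_mutex_lock(&t->lock);
    while ( t->head )
    {
        struct pg *pg = t->head;

        t->head = pg->next;
        free(pg);

        if ( !(++done & 0xff) && preempt() )
        {
            pthread_mutex_unlock(&t->lock);   /* never return with it held */
            return RESTART_HINT;
        }
    }
    pthread_mutex_unlock(&t->lock);

    return 0;
}
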
From patchwork Tue Dec 22 15:43:37 2020
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 11986993
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, Julien Grall, Jan Beulich, Paul Durrant
Subject: [PATCH for-4.15 3/4] [RFC] xen/iommu: x86: Clear the root page-table before freeing the page-tables
Date: Tue, 22 Dec 2020 15:43:37 +0000
Message-Id: <20201222154338.9459-4-julien@xen.org>
In-Reply-To: <20201222154338.9459-1-julien@xen.org>
References: <20201222154338.9459-1-julien@xen.org>

From: Julien Grall

The new per-domain IOMMU page-table allocator will now free the
page-tables when the domain's resources are relinquished. However, the
root page-table (i.e. hd->arch.vtd.pgd_maddr) will not be cleared.

Xen may access the IOMMU page-tables afterwards, at least in the case
of a PV domain:

(XEN) Xen call trace:
(XEN)    [] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
(XEN)    [] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
(XEN)    [] F iommu_unmap+0x9c/0x129
(XEN)    [] F iommu_legacy_unmap+0x26/0x63
(XEN)    [] F mm.c#cleanup_page_mappings+0x139/0x144
(XEN)    [] F put_page+0x4b/0xb3
(XEN)    [] F put_page_from_l1e+0x136/0x13b
(XEN)    [] F devalidate_page+0x256/0x8dc
(XEN)    [] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [] F mm.c#put_page_from_l2e+0x8a/0xcf
(XEN)    [] F devalidate_page+0x3a3/0x8dc
(XEN)    [] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [] F mm.c#put_page_from_l3e+0x8a/0xcf
(XEN)    [] F devalidate_page+0x56c/0x8dc
(XEN)    [] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [] F mm.c#put_page_from_l4e+0x69/0x6d
(XEN)    [] F devalidate_page+0x6a0/0x8dc
(XEN)    [] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [] F put_page_type_preemptible+0x13/0x15
(XEN)    [] F domain.c#relinquish_memory+0x1ff/0x4e9
(XEN)    [] F domain_relinquish_resources+0x2b6/0x36a
(XEN)    [] F domain_kill+0xb8/0x141
(XEN)    [] F do_domctl+0xb6f/0x18e5
(XEN)    [] F pv_hypercall+0x2f0/0x55f
(XEN)    [] F lstar_enter+0x112/0x120

This will result in a use-after-free and possibly a host crash or
memory corruption.

Freeing the page-tables further down in domain_relinquish_resources()
would not work because pages may not be released until later if
another domain holds a reference on them.

Once all the PCI devices have been de-assigned, it is actually
pointless to modify the IOMMU page-tables. So we can simply clear the
root page-table address.

Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
Signed-off-by: Julien Grall

---
This is an RFC because it would break the AMD IOMMU driver. One option
would be to move the call to the teardown callback earlier on. Any
opinions?
---
 xen/drivers/passthrough/x86/iommu.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 779dbb5b98ba..99a23177b3d2 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,6 +267,16 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    /*
+     * Pages will be moved to the free list in a bit. So we want to
+     * clear the root page-table to avoid any potential use-after-free.
+     *
+     * XXX: This code only works for Intel VT-d.
+     */
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.vtd.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
+
     spin_lock(&hd->arch.pgtables.lock);
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
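
To illustrate why clearing the root address helps, here is a
stand-alone C sketch with hypothetical, much-simplified types; pthread
mutexes stand in for Xen's spinlocks and the walk is only a stub for
addr_to_dma_page_maddr().

#include <stdint.h>
#include <pthread.h>

/* Hypothetical, minimal view of the per-domain IOMMU state. */
struct hd_arch_sketch {
    pthread_mutex_t mapping_lock;   /* stands in for hd->arch.mapping_lock */
    uint64_t pgd_maddr;             /* root table address, 0 == no table   */
};

/* Called before the page-table pages are handed back to the allocator. */
static void clear_root_sketch(struct hd_arch_sketch *arch)
{
    pthread_mutex_lock(&arch->mapping_lock);
    arch->pgd_maddr = 0;            /* later walks now bail out early */
    pthread_mutex_unlock(&arch->mapping_lock);
}

/*
 * Stand-in for a page-table walk: with the root cleared, a late unmap
 * (e.g. the put_page_from_l1e() path in the trace above) returns
 * immediately instead of walking pages that have already been freed.
 */
static uint64_t walk_sketch(struct hd_arch_sketch *arch, uint64_t dfn)
{
    uint64_t root;

    pthread_mutex_lock(&arch->mapping_lock);
    root = arch->pgd_maddr;
    pthread_mutex_unlock(&arch->mapping_lock);

    if ( !root )
        return 0;                   /* no table, nothing to unmap */

    (void)dfn;                      /* real code would descend here */
    return root;
}
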
From patchwork Tue Dec 22 15:43:38 2020
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 11986989
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, Julien Grall, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Paul Durrant
Subject: [PATCH for-4.15 4/4] xen/iommu: x86: Don't leak the IOMMU page-tables
Date: Tue, 22 Dec 2020 15:43:38 +0000
Message-Id: <20201222154338.9459-5-julien@xen.org>
In-Reply-To: <20201222154338.9459-1-julien@xen.org>
References: <20201222154338.9459-1-julien@xen.org>

From: Julien Grall

The new IOMMU page-table allocator will release the pages when the
domain's resources are relinquished. However, this is not sufficient
in two cases:
    1) domain_relinquish_resources() is not called when the domain
       creation fails.
    2) There is nothing preventing page-table allocations while the
       domain is dying.

In both cases, this can be solved by freeing the page-tables again
during domain destruction, although this may result in a high number
of page-tables to free.

In the second case, it is pointless to allow page-table allocations
when the domain is going to die. iommu_alloc_pgtable() will now return
an error when it is called while the domain is dying.

Signed-off-by: Julien Grall
---
 xen/arch/x86/domain.c               |  2 +-
 xen/drivers/passthrough/x86/iommu.c | 32 +++++++++++++++++++++++++++--
 xen/include/asm-x86/iommu.h         |  2 +-
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b9ba04633e18..1b7ee5c1a8cb 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2290,7 +2290,7 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(iommu_pagetables):
 
-        ret = iommu_free_pgtables(d);
+        ret = iommu_free_pgtables(d, false);
         if ( ret )
             return ret;
 
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 99a23177b3d2..4a083e4b8f11 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,21 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+    int rc;
+
+    /*
+     * The relinquish code will not be executed if the domain creation
+     * failed. To avoid any memory leak, we want to free any IOMMU
+     * page-tables that may have been allocated.
+     */
+    rc = iommu_free_pgtables(d, false);
+
+    /* Preemption was disabled, so the call should never fail. */
+    if ( rc )
+        ASSERT_UNREACHABLE();
+
+    ASSERT(page_list_empty(&hd->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -261,7 +276,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
     return;
 }
 
-int iommu_free_pgtables(struct domain *d)
+int iommu_free_pgtables(struct domain *d, bool preempt)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct page_info *pg;
@@ -282,7 +297,7 @@ int iommu_free_pgtables(struct domain *d)
     {
         free_domheap_page(pg);
 
-        if ( !(++done & 0xff) && general_preempt_check() )
+        if ( !(++done & 0xff) && preempt && general_preempt_check() )
         {
             spin_unlock(&hd->arch.pgtables.lock);
             return -ERESTART;
@@ -305,6 +320,19 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     memflags = MEMF_node(hd->node);
 #endif
 
+    /*
+     * The IOMMU page-tables are freed when relinquishing the domain, but
+     * nothing prevents allocations from happening afterwards. There is
+     * no valid reason to continue to update the IOMMU page-tables while
+     * the domain is dying.
+     *
+     * So prevent page-table allocation when the domain is dying. Note
+     * this doesn't fully prevent the race because d->is_dying may not
+     * yet be seen.
+     */
+    if ( d->is_dying )
+        return NULL;
+
     pg = alloc_domheap_page(NULL, memflags);
     if ( !pg )
         return NULL;
 
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 970eb06ffac5..874bb5bbfbde 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -135,7 +135,7 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
         iommu_vcall(ops, sync_cache, addr, size);       \
 })
 
-int __must_check iommu_free_pgtables(struct domain *d);
+int __must_check iommu_free_pgtables(struct domain *d, bool preempt);
 struct page_info *__must_check iommu_alloc_pgtable(struct domain *d);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
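
To make the allocation side of the patch easier to follow, here is a
stand-alone C sketch of the gate added to iommu_alloc_pgtable(); the
types and names are hypothetical stand-ins, not the Xen code.

#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical, minimal stand-in for struct domain. */
struct domain_sketch { volatile bool is_dying; };

/*
 * Stand-in for iommu_alloc_pgtable(): refuse to grow the page-tables
 * once the domain has been marked as dying, so the final
 * iommu_free_pgtables() pass in arch_iommu_domain_destroy() only has a
 * bounded amount of work left.  As the comment in the patch notes, this
 * is best-effort: a CPU that has not yet observed is_dying can still
 * let one allocation through.
 */
static void *alloc_pgtable_sketch(struct domain_sketch *d, size_t size)
{
    if ( d->is_dying )
        return NULL;              /* pointless for a dying domain */

    return calloc(1, size);       /* stand-in for alloc_domheap_page() */
}
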