From patchwork Fri Feb 26 10:56:39 2021
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 12106461
List-Id: Xen developer discussion <xen-devel@lists.xenproject.org>
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org, Julien Grall, Jan Beulich, Andrew Cooper, Kevin Tian, Paul Durrant
Subject: [PATCH for-4.15 v5 2/3] xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
Date: Fri, 26 Feb 2021 10:56:39 +0000
Message-Id: <20210226105640.12037-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210226105640.12037-1-julien@xen.org>
References: <20210226105640.12037-1-julien@xen.org>

From: Julien Grall

The new x86 IOMMU page-tables allocator will release the pages when
relinquishing the domain resources. However, this is not sufficient
when the domain is dying, because nothing prevents page-tables from
being allocated.

As the domain is dying, it is not necessary to continue to modify the
IOMMU page-tables, as they are going to be destroyed soon.

At the moment, page-table allocations can only happen via iommu_map().
So after this change there will be no more page-table allocations,
because we don't use superpage mappings yet when not sharing page
tables.

In order to observe d->is_dying correctly, we need to rely on per-arch
locking, so the check to ignore IOMMU mapping requests is added in the
per-driver map_page() callbacks.

Signed-off-by: Julien Grall
Reviewed-by: Jan Beulich
Reviewed-by: Kevin Tian

---

As discussed in v3, this is only covering 4.15. We can discuss
post-4.15 how to catch page-table allocations if another caller
(e.g. iommu_unmap() if we ever decide to support superpages) starts
to use the page-table allocator.

Changes in v5:
    - Clarify in the commit message why fixing iommu_map() is enough
    - Split "if ( !is_iommu_enabled(d) )" into a separate patch
    - Update the comment on top of the spin_barrier()

Changes in v4:
    - Move the patch to the top of the queue
    - Reword the commit message

Changes in v3:
    - Patch added. This is a replacement of
      "xen/iommu: iommu_map: Don't crash the domain if it is dying"
---
 xen/drivers/passthrough/amd/iommu_map.c | 12 ++++++++++++
 xen/drivers/passthrough/vtd/iommu.c     | 12 ++++++++++++
 xen/drivers/passthrough/x86/iommu.c     |  3 +++
 3 files changed, 27 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index d3a8b1aec766..560af54b765b 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -285,6 +285,18 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d136fe36883b..b549a71530d5 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1762,6 +1762,18 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
     if ( !pg_maddr )
     {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 58a330e82247..ad19b7dd461c 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -270,6 +270,9 @@ int iommu_free_pgtables(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return 0;
 
+    /* After this barrier, no new IOMMU mappings can be inserted. */
+    spin_barrier(&hd->arch.mapping_lock);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
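
For readers unfamiliar with the locking scheme, the synchronisation this
patch relies on can be modelled in plain user-space C. This is only an
illustrative sketch, not Xen code: a pthread mutex stands in for
hd->arch.mapping_lock, an empty lock/unlock pair models spin_barrier(),
and all other names below are made up for the example.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t mapping_lock = PTHREAD_MUTEX_INITIALIZER;
static bool is_dying;      /* stands in for d->is_dying */
static int nr_pgtables;    /* stands in for hd->arch.pgtables.list */

/* Models the per-driver map_page() callbacks: check the flag under the lock. */
static int map_page(void)
{
    pthread_mutex_lock(&mapping_lock);

    if ( is_dying )
    {
        /* Mapping request safely ignored: the tables will be freed anyway. */
        pthread_mutex_unlock(&mapping_lock);
        return 0;
    }

    nr_pgtables++;         /* models a page-table allocation */
    pthread_mutex_unlock(&mapping_lock);

    return 0;
}

/* Models iommu_free_pgtables(): barrier first, then free everything. */
static void free_pgtables(void)
{
    /*
     * Models spin_barrier(): once the lock has been taken and released,
     * every mapper either saw is_dying set or has already dropped the
     * lock, so no new allocation can race with the teardown below.
     */
    pthread_mutex_lock(&mapping_lock);
    pthread_mutex_unlock(&mapping_lock);

    nr_pgtables = 0;       /* models freeing the page-table list */
}

int main(void)
{
    is_dying = true;       /* set by the domain destruction path */
    map_page();            /* ignored because the domain is dying */
    free_pgtables();
    printf("page tables left: %d\n", nr_pgtables);
    return 0;
}

The point is the ordering: the teardown path never needs to hold the
lock while freeing, it only needs the barrier to guarantee that any
in-flight mapper which could still allocate has finished first.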