From patchwork Wed Nov 6 15:18:31 2019
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Date: Wed, 6 Nov 2019 16:18:31 +0100
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Subject: [Xen-devel] [PATCH 1/3] AMD/IOMMU: don't needlessly trigger errors/crashes when unmapping a page
Cc: Juergen Gross, Andrew Cooper, Sander Eikelenboom

Unmapping a page which has never been mapped should be a no-op (note how
it already is in case there was no root page table allocated). There's in
particular no need to grow the number of page table levels in use, and
there's also no need to allocate intermediate page tables except when
needing to split a large page.

Signed-off-by: Jan Beulich
Acked-by: Andrew Cooper
Reviewed-by: Paul Durrant
---
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -176,7 +176,7 @@ void iommu_dte_set_guest_cr3(struct amd_
  * page tables.
  */
 static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
-                              unsigned long pt_mfn[])
+                              unsigned long pt_mfn[], bool map)
 {
     struct amd_iommu_pte *pde, *next_table_vaddr;
     unsigned long next_table_mfn;
@@ -189,6 +189,13 @@ static int iommu_pde_from_dfn(struct dom
 
     BUG_ON( table == NULL || level < 1 || level > 6 );
 
+    /*
+     * A frame number past what the current page tables can represent can't
+     * possibly have a mapping.
+     */
+    if ( dfn >> (PTE_PER_TABLE_SHIFT * level) )
+        return 0;
+
     next_table_mfn = mfn_x(page_to_mfn(table));
 
     if ( level == 1 )
@@ -246,6 +253,9 @@ static int iommu_pde_from_dfn(struct dom
         /* Install lower level page table for non-present entries */
         else if ( !pde->pr )
         {
+            if ( !map )
+                return 0;
+
             if ( next_table_mfn == 0 )
             {
                 table = alloc_amd_iommu_pgtable();
@@ -404,7 +414,7 @@ int amd_iommu_map_page(struct domain *d,
         }
     }
 
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn, true) || (pt_mfn[1] == 0) )
     {
         spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
@@ -439,24 +449,7 @@ int amd_iommu_unmap_page(struct domain *
         return 0;
     }
 
-    /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager dfn now */
-    if ( is_hvm_domain(d) )
-    {
-        int rc = update_paging_mode(d, dfn_x(dfn));
-
-        if ( rc )
-        {
-            spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed dfn = %"PRI_dfn"\n",
-                            dfn_x(dfn));
-            if ( rc != -EADDRNOTAVAIL )
-                domain_crash(d);
-            return rc;
-        }
-    }
-
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn) || (pt_mfn[1] == 0) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), pt_mfn, false) )
     {
         spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry dfn = %"PRI_dfn"\n",
@@ -465,8 +458,11 @@ int amd_iommu_unmap_page(struct domain *
         return -EFAULT;
     }
 
-    /* mark PTE as 'page not present' */
-    *flush_flags |= clear_iommu_pte_present(pt_mfn[1], dfn_x(dfn));
+    if ( pt_mfn[1] )
+    {
+        /* Mark PTE as 'page not present'. */
+        *flush_flags |= clear_iommu_pte_present(pt_mfn[1], dfn_x(dfn));
+    }
 
     spin_unlock(&hd->arch.mapping_lock);