From patchwork Mon Jan 10 16:22:51 2022
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 12708944
Message-ID: <5bfc3618-12ff-f673-e880-6ea13bcf8fa3@suse.com>
Date: Mon, 10 Jan 2022 17:22:51 +0100
Subject: [PATCH v3 02/23] VT-d: have callers specify the target level for page table walks
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné, Kevin Tian
References: <76cb9f26-e316-98a2-b1ba-e51e3d20f335@suse.com>
In-Reply-To: <76cb9f26-e316-98a2-b1ba-e51e3d20f335@suse.com>
List-Id: Xen developer discussion

In
order to be able to insert/remove super-pages we need to allow callers
of the walking function to specify at which point to stop the walk.

For intel_iommu_lookup_page() integrate the last level access into
the main walking function.

dma_pte_clear_one() gets only partly adjusted for now: Error handling
and order parameter get put in place, but the order parameter remains
ignored (just like intel_iommu_map_page()'s order part of the flags).

Signed-off-by: Jan Beulich
---
I was actually wondering whether it wouldn't make sense to integrate
dma_pte_clear_one() into its only caller intel_iommu_unmap_page(), for
better symmetry with intel_iommu_map_page().
---
v2: Fix build.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -347,63 +347,116 @@ static u64 bus_to_context_maddr(struct v
     return maddr;
 }
 
-static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
+/*
+ * This function walks (and if requested allocates) page tables to the
+ * designated target level. It returns
+ * - 0 when a non-present entry was encountered and no allocation was
+ *   requested,
+ * - a small positive value (the level, i.e. below PAGE_SIZE) upon allocation
+ *   failure,
+ * - for target > 0 the physical address of the page table holding the leaf
+ *   PTE for the requested address,
+ * - for target == 0 the full PTE.
+ */
+static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
+                                       unsigned int target,
+                                       unsigned int *flush_flags, bool alloc)
 {
     struct domain_iommu *hd = dom_iommu(domain);
     int addr_width = agaw_to_width(hd->arch.vtd.agaw);
     struct dma_pte *parent, *pte = NULL;
-    int level = agaw_to_level(hd->arch.vtd.agaw);
-    int offset;
+    unsigned int level = agaw_to_level(hd->arch.vtd.agaw), offset;
     u64 pte_maddr = 0;
 
     addr &= (((u64)1) << addr_width) - 1;
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+    ASSERT(target || !alloc);
+
     if ( !hd->arch.vtd.pgd_maddr )
     {
         struct page_info *pg;
 
-        if ( !alloc || !(pg = iommu_alloc_pgtable(domain)) )
+        if ( !alloc )
+            goto out;
+
+        pte_maddr = level;
+        if ( !(pg = iommu_alloc_pgtable(domain)) )
             goto out;
 
         hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
     }
 
-    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
-    while ( level > 1 )
+    pte_maddr = hd->arch.vtd.pgd_maddr;
+    parent = map_vtd_domain_page(pte_maddr);
+    while ( level > target )
     {
         offset = address_level_offset(addr, level);
         pte = &parent[offset];
 
         pte_maddr = dma_pte_addr(*pte);
-        if ( !pte_maddr )
+        if ( !dma_pte_present(*pte) || (level > 1 && dma_pte_superpage(*pte)) )
         {
             struct page_info *pg;
+            /*
+             * Higher level tables always set r/w, last level page table
+             * controls read/write.
+             */
+            struct dma_pte new_pte = { DMA_PTE_PROT };
 
             if ( !alloc )
-                break;
+            {
+                pte_maddr = 0;
+                if ( !dma_pte_present(*pte) )
+                    break;
+
+                /*
+                 * When the leaf entry was requested, pass back the full PTE,
+                 * with the address adjusted to account for the residual of
+                 * the walk.
+                 */
+                pte_maddr = pte->val +
+                    (addr & ((1UL << level_to_offset_bits(level)) - 1) &
+                     PAGE_MASK);
+                if ( !target )
+                    break;
+            }
 
+            pte_maddr = level - 1;
             pg = iommu_alloc_pgtable(domain);
             if ( !pg )
                 break;
 
             pte_maddr = page_to_maddr(pg);
-            dma_set_pte_addr(*pte, pte_maddr);
+            dma_set_pte_addr(new_pte, pte_maddr);
 
-            /*
-             * high level table always sets r/w, last level
-             * page table control read/write
-             */
-            dma_set_pte_readable(*pte);
-            dma_set_pte_writable(*pte);
+            if ( dma_pte_present(*pte) )
+            {
+                struct dma_pte *split = map_vtd_domain_page(pte_maddr);
+                unsigned long inc = 1UL << level_to_offset_bits(level - 1);
+
+                split[0].val = pte->val;
+                if ( inc == PAGE_SIZE )
+                    split[0].val &= ~DMA_PTE_SP;
+
+                for ( offset = 1; offset < PTE_NUM; ++offset )
+                    split[offset].val = split[offset - 1].val + inc;
+
+                iommu_sync_cache(split, PAGE_SIZE);
+                unmap_vtd_domain_page(split);
+
+                if ( flush_flags )
+                    *flush_flags |= IOMMU_FLUSHF_modified;
+            }
+
+            write_atomic(&pte->val, new_pte.val);
             iommu_sync_cache(pte, sizeof(struct dma_pte));
         }
 
-        if ( level == 2 )
+        if ( --level == target )
            break;
 
         unmap_vtd_domain_page(parent);
         parent = map_vtd_domain_page(pte_maddr);
-        level--;
     }
 
     unmap_vtd_domain_page(parent);
@@ -430,7 +483,7 @@ static uint64_t domain_pgd_maddr(struct
     if ( !hd->arch.vtd.pgd_maddr )
    {
         /* Ensure we have pagetables allocated down to leaf PTE. */
-        addr_to_dma_page_maddr(d, 0, 1);
+        addr_to_dma_page_maddr(d, 0, 1, NULL, true);
 
         if ( !hd->arch.vtd.pgd_maddr )
             return 0;
@@ -770,8 +823,9 @@ static int __must_check iommu_flush_iotl
 }
 
 /* clear one page's page table */
-static void dma_pte_clear_one(struct domain *domain, uint64_t addr,
-                              unsigned int *flush_flags)
+static int dma_pte_clear_one(struct domain *domain, daddr_t addr,
+                             unsigned int order,
+                             unsigned int *flush_flags)
 {
     struct domain_iommu *hd = dom_iommu(domain);
     struct dma_pte *page = NULL, *pte = NULL;
@@ -779,11 +833,11 @@ static void dma_pte_clear_one(struct dom
     spin_lock(&hd->arch.mapping_lock);
 
     /* get last level pte */
-    pg_maddr = addr_to_dma_page_maddr(domain, addr, 0);
-    if ( pg_maddr == 0 )
+    pg_maddr = addr_to_dma_page_maddr(domain, addr, 1, flush_flags, false);
+    if ( pg_maddr < PAGE_SIZE )
     {
         spin_unlock(&hd->arch.mapping_lock);
-        return;
+        return pg_maddr ? -ENOMEM : 0;
     }
 
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
@@ -793,7 +847,7 @@ static void dma_pte_clear_one(struct dom
     {
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
-        return;
+        return 0;
     }
 
     dma_clear_pte(*pte);
@@ -803,6 +857,8 @@ static void dma_pte_clear_one(struct dom
     iommu_sync_cache(pte, sizeof(struct dma_pte));
 
     unmap_vtd_domain_page(page);
+
+    return 0;
 }
 
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
@@ -1914,8 +1970,9 @@ static int __must_check intel_iommu_map_
         return 0;
     }
 
-    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
-    if ( !pg_maddr )
+    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1, flush_flags,
+                                      true);
+    if ( pg_maddr < PAGE_SIZE )
     {
         spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
@@ -1965,17 +2022,14 @@ static int __must_check intel_iommu_unma
     if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
         return 0;
 
-    dma_pte_clear_one(d, dfn_to_daddr(dfn), flush_flags);
-
-    return 0;
+    return dma_pte_clear_one(d, dfn_to_daddr(dfn), 0, flush_flags);
 }
 
 static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                    unsigned int *flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    struct dma_pte *page, val;
-    u64 pg_maddr;
+    uint64_t val;
 
     /*
      * If VT-d shares EPT page table or if the domain is the hardware
@@ -1987,25 +2041,16 @@ static int intel_iommu_lookup_page(struc
 
     spin_lock(&hd->arch.mapping_lock);
 
-    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
-    if ( !pg_maddr )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        return -ENOENT;
-    }
-
-    page = map_vtd_domain_page(pg_maddr);
-    val = page[dfn_x(dfn) & LEVEL_MASK];
+    val = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0, NULL, false);
 
-    unmap_vtd_domain_page(page);
     spin_unlock(&hd->arch.mapping_lock);
 
-    if ( !dma_pte_present(val) )
+    if ( val < PAGE_SIZE )
         return -ENOENT;
 
-    *mfn = maddr_to_mfn(dma_pte_addr(val));
-    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
-    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
+    *mfn = maddr_to_mfn(val);
+    *flags = val & DMA_PTE_READ ? IOMMUF_readable : 0;
+    *flags |= val & DMA_PTE_WRITE ? IOMMUF_writable : 0;
 
     return 0;
 }