From patchwork Tue Jun 18 11:54:16 2019
From: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 18 Jun 2019 11:54:16 +0000
Message-ID: <20190618115401.15044-1-aisaila@bitdefender.com>
Subject: [Xen-devel] [PATCH v1] x86/mm: Clean IOMMU flags from p2m-pt code
Cc: wl@xen.org, george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
 jbeulich@suse.com, Alexandru Stefan ISAILA <aisaila@bitdefender.com>,
 roger.pau@citrix.com

At the moment the IOMMU flags are not used in p2m-pt and could be put to
use elsewhere. This patch cleans up the use of IOMMU flags on the AMD
p2m side.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Suggested-by: George Dunlap
---
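Note (illustration only, not part of the patch): as a reminder of what is
being dropped, below is a minimal standalone sketch, with made-up names
rather than Xen's, of the bit packing that the removed
iommu_nlevel_to_flags()/p2m_add_iommu_flags() pair performed when the p2m
tables were shared with the AMD IOMMU: the next-level value goes into PTE
bits 9-11 and the IOMMU read/write permissions into bits 21-22.

/* Standalone illustration only -- all names here are made up, not Xen's. */
#include <stdint.h>
#include <stdio.h>

#define EX_IOMMUF_READABLE  (1u << 0)   /* assumed read permission bit */
#define EX_IOMMUF_WRITABLE  (1u << 1)   /* assumed write permission bit */

/*
 * Next level in bits 9-11, r/w permissions in bits 21-22, mirroring the
 * removed iommu_nlevel_to_flags() macro: ((nl & 0x7) << 9) | ((f & 0x3) << 21).
 */
static uint64_t ex_nlevel_to_flags(unsigned int nlevel, unsigned int perms)
{
    return ((uint64_t)(nlevel & 0x7) << 9) | ((uint64_t)(perms & 0x3) << 21);
}

int main(void)
{
    /* e.g. a level-2 entry with full IOMMU read/write access */
    uint64_t extra = ex_nlevel_to_flags(2, EX_IOMMUF_READABLE |
                                           EX_IOMMUF_WRITABLE);

    printf("extra PTE bits: %#llx\n", (unsigned long long)extra);
    return 0;
}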
 xen/arch/x86/mm/p2m-pt.c | 85 ++--------------------------------------
 1 file changed, 3 insertions(+), 82 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index cafc9f299b..ce6d7cdf9b 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -24,7 +24,6 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/iommu.h>
 #include <xen/vm_event.h>
 #include <xen/event.h>
 #include <xen/trace.h>
@@ -36,13 +35,12 @@
 #include <asm/p2m.h>
 #include <asm/mem_sharing.h>
 #include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/svm/amd-iommu-proto.h>
 
 #include "mm-locks.h"
 
 /*
  * We may store INVALID_MFN in PTEs. We need to clip this to avoid trampling
- * over higher-order bits (NX, p2m type, IOMMU flags). We seem to not need
+ * over higher-order bits (NX, p2m type). We seem to not need
  * to unclip on the read path, as callers are concerned only with p2m type in
  * such cases.
  */
@@ -165,16 +163,6 @@ p2m_free_entry(struct p2m_domain *p2m, l1_pgentry_t *p2m_entry, int page_order)
 // Returns 0 on error.
 //
 
-/* AMD IOMMU: Convert next level bits and r/w bits into 24 bits p2m flags */
-#define iommu_nlevel_to_flags(nl, f) ((((nl) & 0x7) << 9 )|(((f) & 0x3) << 21))
-
-static void p2m_add_iommu_flags(l1_pgentry_t *p2m_entry,
-                                unsigned int nlevel, unsigned int flags)
-{
-    if ( iommu_hap_pt_share )
-        l1e_add_flags(*p2m_entry, iommu_nlevel_to_flags(nlevel, flags));
-}
-
 /* Returns: 0 for success, -errno for failure */
 static int
 p2m_next_level(struct p2m_domain *p2m, void **table,
@@ -203,7 +191,6 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
 
         new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
 
-        p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
         if ( rc )
             goto error;
@@ -242,13 +229,6 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
 
         l1_entry = map_domain_page(mfn);
 
-        /* Inherit original IOMMU permissions, but update Next Level. */
-        if ( iommu_hap_pt_share )
-        {
-            flags &= ~iommu_nlevel_to_flags(~0, 0);
-            flags |= iommu_nlevel_to_flags(level - 1, 0);
-        }
-
         for ( i = 0; i < (1u << PAGETABLE_ORDER); i++ )
         {
             new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
@@ -264,8 +244,6 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
         unmap_domain_page(l1_entry);
 
         new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
-        p2m_add_iommu_flags(&new_entry, level,
-                            IOMMUF_readable|IOMMUF_writable);
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
                                   level + 1);
         if ( rc )
@@ -470,9 +448,6 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
             }
 
             e = l1e_from_pfn(mfn, flags);
-            p2m_add_iommu_flags(&e, level,
-                                (nt == p2m_ram_rw)
-                                ? IOMMUF_readable|IOMMUF_writable : 0);
             ASSERT(!needs_recalc(l1, e));
         }
         else
@@ -540,18 +515,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
     l2_pgentry_t l2e_content;
     l3_pgentry_t l3e_content;
     int rc;
-    unsigned int iommu_pte_flags = p2m_get_iommu_flags(p2mt, mfn);
-    /*
-     * old_mfn and iommu_old_flags control possible flush/update needs on the
-     * IOMMU: We need to flush when MFN or flags (i.e. permissions) change.
-     * iommu_old_flags being initialized to zero covers the case of the entry
-     * getting replaced being a non-present (leaf or intermediate) one. For
-     * present leaf entries the real value will get calculated below, while
-     * for present intermediate entries ~0 (guaranteed != iommu_pte_flags)
-     * will be used (to cover all cases of what the leaf entries underneath
-     * the intermediate one might be).
-     */
-    unsigned int flags, iommu_old_flags = 0;
+    unsigned int flags;
     unsigned long old_mfn = mfn_x(INVALID_MFN);
 
     if ( !sve )
@@ -599,17 +563,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( flags & _PAGE_PRESENT )
         {
             if ( flags & _PAGE_PSE )
-            {
                 old_mfn = l1e_get_pfn(*p2m_entry);
-                iommu_old_flags =
-                    p2m_get_iommu_flags(p2m_flags_to_type(flags),
-                                        _mfn(old_mfn));
-            }
             else
-            {
-                iommu_old_flags = ~0;
                 intermediate_entry = *p2m_entry;
-            }
         }
 
         check_entry(mfn, p2mt, p2m_flags_to_type(flags), page_order);
@@ -619,9 +575,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
             : l3e_empty();
         entry_content.l1 = l3e_content.l3;
 
-        if ( entry_content.l1 != 0 )
-            p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
-
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
         if ( rc )
@@ -648,9 +601,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
                                    0, L1_PAGETABLE_ENTRIES);
         ASSERT(p2m_entry);
         old_mfn = l1e_get_pfn(*p2m_entry);
-        iommu_old_flags =
-            p2m_get_iommu_flags(p2m_flags_to_type(l1e_get_flags(*p2m_entry)),
-                                _mfn(old_mfn));
 
         if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
             entry_content = p2m_l1e_from_pfn(mfn_x(mfn),
@@ -658,9 +608,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         else
             entry_content = l1e_empty();
 
-        if ( entry_content.l1 != 0 )
-            p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
-
         /* level 1 entry */
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
@@ -677,17 +624,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( flags & _PAGE_PRESENT )
         {
             if ( flags & _PAGE_PSE )
-            {
                 old_mfn = l1e_get_pfn(*p2m_entry);
-                iommu_old_flags =
-                    p2m_get_iommu_flags(p2m_flags_to_type(flags),
-                                        _mfn(old_mfn));
-            }
             else
-            {
-                iommu_old_flags = ~0;
                 intermediate_entry = *p2m_entry;
-            }
         }
 
         check_entry(mfn, p2mt, p2m_flags_to_type(flags), page_order);
@@ -697,9 +636,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
             : l2e_empty();
         entry_content.l1 = l2e_content.l2;
 
-        if ( entry_content.l1 != 0 )
-            p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
-
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
         if ( rc )
@@ -711,24 +647,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
          && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
 
-    if ( iommu_enabled && (iommu_old_flags != iommu_pte_flags ||
-                           old_mfn != mfn_x(mfn)) )
-    {
-        ASSERT(rc == 0);
-
-        if ( need_iommu_pt_sync(p2m->domain) )
-            rc = iommu_pte_flags ?
-                 iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
-                                  iommu_pte_flags) :
-                 iommu_legacy_unmap(d, _dfn(gfn), page_order);
-        else if ( iommu_use_hap_pt(d) && iommu_old_flags )
-            amd_iommu_flush_pages(p2m->domain, gfn, page_order);
-    }
-
     /*
      * Free old intermediate tables if necessary. This has to be the
-     * last thing we do, after removal from the IOMMU tables, so as to
-     * avoid a potential use-after-free.
+     * last thing we do so as to avoid a potential use-after-free.
      */
     if ( l1e_get_flags(intermediate_entry) & _PAGE_PRESENT )
         p2m_free_entry(p2m, &intermediate_entry, page_order);
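
Note (illustration only, not part of the patch): the block removed at the
end of p2m_pt_set_entry() decided whether the IOMMU needed any attention.
A minimal standalone sketch of that decision, using made-up names, is
below: act only when the MFN or the effective permissions change, then
either update the separate IOMMU page tables (map/unmap) or, when the p2m
is shared with the AMD IOMMU, just flush the stale entries.

/* Standalone illustration only -- all names here are made up, not Xen's. */
#include <stdbool.h>
#include <stdint.h>

enum ex_iommu_action { EX_IOMMU_NONE, EX_IOMMU_MAP, EX_IOMMU_UNMAP, EX_IOMMU_FLUSH };

/* Mirrors the removed decision: act only when MFN or permissions changed. */
enum ex_iommu_action
ex_iommu_decide(bool iommu_enabled, bool separate_pt, bool shared_pt,
                uint64_t old_mfn, uint64_t new_mfn,
                unsigned int old_flags, unsigned int new_flags)
{
    if ( !iommu_enabled || (old_flags == new_flags && old_mfn == new_mfn) )
        return EX_IOMMU_NONE;        /* nothing visible to the IOMMU changed */

    if ( separate_pt )               /* IOMMU keeps its own page tables */
        return new_flags ? EX_IOMMU_MAP : EX_IOMMU_UNMAP;

    if ( shared_pt && old_flags )    /* p2m shared with the AMD IOMMU */
        return EX_IOMMU_FLUSH;

    return EX_IOMMU_NONE;
}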