From patchwork Fri Aug 11 13:19:28 2017
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9895845
Message-Id: <598DCB00020000780016EE0D@prv-mh.provo.novell.com>
In-Reply-To: <598DC9B2020000780016EDF1@prv-mh.provo.novell.com>
References: <598DC9B2020000780016EDF1@prv-mh.provo.novell.com>
Date: Fri, 11 Aug 2017 07:19:28 -0600
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: George Dunlap, Andrew Cooper
Subject: [Xen-devel] [PATCH v3 1/3] x86/p2m-pt: simplify p2m_next_level()
List-Id: Xen developer discussion
Calculate entry PFN and flags just once. Convert the two successive main
if()-s to an if/else-if chain. Restrict variable scope where reasonable.
Take the opportunity and also make the induction variable unsigned.

This at once fixes excessive permissions granted in the 2M PTEs resulting
from splitting a 1G one - original permissions should be inherited instead.
This is not a security issue only because all of this takes no effect
anyway, as iommu_hap_pt_share is always false on AMD systems for all
supported branches.

Signed-off-by: Jan Beulich
Acked-by: George Dunlap
---
v3: Fix IOMMU permission handling for shattered PTEs.
v2: Re-do mostly from scratch following review feedback.

--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -191,18 +191,18 @@ p2m_next_level(struct p2m_domain *p2m, v
                unsigned long *gfn_remainder, unsigned long gfn, u32 shift,
                u32 max, unsigned long type, bool_t unmap)
 {
-    l1_pgentry_t *l1_entry;
-    l1_pgentry_t *p2m_entry;
-    l1_pgentry_t new_entry;
+    l1_pgentry_t *p2m_entry, new_entry;
     void *next;
-    int i;
+    unsigned int flags;
 
     if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn, shift,
                                       max)) )
         return -ENOENT;
 
+    flags = l1e_get_flags(*p2m_entry);
+
     /* PoD/paging: Not present doesn't imply empty. */
-    if ( !l1e_get_flags(*p2m_entry) )
+    if ( !flags )
     {
         struct page_info *pg;
 
@@ -231,70 +231,67 @@ p2m_next_level(struct p2m_domain *p2m, v
             break;
         }
     }
-
-    ASSERT(l1e_get_flags(*p2m_entry) & (_PAGE_PRESENT|_PAGE_PSE));
-
-    /* split 1GB pages into 2MB pages */
-    if ( type == PGT_l2_page_table && (l1e_get_flags(*p2m_entry) & _PAGE_PSE) )
+    else if ( flags & _PAGE_PSE )
     {
-        unsigned long flags, pfn;
+        /* Split superpages pages into smaller ones. */
+        unsigned long pfn = l1e_get_pfn(*p2m_entry);
         struct page_info *pg;
+        l1_pgentry_t *l1_entry;
+        unsigned int i, level;
 
-        pg = p2m_alloc_ptp(p2m, PGT_l2_page_table);
-        if ( pg == NULL )
-            return -ENOMEM;
-
-        flags = l1e_get_flags(*p2m_entry);
-        pfn = l1e_get_pfn(*p2m_entry);
-
-        l1_entry = __map_domain_page(pg);
-        for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+        switch ( type )
         {
-            new_entry = l1e_from_pfn(pfn | (i * L1_PAGETABLE_ENTRIES), flags);
-            p2m_add_iommu_flags(&new_entry, 1, IOMMUF_readable|IOMMUF_writable);
-            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, 2);
-        }
-        unmap_domain_page(l1_entry);
-        new_entry = l1e_from_pfn(mfn_x(page_to_mfn(pg)),
-                                 P2M_BASE_FLAGS | _PAGE_RW); /* disable PSE */
-        p2m_add_iommu_flags(&new_entry, 2, IOMMUF_readable|IOMMUF_writable);
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, 3);
-    }
+        case PGT_l2_page_table:
+            level = 2;
+            break;
 
-    /* split single 2MB large page into 4KB page in P2M table */
-    if ( type == PGT_l1_page_table && (l1e_get_flags(*p2m_entry) & _PAGE_PSE) )
-    {
-        unsigned long flags, pfn;
-        struct page_info *pg;
+        case PGT_l1_page_table:
+            /*
+             * New splintered mappings inherit the flags of the old superpage,
+             * with a little reorganisation for the _PAGE_PSE_PAT bit.
+             */
+            if ( pfn & 1 )           /* ==> _PAGE_PSE_PAT was set */
+                pfn -= 1;            /* Clear it; _PAGE_PSE becomes _PAGE_PAT */
+            else
+                flags &= ~_PAGE_PSE; /* Clear _PAGE_PSE (== _PAGE_PAT) */
+
+            level = 1;
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return -EINVAL;
+        }
 
-        pg = p2m_alloc_ptp(p2m, PGT_l1_page_table);
+        pg = p2m_alloc_ptp(p2m, type);
         if ( pg == NULL )
             return -ENOMEM;
 
-        /* New splintered mappings inherit the flags of the old superpage,
-         * with a little reorganisation for the _PAGE_PSE_PAT bit. */
-        flags = l1e_get_flags(*p2m_entry);
-        pfn = l1e_get_pfn(*p2m_entry);
-        if ( pfn & 1 ) /* ==> _PAGE_PSE_PAT was set */
-            pfn -= 1; /* Clear it; _PAGE_PSE becomes _PAGE_PAT */
-        else
-            flags &= ~_PAGE_PSE; /* Clear _PAGE_PSE (== _PAGE_PAT) */
         l1_entry = __map_domain_page(pg);
-        for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
+
+        /* Inherit original IOMMU permissions, but update Next Level. */
+        if ( iommu_hap_pt_share )
         {
-            new_entry = l1e_from_pfn(pfn | i, flags);
-            p2m_add_iommu_flags(&new_entry, 0, 0);
-            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, 1);
+            flags &= ~iommu_nlevel_to_flags(~0, 0);
+            flags |= iommu_nlevel_to_flags(level - 1, 0);
        }
+
+        for ( i = 0; i < (1u << PAGETABLE_ORDER); i++ )
+        {
+            new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
+                                     flags);
+            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+        }
+
         unmap_domain_page(l1_entry);
-
+
         new_entry = l1e_from_pfn(mfn_x(page_to_mfn(pg)),
                                  P2M_BASE_FLAGS | _PAGE_RW);
-        p2m_add_iommu_flags(&new_entry, 1, IOMMUF_readable|IOMMUF_writable);
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, 2);
+        p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
+        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
     }
+    else
+        ASSERT(flags & _PAGE_PRESENT);
 
     next = map_domain_page(_mfn(l1e_get_pfn(*p2m_entry)));
     if ( unmap )