From patchwork Mon Jan 6 15:34:33 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11319513
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monné
In-Reply-To: <73ea220a-d234-7a87-464e-59683fc3d815@suse.com>
Message-ID: <3bd38586-d76b-2ce5-a8bb-0777b30d5b61@suse.com>
Date: Mon, 6 Jan 2020 16:34:33 +0100
Subject: [Xen-devel] [PATCH v2 1/3] x86/mm: mod_l<N>_entry() have no need to use __copy_from_user()

mod_l1_entry()'s need to do so went away with commit 2d0557c5cb ("x86: Fold
page_info lock into type_info"), and the other three never had such a need,
at least going back as far as 3.2.0. Replace the uses by the newly
introduced l<N>e_access_once().

Signed-off-by: Jan Beulich
---
v2: Use ACCESS_ONCE() clones.
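
As a standalone illustration (not part of the patch): a minimal sketch of the
ACCESS_ONCE()/container_of() construct the new l<N>e_access_once() helpers are
built from. The definitions below are simplified stand-ins for the real Xen
macros (and rely on GCC's typeof), but show the idea: read the PTE through a
volatile lvalue so the compiler emits exactly one full-width load, while the
result stays type-correct.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t intpte_t;
typedef struct { intpte_t l1; } l1_pgentry_t;

/* Simplified clones of the Xen/Linux-style helpers (illustrative only). */
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
#define container_of(ptr, type, member) \
    ((type *)((volatile char *)(ptr) - offsetof(type, member)))

/* Volatile, type-correct view of an L1 entry, read in a single access. */
#define l1e_access_once(l1e) \
    (*container_of(&ACCESS_ONCE((l1e).l1), volatile l1_pgentry_t, l1))

int main(void)
{
    l1_pgentry_t pte = { .l1 = 0x80000000000a1027ULL };
    l1_pgentry_t ol1e = l1e_access_once(pte);   /* single, non-tearing read */

    printf("ol1e = %#llx\n", (unsigned long long)ol1e.l1);
    return 0;
}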
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2124,13 +2124,10 @@ static int mod_l1_entry(l1_pgentry_t *pl
                         struct vcpu *pt_vcpu, struct domain *pg_dom)
 {
     bool preserve_ad = (cmd == MMU_PT_UPDATE_PRESERVE_AD);
-    l1_pgentry_t ol1e;
+    l1_pgentry_t ol1e = l1e_access_once(*pl1e);
     struct domain *pt_dom = pt_vcpu->domain;
     int rc = 0;
 
-    if ( unlikely(__copy_from_user(&ol1e, pl1e, sizeof(ol1e)) != 0) )
-        return -EFAULT;
-
     ASSERT(!paging_mode_refcounts(pt_dom));
 
     if ( l1e_get_flags(nl1e) & _PAGE_PRESENT )
@@ -2248,8 +2245,7 @@ static int mod_l2_entry(l2_pgentry_t *pl
         return -EPERM;
     }
 
-    if ( unlikely(__copy_from_user(&ol2e, pl2e, sizeof(ol2e)) != 0) )
-        return -EFAULT;
+    ol2e = l2e_access_once(*pl2e);
 
     if ( l2e_get_flags(nl2e) & _PAGE_PRESENT )
     {
@@ -2311,8 +2307,7 @@ static int mod_l3_entry(l3_pgentry_t *pl
     if ( is_pv_32bit_domain(d) && (pgentry_ptr_to_slot(pl3e) >= 3) )
         return -EINVAL;
 
-    if ( unlikely(__copy_from_user(&ol3e, pl3e, sizeof(ol3e)) != 0) )
-        return -EFAULT;
+    ol3e = l3e_access_once(*pl3e);
 
     if ( l3e_get_flags(nl3e) & _PAGE_PRESENT )
     {
@@ -2378,8 +2373,7 @@ static int mod_l4_entry(l4_pgentry_t *pl
         return -EINVAL;
     }
 
-    if ( unlikely(__copy_from_user(&ol4e, pl4e, sizeof(ol4e)) != 0) )
-        return -EFAULT;
+    ol4e = l4e_access_once(*pl4e);
 
     if ( l4e_get_flags(nl4e) & _PAGE_PRESENT )
     {
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -55,6 +55,16 @@
 #define l4e_write(l4ep, l4e) \
     pte_write(&l4e_get_intpte(*(l4ep)), l4e_get_intpte(l4e))
 
+/* Type-correct ACCESS_ONCE() wrappers for PTE accesses. */
+#define l1e_access_once(l1e) (*container_of(&ACCESS_ONCE(l1e_get_intpte(l1e)), \
+                                            volatile l1_pgentry_t, l1))
+#define l2e_access_once(l2e) (*container_of(&ACCESS_ONCE(l2e_get_intpte(l2e)), \
+                                            volatile l2_pgentry_t, l2))
+#define l3e_access_once(l3e) (*container_of(&ACCESS_ONCE(l3e_get_intpte(l3e)), \
+                                            volatile l3_pgentry_t, l3))
+#define l4e_access_once(l4e) (*container_of(&ACCESS_ONCE(l4e_get_intpte(l4e)), \
+                                            volatile l4_pgentry_t, l4))
+
 /* Get direct integer representation of a pte's contents (intpte_t). */
 #define l1e_get_intpte(x) ((x).l1)
 #define l2e_get_intpte(x) ((x).l2)

From patchwork Mon Jan 6 15:35:07 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11319529
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monné
In-Reply-To: <73ea220a-d234-7a87-464e-59683fc3d815@suse.com>
Date: Mon, 6 Jan 2020 16:35:07 +0100
Subject: [Xen-devel] [PATCH v2 2/3] x86/mm: rename and tidy create_pae_xen_mappings()

After dad74b0f9e ("i386: fix handling of Xen entries in final L2 page
table") and the removal of 32-bit support the function doesn't modify state
anymore, and hence its name has been misleading. Change its name, constify
parameters and a local variable, and make it return bool.

Also drop the call to it from mod_l3_entry(): The function explicitly
disallows 32-bit domains from modifying slot 3. This way we also won't
re-check slot 3 when a slot other than slot 3 changes. Doing so has
needlessly disallowed making some L2 table recursively link back to an L2
used in some L3's 3rd slot, as we check for the type ref count to be 1.

(Note that allowing dynamic changes of L3 entries in the way we do is bogus
anyway, as that's not how L3s behave in the native and EPT cases: They get
re-evaluated only upon CR3 reloads. NPT is different in this regard.)
As a result of this we no longer need to play games to get at the start of
the L3 table.

Additionally move the single remaining call site, allowing us to drop one
is_pv_32bit_domain() invocation and a _PAGE_PRESENT check (in the function
itself) as well as to exit the loop early (the remaining entries have all
been set to empty just ahead of this loop). Further move a BUG_ON() such
that in the common case its condition wouldn't need evaluating.

Finally, since we're at it, move init_xen_pae_l2_slots() next to the renamed
function, as they really belong together (in fact init_xen_pae_l2_slots()
was [indirectly] broken out of this function).

Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
---
v2: Refine description. Drop an ASSERT(). Add a comment ahead of the function.
---
We could go further here and delete the function altogether: There are no
linear mappings in a PGT_pae_xen_l2 table anymore (this was on 32-bit only).
The corresponding conditional in mod_l3_entry() could then go away as well
(or, more precisely, would need to be replaced by correct handling of 3rd
slot updates). This would mean that a 32-bit guest functioning on new Xen
may fail to work on older (possibly 32-bit) Xen.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1414,23 +1414,22 @@ static int promote_l1_table(struct page_
     return ret;
 }
 
-static int create_pae_xen_mappings(struct domain *d, l3_pgentry_t *pl3e)
+/*
+ * Note: The checks performed by this function are just to enforce a
+ * legacy restriction necessary on 32-bit hosts. There's not much point in
+ * relaxing (dropping) this though, as 32-bit guests would still need to
+ * conform to the original restrictions in order to be able to run on (old)
+ * 32-bit Xen.
+ */
+static bool pae_xen_mappings_check(const struct domain *d,
+                                   const l3_pgentry_t *pl3e)
 {
-    struct page_info *page;
-    l3_pgentry_t l3e3;
-
-    if ( !is_pv_32bit_domain(d) )
-        return 1;
-
-    pl3e = (l3_pgentry_t *)((unsigned long)pl3e & PAGE_MASK);
-
-    /* 3rd L3 slot contains L2 with Xen-private mappings. It *must* exist. */
-    l3e3 = pl3e[3];
-    if ( !(l3e_get_flags(l3e3) & _PAGE_PRESENT) )
-    {
-        gdprintk(XENLOG_WARNING, "PAE L3 3rd slot is empty\n");
-        return 0;
-    }
+    /*
+     * 3rd L3 slot contains L2 with Xen-private mappings. It *must* exist,
+     * which our caller has already verified.
+     */
+    l3_pgentry_t l3e3 = pl3e[3];
+    const struct page_info *page = l3e_get_page(l3e3);
 
     /*
      * The Xen-private mappings include linear mappings. The L2 thus cannot
@@ -1441,17 +1440,24 @@ static int create_pae_xen_mappings(struc
      * a. promote_l3_table() calls this function and this check will fail
      * b. mod_l3_entry() disallows updates to slot 3 in an existing table
      */
-    page = l3e_get_page(l3e3);
     BUG_ON(page->u.inuse.type_info & PGT_pinned);
-    BUG_ON((page->u.inuse.type_info & PGT_count_mask) == 0);
     BUG_ON(!(page->u.inuse.type_info & PGT_pae_xen_l2));
     if ( (page->u.inuse.type_info & PGT_count_mask) != 1 )
     {
+        BUG_ON(!(page->u.inuse.type_info & PGT_count_mask));
         gdprintk(XENLOG_WARNING, "PAE L3 3rd slot is shared\n");
-        return 0;
+        return false;
     }
 
-    return 1;
+    return true;
+}
+
+void init_xen_pae_l2_slots(l2_pgentry_t *l2t, const struct domain *d)
+{
+    memcpy(&l2t[COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(d)],
+           &compat_idle_pg_table_l2[
+               l2_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
+           COMPAT_L2_PAGETABLE_XEN_SLOTS(d) * sizeof(*l2t));
 }
 
 static int promote_l2_table(struct page_info *page, unsigned long type)
@@ -1592,6 +1598,16 @@ static int promote_l3_table(struct page_
                 l3e_get_mfn(l3e),
                 PGT_l2_page_table | PGT_pae_xen_l2, d,
                 partial_flags | PTF_preemptible | PTF_retain_ref_on_restart);
+
+            if ( !rc )
+            {
+                if ( pae_xen_mappings_check(d, pl3e) )
+                {
+                    pl3e[i] = adjust_guest_l3e(l3e, d);
+                    break;
+                }
+                rc = -EINVAL;
+            }
         }
         else if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
         {
@@ -1621,8 +1637,6 @@ static int promote_l3_table(struct page_
         pl3e[i] = adjust_guest_l3e(l3e, d);
     }
 
-    if ( !rc && !create_pae_xen_mappings(d, pl3e) )
-        rc = -EINVAL;
     if ( rc < 0 && rc != -ERESTART && rc != -EINTR )
     {
         gdprintk(XENLOG_WARNING,
@@ -1663,14 +1677,6 @@ static int promote_l3_table(struct page_
     unmap_domain_page(pl3e);
     return rc;
 }
-
-void init_xen_pae_l2_slots(l2_pgentry_t *l2t, const struct domain *d)
-{
-    memcpy(&l2t[COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(d)],
-           &compat_idle_pg_table_l2[
-               l2_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
-           COMPAT_L2_PAGETABLE_XEN_SLOTS(d) * sizeof(*l2t));
-}
 #endif /* CONFIG_PV */
 
 /*
@@ -2347,10 +2353,6 @@ static int mod_l3_entry(l3_pgentry_t *pl
         return -EFAULT;
     }
 
-    if ( likely(rc == 0) )
-        if ( !create_pae_xen_mappings(d, pl3e) )
-            BUG();
-
     put_page_from_l3e(ol3e, mfn, PTF_defer);
     return rc;
 }
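
For readers less familiar with PAE, a small standalone illustration (not part
of the patch; the helper names are local to the example, not Xen's) of why it
is specifically L3 slot 3 that carries the Xen-private L2: in PAE paging a
32-bit virtual address splits into a 2-bit L3 index, a 9-bit L2 index, a
9-bit L1 index and a 12-bit page offset, so slot 3 covers the top 1GiB of the
address space, which is where the compat Xen mappings live.

#include <stdint.h>
#include <stdio.h>

/* Index extraction for PAE: bits 31:30 select the L3 (PDPT) slot,
 * bits 29:21 the L2 slot within the selected L2 table. */
static unsigned int pae_l3_slot(uint32_t va) { return va >> 30; }
static unsigned int pae_l2_slot(uint32_t va) { return (va >> 21) & 0x1ff; }

int main(void)
{
    const uint32_t samples[] = { 0x00400000, 0xbfffffff, 0xc0000000, 0xfffff000 };

    for ( unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); ++i )
        printf("va %#010x -> L3 slot %u, L2 slot %u\n",
               (unsigned int)samples[i],
               pae_l3_slot(samples[i]), pae_l2_slot(samples[i]));

    return 0;
}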

From patchwork Mon Jan 6 15:35:27 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11319531
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: George Dunlap, Andrew Cooper, Wei Liu, Roger Pau Monné
In-Reply-To: <73ea220a-d234-7a87-464e-59683fc3d815@suse.com>
Message-ID: <01b3307a-a9cf-fb7b-a011-ded5753d74f3@suse.com>
Date: Mon, 6 Jan 2020 16:35:27 +0100
Subject: [Xen-devel] [PATCH v2 3/3] x86/mm: re-order a few conditionals

is_{hvm,pv}_*() can be expensive now, so where possible evaluate cheaper
conditions first.

Signed-off-by: Jan Beulich
Acked-by: Andrew Cooper
---
v2: New.
---
I couldn't really decide whether to drop the two involved unlikely().

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1588,7 +1588,7 @@ static int promote_l3_table(struct page_
 
         if ( i > page->nr_validated_ptes && hypercall_preempt_check() )
             rc = -EINTR;
-        else if ( is_pv_32bit_domain(d) && (i == 3) )
+        else if ( i == 3 && is_pv_32bit_domain(d) )
         {
             if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
                  (l3e_get_flags(l3e) & l3_disallow_mask(d)) )
@@ -2310,7 +2310,7 @@ static int mod_l3_entry(l3_pgentry_t *pl
      * Disallow updates to final L3 slot. It contains Xen mappings, and it
      * would be a pain to ensure they remain continuously valid throughout.
      */
-    if ( is_pv_32bit_domain(d) && (pgentry_ptr_to_slot(pl3e) >= 3) )
+    if ( pgentry_ptr_to_slot(pl3e) >= 3 && is_pv_32bit_domain(d) )
         return -EINVAL;
 
     ol3e = l3e_access_once(*pl3e);
@@ -2470,7 +2470,7 @@ static int cleanup_page_mappings(struct
     {
         struct domain *d = page_get_owner(page);
 
-        if ( d && is_pv_domain(d) && unlikely(need_iommu_pt_sync(d)) )
+        if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
        {
             int rc2 = iommu_legacy_unmap(d, _dfn(mfn), PAGE_ORDER_4K);
 
@@ -2984,7 +2984,7 @@ static int _get_page_type(struct page_in
         /* Special pages should not be accessible from devices. */
         struct domain *d = page_get_owner(page);
 
-        if ( d && is_pv_domain(d) && unlikely(need_iommu_pt_sync(d)) )
+        if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
             mfn_t mfn = page_to_mfn(page);
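
Again purely illustrative (stand-in functions, not Xen code): the re-ordering
relies on &&'s left-to-right short-circuit evaluation, so the cheap test
guards the now potentially expensive predicate. Mirroring the
promote_l3_table() change above, where only slot 3 is of interest:

#include <stdbool.h>
#include <stdio.h>

static unsigned long expensive_calls;

/* Stand-in for a predicate like is_pv_32bit_domain() that is no longer cheap. */
static bool expensive_check(void)
{
    ++expensive_calls;
    return true;
}

int main(void)
{
    /* Cheap test first: the expensive predicate runs once, not 512 times. */
    for ( unsigned int i = 0; i < 512; ++i )
        if ( i == 3 && expensive_check() )
            continue; /* the slot-3 special case would be handled here */

    printf("expensive predicate evaluated %lu time(s)\n", expensive_calls);
    return 0;
}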