From patchwork Tue Apr 21 09:11:03 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11500901
Subject: [PATCH v2 1/4] x86/mm: no-one passes a NULL domain to init_xen_l4_slots()
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
References: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
Message-ID: <8787b72e-c71e-b75d-2ca0-0c6fe7c8259f@suse.com>
Date: Tue, 21 Apr 2020 11:11:03 +0200
In-Reply-To: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
Cc: Andrew Cooper, Tim Deegan, George Dunlap, Wei Liu, Roger Pau Monné

Drop the NULL checks - they've been introduced by commit 8d7b633ada
("x86/mm: Consolidate all Xen L4 slot writing into init_xen_l4_slots()")
for no apparent reason.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---
v2: Adjust comment ahead of the function.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1653,7 +1653,7 @@ static int promote_l3_table(struct page_
  * This function must write all ROOT_PAGETABLE_PV_XEN_SLOTS, to clobber any
  * values a guest may have left there from promote_l4_table().
  *
- * l4t and l4mfn are mandatory, but l4mfn doesn't need to be the mfn under
+ * l4t, l4mfn, and d are mandatory, but l4mfn doesn't need to be the mfn under
  * *l4t. All other parameters are optional and will either fill or zero the
  * appropriate slots. Pagetables not shared with guests will gain the
  * extended directmap.
@@ -1665,7 +1665,7 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
      * PV vcpus need a shortened directmap. HVM and Idle vcpus get the full
      * directmap.
      */
-    bool short_directmap = d && !paging_mode_external(d);
+    bool short_directmap = !paging_mode_external(d);
 
     /* Slot 256: RO M2P (if applicable). */
     l4t[l4_table_offset(RO_MPT_VIRT_START)] =
@@ -1686,10 +1686,9 @@ void init_xen_l4_slots(l4_pgentry_t *l4t
             mfn_eq(sl4mfn, INVALID_MFN) ?
             l4e_empty() : l4e_from_mfn(sl4mfn, __PAGE_HYPERVISOR_RW);
 
-    /* Slot 260: Per-domain mappings (if applicable). */
+    /* Slot 260: Per-domain mappings. */
     l4t[l4_table_offset(PERDOMAIN_VIRT_START)] =
-        d ? l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW)
-          : l4e_empty();
+        l4e_from_page(d->arch.perdomain_l3_pg, __PAGE_HYPERVISOR_RW);
 
     /* Slot 261-: text/data/bss, RW M2P, vmap, frametable, directmap. */
 #ifndef NDEBUG

From patchwork Tue Apr 21 09:11:37 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11500903
Subject: [PATCH v2 2/4] x86/shadow: sh_update_linear_entries() is a no-op for PV
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
References: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
Date: Tue, 21 Apr 2020 11:11:37 +0200
In-Reply-To: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
Cc: Andrew Cooper, Tim Deegan, George Dunlap, Wei Liu, Roger Pau Monné

Consolidate the shadow_mode_external() in here: Check this once at the
start of the function.

Signed-off-by: Jan Beulich
Acked-by: Andrew Cooper
Acked-by: Tim Deegan
---
v2: Delete stale part of comment.
---
Tim - I'm re-posting as I wasn't entirely sure whether you meant to drop
the entire PV part of the comment, or only the last two sentences.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3680,20 +3680,7 @@ sh_update_linear_entries(struct vcpu *v)
 {
     struct domain *d = v->domain;
 
-    /* Linear pagetables in PV guests
-     * ------------------------------
-     *
-     * Guest linear pagetables, which map the guest pages, are at
-     * LINEAR_PT_VIRT_START. Shadow linear pagetables, which map the
-     * shadows, are at SH_LINEAR_PT_VIRT_START. Most of the time these
-     * are set up at shadow creation time, but (of course!) the PAE case
-     * is subtler. Normal linear mappings are made by having an entry
-     * in the top-level table that points to itself (shadow linear) or
-     * to the guest top-level table (guest linear). For PAE, to set up
-     * a linear map requires us to copy the four top-level entries into
-     * level-2 entries. That means that every time we change a PAE l3e,
-     * we need to reflect the change into the copy.
-     *
+    /*
      * Linear pagetables in HVM guests
      * -------------------------------
      *
@@ -3711,34 +3698,30 @@ sh_update_linear_entries(struct vcpu *v)
      */
 
     /* Don't try to update the monitor table if it doesn't exist */
-    if ( shadow_mode_external(d)
-         && pagetable_get_pfn(v->arch.monitor_table) == 0 )
+    if ( !shadow_mode_external(d) ||
+         pagetable_get_pfn(v->arch.monitor_table) == 0 )
         return;
 
 #if SHADOW_PAGING_LEVELS == 4
 
-    /* For PV, one l4e points at the guest l4, one points at the shadow
-     * l4. No maintenance required.
-     * For HVM, just need to update the l4e that points to the shadow l4. */
+    /* For HVM, just need to update the l4e that points to the shadow l4. */
 
-    if ( shadow_mode_external(d) )
+    /* Use the linear map if we can; otherwise make a new mapping */
+    if ( v == current )
     {
-        /* Use the linear map if we can; otherwise make a new mapping */
-        if ( v == current )
-        {
-            __linear_l4_table[l4_linear_offset(SH_LINEAR_PT_VIRT_START)] =
-                l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]),
-                             __PAGE_HYPERVISOR_RW);
-        }
-        else
-        {
-            l4_pgentry_t *ml4e;
-            ml4e = map_domain_page(pagetable_get_mfn(v->arch.monitor_table));
-            ml4e[l4_table_offset(SH_LINEAR_PT_VIRT_START)] =
-                l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]),
-                             __PAGE_HYPERVISOR_RW);
-            unmap_domain_page(ml4e);
-        }
+        __linear_l4_table[l4_linear_offset(SH_LINEAR_PT_VIRT_START)] =
+            l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]),
+                         __PAGE_HYPERVISOR_RW);
+    }
+    else
+    {
+        l4_pgentry_t *ml4e;
+
+        ml4e = map_domain_page(pagetable_get_mfn(v->arch.monitor_table));
+        ml4e[l4_table_offset(SH_LINEAR_PT_VIRT_START)] =
+            l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]),
+                         __PAGE_HYPERVISOR_RW);
+        unmap_domain_page(ml4e);
     }
 
 #elif SHADOW_PAGING_LEVELS == 3
@@ -3752,7 +3735,6 @@ sh_update_linear_entries(struct vcpu *v)
      * the shadows.
      */
 
-    ASSERT(shadow_mode_external(d));
     {
         /* Install copies of the shadow l3es into the monitor l2 table
          * that maps SH_LINEAR_PT_VIRT_START. */
@@ -3803,20 +3785,16 @@ sh_update_linear_entries(struct vcpu *v)
 #error this should not happen
 #endif
 
-    if ( shadow_mode_external(d) )
-    {
-        /*
-         * Having modified the linear pagetable mapping, flush local host TLBs.
-         * This was not needed when vmenter/vmexit always had the side effect
-         * of flushing host TLBs but, with ASIDs, it is possible to finish
-         * this CR3 update, vmenter the guest, vmexit due to a page fault,
-         * without an intervening host TLB flush. Then the page fault code
-         * could use the linear pagetable to read a top-level shadow page
-         * table entry. But, without this change, it would fetch the wrong
-         * value due to a stale TLB.
-         */
-        flush_tlb_local();
-    }
+    /*
+     * Having modified the linear pagetable mapping, flush local host TLBs.
+     * This was not needed when vmenter/vmexit always had the side effect of
+     * flushing host TLBs but, with ASIDs, it is possible to finish this CR3
+     * update, vmenter the guest, vmexit due to a page fault, without an
+     * intervening host TLB flush. Then the page fault code could use the
+     * linear pagetable to read a top-level shadow page table entry. But,
+     * without this change, it would fetch the wrong value due to a stale TLB.
+     */
+    flush_tlb_local();
 }

From patchwork Tue Apr 21 09:12:21 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11500905
Subject: [PATCH v2 3/4] x86/mm: monitor table is HVM-only
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
References: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
Date: Tue, 21 Apr 2020 11:12:21 +0200
In-Reply-To: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
Cc: Andrew Cooper, Tim Deegan, George Dunlap, Wei Liu, Roger Pau Monné

Move the per-vCPU field to the HVM sub-structure.

Signed-off-by: Jan Beulich
Acked-by: Andrew Cooper
Acked-by: Tim Deegan

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -545,7 +545,7 @@ void write_ptbase(struct vcpu *v)
  * Should be called after CR3 is updated.
  *
  * Uses values found in vcpu->arch.(guest_table and guest_table_user), and
- * for HVM guests, arch.monitor_table and hvm's guest CR3.
+ * for HVM guests, arch.hvm.monitor_table and hvm's guest CR3.
  *
  * Update ref counts to shadow tables appropriately.
  */
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -393,7 +393,7 @@ static mfn_t hap_make_monitor_table(stru
     l4_pgentry_t *l4e;
     mfn_t m4mfn;
 
-    ASSERT(pagetable_get_pfn(v->arch.monitor_table) == 0);
+    ASSERT(pagetable_get_pfn(v->arch.hvm.monitor_table) == 0);
 
     if ( (pg = hap_alloc(d)) == NULL )
         goto oom;
@@ -579,10 +579,10 @@ void hap_teardown(struct domain *d, bool
         {
             if ( paging_get_hostmode(v) && paging_mode_external(d) )
             {
-                mfn = pagetable_get_mfn(v->arch.monitor_table);
+                mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
                 if ( mfn_valid(mfn) && (mfn_x(mfn) != 0) )
                     hap_destroy_monitor_table(v, mfn);
-                v->arch.monitor_table = pagetable_null();
+                v->arch.hvm.monitor_table = pagetable_null();
             }
         }
     }
@@ -758,10 +758,10 @@ static void hap_update_paging_modes(stru
 
     v->arch.paging.mode = hap_paging_get_mode(v);
 
-    if ( pagetable_is_null(v->arch.monitor_table) )
+    if ( pagetable_is_null(v->arch.hvm.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
-        v->arch.monitor_table = pagetable_from_mfn(mmfn);
+        v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
     }
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2465,10 +2465,10 @@ static void sh_update_paging_modes(struc
                 &SHADOW_INTERNAL_NAME(sh_paging_mode, 2);
         }
 
-        if ( pagetable_is_null(v->arch.monitor_table) )
+        if ( pagetable_is_null(v->arch.hvm.monitor_table) )
        {
             mfn_t mmfn = v->arch.paging.mode->shadow.make_monitor_table(v);
-            v->arch.monitor_table = pagetable_from_mfn(mmfn);
+            v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
        }
@@ -2502,10 +2502,10 @@ static void sh_update_paging_modes(struc
                 return;
             }
 
-            old_mfn = pagetable_get_mfn(v->arch.monitor_table);
-            v->arch.monitor_table = pagetable_null();
+            old_mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
+            v->arch.hvm.monitor_table = pagetable_null();
             new_mfn = v->arch.paging.mode->shadow.make_monitor_table(v);
-            v->arch.monitor_table = pagetable_from_mfn(new_mfn);
+            v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
             SHADOW_PRINTK("new monitor table %"PRI_mfn "\n", mfn_x(new_mfn));
@@ -2724,11 +2724,11 @@ void shadow_teardown(struct domain *d, b
 #ifdef CONFIG_HVM
         if ( shadow_mode_external(d) )
         {
-            mfn_t mfn = pagetable_get_mfn(v->arch.monitor_table);
+            mfn_t mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
 
             if ( mfn_valid(mfn) && (mfn_x(mfn) != 0) )
                 v->arch.paging.mode->shadow.destroy_monitor_table(v, mfn);
-            v->arch.monitor_table = pagetable_null();
+            v->arch.hvm.monitor_table = pagetable_null();
         }
 #endif /* CONFIG_HVM */
     }
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1521,7 +1521,7 @@ sh_make_monitor_table(struct vcpu *v)
 {
     struct domain *d = v->domain;
 
-    ASSERT(pagetable_get_pfn(v->arch.monitor_table) == 0);
+    ASSERT(pagetable_get_pfn(v->arch.hvm.monitor_table) == 0);
 
     /* Guarantee we can get the memory we need */
     shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
@@ -3699,7 +3699,7 @@ sh_update_linear_entries(struct vcpu *v)
     /* Don't try to update the monitor table if it doesn't exist */
     if ( !shadow_mode_external(d) ||
-         pagetable_get_pfn(v->arch.monitor_table) == 0 )
+         pagetable_get_pfn(v->arch.hvm.monitor_table) == 0 )
         return;
@@ -3717,7 +3717,7 @@ sh_update_linear_entries(struct vcpu *v)
     {
         l4_pgentry_t *ml4e;
 
-        ml4e = map_domain_page(pagetable_get_mfn(v->arch.monitor_table));
+        ml4e = map_domain_page(pagetable_get_mfn(v->arch.hvm.monitor_table));
         ml4e[l4_table_offset(SH_LINEAR_PT_VIRT_START)] =
             l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]),
                          __PAGE_HYPERVISOR_RW);
@@ -3752,7 +3752,7 @@ sh_update_linear_entries(struct vcpu *v)
         l4_pgentry_t *ml4e;
         l3_pgentry_t *ml3e;
         int linear_slot = shadow_l4_table_offset(SH_LINEAR_PT_VIRT_START);
-        ml4e = map_domain_page(pagetable_get_mfn(v->arch.monitor_table));
+        ml4e = map_domain_page(pagetable_get_mfn(v->arch.hvm.monitor_table));
 
         ASSERT(l4e_get_flags(ml4e[linear_slot]) & _PAGE_PRESENT);
         l3mfn = l4e_get_mfn(ml4e[linear_slot]);
@@ -4087,7 +4087,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
     ///
     if ( shadow_mode_external(d) )
     {
-        make_cr3(v, pagetable_get_mfn(v->arch.monitor_table));
+        make_cr3(v, pagetable_get_mfn(v->arch.hvm.monitor_table));
     }
 #if SHADOW_PAGING_LEVELS == 4
     else // not shadow_mode_external...
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -583,7 +583,6 @@ struct arch_vcpu
     /* guest_table holds a ref to the page, and also a type-count unless
      * shadow refcounts are in use */
     pagetable_t shadow_table[4];        /* (MFN) shadow(s) of guest */
-    pagetable_t monitor_table;          /* (MFN) hypervisor PT (for HVM) */
 
     unsigned long cr3;                  /* (MA) value to install in HW CR3 */
 
     /*
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -176,6 +176,9 @@ struct hvm_vcpu {
         uint16_t p2midx;
     } fast_single_step;
 
+    /* (MFN) hypervisor page table */
+    pagetable_t monitor_table;
+
     struct hvm_vcpu_asid n1asid;
 
     u64 msr_tsc_adjust;

From patchwork Tue Apr 21 09:13:23 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11500907
Subject: [PATCH v2 4/4] x86: adjustments to guest handle treatment
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
References: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
Date: Tue, 21 Apr 2020 11:13:23 +0200
In-Reply-To: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper, Tim Deegan,
 George Dunlap, Roger Pau Monné

First of all avoid excessive conversions. copy_{from,to}_guest(), for
example, work fine with all of XEN_GUEST_HANDLE{,_64,_PARAM}().
Further
- do_physdev_op_compat() didn't use the param form for its parameter,
- {hap,shadow}_track_dirty_vram() wrongly used the param form,
- compat processor Px logic failed to check compatibility of native and
  compat structures not further converted.

As this eliminates all users of guest_handle_from_param() and as there's
no real need to allow for conversions in both directions, drop the
macros as well.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Acked-by: Julien Grall
---
v2: New.

--- a/xen/arch/x86/compat.c
+++ b/xen/arch/x86/compat.c
@@ -15,7 +15,7 @@ typedef long ret_t;
 #endif
 
 /* Legacy hypercall (as of 0x00030202). */
-ret_t do_physdev_op_compat(XEN_GUEST_HANDLE(physdev_op_t) uop)
+ret_t do_physdev_op_compat(XEN_GUEST_HANDLE_PARAM(physdev_op_t) uop)
 {
     typeof(do_physdev_op) *fn =
         (void *)pv_hypercall_table[__HYPERVISOR_physdev_op].native;
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -678,7 +678,7 @@ static long microcode_update_helper(void
     return ret;
 }
 
-int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
+int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
 {
     int ret;
     struct ucode_buf *buffer;
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4441,20 +4441,16 @@ static int _handle_iomem_range(unsigned
 {
     if ( s > ctxt->s && !(s >> (paddr_bits - PAGE_SHIFT)) )
     {
-        e820entry_t ent;
-        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_param;
-        XEN_GUEST_HANDLE(e820entry_t) buffer;
-
         if ( !guest_handle_is_null(ctxt->map.buffer) )
         {
+            e820entry_t ent;
+
             if ( ctxt->n + 1 >= ctxt->map.nr_entries )
                 return -EINVAL;
             ent.addr = (uint64_t)ctxt->s << PAGE_SHIFT;
             ent.size = (uint64_t)(s - ctxt->s) << PAGE_SHIFT;
             ent.type = E820_RESERVED;
-            buffer_param = guest_handle_cast(ctxt->map.buffer, e820entry_t);
-            buffer = guest_handle_from_param(buffer_param, e820entry_t);
-            if ( __copy_to_guest_offset(buffer, ctxt->n, &ent, 1) )
+            if ( __copy_to_guest_offset(ctxt->map.buffer, ctxt->n,
+                                        &ent, 1) )
                 return -EFAULT;
         }
         ctxt->n++;
@@ -4715,8 +4711,7 @@ long arch_memory_op(unsigned long cmd, X
     case XENMEM_machine_memory_map:
     {
         struct memory_map_context ctxt;
-        XEN_GUEST_HANDLE(e820entry_t) buffer;
-        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_param;
+        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer;
         unsigned int i;
         bool store;
@@ -4732,8 +4727,7 @@ long arch_memory_op(unsigned long cmd, X
         if ( store && ctxt.map.nr_entries < e820.nr_map + 1 )
             return -EINVAL;
 
-        buffer_param = guest_handle_cast(ctxt.map.buffer, e820entry_t);
-        buffer = guest_handle_from_param(buffer_param, e820entry_t);
+        buffer = guest_handle_cast(ctxt.map.buffer, e820entry_t);
 
         if ( store && !guest_handle_okay(buffer, ctxt.map.nr_entries) )
             return -EFAULT;
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -59,7 +59,7 @@
 int hap_track_dirty_vram(struct domain *d,
                          unsigned long begin_pfn,
                          unsigned long nr,
-                         XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
+                         XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
 {
     long rc = 0;
     struct sh_dirty_vram *dirty_vram;
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3171,7 +3171,7 @@ static void sh_clean_dirty_bitmap(struct
 int shadow_track_dirty_vram(struct domain *d,
                             unsigned long begin_pfn,
                             unsigned long nr,
-                            XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
+                            XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
 {
     int rc = 0;
     unsigned long end_pfn = begin_pfn + nr;
--- a/xen/arch/x86/oprofile/backtrace.c
+++ b/xen/arch/x86/oprofile/backtrace.c
@@ -74,11 +74,8 @@ dump_guest_backtrace(struct vcpu *vcpu,
     }
     else
     {
-        XEN_GUEST_HANDLE(const_frame_head_t) guest_head;
-        XEN_GUEST_HANDLE_PARAM(const_frame_head_t) guest_head_param =
+        XEN_GUEST_HANDLE_PARAM(const_frame_head_t) guest_head =
             const_guest_handle_from_ptr(head, frame_head_t);
-        guest_head = guest_handle_from_param(guest_head_param,
-                                             const_frame_head_t);
 
         /* Also check accessibility of one struct frame_head beyond */
         if (!guest_handle_okay(guest_head, 2))
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -285,9 +285,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
             guest_from_compat_handle(data, op->u.microcode.data);
 
-            ret = microcode_update(
-                guest_handle_to_param(data, const_void),
-                op->u.microcode.length);
+            ret = microcode_update(data, op->u.microcode.length);
         }
         break;
@@ -531,9 +529,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             XEN_GUEST_HANDLE(uint32) pdc;
 
             guest_from_compat_handle(pdc, op->u.set_pminfo.u.pdc);
-            ret = acpi_set_pdc_bits(
-                op->u.set_pminfo.id,
-                guest_handle_to_param(pdc, uint32));
+            ret = acpi_set_pdc_bits(op->u.set_pminfo.id, pdc);
         }
         break;
--- a/xen/arch/x86/x86_64/compat.c
+++ b/xen/arch/x86/x86_64/compat.c
@@ -15,6 +15,7 @@ EMIT_FILE;
 
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
+#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE_PARAM(t)
 typedef int ret_t;
 
 #include "../compat.c"
--- a/xen/arch/x86/x86_64/cpu_idle.c
+++ b/xen/arch/x86/x86_64/cpu_idle.c
@@ -52,13 +52,9 @@ static int copy_from_compat_state(xen_pr
                                   compat_processor_cx_t *state)
 {
 #define XLAT_processor_cx_HNDL_dp(_d_, _s_) do { \
-    XEN_GUEST_HANDLE(compat_processor_csd_t) dps; \
-    XEN_GUEST_HANDLE_PARAM(xen_processor_csd_t) dps_param; \
     if ( unlikely(!compat_handle_okay((_s_)->dp, (_s_)->dpcnt)) ) \
-        return -EFAULT; \
-    guest_from_compat_handle(dps, (_s_)->dp); \
-    dps_param = guest_handle_cast(dps, xen_processor_csd_t); \
-    (_d_)->dp = guest_handle_from_param(dps_param, xen_processor_csd_t); \
+        return -EFAULT; \
+    guest_from_compat_handle((_d_)->dp, (_s_)->dp); \
 } while (0)
     XLAT_processor_cx(xen_state, state);
 #undef XLAT_processor_cx_HNDL_dp
--- a/xen/arch/x86/x86_64/cpufreq.c
+++ b/xen/arch/x86/x86_64/cpufreq.c
@@ -26,6 +26,8 @@
 #include
 #include
 
+CHECK_processor_px;
+
 DEFINE_XEN_GUEST_HANDLE(compat_processor_px_t);
 
 int
@@ -42,13 +44,9 @@ compat_set_px_pminfo(uint32_t cpu, struc
         return -EFAULT;
 
 #define XLAT_processor_performance_HNDL_states(_d_, _s_) do { \
-    XEN_GUEST_HANDLE(compat_processor_px_t) states; \
-    XEN_GUEST_HANDLE_PARAM(xen_processor_px_t) states_t; \
     if ( unlikely(!compat_handle_okay((_s_)->states, (_s_)->state_count)) ) \
         return -EFAULT; \
-    guest_from_compat_handle(states, (_s_)->states); \
-    states_t = guest_handle_cast(states, xen_processor_px_t); \
-    (_d_)->states = guest_handle_from_param(states_t, xen_processor_px_t); \
+    guest_from_compat_handle((_d_)->states, (_s_)->states); \
 } while (0)
 
     XLAT_processor_performance(xen_perf, perf);
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -492,7 +492,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op
     return ret;
 }
 
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
 {
     u32 bits[3];
     int ret;
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -40,7 +40,7 @@ int access_guest_memory_by_ipa(struct do
     (XEN_GUEST_HANDLE_PARAM(type)) { _x }; \
 })
 
-/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
 #define guest_handle_to_param(hnd, type) ({ \
     typeof((hnd).p) _x = (hnd).p; \
     XEN_GUEST_HANDLE_PARAM(type) _y = { _x }; \
@@ -51,18 +51,6 @@ int access_guest_memory_by_ipa(struct do
     _y; \
 })
 
-
-/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
-#define guest_handle_from_param(hnd, type) ({ \
-    typeof((hnd).p) _x = (hnd).p; \
-    XEN_GUEST_HANDLE(type) _y = { _x }; \
-    /* type checking: make sure that the pointers inside \
-     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of \
-     * the same type, then return hnd */ \
-    (void)(&_x == &_y.p); \
-    _y; \
-})
-
 #define guest_handle_for_field(hnd, type, fld) \
     ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -52,21 +52,11 @@
     (XEN_GUEST_HANDLE_PARAM(type)) { _x }; \
 })
 
-/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
 #define guest_handle_to_param(hnd, type) ({ \
     /* type checking: make sure that the pointers inside \
      * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of \
      * the same type, then return hnd */ \
-    (void)((typeof(&(hnd).p)) 0 == \
-           (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
-    (hnd); \
-})
-
-/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
-#define guest_handle_from_param(hnd, type) ({ \
-    /* type checking: make sure that the pointers inside \
-     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of \
-     * the same type, then return hnd */ \
     (void)((typeof(&(hnd).p)) 0 == \
            (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
     (hnd); \
 })
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -41,7 +41,7 @@ void hap_vcpu_init(struct vcpu *v);
 int   hap_track_dirty_vram(struct domain *d,
                            unsigned long begin_pfn,
                            unsigned long nr,
-                           XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
+                           XEN_GUEST_HANDLE(void) dirty_bitmap);
 
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
--- a/xen/include/asm-x86/microcode.h
+++ b/xen/include/asm-x86/microcode.h
@@ -20,7 +20,7 @@ struct cpu_signature {
 DECLARE_PER_CPU(struct cpu_signature, cpu_sig);
 
 void microcode_set_module(unsigned int idx);
-int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
+int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
 int early_microcode_init(void);
 int microcode_update_one(bool start_update);
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -64,7 +64,7 @@ int shadow_enable(struct domain *d, u32
 int shadow_track_dirty_vram(struct domain *d,
                             unsigned long first_pfn,
                             unsigned long nr,
-                            XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
+                            XEN_GUEST_HANDLE(void) dirty_bitmap);
 
 /* Handler for shadow control ops: operations from user-space to enable
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -184,8 +184,8 @@ static inline unsigned int acpi_get_csub
 static inline void acpi_set_csubstate_limit(unsigned int new_limit) { return; }
 #endif
 
-#ifdef XEN_GUEST_HANDLE_PARAM
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32));
+#ifdef XEN_GUEST_HANDLE
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32));
 #endif
 
 int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *, u32 mask);