From patchwork Thu Jun 18 06:38:27 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11611461
To: "xen-devel@lists.xenproject.org"
From: Jan Beulich
Subject: [PATCH] VT-x: simplify/clarify vmx_load_pdptrs()
Date: Thu, 18 Jun 2020 08:38:27 +0200
Cc: Andrew Cooper, Kevin Tian, Wei Liu, Jun Nakajima, Roger Pau Monné

* Guests outside of long mode can't have PCID enabled. Drop the
  respective check, to make it more obvious that there's no security
  issue (from potentially accessing past the mapped page's boundary).
* Only the low 32 bits of CR3 are relevant outside of long mode, even
  if they remained unchanged after leaving that mode.
* Drop the unnecessary and badly typed local variable p.
* Don't open-code hvm_long_mode_active() (and extend this to the
  related nested VT-x code).
* Constify guest_pdptes to clarify that we're only reading from the
  page.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Reviewed-by: Kevin Tian
---
This is intentionally not addressing any of the other shortcomings of
the function, as was done by the previously posted
https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01790.html.
This will need to be the subject of a further change.

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1312,17 +1312,16 @@ static void vmx_set_interrupt_shadow(str

 static void vmx_load_pdptrs(struct vcpu *v)
 {
-    unsigned long cr3 = v->arch.hvm.guest_cr[3];
-    uint64_t *guest_pdptes;
+    uint32_t cr3 = v->arch.hvm.guest_cr[3];
+    const uint64_t *guest_pdptes;
     struct page_info *page;
     p2m_type_t p2mt;
-    char *p;

     /* EPT needs to load PDPTRS into VMCS for PAE. */
-    if ( !hvm_pae_enabled(v) || (v->arch.hvm.guest_efer & EFER_LMA) )
+    if ( !hvm_pae_enabled(v) || hvm_long_mode_active(v) )
         return;

-    if ( (cr3 & 0x1fUL) && !hvm_pcid_enabled(v) )
+    if ( cr3 & 0x1f )
         goto crash;

     page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_UNSHARE);
@@ -1332,14 +1331,12 @@ static void vmx_load_pdptrs(struct vcpu
          * queue, but this is the wrong place. We're holding at least
          * the paging lock */
         gdprintk(XENLOG_ERR,
-                 "Bad cr3 on load pdptrs gfn %lx type %d\n",
+                 "Bad cr3 on load pdptrs gfn %"PRIx32" type %d\n",
                  cr3 >> PAGE_SHIFT, (int) p2mt);
         goto crash;
     }

-    p = __map_domain_page(page);
-
-    guest_pdptes = (uint64_t *)(p + (cr3 & ~PAGE_MASK));
+    guest_pdptes = __map_domain_page(page) + (cr3 & ~PAGE_MASK);

     /*
      * We do not check the PDPTRs for validity. The CPU will do this during
@@ -1356,7 +1353,7 @@ static void vmx_load_pdptrs(struct vcpu

     vmx_vmcs_exit(v);

-    unmap_domain_page(p);
+    unmap_domain_page(guest_pdptes);

     put_page(page);

     return;
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1234,7 +1234,7 @@ static void virtual_vmentry(struct cpu_u
         paging_update_paging_modes(v);

     if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
-         !(v->arch.hvm.guest_efer & EFER_LMA) )
+         !hvm_long_mode_active(v) )
         vvmcs_to_shadow_bulk(v, ARRAY_SIZE(gpdpte_fields), gpdpte_fields);

     regs->rip = get_vvmcs(v, GUEST_RIP);
@@ -1437,7 +1437,7 @@ static void virtual_vmexit(struct cpu_us
     sync_exception_state(v);

     if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
-         !(v->arch.hvm.guest_efer & EFER_LMA) )
+         !hvm_long_mode_active(v) )
         shadow_to_vvmcs_bulk(v, ARRAY_SIZE(gpdpte_fields), gpdpte_fields);

     /* This will clear current pCPU bit in p2m->dirty_cpumask */