From patchwork Mon Sep 21 08:10:25 2020
X-Patchwork-Submitter: Krish Sadhukhan
X-Patchwork-Id: 11788901
From: Krish Sadhukhan
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, sean.j.christopherson@intel.com
Subject: [PATCH 1/3 v3] KVM: nVMX: KVM needs to unset "unrestricted guest"
 VM-execution control in vmcs02 if vmcs12 doesn't set it
Date: Mon, 21 Sep 2020 08:10:25 +0000
Message-Id: <20200921081027.23047-2-krish.sadhukhan@oracle.com>
In-Reply-To: <20200921081027.23047-1-krish.sadhukhan@oracle.com>
References: <20200921081027.23047-1-krish.sadhukhan@oracle.com>
X-Mailer: git-send-email 2.18.4
List-ID: X-Mailing-List: kvm@vger.kernel.org
Currently, prepare_vmcs02_early() does not check whether the "unrestricted
guest" VM-execution control is turned off in vmcs12, and leaves the
corresponding bit turned on in vmcs02. Because of this, VM-entry checks that
are supposed to render the nested guest state invalid when this VM-execution
control is not set pass in hardware. Turn off the "unrestricted guest"
VM-execution control in vmcs02 if vmcs12 has turned it off.

Suggested-by: Jim Mattson
Suggested-by: Sean Christopherson
Signed-off-by: Krish Sadhukhan
---
 arch/x86/kvm/vmx/nested.c |  3 +++
 arch/x86/kvm/vmx/vmx.c    | 17 +++++++++--------
 arch/x86/kvm/vmx/vmx.h    |  7 +++++++
 3 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1bb6b31eb646..86fc044e5af9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2314,6 +2314,9 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 			vmcs_write16(GUEST_INTR_STATUS, vmcs12->guest_intr_status);
 
+		if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_UNRESTRICTED_GUEST))
+			exec_control &= ~SECONDARY_EXEC_UNRESTRICTED_GUEST;
+
 		secondary_exec_controls_set(vmx, exec_control);
 	}

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8646a797b7a8..0559c11d227c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1441,7 +1441,7 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long old_rflags;
 
-	if (enable_unrestricted_guest) {
+	if (is_unrestricted_guest(vcpu)) {
 		kvm_register_mark_available(vcpu, VCPU_EXREG_RFLAGS);
 		vmx->rflags = rflags;
 		vmcs_writel(GUEST_RFLAGS, rflags);
@@ -2267,7 +2267,8 @@ static void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 		vcpu->arch.cr0 |= vmcs_readl(GUEST_CR0) & guest_owned_bits;
 		break;
 	case VCPU_EXREG_CR3:
-		if (enable_unrestricted_guest || (enable_ept && is_paging(vcpu)))
+		if (is_unrestricted_guest(vcpu) ||
+		    (enable_ept && is_paging(vcpu)))
 			vcpu->arch.cr3 = vmcs_readl(GUEST_CR3);
 		break;
 	case VCPU_EXREG_CR4:
@@ -3033,7 +3034,7 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	unsigned long hw_cr0;
 
 	hw_cr0 = (cr0 & ~KVM_VM_CR0_ALWAYS_OFF);
-	if (enable_unrestricted_guest)
+	if (is_unrestricted_guest(vcpu))
 		hw_cr0 |= KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST;
 	else {
 		hw_cr0 |= KVM_VM_CR0_ALWAYS_ON;
@@ -3054,7 +3055,7 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	}
 #endif
 
-	if (enable_ept && !enable_unrestricted_guest)
+	if (enable_ept && !is_unrestricted_guest(vcpu))
 		ept_update_paging_mode_cr0(&hw_cr0, cr0, vcpu);
 
 	vmcs_writel(CR0_READ_SHADOW, cr0);
@@ -3134,7 +3135,7 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	unsigned long hw_cr4;
 
 	hw_cr4 = (cr4_read_shadow() & X86_CR4_MCE) | (cr4 & ~X86_CR4_MCE);
-	if (enable_unrestricted_guest)
+	if (is_unrestricted_guest(vcpu))
 		hw_cr4 |= KVM_VM_CR4_ALWAYS_ON_UNRESTRICTED_GUEST;
 	else if (vmx->rmode.vm86_active)
 		hw_cr4 |= KVM_RMODE_VM_CR4_ALWAYS_ON;
@@ -3169,7 +3170,7 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	vcpu->arch.cr4 = cr4;
 	kvm_register_mark_available(vcpu, VCPU_EXREG_CR4);
 
-	if (!enable_unrestricted_guest) {
+	if (!is_unrestricted_guest(vcpu)) {
 		if (enable_ept) {
 			if (!is_paging(vcpu)) {
 				hw_cr4 &= ~X86_CR4_PAE;
@@ -3309,7 +3310,7 @@ void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg)
 	 * tree. Newer qemu binaries with that qemu fix would not need this
 	 * kvm hack.
 	 */
-	if (enable_unrestricted_guest && (seg != VCPU_SREG_LDTR))
+	if (is_unrestricted_guest(vcpu) && (seg != VCPU_SREG_LDTR))
 		var->type |= 0x1; /* Accessed */
 
 	vmcs_write32(sf->ar_bytes, vmx_segment_access_rights(var));
@@ -3500,7 +3501,7 @@ static bool cs_ss_rpl_check(struct kvm_vcpu *vcpu)
  */
 static bool guest_state_valid(struct kvm_vcpu *vcpu)
 {
-	if (enable_unrestricted_guest)
+	if (is_unrestricted_guest(vcpu))
 		return true;
 
 	/* real mode guest state checks */
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index a2f82127c170..0a39a7831ba9 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -555,6 +555,13 @@ static inline bool vmx_need_pf_intercept(struct kvm_vcpu *vcpu)
 	return !enable_ept || cpuid_maxphyaddr(vcpu) < boot_cpu_data.x86_phys_bits;
 }
 
+static inline bool is_unrestricted_guest(struct kvm_vcpu *vcpu)
+{
+	return enable_unrestricted_guest && (!is_guest_mode(vcpu) ||
+	    (secondary_exec_controls_get(to_vmx(vcpu)) &
+	     SECONDARY_EXEC_UNRESTRICTED_GUEST));
+}
+
 void dump_vmcs(void);
 
 #endif /* __KVM_X86_VMX_H */
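
For context: the reason a stale SECONDARY_EXEC_UNRESTRICTED_GUEST bit in vmcs02
matters is that several of the SDM's guest-state checks are gated on this
control; most visibly, with the control clear, CR0.PE and CR0.PG must both be 1
at VM-entry. The fragment below is a minimal sketch of that gating, not KVM
code; the helper name and the simplified checks are assumptions made for
illustration only.

    #include <stdbool.h>
    #include <stdint.h>

    #define X86_CR0_PE (1ull << 0)
    #define X86_CR0_PG (1ull << 31)

    /*
     * Minimal sketch (not KVM code): how the "unrestricted guest" control
     * gates two of the SDM's CR0 guest-state checks on VM-entry.
     */
    static bool guest_cr0_valid_for_entry(uint64_t guest_cr0, bool unrestricted_guest)
    {
        /* CR0.PG = 1 with CR0.PE = 0 is invalid regardless of the control. */
        if ((guest_cr0 & X86_CR0_PG) && !(guest_cr0 & X86_CR0_PE))
            return false;

        /*
         * With the control clear, both PE and PG must be 1; with it set,
         * real mode and unpaged protected mode are also allowed.
         */
        if (!unrestricted_guest &&
            (!(guest_cr0 & X86_CR0_PE) || !(guest_cr0 & X86_CR0_PG)))
            return false;

        return true;
    }

If vmcs02 keeps the bit set while vmcs12 cleared it, hardware applies the
relaxed rules even though L1 asked for the strict ones, which is what the
tests in patches 2/3 and 3/3 exercise.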
From patchwork Mon Sep 21 08:10:26 2020
X-Patchwork-Submitter: Krish Sadhukhan
X-Patchwork-Id: 11788895
From: Krish Sadhukhan
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, sean.j.christopherson@intel.com
Subject: [PATCH 2/3 v3] nVMX: Test Selector and Base Address fields of Guest
 Segment Registers on vmentry of nested guests
Date: Mon, 21 Sep 2020 08:10:26 +0000
Message-Id: <20200921081027.23047-3-krish.sadhukhan@oracle.com>
In-Reply-To: <20200921081027.23047-1-krish.sadhukhan@oracle.com>
References: <20200921081027.23047-1-krish.sadhukhan@oracle.com>
X-Mailer: git-send-email 2.18.4
List-ID: X-Mailing-List: kvm@vger.kernel.org

According to the section "Checks on Guest Segment Registers" in Intel SDM vol
3C, the following checks are performed on the Guest Segment Registers on
vmentry of nested guests:

    Selector fields:
    — TR. The TI flag (bit 2) must be 0.
    — LDTR. If LDTR is usable, the TI flag (bit 2) must be 0.
    — SS. If the guest will not be virtual-8086 and the "unrestricted guest"
      VM-execution control is 0, the RPL (bits 1:0) must equal the RPL of the
      selector field for CS.

    Base-address fields:
    — CS, SS, DS, ES, FS, GS. If the guest will be virtual-8086, the address
      must be the selector field shifted left 4 bits (multiplied by 16).
    — The following checks are performed on processors that support Intel 64
      architecture:
      — TR, FS, GS. The address must be canonical.
      — LDTR. If LDTR is usable, the address must be canonical.
      — CS. Bits 63:32 of the address must be zero.
      — SS, DS, ES. If the register is usable, bits 63:32 of the address must
        be zero.
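
As a rough illustration of the selector rules above (a standalone sketch under
assumed parameter names, not part of the test code below):

    #include <stdbool.h>
    #include <stdint.h>

    #define SEL_TI  (1u << 2)   /* table indicator bit of a selector */
    #define SEL_RPL 0x3u        /* requested privilege level (bits 1:0) */

    /*
     * Sketch of the selector checks listed above; "ldtr_usable", "v8086"
     * and "urg" stand in for the corresponding VMCS state and are
     * assumptions of this example.
     */
    static bool guest_selectors_valid(uint16_t tr, uint16_t ldtr, bool ldtr_usable,
                                      uint16_t cs, uint16_t ss, bool v8086, bool urg)
    {
        if (tr & SEL_TI)                        /* TR.TI must be 0 */
            return false;
        if (ldtr_usable && (ldtr & SEL_TI))     /* LDTR.TI must be 0 if usable */
            return false;
        if (!v8086 && !urg &&
            (ss & SEL_RPL) != (cs & SEL_RPL))   /* SS.RPL must equal CS.RPL */
            return false;
        return true;
    }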
Signed-off-by: Krish Sadhukhan
---
 lib/x86/processor.h |   1 +
 x86/vmx_tests.c     | 200 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 201 insertions(+)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 74a2498..c2c487c 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -63,6 +63,7 @@
 #define X86_EFLAGS_OF    0x00000800
 #define X86_EFLAGS_IOPL  0x00003000
 #define X86_EFLAGS_NT    0x00004000
+#define X86_EFLAGS_VM    0x00020000
 #define X86_EFLAGS_AC    0x00040000
 
 #define X86_EFLAGS_ALU (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF | \
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 22f0c7b..2b6e47b 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -7980,6 +7980,203 @@ static void test_load_guest_bndcfgs(void)
 	vmcs_write(GUEST_BNDCFGS, bndcfgs_saved);
 }
 
+#define GUEST_SEG_UNUSABLE_MASK	(1u << 16)
+#define GUEST_SEG_SEL_TI_MASK	(1u << 2)
+#define TEST_SEGMENT_SEL(xfail, sel, sel_name, val)                     \
+	vmcs_write(sel, val);                                           \
+	test_guest_state("Test Guest Segment Selector", xfail, val,     \
+			 sel_name);
+
+/*
+ * The following checks are done on the Selector field of the Guest Segment
+ * Registers:
+ *    — TR. The TI flag (bit 2) must be 0.
+ *    — LDTR. If LDTR is usable, the TI flag (bit 2) must be 0.
+ *    — SS. If the guest will not be virtual-8086 and the "unrestricted
+ *      guest" VM-execution control is 0, the RPL (bits 1:0) must equal
+ *      the RPL of the selector field for CS.
+ *
+ * [Intel SDM]
+ */
+static void test_guest_segment_sel_fields(void)
+{
+	u16 sel_saved;
+	u32 ar_saved;
+	u32 cpu_ctrl0_saved;
+	u32 cpu_ctrl1_saved;
+	u16 cs_rpl_bits;
+
+	/*
+	 * Test for GUEST_SEL_TR
+	 */
+	sel_saved = vmcs_read(GUEST_SEL_TR);
+	TEST_SEGMENT_SEL(true, GUEST_SEL_TR, "GUEST_SEL_TR",
+			 sel_saved | GUEST_SEG_SEL_TI_MASK);
+	vmcs_write(GUEST_SEL_TR, sel_saved);
+
+	/*
+	 * Test for GUEST_SEL_LDTR
+	 */
+	sel_saved = vmcs_read(GUEST_SEL_LDTR);
+	ar_saved = vmcs_read(GUEST_AR_LDTR);
+	/* LDTR is set unusable */
+	vmcs_write(GUEST_AR_LDTR, ar_saved | GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_SEL(false, GUEST_SEL_LDTR, "GUEST_SEL_LDTR",
+			 sel_saved | GUEST_SEG_SEL_TI_MASK);
+	TEST_SEGMENT_SEL(false, GUEST_SEL_LDTR, "GUEST_SEL_LDTR",
+			 sel_saved & ~GUEST_SEG_SEL_TI_MASK);
+	/* LDTR is set usable */
+	vmcs_write(GUEST_AR_LDTR, ar_saved & ~GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_SEL(true, GUEST_SEL_LDTR, "GUEST_SEL_LDTR",
+			 sel_saved | GUEST_SEG_SEL_TI_MASK);
+
+	TEST_SEGMENT_SEL(false, GUEST_SEL_LDTR, "GUEST_SEL_LDTR",
+			 sel_saved & ~GUEST_SEG_SEL_TI_MASK);
+
+	vmcs_write(GUEST_AR_LDTR, ar_saved);
+	vmcs_write(GUEST_SEL_LDTR, sel_saved);
+
+	/*
+	 * Test for GUEST_SEL_SS
+	 */
+	cpu_ctrl0_saved = vmcs_read(CPU_EXEC_CTRL0);
+	cpu_ctrl1_saved = vmcs_read(CPU_EXEC_CTRL1);
+	ar_saved = vmcs_read(GUEST_AR_SS);
+	/* Turn off "unrestricted guest" vm-execution control */
+	vmcs_write(CPU_EXEC_CTRL1, cpu_ctrl1_saved & ~CPU_URG);
+	cs_rpl_bits = vmcs_read(GUEST_SEL_CS) & 0x3;
+	sel_saved = vmcs_read(GUEST_SEL_SS);
+	TEST_SEGMENT_SEL(true, GUEST_SEL_SS, "GUEST_SEL_SS",
+			 ((sel_saved & ~0x3) | (~cs_rpl_bits & 0x3)));
+	TEST_SEGMENT_SEL(false, GUEST_SEL_SS, "GUEST_SEL_SS",
+			 ((sel_saved & ~0x3) | (cs_rpl_bits & 0x3)));
+	/* Make SS usable if it's unusable or vice-versa */
+	if (ar_saved & GUEST_SEG_UNUSABLE_MASK)
+		vmcs_write(GUEST_AR_SS, ar_saved & ~GUEST_SEG_UNUSABLE_MASK);
+	else
+		vmcs_write(GUEST_AR_SS, ar_saved | GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_SEL(true, GUEST_SEL_SS, "GUEST_SEL_SS",
+			 ((sel_saved & ~0x3) | (~cs_rpl_bits & 0x3)));
+	TEST_SEGMENT_SEL(false, GUEST_SEL_SS, "GUEST_SEL_SS",
+			 ((sel_saved & ~0x3) | (cs_rpl_bits & 0x3)));
+
+	/* Turn on "unrestricted guest" vm-execution control */
+	vmcs_write(CPU_EXEC_CTRL0, cpu_ctrl0_saved | CPU_SECONDARY);
+	vmcs_write(CPU_EXEC_CTRL1, cpu_ctrl1_saved | CPU_URG);
+	/* EPT and EPTP must be setup when "unrestricted guest" is on */
+	setup_ept(false);
+	TEST_SEGMENT_SEL(false, GUEST_SEL_SS, "GUEST_SEL_SS",
+			 ((sel_saved & ~0x3) | (~cs_rpl_bits & 0x3)));
+	TEST_SEGMENT_SEL(false, GUEST_SEL_SS, "GUEST_SEL_SS",
+			 ((sel_saved & ~0x3) | (cs_rpl_bits & 0x3)));
+	/* Make SS usable if it's unusable or vice-versa */
+	if (vmcs_read(GUEST_AR_SS) & GUEST_SEG_UNUSABLE_MASK)
+		vmcs_write(GUEST_AR_SS, ar_saved & ~GUEST_SEG_UNUSABLE_MASK);
+	else
+		vmcs_write(GUEST_AR_SS, ar_saved | GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_SEL(false, GUEST_SEL_SS, "GUEST_SEL_SS",
+			 ((sel_saved & ~0x3) | (~cs_rpl_bits & 0x3)));
+	TEST_SEGMENT_SEL(false, GUEST_SEL_SS, "GUEST_SEL_SS",
+			 ((sel_saved & ~0x3) | (cs_rpl_bits & 0x3)));
+
+	vmcs_write(GUEST_AR_SS, ar_saved);
+	vmcs_write(GUEST_SEL_SS, sel_saved);
+	vmcs_write(CPU_EXEC_CTRL0, cpu_ctrl0_saved);
+	vmcs_write(CPU_EXEC_CTRL1, cpu_ctrl1_saved);
+}
+
+#define TEST_SEGMENT_BASE_ADDR_UPPER_BITS(xfail, seg_base, seg_base_name)\
+	addr_saved = vmcs_read(seg_base);                               \
+	for (i = 32; i < 63; i = i + 4) {                               \
+		addr = addr_saved | 1ull << i;                          \
+		vmcs_write(seg_base, addr);                             \
+		test_guest_state(seg_base_name, xfail, addr,            \
+				 seg_base_name);                        \
+	}                                                               \
+	vmcs_write(seg_base, addr_saved);
+
+#define TEST_SEGMENT_BASE_ADDR_CANONICAL(xfail, seg_base, seg_base_name)\
+	addr_saved = vmcs_read(seg_base);                               \
+	vmcs_write(seg_base, NONCANONICAL);                             \
+	test_guest_state(seg_base_name, xfail, NONCANONICAL,            \
+			 seg_base_name);                                \
+	vmcs_write(seg_base, addr_saved);
+
+/*
+ * The following checks are done on the Base Address field of the Guest
+ * Segment Registers on processors that support Intel 64 architecture:
+ *    - TR, FS, GS : The address must be canonical.
+ *    - LDTR : If LDTR is usable, the address must be canonical.
+ *    - CS : Bits 63:32 of the address must be zero.
+ *    - SS, DS, ES : If the register is usable, bits 63:32 of the address
+ *      must be zero.
+ *
+ * [Intel SDM]
+ */
+static void test_guest_segment_base_addr_fields(void)
+{
+	u64 addr_saved;
+	u64 addr;
+	u32 ar_saved;
+	int i;
+
+	/*
+	 * The address of TR, FS, GS and LDTR must be canonical.
+	 */
+	TEST_SEGMENT_BASE_ADDR_CANONICAL(true, GUEST_BASE_TR, "GUEST_BASE_TR");
+	TEST_SEGMENT_BASE_ADDR_CANONICAL(true, GUEST_BASE_FS, "GUEST_BASE_FS");
+	TEST_SEGMENT_BASE_ADDR_CANONICAL(true, GUEST_BASE_GS, "GUEST_BASE_GS");
+	ar_saved = vmcs_read(GUEST_AR_LDTR);
+	/* Make LDTR unusable */
+	vmcs_write(GUEST_AR_LDTR, ar_saved | GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_BASE_ADDR_CANONICAL(false, GUEST_BASE_LDTR,
+					 "GUEST_BASE_LDTR");
+	/* Make LDTR usable */
+	vmcs_write(GUEST_AR_LDTR, ar_saved & ~GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_BASE_ADDR_CANONICAL(true, GUEST_BASE_LDTR,
+					 "GUEST_BASE_LDTR");
+
+	vmcs_write(GUEST_AR_LDTR, ar_saved);
+
+	/*
+	 * Bits 63:32 in CS, SS, DS and ES base address must be zero
+	 */
+	TEST_SEGMENT_BASE_ADDR_UPPER_BITS(true, GUEST_BASE_CS,
+					  "GUEST_BASE_CS");
+	ar_saved = vmcs_read(GUEST_AR_SS);
+	/* Make SS unusable */
+	vmcs_write(GUEST_AR_SS, ar_saved | GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_BASE_ADDR_UPPER_BITS(false, GUEST_BASE_SS,
+					  "GUEST_BASE_SS");
+	/* Make SS usable */
+	vmcs_write(GUEST_AR_SS, ar_saved & ~GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_BASE_ADDR_UPPER_BITS(true, GUEST_BASE_SS,
+					  "GUEST_BASE_SS");
+	vmcs_write(GUEST_AR_SS, ar_saved);
+
+	ar_saved = vmcs_read(GUEST_AR_DS);
+	/* Make DS unusable */
+	vmcs_write(GUEST_AR_DS, ar_saved | GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_BASE_ADDR_UPPER_BITS(false, GUEST_BASE_DS,
+					  "GUEST_BASE_DS");
+	/* Make DS usable */
+	vmcs_write(GUEST_AR_DS, ar_saved & ~GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_BASE_ADDR_UPPER_BITS(true, GUEST_BASE_DS,
+					  "GUEST_BASE_DS");
+	vmcs_write(GUEST_AR_DS, ar_saved);
+
+	ar_saved = vmcs_read(GUEST_AR_ES);
+	/* Make ES unusable */
+	vmcs_write(GUEST_AR_ES, ar_saved | GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_BASE_ADDR_UPPER_BITS(false, GUEST_BASE_ES,
+					  "GUEST_BASE_ES");
+	/* Make ES usable */
+	vmcs_write(GUEST_AR_ES, ar_saved & ~GUEST_SEG_UNUSABLE_MASK);
+	TEST_SEGMENT_BASE_ADDR_UPPER_BITS(true, GUEST_BASE_ES,
+					  "GUEST_BASE_ES");
+	vmcs_write(GUEST_AR_ES, ar_saved);
+}
+
 /*
  * Check that the virtual CPU checks the VMX Guest State Area as
  * documented in the Intel SDM.
@@ -8002,6 +8199,9 @@ static void vmx_guest_state_area_test(void)
 	test_load_guest_perf_global_ctrl();
 	test_load_guest_bndcfgs();
 
+	test_guest_segment_sel_fields();
+	test_guest_segment_base_addr_fields();
+
 	test_canonical(GUEST_BASE_GDTR, "GUEST_BASE_GDTR", false);
 	test_canonical(GUEST_BASE_IDTR, "GUEST_BASE_IDTR", false);
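
A note on the base-address tests above: a canonical address on current
processors (48-bit linear addresses) is one whose bits 63:47 are a sign
extension of bit 47, and NONCANONICAL in the harness is simply a value that
violates that. A small sketch of the check, with an example constant that is
an assumption of this illustration rather than the harness's actual value:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * An address is canonical for 48-bit linear addresses when bits 63:47
     * are all copies of bit 47.
     */
    static bool is_canonical_48(uint64_t addr)
    {
        return ((int64_t)(addr << 16) >> 16) == (int64_t)addr;
    }

    /* Example of a value a test could use as a non-canonical base. */
    #define NONCANONICAL_EXAMPLE 0xaaaaaaaaaaaaaaaaULL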
From patchwork Mon Sep 21 08:10:27 2020
X-Patchwork-Submitter: Krish Sadhukhan
X-Patchwork-Id: 11788893
From: Krish Sadhukhan
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, sean.j.christopherson@intel.com
Subject: [PATCH 3/3 v3] nVMX: Test vmentry of unrestricted (unpaged protected)
 nested guest
Date: Mon, 21 Sep 2020 08:10:27 +0000
Message-Id: <20200921081027.23047-4-krish.sadhukhan@oracle.com>
In-Reply-To: <20200921081027.23047-1-krish.sadhukhan@oracle.com>
References: <20200921081027.23047-1-krish.sadhukhan@oracle.com>
X-Mailer: git-send-email 2.18.4
List-ID: X-Mailing-List: kvm@vger.kernel.org

According to the section "UNRESTRICTED GUESTS" in Intel SDM vol 3C, if the
"unrestricted guest" secondary VM-execution control is set, guests can run in
unpaged protected mode or in real mode. This patch tests vmentry of an
unrestricted nested guest in unpaged protected mode.

Signed-off-by: Jim Mattson
Signed-off-by: Krish Sadhukhan
---
 x86/vmx.c       |  2 +-
 x86/vmx.h       |  1 +
 x86/vmx_tests.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/x86/vmx.c b/x86/vmx.c
index 07415b4..1a84a74 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1699,7 +1699,7 @@ static void test_vmx_caps(void)
 }
 
 /* This function can only be called in guest */
-static void __attribute__((__used__)) hypercall(u32 hypercall_no)
+void __attribute__((__used__)) hypercall(u32 hypercall_no)
 {
 	u64 val = 0;
 	val = (hypercall_no & HYPERCALL_MASK) | HYPERCALL_BIT;
diff --git a/x86/vmx.h b/x86/vmx.h
index d1c2436..e29301e 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -895,6 +895,7 @@ bool ept_ad_bits_supported(void);
 void __enter_guest(u8 abort_flag, struct vmentry_result *result);
 void enter_guest(void);
 void enter_guest_with_bad_controls(void);
+void hypercall(u32 hypercall_no);
 
 typedef void (*test_guest_func)(void);
 typedef void (*test_teardown_func)(void *data);
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 22f0c7b..1cadc56 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -8029,6 +8029,53 @@ static void vmx_guest_state_area_test(void)
 	enter_guest();
 }
 
+extern void unrestricted_guest_main(void);
+asm (".code32\n"
+     "unrestricted_guest_main:\n"
+     "vmcall\n"
+     "nop\n"
+     "mov $1, %edi\n"
+     "call hypercall\n"
+     ".code64\n");
+
+static void setup_unrestricted_guest(void)
+{
+	vmcs_write(GUEST_CR0, vmcs_read(GUEST_CR0) & ~(X86_CR0_PG));
+	vmcs_write(ENT_CONTROLS, vmcs_read(ENT_CONTROLS) & ~ENT_GUEST_64);
+	vmcs_write(GUEST_EFER, vmcs_read(GUEST_EFER) & ~EFER_LMA);
+	vmcs_write(GUEST_RIP, virt_to_phys(unrestricted_guest_main));
+}
+
+static void unsetup_unrestricted_guest(void)
+{
+	vmcs_write(GUEST_CR0, vmcs_read(GUEST_CR0) | X86_CR0_PG);
+	vmcs_write(ENT_CONTROLS, vmcs_read(ENT_CONTROLS) | ENT_GUEST_64);
+	vmcs_write(GUEST_EFER, vmcs_read(GUEST_EFER) | EFER_LMA);
+	vmcs_write(GUEST_RIP, (u64) phys_to_virt(vmcs_read(GUEST_RIP)));
+	vmcs_write(GUEST_RSP, (u64) phys_to_virt(vmcs_read(GUEST_RSP)));
+}
+
+/*
+ * If "unrestricted guest" secondary VM-execution control is set, guests
+ * can run in unpaged protected mode.
+ */
+static void vmentry_unrestricted_guest_test(void)
+{
+	test_set_guest(unrestricted_guest_main);
+	setup_unrestricted_guest();
+	if (setup_ept(false))
+		test_skip("EPT not supported");
+	vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) | CPU_URG);
+	test_guest_state("Unrestricted guest test", false, CPU_URG, "CPU_URG");
+
+	/*
+	 * Let the guest finish execution as a regular guest
+	 */
+	unsetup_unrestricted_guest();
+	vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) & ~CPU_URG);
+	enter_guest();
+}
+
 static bool valid_vmcs_for_vmentry(void)
 {
 	struct vmcs *current_vmcs = NULL;
@@ -10234,6 +10281,7 @@ struct vmx_test vmx_tests[] = {
 	TEST(vmx_host_state_area_test),
 	TEST(vmx_guest_state_area_test),
 	TEST(vmentry_movss_shadow_test),
+	TEST(vmentry_unrestricted_guest_test),
 	/* APICv tests */
 	TEST(vmx_eoi_bitmap_ioapic_scan_test),
 	TEST(vmx_hlt_with_rvi_test),
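
What makes the unpaged-protected-mode entry above legal is that the
"unrestricted guest" control relaxes the CR0 fixed-bit requirements: with the
control set, VM entry treats the PE and PG bits of IA32_VMX_CR0_FIXED0 as 0,
so a guest may have CR0.PE=1 and CR0.PG=0 (or run in real mode), which is why
setup_unrestricted_guest() can clear X86_CR0_PG, ENT_GUEST_64 and EFER.LMA. A
sketch of how a harness could compute the CR0 bits that must be 1 for a given
configuration; the MSR-read helper here is an assumption of the example, not a
function of the test framework:

    #include <stdint.h>

    #define X86_CR0_PE (1ull << 0)
    #define X86_CR0_PG (1ull << 31)
    #define MSR_IA32_VMX_CR0_FIXED0 0x486

    /* Hypothetical helper assumed to exist in the harness. */
    extern uint64_t rdmsr64(uint32_t msr);

    /* Bits that must be 1 in guest CR0 for VM entry to succeed. */
    static uint64_t guest_cr0_must_be_1(int unrestricted_guest)
    {
        uint64_t fixed0 = rdmsr64(MSR_IA32_VMX_CR0_FIXED0);

        /*
         * With "unrestricted guest" = 1, VM entry treats PE and PG in
         * IA32_VMX_CR0_FIXED0 as 0, allowing unpaged protected mode and
         * real mode for the guest.
         */
        if (unrestricted_guest)
            fixed0 &= ~(X86_CR0_PE | X86_CR0_PG);
        return fixed0;
    }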