From patchwork Fri Apr 21 16:53:40 2017
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 9693161
From: Jim Mattson
To: kvm@vger.kernel.org
Cc: Jim Mattson
Subject: [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks
Date: Fri, 21 Apr 2017 09:53:40 -0700
Message-Id: <20170421165340.92716-1-jmattson@google.com>
X-Mailer: git-send-email 2.12.2.816.g2cccc81164-goog

According to the Intel SDM, "Certain
exceptions have priority over VM exits. These include invalid-opcode
exceptions, faults based on privilege level*, and general-protection
exceptions that are based on checking I/O permission bits in the
task-state segment (TSS)." There is no need to check for faulting
conditions that the hardware has already checked.

One of the constraints on the VMX instructions is that they are not
allowed in real-address mode. Though the hardware checks for this
condition as well, when real-address mode is emulated, the faulting
condition does have to be checked in software.

* These include faults generated by attempts to execute, in
  virtual-8086 mode, privileged instructions that are not recognized
  in that mode.

Signed-off-by: Jim Mattson
---
 arch/x86/kvm/vmx.c | 58 ++++++++++++++----------------------------------
 1 file changed, 15 insertions(+), 43 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 259e9b28ccf8..1a975e942b87 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7115,25 +7115,14 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 	/* The Intel VMX Instruction Reference lists a bunch of bits that
 	 * are prerequisite to running VMXON, most notably cr4.VMXE must be
 	 * set to 1 (see vmx_set_cr4() for when we allow the guest to set this).
-	 * Otherwise, we should fail with #UD. We test these now:
+	 * Otherwise, we should fail with #UD. Hardware has already tested
+	 * most or all of these conditions, with the exception of real-address
+	 * mode, when real-address mode is emulated.
 	 */
-	if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE) ||
-	    !kvm_read_cr0_bits(vcpu, X86_CR0_PE) ||
-	    (vmx_get_rflags(vcpu) & X86_EFLAGS_VM)) {
-		kvm_queue_exception(vcpu, UD_VECTOR);
-		return 1;
-	}
-	vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
-	if (is_long_mode(vcpu) && !cs.l) {
+	if ((!enable_unrestricted_guest &&
+	     !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
 	}
-
-	if (vmx_get_cpl(vcpu)) {
-		kvm_inject_gp(vcpu, 0);
-		return 1;
-	}
 
 	if (vmx->nested.vmxon) {
 		nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
@@ -7161,30 +7150,18 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 /*
  * Intel's VMX Instruction Reference specifies a common set of prerequisites
  * for running VMX instructions (except VMXON, whose prerequisites are
  * slightly different). It also specifies what exception to inject otherwise.
+ * Note that many of these exceptions have priority over VM exits, so they
+ * don't have to be checked again here.
  */
-static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
+static bool nested_vmx_check_permission(struct kvm_vcpu *vcpu)
 {
-	struct kvm_segment cs;
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-
-	if (!vmx->nested.vmxon) {
+	if (!to_vmx(vcpu)->nested.vmxon ||
+	    (!enable_unrestricted_guest &&
+	     !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
-		return 0;
-	}
-
-	vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
-	if ((vmx_get_rflags(vcpu) & X86_EFLAGS_VM) ||
-	    (is_long_mode(vcpu) && !cs.l)) {
-		kvm_queue_exception(vcpu, UD_VECTOR);
-		return 0;
-	}
-
-	if (vmx_get_cpl(vcpu)) {
-		kvm_inject_gp(vcpu, 0);
-		return 0;
+		return false;
 	}
-
-	return 1;
+	return true;
 }
 
 static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
@@ -7527,7 +7504,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 		if (get_vmx_mem_address(vcpu, exit_qualification,
 				vmx_instruction_info, true, &gva))
 			return 1;
-		/* _system ok, as nested_vmx_check_permission verified cpl=0 */
+		/* _system ok, as hardware has verified cpl=0 */
 		kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, gva,
 			     &field_value, (is_long_mode(vcpu) ? 8 : 4), NULL);
 	}
@@ -7660,7 +7637,7 @@ static int handle_vmptrst(struct kvm_vcpu *vcpu)
 	if (get_vmx_mem_address(vcpu, exit_qualification,
 			vmx_instruction_info, true, &vmcs_gva))
 		return 1;
-	/* ok to use *_system, as nested_vmx_check_permission verified cpl=0 */
+	/* ok to use *_system, as hardware has verified cpl=0 */
 	if (kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, vmcs_gva,
 				 (void *)&to_vmx(vcpu)->nested.current_vmptr,
 				 sizeof(u64), &e)) {
@@ -7693,11 +7670,6 @@ static int handle_invept(struct kvm_vcpu *vcpu)
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
-	if (!kvm_read_cr0_bits(vcpu, X86_CR0_PE)) {
-		kvm_queue_exception(vcpu, UD_VECTOR);
-		return 1;
-	}
-
 	vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
 	type = kvm_register_readl(vcpu, (vmx_instruction_info >> 28) & 0xf);
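
As an illustration of what is left after this change: the surviving
permission check reduces to "queue #UD if the vCPU has not entered VMX
operation, or if real-address mode is being emulated (unrestricted guest
disabled and CR0.PE clear); everything else is assumed to have faulted in
hardware before the VM exit." The stand-alone user-space sketch below
models only that boolean logic. The struct and its fields (vmxon,
enable_unrestricted_guest, cr0_pe) are illustrative stand-ins for the real
vcpu_vmx state and the kvm_intel module parameter; only the condition
mirrors the patched nested_vmx_check_permission(), so treat this as a
sketch rather than kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the relevant vCPU/guest state. */
struct vcpu_model {
	bool vmxon;                     /* L1 has executed VMXON */
	bool enable_unrestricted_guest; /* stand-in for the module parameter */
	bool cr0_pe;                    /* guest CR0.PE */
};

/*
 * Mirrors the post-patch check: returns true if emulation of the VMX
 * instruction may proceed, false if #UD was queued instead.
 */
static bool check_permission(const struct vcpu_model *v)
{
	if (!v->vmxon ||
	    (!v->enable_unrestricted_guest && !v->cr0_pe)) {
		printf("queue #UD\n");
		return false;
	}
	return true;
}

int main(void)
{
	struct vcpu_model emulated_real_mode = {
		.vmxon = true, .enable_unrestricted_guest = false, .cr0_pe = false };
	struct vcpu_model unrestricted_real_mode = {
		.vmxon = true, .enable_unrestricted_guest = true, .cr0_pe = false };

	/* Emulated real mode: KVM must inject the fault itself. */
	printf("emulated real mode: %d\n", check_permission(&emulated_real_mode));
	/* Unrestricted guest: hardware already raised any fault, so proceed. */
	printf("unrestricted guest: %d\n", check_permission(&unrestricted_real_mode));
	return 0;
}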