From patchwork Fri Jun 15 17:33:30 2018
X-Patchwork-Submitter: Marc Orr
X-Patchwork-Id: 10467153
Date: Fri, 15 Jun 2018 10:33:30 -0700
Message-Id: <20180615173330.101572-1-marcorr@google.com>
Subject: [PATCH v2] kvm: vmx: Nested VM-entry prereqs for event inj.
From: Marc Orr
To: kvm@vger.kernel.org, jmattson@google.com, pbonzini@redhat.com,
    rkrcmar@redhat.com, pshier@google.com, dvyukov@google.com
Cc: Marc Orr
X-Mailing-List: kvm@vger.kernel.org

This patch extends the checks done prior to a nested VM entry.
Specifically, it extends the check_vmentry_prereqs function with checks
for fields relevant to the VM-entry event injection information, as
described in the Intel SDM, volume 3.
This patch is motivated by a syzkaller bug, where a bad VM-entry
interruption-information field is generated in the VMCS02, which causes
the nested VM launch to fail. Then, KVM fails to resume L1. While KVM
should be improved to correctly resume L1 execution after a failed
nested launch, this change is justified because the existing code to
resume L1 is flaky/ad hoc and the test coverage for resuming L1 is
sparse.

Reported-by: syzbot
Signed-off-by: Marc Orr
Reviewed-by: Krish Sadhukhan
---
Changelog since v1:
- Renamed nr to vector
- Colocate error code checks
- Add Reported-by tag to commit message

 arch/x86/include/asm/vmx.h |  3 ++
 arch/x86/kvm/vmx.c         | 67 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.h         | 14 ++++++++
 3 files changed, 84 insertions(+)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 425e6b8b9547..6aa8499e1f62 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -114,6 +114,7 @@
 #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK	0x0000001f
 #define VMX_MISC_SAVE_EFER_LMA			0x00000020
 #define VMX_MISC_ACTIVITY_HLT			0x00000040
+#define VMX_MISC_ZERO_LEN_INS			0x40000000
 
 /* VMFUNC functions */
 #define VMX_VMFUNC_EPTP_SWITCHING		0x00000001
@@ -351,11 +352,13 @@ enum vmcs_field {
 #define VECTORING_INFO_VALID_MASK	INTR_INFO_VALID_MASK
 
 #define INTR_TYPE_EXT_INTR		(0 << 8) /* external interrupt */
+#define INTR_TYPE_RESERVED		(1 << 8) /* reserved */
 #define INTR_TYPE_NMI_INTR		(2 << 8) /* NMI */
 #define INTR_TYPE_HARD_EXCEPTION	(3 << 8) /* processor exception */
 #define INTR_TYPE_SOFT_INTR		(4 << 8) /* software interrupt */
 #define INTR_TYPE_PRIV_SW_EXCEPTION	(5 << 8) /* ICE breakpoint - undocumented */
 #define INTR_TYPE_SOFT_EXCEPTION	(6 << 8) /* software exception */
+#define INTR_TYPE_OTHER_EVENT		(7 << 8) /* other event */
 
 /* GUEST_INTERRUPTIBILITY_INFO flags. */
 #define GUEST_INTR_STATE_STI		0x00000001
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 48989f78be60..ef956b308f08 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1705,6 +1705,17 @@ static inline bool nested_cpu_has_vmwrite_any_field(struct kvm_vcpu *vcpu)
 		MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS;
 }
 
+static inline bool nested_cpu_has_zero_length_injection(struct kvm_vcpu *vcpu)
+{
+	return to_vmx(vcpu)->nested.msrs.misc_low & VMX_MISC_ZERO_LEN_INS;
+}
+
+static inline bool nested_cpu_has_monitor_trap_flag(struct kvm_vcpu *vcpu)
+{
+	return to_vmx(vcpu)->nested.msrs.procbased_ctls_low &
+		CPU_BASED_MONITOR_TRAP_FLAG;
+}
+
 static inline bool nested_cpu_has(struct vmcs12 *vmcs12, u32 bit)
 {
 	return vmcs12->cpu_based_vm_exec_control & bit;
@@ -11612,6 +11623,62 @@ static int check_vmentry_prereqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 	    !nested_cr3_valid(vcpu, vmcs12->host_cr3))
 		return VMXERR_ENTRY_INVALID_HOST_STATE_FIELD;
 
+	/*
+	 * From the Intel SDM, volume 3:
+	 * Fields relevant to VM-entry event injection must be set properly.
+	 * These fields are the VM-entry interruption-information field, the
+	 * VM-entry exception error code, and the VM-entry instruction length.
+	 */
+	if (vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK) {
+		u32 intr_info = vmcs12->vm_entry_intr_info_field;
+		u8 nr = intr_info & INTR_INFO_VECTOR_MASK;
+		u32 intr_type = intr_info & INTR_INFO_INTR_TYPE_MASK;
+		bool has_error_code = intr_info & INTR_INFO_DELIVER_CODE_MASK;
+		bool should_have_error_code;
+		bool urg = nested_cpu_has2(vmcs12,
+					   SECONDARY_EXEC_UNRESTRICTED_GUEST);
+		bool prot_mode = !urg || vmcs12->guest_cr0 & X86_CR0_PE;
+
+		/* VM-entry interruption-info field: interruption type */
+		if (intr_type == INTR_TYPE_RESERVED ||
+		    (intr_type == INTR_TYPE_OTHER_EVENT &&
+		     !nested_cpu_has_monitor_trap_flag(vcpu)))
+			return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
+
+		/* VM-entry interruption-info field: vector */
+		if ((intr_type == INTR_TYPE_NMI_INTR && nr != NMI_VECTOR) ||
+		    (intr_type == INTR_TYPE_HARD_EXCEPTION && nr > 31) ||
+		    (intr_type == INTR_TYPE_OTHER_EVENT && nr != 0))
+			return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
+
+		/* VM-entry interruption-info field: deliver error code */
+		should_have_error_code =
+			intr_type == INTR_TYPE_HARD_EXCEPTION &&
+			x86_exception_has_error_code(nr, prot_mode);
+		if (has_error_code != should_have_error_code)
+			return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
+
+		/* VM-entry exception error code */
+		if (has_error_code &&
+		    vmcs12->vm_entry_exception_error_code & GENMASK(31, 15))
+			return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
+
+		/* VM-entry interruption-info field: reserved bits */
+		if (intr_info & INTR_INFO_RESVD_BITS_MASK)
+			return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
+
+		/* VM-entry instruction length */
+		switch (intr_type) {
+		case INTR_TYPE_SOFT_EXCEPTION:
+		case INTR_TYPE_SOFT_INTR:
+		case INTR_TYPE_PRIV_SW_EXCEPTION:
+			if ((vmcs12->vm_entry_instruction_len > 15) ||
+			    (vmcs12->vm_entry_instruction_len == 0 &&
+			     !nested_cpu_has_zero_length_injection(vcpu)))
+				return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
+		}
+	}
+
 	return 0;
 }
 
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 331993c49dae..294ab4c2dd96 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -110,6 +110,20 @@ static inline bool is_la57_mode(struct kvm_vcpu *vcpu)
 #endif
 }
 
+/*
+ * vector: x86 exception number; often called nr
+ * protected_mode: true if !unrestricted-guest || protected mode
+ */
+static inline bool x86_exception_has_error_code(unsigned int vector,
+						bool protected_mode)
+{
+	static u32 exception_has_error_code = BIT(DF_VECTOR) | BIT(TS_VECTOR) |
+			BIT(NP_VECTOR) | BIT(SS_VECTOR) | BIT(GP_VECTOR) |
+			BIT(PF_VECTOR) | BIT(AC_VECTOR);
+
+	return protected_mode && ((1U << vector) & exception_has_error_code);
+}
+
 static inline bool mmu_is_nested(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.walk_mmu == &vcpu->arch.nested_mmu;