From patchwork Tue Oct 16 21:29:22 2018
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 10644225
Date: Tue, 16 Oct 2018 14:29:22 -0700
In-Reply-To: <20181016212924.130307-1-jmattson@google.com>
Message-Id: <20181016212924.130307-5-jmattson@google.com>
References: <20181016212924.130307-1-jmattson@google.com>
X-Mailer: git-send-email 2.19.1.331.ge82ca0e54c-goog
Subject: [PATCH v2 5/7] kvm: x86: Defer setting of CR2 until #PF delivery
From: Jim Mattson
To: kvm@vger.kernel.org
Cc: Peter Shier, Liran Alon, Paolo Bonzini, Jim Mattson

When exception payloads are enabled by userspace (which is not yet
possible) and a #PF is raised in L2, defer the setting of CR2 until
the #PF is delivered. This allows the L1 hypervisor to intercept the
fault before CR2 is modified.

For backwards compatibility, when exception payloads are not enabled
by userspace, kvm_multiple_exception modifies CR2 when the #PF
exception is raised.

Reported-by: Jim Mattson
Suggested-by: Paolo Bonzini
Signed-off-by: Jim Mattson
---
 arch/x86/kvm/svm.c | 13 ++++++------
 arch/x86/kvm/vmx.c | 19 +++++++++---------
 arch/x86/kvm/x86.c | 49 ++++++++++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/x86.h |  2 ++
 4 files changed, 62 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 61ccfb13899e..6079e4dec263 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -809,6 +809,8 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu)
 	    nested_svm_check_exception(svm, nr, has_error_code, error_code))
 		return;
 
+	kvm_deliver_exception_payload(&svm->vcpu);
+
 	if (nr == BP_VECTOR && !static_cpu_has(X86_FEATURE_NRIPS)) {
 		unsigned long rip, old_rip = kvm_rip_read(&svm->vcpu);
 
@@ -2969,16 +2971,13 @@ static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 	svm->vmcb->control.exit_info_1 = error_code;
 
 	/*
-	 * FIXME: we should not write CR2 when L1 intercepts an L2 #PF exception.
-	 * The fix is to add the ancillary datum (CR2 or DR6) to structs
-	 * kvm_queued_exception and kvm_vcpu_events, so that CR2 and DR6 can be
-	 * written only when inject_pending_event runs (DR6 would written here
-	 * too). This should be conditional on a new capability---if the
-	 * capability is disabled, kvm_multiple_exception would write the
-	 * ancillary information to CR2 or DR6, for backwards ABI-compatibility.
+	 * EXITINFO2 is undefined for all exception intercepts other
+	 * than #PF.
 	 */
 	if (svm->vcpu.arch.exception.nested_apf)
 		svm->vmcb->control.exit_info_2 = svm->vcpu.arch.apf.nested_apf_token;
+	else if (svm->vcpu.arch.exception.has_payload)
+		svm->vmcb->control.exit_info_2 = svm->vcpu.arch.exception.payload;
 	else
 		svm->vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e665aa7167cf..6d55a2213e12 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3272,27 +3272,24 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 	unsigned int nr = vcpu->arch.exception.nr;
+	bool has_payload = vcpu->arch.exception.has_payload;
+	unsigned long payload = vcpu->arch.exception.payload;
 
 	if (nr == PF_VECTOR) {
 		if (vcpu->arch.exception.nested_apf) {
 			*exit_qual = vcpu->arch.apf.nested_apf_token;
 			return 1;
 		}
-		/*
-		 * FIXME: we must not write CR2 when L1 intercepts an L2 #PF exception.
-		 * The fix is to add the ancillary datum (CR2 or DR6) to structs
-		 * kvm_queued_exception and kvm_vcpu_events, so that CR2 and DR6
-		 * can be written only when inject_pending_event runs. This should be
-		 * conditional on a new capability---if the capability is disabled,
-		 * kvm_multiple_exception would write the ancillary information to
-		 * CR2 or DR6, for backwards ABI-compatibility.
-		 */
 		if (nested_vmx_is_page_fault_vmexit(vmcs12,
 						    vcpu->arch.exception.error_code)) {
-			*exit_qual = vcpu->arch.cr2;
+			*exit_qual = has_payload ? payload : vcpu->arch.cr2;
 			return 1;
 		}
 	} else {
+		/*
+		 * FIXME: we must not write DR6 when L1 intercepts an
+		 * L2 #DB exception.
+		 */
 		if (vmcs12->exception_bitmap & (1u << nr)) {
 			if (nr == DB_VECTOR)
 				*exit_qual = vcpu->arch.dr6;
@@ -3326,6 +3323,8 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
 	u32 error_code = vcpu->arch.exception.error_code;
 	u32 intr_info = nr | INTR_INFO_VALID_MASK;
 
+	kvm_deliver_exception_payload(vcpu);
+
 	if (has_error_code) {
 		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE, error_code);
 		intr_info |= INTR_INFO_DELIVER_CODE_MASK;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dcd2cd6351fb..872da22c7514 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -400,6 +400,26 @@ static int exception_type(int vector)
 	return EXCPT_FAULT;
 }
 
+void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu)
+{
+	unsigned nr = vcpu->arch.exception.nr;
+	bool has_payload = vcpu->arch.exception.has_payload;
+	unsigned long payload = vcpu->arch.exception.payload;
+
+	if (!has_payload)
+		return;
+
+	switch (nr) {
+	case PF_VECTOR:
+		vcpu->arch.cr2 = payload;
+		break;
+	}
+
+	vcpu->arch.exception.has_payload = false;
+	vcpu->arch.exception.payload = 0;
+}
+EXPORT_SYMBOL_GPL(kvm_deliver_exception_payload);
+
 static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 		unsigned nr, bool has_error, u32 error_code,
 		bool has_payload, unsigned long payload, bool reinject)
@@ -441,6 +461,18 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 		vcpu->arch.exception.error_code = error_code;
 		vcpu->arch.exception.has_payload = has_payload;
 		vcpu->arch.exception.payload = payload;
+		/*
+		 * In guest mode, payload delivery should be deferred,
+		 * so that the L1 hypervisor can intercept #PF before
+		 * CR2 is modified. However, for ABI compatibility
+		 * with KVM_GET_VCPU_EVENTS and KVM_SET_VCPU_EVENTS,
+		 * we can't delay payload delivery unless userspace
+		 * has enabled this functionality via the per-VM
+		 * capability, KVM_CAP_EXCEPTION_PAYLOAD.
+		 */
+		if (!vcpu->kvm->arch.exception_payload_enabled ||
+		    !is_guest_mode(vcpu))
+			kvm_deliver_exception_payload(vcpu);
 		return;
 	}
 
@@ -486,6 +518,13 @@ void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned nr)
 }
 EXPORT_SYMBOL_GPL(kvm_requeue_exception);
 
+static void kvm_queue_exception_e_p(struct kvm_vcpu *vcpu, unsigned nr,
+				    u32 error_code, unsigned long payload)
+{
+	kvm_multiple_exception(vcpu, nr, true, error_code,
+			       true, payload, false);
+}
+
 int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err)
 {
 	if (err)
@@ -502,11 +541,13 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault)
 	++vcpu->stat.pf_guest;
 	vcpu->arch.exception.nested_apf =
 		is_guest_mode(vcpu) && fault->async_page_fault;
-	if (vcpu->arch.exception.nested_apf)
+	if (vcpu->arch.exception.nested_apf) {
 		vcpu->arch.apf.nested_apf_token = fault->address;
-	else
-		vcpu->arch.cr2 = fault->address;
-	kvm_queue_exception_e(vcpu, PF_VECTOR, fault->error_code);
+		kvm_queue_exception_e(vcpu, PF_VECTOR, fault->error_code);
+	} else {
+		kvm_queue_exception_e_p(vcpu, PF_VECTOR, fault->error_code,
+					fault->address);
+	}
 }
 EXPORT_SYMBOL_GPL(kvm_inject_page_fault);
 
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 67b9568613f3..224cd0a47568 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -266,6 +266,8 @@ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu,
 
 int handle_ud(struct kvm_vcpu *vcpu);
 
+void kvm_deliver_exception_payload(struct kvm_vcpu *vcpu);
+
 void kvm_vcpu_mtrr_init(struct kvm_vcpu *vcpu);
 u8 kvm_mtrr_get_guest_memory_type(struct kvm_vcpu *vcpu, gfn_t gfn);
 bool kvm_mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data);
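---

A note for reviewers (not part of the patch): below is a minimal,
self-contained C model of the deferred-payload flow this patch
introduces. The harness (struct vcpu_model, queue_page_fault, main) is
hypothetical and only sketches the semantics: the faulting address is
stashed as a payload at queue time, CR2 is written only at delivery
time, and the legacy ABI is preserved by delivering immediately when
the capability is off or the vCPU is not in guest mode.

#include <stdbool.h>
#include <stdio.h>

#define PF_VECTOR 14

struct vcpu_model {
	unsigned long cr2;
	bool exception_payload_enabled;	/* per-VM capability */
	bool guest_mode;		/* vCPU is running L2 */
	struct {
		unsigned nr;
		bool has_payload;
		unsigned long payload;
	} exception;
};

/* Analogous to kvm_deliver_exception_payload(): commit the pending
 * #PF payload to CR2 and clear it. */
static void deliver_exception_payload(struct vcpu_model *v)
{
	if (!v->exception.has_payload)
		return;
	if (v->exception.nr == PF_VECTOR)
		v->cr2 = v->exception.payload;
	v->exception.has_payload = false;
	v->exception.payload = 0;
}

/* Analogous to the queueing path: defer delivery only when the
 * capability is enabled and the vCPU is in guest mode; otherwise
 * deliver immediately, matching the old behavior. */
static void queue_page_fault(struct vcpu_model *v, unsigned long address)
{
	v->exception.nr = PF_VECTOR;
	v->exception.has_payload = true;
	v->exception.payload = address;
	if (!v->exception_payload_enabled || !v->guest_mode)
		deliver_exception_payload(v);
}

int main(void)
{
	struct vcpu_model v = {
		.cr2 = 0x1000,
		.exception_payload_enabled = true,
		.guest_mode = true,
	};

	queue_page_fault(&v, 0xdeadb000);
	printf("after queue:    CR2=%#lx (unchanged)\n", v.cr2);

	/* An L1 #PF intercept at this point still observes the old
	 * CR2 value, which is the point of the patch. */

	deliver_exception_payload(&v);
	printf("after delivery: CR2=%#lx\n", v.cr2);
	return 0;
}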