From patchwork Thu Dec 10 17:38:57 2015
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 7821111
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: bp@alien8.de, stable@vger.kernel.org
Subject: [PATCH] kvm: x86: move tracepoints outside extended quiescent state
Date: Thu, 10 Dec 2015 18:38:57 +0100
Message-Id: <1449769137-8668-1-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1

Invoking tracepoints within kvm_guest_enter/kvm_guest_exit causes a
lockdep splat.
Cc: stable@vger.kernel.org
Reported-by: Borislav Petkov
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm.c | 4 ++--
 arch/x86/kvm/vmx.c | 3 ++-
 arch/x86/kvm/x86.c | 2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 83a1c643f9a5..899c40f826dd 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3422,6 +3422,8 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 	struct kvm_run *kvm_run = vcpu->run;
 	u32 exit_code = svm->vmcb->control.exit_code;
 
+	trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
+
 	if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
 		vcpu->arch.cr0 = svm->vmcb->save.cr0;
 	if (npt_enabled)
@@ -3892,8 +3894,6 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
 	vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
 
-	trace_kvm_exit(svm->vmcb->control.exit_code, vcpu, KVM_ISA_SVM);
-
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_handle_nmi(&svm->vcpu);
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index af823a388c19..6b5605607849 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8042,6 +8042,8 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
 	u32 exit_reason = vmx->exit_reason;
 	u32 vectoring_info = vmx->idt_vectoring_info;
 
+	trace_kvm_exit(exit_reason, vcpu, KVM_ISA_VMX);
+
 	/*
 	 * Flush logged GPAs PML buffer, this will make dirty_bitmap more
 	 * updated. Another good is, in kvm_vm_ioctl_get_dirty_log, before
@@ -8668,7 +8670,6 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	vmx->loaded_vmcs->launched = 1;
 
 	vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
-	trace_kvm_exit(vmx->exit_reason, vcpu, KVM_ISA_VMX);
 
 	/*
 	 * the KVM_REQ_EVENT optimization bit is only on for one entry, and if
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index eed32283d22c..2a8c035ec7fe 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6515,6 +6515,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (req_immediate_exit)
 		smp_send_reschedule(vcpu->cpu);
 
+	trace_kvm_entry(vcpu->vcpu_id);
 	__kvm_guest_enter();
 
 	if (unlikely(vcpu->arch.switch_db_regs)) {
@@ -6527,7 +6528,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;
 	}
 
-	trace_kvm_entry(vcpu->vcpu_id);
 	wait_lapic_expire(vcpu);
 	kvm_x86_ops->run(vcpu);
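
For reference, a simplified sketch (not the literal kernel code; the function
name vcpu_enter_guest_sketch is illustrative) of the ordering the patch
establishes.  Tracepoints use RCU internally, and __kvm_guest_enter() /
kvm_guest_exit() tell RCU that the CPU is entering/leaving an extended
quiescent state, so the entry and exit tracepoints have to fire outside
that window:

	/*
	 * Simplified sketch of the vcpu_enter_guest() ordering after this
	 * patch; error handling and everything unrelated is omitted.
	 */
	static int vcpu_enter_guest_sketch(struct kvm_vcpu *vcpu)
	{
		trace_kvm_entry(vcpu->vcpu_id);	/* fires before the quiescent state begins */
		__kvm_guest_enter();		/* RCU now treats this CPU as quiescent */

		kvm_x86_ops->run(vcpu);		/* VMRUN/VMLAUNCH: no RCU users allowed here */

		kvm_guest_exit();		/* extended quiescent state ends */

		/*
		 * handle_exit()/vmx_handle_exit() run after kvm_guest_exit(),
		 * so trace_kvm_exit() is now invoked from there instead of
		 * from svm_vcpu_run()/vmx_vcpu_run().
		 */
		return kvm_x86_ops->handle_exit(vcpu);
	}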