From patchwork Wed Jan 10 01:27:00 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515613
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 17:27:00 -0800
In-Reply-To: <20240110012705.506918-1-seanjc@google.com>
References: <20240110012705.506918-1-seanjc@google.com>
Message-ID: <20240110012705.506918-2-seanjc@google.com>
Subject: [PATCH 1/6] KVM: x86: Plumb "force_immediate_exit" into kvm_entry() tracepoint
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Annotate the kvm_entry() tracepoint with "immediate exit" when KVM is
forcing a VM-Exit immediately after VM-Enter, e.g. when KVM wants to
inject an event but needs to first complete some other operation.
Knowing that KVM is (or isn't) forcing an exit is useful information
when debugging issues related to event injection.
Suggested-by: Maxim Levitsky
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 3 ++-
 arch/x86/kvm/svm/svm.c          | 5 +++--
 arch/x86/kvm/trace.h            | 9 ++++++---
 arch/x86/kvm/vmx/vmx.c          | 4 ++--
 arch/x86/kvm/x86.c              | 2 +-
 5 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7bc1daf68741..9c90664ef9fb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1663,7 +1663,8 @@ struct kvm_x86_ops {
 	void (*flush_tlb_guest)(struct kvm_vcpu *vcpu);
 
 	int (*vcpu_pre_run)(struct kvm_vcpu *vcpu);
-	enum exit_fastpath_completion (*vcpu_run)(struct kvm_vcpu *vcpu);
+	enum exit_fastpath_completion (*vcpu_run)(struct kvm_vcpu *vcpu,
+						  bool force_immediate_exit);
 	int (*handle_exit)(struct kvm_vcpu *vcpu,
 			   enum exit_fastpath_completion exit_fastpath);
 	int (*skip_emulated_instruction)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2171b0cda8d4..f5f3301d2a01 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4115,12 +4115,13 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
 	guest_state_exit_irqoff();
 }
 
-static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
+static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+					  bool force_immediate_exit)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	bool spec_ctrl_intercepted = msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL);
 
-	trace_kvm_entry(vcpu);
+	trace_kvm_entry(vcpu, force_immediate_exit);
 
 	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index 83843379813e..88659de4d2a7 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -15,20 +15,23 @@
  * Tracepoint for guest mode entry.
  */
 TRACE_EVENT(kvm_entry,
-	TP_PROTO(struct kvm_vcpu *vcpu),
-	TP_ARGS(vcpu),
+	TP_PROTO(struct kvm_vcpu *vcpu, bool force_immediate_exit),
+	TP_ARGS(vcpu, force_immediate_exit),
 
 	TP_STRUCT__entry(
 		__field(	unsigned int,	vcpu_id		)
 		__field(	unsigned long,	rip		)
+		__field(	bool,		immediate_exit	)
 	),
 
 	TP_fast_assign(
 		__entry->vcpu_id	= vcpu->vcpu_id;
 		__entry->rip		= kvm_rip_read(vcpu);
+		__entry->immediate_exit	= force_immediate_exit;
 	),
 
-	TP_printk("vcpu %u, rip 0x%lx", __entry->vcpu_id, __entry->rip)
+	TP_printk("vcpu %u, rip 0x%lx%s", __entry->vcpu_id, __entry->rip,
+		  __entry->immediate_exit ? "[immediate exit]" : "")
 );
 
 /*
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d21f55f323ea..51d0f3985463 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7268,7 +7268,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	guest_state_exit_irqoff();
 }
 
-static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
+static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4;
@@ -7295,7 +7295,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		return EXIT_FASTPATH_NONE;
 	}
 
-	trace_kvm_entry(vcpu);
+	trace_kvm_entry(vcpu, force_immediate_exit);
 
 	if (vmx->ple_window_dirty) {
 		vmx->ple_window_dirty = false;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 27e23714e960..e4523ca3dedf 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10962,7 +10962,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	WARN_ON_ONCE((kvm_vcpu_apicv_activated(vcpu) != kvm_vcpu_apicv_active(vcpu)) &&
 		     (kvm_get_apic_mode(vcpu) != LAPIC_MODE_DISABLED));
 
-	exit_fastpath = static_call(kvm_x86_vcpu_run)(vcpu);
+	exit_fastpath = static_call(kvm_x86_vcpu_run)(vcpu, req_immediate_exit);
 
 	if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
 		break;

From patchwork Wed Jan 10 01:27:01 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515614
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 17:27:01 -0800
In-Reply-To: <20240110012705.506918-1-seanjc@google.com>
References: <20240110012705.506918-1-seanjc@google.com>
Message-ID: <20240110012705.506918-3-seanjc@google.com>
Subject: [PATCH 2/6] KVM: VMX: Re-enter guest in fastpath for "spurious" preemption timer exits
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Re-enter the guest in the fast path if the VMX preemption timer VM-Exit
was "spurious", i.e. if KVM "soft disabled" the timer by writing -1u and
by some miracle the timer expired before any other VM-Exit occurred.
This is just an intermediate step to cleaning up the preemption timer
handling; optimizing these types of spurious VM-Exits is not interesting
as they are extremely rare/infrequent.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 51d0f3985463..4caad881d9a0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5995,8 +5995,15 @@ static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	if (!vmx->req_immediate_exit &&
-	    !unlikely(vmx->loaded_vmcs->hv_timer_soft_disabled)) {
+	/*
+	 * In the *extremely* unlikely scenario that this is a spurious VM-Exit
+	 * due to the timer expiring while it was "soft" disabled, just eat the
+	 * exit and re-enter the guest.
+	 */
+	if (unlikely(vmx->loaded_vmcs->hv_timer_soft_disabled))
+		return EXIT_FASTPATH_REENTER_GUEST;
+
+	if (!vmx->req_immediate_exit) {
 		kvm_lapic_expired_hv_timer(vcpu);
 		return EXIT_FASTPATH_REENTER_GUEST;
 	}

From patchwork Wed Jan 10 01:27:02 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515615
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 17:27:02 -0800
In-Reply-To: <20240110012705.506918-1-seanjc@google.com>
References: <20240110012705.506918-1-seanjc@google.com>
Message-ID: <20240110012705.506918-4-seanjc@google.com>
Subject: [PATCH 3/6] KVM: VMX: Handle forced exit due to preemption timer in fastpath
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Handle VMX preemption timer VM-Exits due to KVM forcing an exit in the
exit fastpath, i.e. avoid calling back into handle_preemption_timer()
for the same exit.  There is no work to be done for forced exits; as the
name suggests, the goal is purely to get control back in KVM.

In addition to shaving a few cycles, this will allow cleanly separating
handle_fastpath_preemption_timer() from handle_preemption_timer(), e.g.
it's not immediately obvious why _apparently_ calling
handle_fastpath_preemption_timer() twice on a "slow" exit is necessary:
the "slow" call is necessary to handle exits from L2, which are excluded
from the fastpath by vmx_vcpu_run().

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4caad881d9a0..c4a10d46d7a8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6003,12 +6003,15 @@ static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
 	if (unlikely(vmx->loaded_vmcs->hv_timer_soft_disabled))
 		return EXIT_FASTPATH_REENTER_GUEST;
 
-	if (!vmx->req_immediate_exit) {
-		kvm_lapic_expired_hv_timer(vcpu);
-		return EXIT_FASTPATH_REENTER_GUEST;
-	}
+	/*
+	 * If the timer expired because KVM used it to force an immediate exit,
+	 * then mission accomplished.
+	 */
+	if (vmx->req_immediate_exit)
+		return EXIT_FASTPATH_EXIT_HANDLED;
 
-	return EXIT_FASTPATH_NONE;
+	kvm_lapic_expired_hv_timer(vcpu);
+	return EXIT_FASTPATH_REENTER_GUEST;
 }
 
 static int handle_preemption_timer(struct kvm_vcpu *vcpu)

From patchwork Wed Jan 10 01:27:03 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515616
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 17:27:03 -0800
In-Reply-To: <20240110012705.506918-1-seanjc@google.com>
References: <20240110012705.506918-1-seanjc@google.com>
Message-ID: <20240110012705.506918-5-seanjc@google.com>
Subject: [PATCH 4/6] KVM: x86: Move handling of is_guest_mode() into fastpath exit handlers
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Let the fastpath code decide which exits can/can't be handled in the
fastpath when L2 is active, e.g. when KVM generates a VMX preemption
timer exit to forcefully regain control, there is no "work" to be done
and so such exits can be handled in the fastpath regardless of whether
L1 or L2 is active.

Moving the is_guest_mode() check into the fastpath code also makes it
easier to see that L2 isn't allowed to use the fastpath in most cases,
e.g. it's not immediately obvious why handle_fastpath_preemption_timer()
is called from the fastpath and the normal path.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/svm.c | 6 +++---
 arch/x86/kvm/vmx/vmx.c | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f5f3301d2a01..c32576c951ce 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4092,6 +4092,9 @@ static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
 
 static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
+	if (is_guest_mode(vcpu))
+		return EXIT_FASTPATH_NONE;
+
 	if (to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
 	    to_svm(vcpu)->vmcb->control.exit_info_1)
 		return handle_fastpath_set_msr_irqoff(vcpu);
@@ -4238,9 +4241,6 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
 
 	svm_complete_interrupts(vcpu);
 
-	if (is_guest_mode(vcpu))
-		return EXIT_FASTPATH_NONE;
-
 	return svm_exit_handlers_fastpath(vcpu);
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c4a10d46d7a8..a602c5b52c64 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7217,6 +7217,9 @@ void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
 
 static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
+	if (is_guest_mode(vcpu))
+		return EXIT_FASTPATH_NONE;
+
 	switch (to_vmx(vcpu)->exit_reason.basic) {
 	case EXIT_REASON_MSR_WRITE:
 		return handle_fastpath_set_msr_irqoff(vcpu);
@@ -7428,9 +7431,6 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 	vmx_recover_nmi_blocking(vmx);
 	vmx_complete_interrupts(vmx);
 
-	if (is_guest_mode(vcpu))
-		return EXIT_FASTPATH_NONE;
-
 	return vmx_exit_handlers_fastpath(vcpu);
 }

From patchwork Wed Jan 10 01:27:04 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515617
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 17:27:04 -0800
In-Reply-To: <20240110012705.506918-1-seanjc@google.com>
References: <20240110012705.506918-1-seanjc@google.com>
Message-ID: <20240110012705.506918-6-seanjc@google.com>
Subject: [PATCH 5/6] KVM: VMX: Handle KVM-induced preemption timer exits in fastpath for L2
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky

Eat VMX preemption timer exits in the fastpath regardless of whether L1
or L2 is active.  The VM-Exit is 100% KVM-induced, i.e. there is nothing
directly related to the exit that KVM needs to do on behalf of the guest,
thus there is no reason to wait until the slow path to do nothing.

Opportunistically add comments explaining why preemption timer exits for
emulating the guest's APIC timer need to go down the slow path.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 22 ++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a602c5b52c64..14658e794fbd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6010,13 +6010,26 @@ static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
 	if (vmx->req_immediate_exit)
 		return EXIT_FASTPATH_EXIT_HANDLED;
 
+	/*
+	 * If L2 is active, go down the slow path as emulating the guest timer
+	 * expiration likely requires synthesizing a nested VM-Exit.
+	 */
+	if (is_guest_mode(vcpu))
+		return EXIT_FASTPATH_NONE;
+
 	kvm_lapic_expired_hv_timer(vcpu);
 	return EXIT_FASTPATH_REENTER_GUEST;
 }
 
 static int handle_preemption_timer(struct kvm_vcpu *vcpu)
 {
-	handle_fastpath_preemption_timer(vcpu);
+	/*
+	 * This non-fastpath handler is reached if and only if the preemption
+	 * timer was being used to emulate a guest timer while L2 is active.
+	 * All other scenarios are supposed to be handled in the fastpath.
+	 */
+	WARN_ON_ONCE(!is_guest_mode(vcpu));
+	kvm_lapic_expired_hv_timer(vcpu);
 	return 1;
 }
 
@@ -7217,7 +7230,12 @@ void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
 
 static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
-	if (is_guest_mode(vcpu))
+	/*
+	 * If L2 is active, some VMX preemption timer exits can still be
+	 * handled in the fastpath; all other exits must use the slow path.
+	 */
+	if (is_guest_mode(vcpu) &&
+	    to_vmx(vcpu)->exit_reason.basic != EXIT_REASON_PREEMPTION_TIMER)
 		return EXIT_FASTPATH_NONE;
 
 	switch (to_vmx(vcpu)->exit_reason.basic) {

From patchwork Wed Jan 10 01:27:05 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515618
:date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=8AeuQcWkRA9pgk7FbubWOP5IvjNtvpXOvAA15pmUTDA=; b=uchGf8gOE94TSgUUGVDvtHCV1bpYy3FzHIdQESzBxa+vcHy7HmVYwtDIXOZJ7DPsGp 8zfjPO152Z7o10LPjrJd9KuG57oSPQojnVBJLg9I+x+OIVjTN4ViRQa1xw2j1rG5R8Xi UpmcebE4B1gQ5NTSOMYjWJKDBLcvn5q2JGBrTgag6edW6oUWpnP8sLHxPS4e3R4VYuPd xBVhGVHSpzt6oEqZDTfb8yP3a3yDFXNSX3VCLc2u0Bz680kG0IF7izIvOQG/3fZm8NKD wxy0CRPQc5G9pYZm/hES3aeXxlo1ChhXantAtAd0AnDdNu1PrIuFVI4Brks2Uf3dCgb9 4zXQ== X-Gm-Message-State: AOJu0Yz4+XZgimNFVPA6QWnoOD462WBSRxv2TnA0VK0OemW6TlIrAMFd IsRf3bVwIWdom7DkEhVOgTMRv6nqaY6whCrdUg== X-Google-Smtp-Source: AGHT+IHgO4ytaqGpYEsoy7PVB0S3Folb+A80SPixRATRFBToiYI2fORWMXjnEFKVQKW+SfkEoLQb8Z5/wKc= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:690c:884:b0:5f4:f576:6441 with SMTP id cd4-20020a05690c088400b005f4f5766441mr189750ywb.0.1704850039968; Tue, 09 Jan 2024 17:27:19 -0800 (PST) Reply-To: Sean Christopherson Date: Tue, 9 Jan 2024 17:27:05 -0800 In-Reply-To: <20240110012705.506918-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240110012705.506918-1-seanjc@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20240110012705.506918-7-seanjc@google.com> Subject: [PATCH 6/6] KVM: x86: Fully defer to vendor code to decide how to force immediate exit From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Maxim Levitsky Now that vmx->req_immediate_exit is used only in the scope of vmx_vcpu_run(), use force_immediate_exit to detect that KVM should usurp the VMX preemption to force a VM-Exit and let vendor code fully handle forcing a VM-Exit. Opportunsitically drop __kvm_request_immediate_exit() and just have vendor code call smp_send_reschedule() directly. 
SVM already does this when injecting an event while also trying to
single-step an IRET, i.e. it's not exactly secret knowledge that KVM
uses a reschedule IPI to force an exit.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 -
 arch/x86/include/asm/kvm_host.h    |  3 ---
 arch/x86/kvm/svm/svm.c             |  7 ++++---
 arch/x86/kvm/vmx/vmx.c             | 32 +++++++++++++-----------------
 arch/x86/kvm/vmx/vmx.h             |  2 --
 arch/x86/kvm/x86.c                 | 10 +---------
 6 files changed, 19 insertions(+), 36 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 378ed944b849..3942b74c1b75 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -103,7 +103,6 @@ KVM_X86_OP(write_tsc_multiplier)
 KVM_X86_OP(get_exit_info)
 KVM_X86_OP(check_intercept)
 KVM_X86_OP(handle_exit_irqoff)
-KVM_X86_OP(request_immediate_exit)
 KVM_X86_OP(sched_in)
 KVM_X86_OP_OPTIONAL(update_cpu_dirty_logging)
 KVM_X86_OP_OPTIONAL(vcpu_blocking)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9c90664ef9fb..c48dfc142438 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1732,8 +1732,6 @@ struct kvm_x86_ops {
 			       struct x86_exception *exception);
 	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu);
 
-	void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
-
 	void (*sched_in)(struct kvm_vcpu *vcpu, int cpu);
 
 	/*
@@ -2239,7 +2237,6 @@ extern bool kvm_find_async_pf_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu);
 int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err);
-void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu);
 
 void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 				     u32 size);

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c32576c951ce..eabadbb4ffa3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4143,8 +4143,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
 		 * is enough to force an immediate vmexit.
 		 */
 		disable_nmi_singlestep(svm);
+		force_immediate_exit = true;
+	}
+
+	if (force_immediate_exit)
 		smp_send_reschedule(vcpu->cpu);
-	}
 
 	pre_svm_run(vcpu);
@@ -4998,8 +5001,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.check_intercept = svm_check_intercept,
 	.handle_exit_irqoff = svm_handle_exit_irqoff,
 
-	.request_immediate_exit = __kvm_request_immediate_exit,
-
 	.sched_in = svm_sched_in,
 
 	.nested_ops = &svm_nested_ops,

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 14658e794fbd..603412e0add3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -49,6 +49,8 @@
 #include 
 #include 
 
+#include 
+
 #include "capabilities.h"
 #include "cpuid.h"
 #include "hyperv.h"
@@ -1281,8 +1283,6 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	u16 fs_sel, gs_sel;
 	int i;
 
-	vmx->req_immediate_exit = false;
-
 	/*
 	 * Note that guest MSRs to be saved/restored can also be changed
 	 * when guest state is loaded. This happens when guest transitions
@@ -5991,7 +5991,8 @@ static int handle_pml_full(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
+static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu,
+						   bool force_immediate_exit)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -6007,7 +6008,7 @@ static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
 	 * If the timer expired because KVM used it to force an immediate exit,
 	 * then mission accomplished.
 	 */
-	if (vmx->req_immediate_exit)
+	if (force_immediate_exit)
 		return EXIT_FASTPATH_EXIT_HANDLED;
 
 	/*
@@ -7169,13 +7170,13 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
 					msrs[i].host, false);
 }
 
-static void vmx_update_hv_timer(struct kvm_vcpu *vcpu)
+static void vmx_update_hv_timer(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	u64 tscl;
 	u32 delta_tsc;
 
-	if (vmx->req_immediate_exit) {
+	if (force_immediate_exit) {
 		vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, 0);
 		vmx->loaded_vmcs->hv_timer_soft_disabled = false;
 	} else if (vmx->hv_deadline_tsc != -1) {
@@ -7228,7 +7229,8 @@ void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
 	barrier_nospec();
 }
 
-static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu,
+					     bool force_immediate_exit)
 {
 	/*
 	 * If L2 is active, some VMX preemption timer exits can be handled in
@@ -7242,7 +7244,7 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 	case EXIT_REASON_MSR_WRITE:
 		return handle_fastpath_set_msr_irqoff(vcpu);
 	case EXIT_REASON_PREEMPTION_TIMER:
-		return handle_fastpath_preemption_timer(vcpu);
+		return handle_fastpath_preemption_timer(vcpu, force_immediate_exit);
 	default:
 		return EXIT_FASTPATH_NONE;
 	}
@@ -7385,7 +7387,9 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 	vmx_passthrough_lbr_msrs(vcpu);
 
 	if (enable_preemption_timer)
-		vmx_update_hv_timer(vcpu);
+		vmx_update_hv_timer(vcpu, force_immediate_exit);
+	else if (force_immediate_exit)
+		smp_send_reschedule(vcpu->cpu);
 
 	kvm_wait_lapic_expire(vcpu);
 
@@ -7449,7 +7453,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 	vmx_recover_nmi_blocking(vmx);
 	vmx_complete_interrupts(vmx);
 
-	return vmx_exit_handlers_fastpath(vcpu);
+	return vmx_exit_handlers_fastpath(vcpu, force_immediate_exit);
 }
 
 static void vmx_vcpu_free(struct kvm_vcpu *vcpu)
@@ -7929,11 +7933,6 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
 }
 
-static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
-{
-	to_vmx(vcpu)->req_immediate_exit = true;
-}
-
 static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
 				  struct x86_instruction_info *info)
 {
@@ -8386,8 +8385,6 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
 
-	.request_immediate_exit = vmx_request_immediate_exit,
-
 	.sched_in = vmx_sched_in,
 
 	.cpu_dirty_log_size = PML_ENTITY_NUM,
@@ -8647,7 +8644,6 @@ static __init int hardware_setup(void)
 	if (!enable_preemption_timer) {
 		vmx_x86_ops.set_hv_timer = NULL;
 		vmx_x86_ops.cancel_hv_timer = NULL;
-		vmx_x86_ops.request_immediate_exit = __kvm_request_immediate_exit;
 	}
 
 	kvm_caps.supported_mce_cap |= MCG_LMCE_P;

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index e3b0985bb74a..65786dbe7d60 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -332,8 +332,6 @@ struct vcpu_vmx {
 	unsigned int ple_window;
 	bool ple_window_dirty;
 
-	bool req_immediate_exit;
-
 	/* Support for PML */
 #define PML_ENTITY_NUM 512
 	struct page *pml_pg;

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e4523ca3dedf..fb9f9029ccbf 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10673,12 +10673,6 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu);
 }
 
-void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
-{
-	smp_send_reschedule(vcpu->cpu);
-}
-EXPORT_SYMBOL_GPL(__kvm_request_immediate_exit);
-
 /*
  * Called within kvm->srcu read side.
  * Returns 1 to let vcpu_run() continue the guest execution loop without
@@ -10928,10 +10922,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		goto cancel_injection;
 	}
 
-	if (req_immediate_exit) {
+	if (req_immediate_exit)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		static_call(kvm_x86_request_immediate_exit)(vcpu);
-	}
 
 	fpregs_assert_state_consistent();
 	if (test_thread_flag(TIF_NEED_FPU_LOAD))