Message ID | 20250114175143.81438-23-vschneid@redhat.com (mailing list archive)
---|---
State | New
Series | context_tracking,x86: Defer some IPIs until a user->kernel transition
On Tue, Jan 14, 2025 at 06:51:35PM +0100, Valentin Schneider wrote:
> ct_nmi_{enter, exit}() only touches the RCU watching counter and doesn't
> modify the actual CT state part of context_tracking.state. This means that
> upon receiving an IRQ when idle, the CT_STATE_IDLE->CT_STATE_KERNEL
> transition only happens in ct_idle_exit().
>
> One can note that ct_nmi_enter() can only ever be entered with the CT state
> as either CT_STATE_KERNEL or CT_STATE_IDLE, as an IRQ/NMI happening in the
> CT_STATE_USER or CT_STATE_GUEST states will be routed down to ct_user_exit().

Are you sure? An NMI can fire between guest_state_enter_irqoff() and
__svm_vcpu_run().

And NMIs interrupting userspace don't call enter_from_user_mode(). In fact
they don't call irqentry_enter_from_user_mode() like regular IRQs, but
irqentry_nmi_enter() instead. Well, that's for archs implementing the common
entry code; I can't speak for the others.

Unifying the behaviour between user and idle such that IRQs/NMIs exit the
CT_STATE can be interesting, but I fear this may not come for free: you would
need to save the old state on IRQ/NMI entry and restore it on exit.

Do we really need it?

Thanks.
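[Editor's note: a minimal sketch of the save/restore approach Frederic alludes to, for illustration only. ct_saved_state, ct_state_get() and ct_state_set() are invented names, not existing kernel API; the real context tracking state lives in context_tracking.state and is manipulated with atomic arithmetic, not plain assignments.]

```c
#include <linux/percpu.h>

/* Hypothetical per-CPU slot holding the state the IRQ/NMI interrupted. */
static DEFINE_PER_CPU(int, ct_saved_state);

static void ct_irq_enter_save(void)
{
	/* Remember what we interrupted (KERNEL/IDLE/USER/GUEST)... */
	__this_cpu_write(ct_saved_state, ct_state_get());
	/* ...and account the interrupted CPU as running in the kernel. */
	ct_state_set(CT_STATE_KERNEL);
}

static void ct_irq_exit_restore(void)
{
	/* Put back whatever state the IRQ/NMI interrupted. */
	ct_state_set(__this_cpu_read(ct_saved_state));
}
```

Note that NMIs can nest on top of IRQs, so a single per-CPU slot like this wouldn't actually suffice; the old state would have to be kept per nesting level, which is part of why this "may not come for free".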
On Wed, Jan 22, 2025, Frederic Weisbecker wrote:
> On Tue, Jan 14, 2025 at 06:51:35PM +0100, Valentin Schneider wrote:
> > ct_nmi_{enter, exit}() only touches the RCU watching counter and doesn't
> > modify the actual CT state part of context_tracking.state. This means that
> > upon receiving an IRQ when idle, the CT_STATE_IDLE->CT_STATE_KERNEL
> > transition only happens in ct_idle_exit().
> >
> > One can note that ct_nmi_enter() can only ever be entered with the CT state
> > as either CT_STATE_KERNEL or CT_STATE_IDLE, as an IRQ/NMI happening in the
> > CT_STATE_USER or CT_STATE_GUEST states will be routed down to ct_user_exit().
>
> Are you sure? An NMI can fire between guest_state_enter_irqoff() and
> __svm_vcpu_run().

Heh, technically, they can't. On SVM, KVM clears GIF prior to
svm_vcpu_enter_exit(), and restores GIF=1 only after it returns. I.e. NMIs
are fully blocked _on SVM_.

VMX unfortunately doesn't provide GIF, and so NMIs can arrive at any time;
it's infeasible for software to prevent them, so we're stuck with that.
[In theory, KVM could deliberately generate an NMI and not do IRET so that
NMIs are blocked, but that would be beyond crazy.]

> And NMIs interrupting userspace don't call enter_from_user_mode(). In fact
> they don't call irqentry_enter_from_user_mode() like regular IRQs, but
> irqentry_nmi_enter() instead. Well, that's for archs implementing the common
> entry code; I can't speak for the others.
>
> Unifying the behaviour between user and idle such that IRQs/NMIs exit the
> CT_STATE can be interesting, but I fear this may not come for free: you would
> need to save the old state on IRQ/NMI entry and restore it on exit.
>
> Do we really need it?
>
> Thanks.
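[Editor's note: for readers without the KVM sources at hand, a condensed sketch of the SVM run path Sean is referring to, modeled loosely on arch/x86/kvm/svm/svm.c. Names, arguments, and structure are simplified; this is not the verbatim kernel code.]

```c
/*
 * Simplified sketch of KVM's SVM entry/exit. CLGI clears the Global
 * Interrupt Flag, so host NMIs (and IRQs) are blocked across the whole
 * VMRUN sequence, until STGI sets GIF again.
 */
static noinstr void svm_vcpu_enter_exit_sketch(struct vcpu_svm *svm,
					       bool spec_ctrl_intercepted)
{
	guest_state_enter_irqoff();
	__svm_vcpu_run(svm, spec_ctrl_intercepted);	/* VMRUN */
	guest_state_exit_irqoff();
}

static void svm_vcpu_run_sketch(struct vcpu_svm *svm)
{
	clgi();		/* GIF=0: no host NMI can fire from here on... */
	svm_vcpu_enter_exit_sketch(svm, false);
	stgi();		/* ...until here, where GIF=1 unblocks them again */
}
```

This is why an NMI cannot land between guest_state_enter_irqoff() and __svm_vcpu_run() on SVM, whereas VMX has no GIF equivalent and offers no such guarantee.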
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index a61498a8425e2..15f10ddec8cbe 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -236,7 +236,9 @@ void noinstr ct_nmi_exit(void)
 	instrumentation_end();
 
 	// RCU is watching here ...
-	ct_kernel_exit_state(CT_RCU_WATCHING);
+	ct_kernel_exit_state(CT_RCU_WATCHING -
+			     CT_STATE_KERNEL +
+			     CT_STATE_IDLE);
 	// ... but is no longer watching here.
 
 	if (!in_nmi())
@@ -259,6 +261,7 @@ void noinstr ct_nmi_enter(void)
 {
 	long incby = 2;
 	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
+	int curr_state;
 
 	/* Complain about underflow. */
 	WARN_ON_ONCE(ct_nmi_nesting() < 0);
@@ -271,13 +274,26 @@ void noinstr ct_nmi_enter(void)
 	 * to be in the outermost NMI handler that interrupted an RCU-idle
 	 * period (observation due to Andy Lutomirski).
 	 */
-	if (!rcu_is_watching_curr_cpu()) {
+	curr_state = raw_atomic_read(this_cpu_ptr(&context_tracking.state));
+	if (!(curr_state & CT_RCU_WATCHING)) {
 
 		if (!in_nmi())
 			rcu_task_enter();
 
+		/*
+		 * RCU isn't watching, so we're one of
+		 * CT_STATE_IDLE
+		 * CT_STATE_USER
+		 * CT_STATE_GUEST
+		 * guest/user entry is handled by ct_user_enter(), so this has
+		 * to be idle entry.
+		 */
+		WARN_ON_ONCE((curr_state & CT_STATE_MASK) != CT_STATE_IDLE);
+
 		// RCU is not watching here ...
-		ct_kernel_enter_state(CT_RCU_WATCHING);
+		ct_kernel_enter_state(CT_RCU_WATCHING +
+				      CT_STATE_KERNEL -
+				      CT_STATE_IDLE);
 		// ... but is watching here.
 
 		instrumentation_begin();
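[Editor's note: the arithmetic above works because context_tracking.state packs the CT_STATE_* value into the low bits and the RCU watching counter above them, so one atomic addition flips the state and bumps the counter together. Below is a minimal user-space demo of that encoding idea; the bit positions and mask values are illustrative, not the kernel's actual layout.]

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative layout only: state in the low bits, watching counter above. */
enum { CT_STATE_KERNEL = 0, CT_STATE_IDLE = 1, CT_STATE_USER = 2, CT_STATE_GUEST = 3 };
#define CT_STATE_MASK	0x3
#define CT_RCU_WATCHING	0x4	/* lowest bit of the watching counter */

int main(void)
{
	int state = CT_STATE_IDLE;	/* idle, RCU not watching */

	/* ct_nmi_enter(): one addition does IDLE->KERNEL and sets WATCHING */
	state += CT_RCU_WATCHING + CT_STATE_KERNEL - CT_STATE_IDLE;
	assert((state & CT_STATE_MASK) == CT_STATE_KERNEL);
	assert(state & CT_RCU_WATCHING);

	/* ct_nmi_exit(): the mirror-image addition undoes both at once
	 * (the counter advances again, clearing its low "watching" bit) */
	state += CT_RCU_WATCHING - CT_STATE_KERNEL + CT_STATE_IDLE;
	assert((state & CT_STATE_MASK) == CT_STATE_IDLE);
	assert(!(state & CT_RCU_WATCHING));

	printf("round trip OK\n");
	return 0;
}
```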
ct_nmi_{enter, exit}() only touches the RCU watching counter and doesn't
modify the actual CT state part of context_tracking.state. This means that
upon receiving an IRQ when idle, the CT_STATE_IDLE->CT_STATE_KERNEL
transition only happens in ct_idle_exit().

One can note that ct_nmi_enter() can only ever be entered with the CT state
as either CT_STATE_KERNEL or CT_STATE_IDLE, as an IRQ/NMI happening in the
CT_STATE_USER or CT_STATE_GUEST states will be routed down to ct_user_exit().

Add/remove CT_STATE_IDLE from the context tracking state as needed in
ct_nmi_{enter, exit}().

Note that this leaves the following window where the CPU is executing code
in kernelspace, but the context tracking state is CT_STATE_IDLE:

  ~> IRQ
  ct_nmi_enter()
    state = state + CT_STATE_KERNEL - CT_STATE_IDLE

  [...]

  ct_nmi_exit()
    state = state - CT_STATE_KERNEL + CT_STATE_IDLE

  [...] /!\ CT_STATE_IDLE here while we're really in kernelspace! /!\

  ct_cpuidle_exit()
    state = state + CT_STATE_KERNEL - CT_STATE_IDLE

Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
 kernel/context_tracking.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)