[RFC,00/14] context_tracking,x86: Defer some IPIs until a user->kernel transition

Message ID: 20230705181256.3539027-1-vschneid@redhat.com

Valentin Schneider July 5, 2023, 6:12 p.m. UTC
Context
=======

We've observed within Red Hat that isolated, NOHZ_FULL CPUs running a
pure-userspace application get regularly interrupted by IPIs sent from
housekeeping CPUs. Those IPIs are caused by activity on the housekeeping CPUs
leading to various on_each_cpu() calls, e.g.:

  64359.052209596    NetworkManager       0    1405     smp_call_function_many_cond (cpu=0, func=do_kernel_range_flush)
    smp_call_function_many_cond+0x1
    smp_call_function+0x39
    on_each_cpu+0x2a
    flush_tlb_kernel_range+0x7b
    __purge_vmap_area_lazy+0x70
    _vm_unmap_aliases.part.42+0xdf
    change_page_attr_set_clr+0x16a
    set_memory_ro+0x26
    bpf_int_jit_compile+0x2f9
    bpf_prog_select_runtime+0xc6
    bpf_prepare_filter+0x523
    sk_attach_filter+0x13
    sock_setsockopt+0x92c
    __sys_setsockopt+0x16a
    __x64_sys_setsockopt+0x20
    do_syscall_64+0x87
    entry_SYSCALL_64_after_hwframe+0x65

The heart of this series is the observation that while we cannot remove
NOHZ_FULL CPUs from the list of CPUs targeted by these IPIs, they may not have
to execute the callbacks immediately. Anything that only affects kernelspace
can wait until the next user->kernel transition, provided it can be executed
"early enough" in the entry code.

The original implementation is from Peter [1]. Nicolas then added kernel TLB
invalidation deferral to that [2], and I picked it up from there.

Deferral approach
=================

Storing each and every callback, like a secondary call_single_queue, turned
out to be a no-go: the whole point of deferral is to keep NOHZ_FULL CPUs in
userspace for as long as possible, meaning no signal of any form is sent when
deferring an IPI. Without that signal, any form of queuing for deferred
callbacks would end up as a convoluted memory leak.

Deferred IPIs must thus be coalesced, which this series achieves by assigning
IPIs a "type" and having a mapping of IPI type to callback, leveraged upon
kernel entry.
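
To illustrate the scheme, here is a minimal sketch with purely illustrative
names (the actual infrastructure lands in patch 11; it additionally has to
atomically check the remote CPU's context_tracking state and fall back to a
real IPI when that CPU is executing in the kernel, which this sketch omits):

  enum ct_work {
          CT_WORK_SYNC_CORE,
          CT_WORK_TLB_FLUSH,
          CT_WORK_MAX,
  };

  /* One argument-less callback per deferrable IPI type */
  static void (*ct_work_fns[CT_WORK_MAX])(void) = {
          [CT_WORK_SYNC_CORE] = do_sync_core_local,     /* hypothetical */
          [CT_WORK_TLB_FLUSH] = do_flush_tlb_all_local, /* hypothetical */
  };

  static DEFINE_PER_CPU(atomic_t, ct_work_pending);

  /* Sender side: no IPI, no queue - duplicate requests merge into one bit */
  static void ct_defer_work(int cpu, enum ct_work work)
  {
          atomic_or(BIT(work), per_cpu_ptr(&ct_work_pending, cpu));
  }

  /* Target side: runs early in the user->kernel transition */
  static void ct_flush_work(void)
  {
          unsigned long work = atomic_xchg(this_cpu_ptr(&ct_work_pending), 0);
          int bit;

          for_each_set_bit(bit, &work, CT_WORK_MAX)
                  ct_work_fns[bit]();
  }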

What about IPIs whose callback takes a parameter, you may ask?

Peter suggested during OSPM23 [3] that since on_each_cpu() targets
housekeeping CPUs *and* isolated CPUs, isolated CPUs can access either global or
housekeeping-CPU-local state to "reconstruct" the data that would have been sent
via the IPI.

This series does not affect any IPI callback that requires an argument, but the
approach would remain the same (one coalescable callback executed on kernel
entry).
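
As a concrete (and again purely illustrative) example of such a
reconstruction, reusing the hypothetical helpers from the sketch above: a
deferred flush_tlb_kernel_range(start, end) cannot carry its range across the
deferral, so isolated CPUs get conservatively over-flushed with a full kernel
TLB flush on their next kernel entry - roughly the trade patches 13-14 make
for the vunmap() case:

  void flush_tlb_kernel_range(unsigned long start, unsigned long end)
  {
          struct flush_tlb_info *info;
          int cpu;

          preempt_disable();
          info = get_flush_tlb_info(NULL, start, end, 0, false,
                                    TLB_GENERATION_INVALID);

          /* Housekeeping CPUs: ranged flush via a regular IPI */
          on_each_cpu_mask(housekeeping_cpumask(HK_TYPE_TICK),
                           do_kernel_range_flush, info, true);

          /*
           * Isolated CPUs: the range argument is dropped; they will run a
           * full flush_tlb_all() on their next kernel entry instead.
           */
          for_each_cpu_andnot(cpu, cpu_online_mask,
                              housekeeping_cpumask(HK_TYPE_TICK))
                  ct_defer_work(cpu, CT_WORK_TLB_FLUSH);

          put_flush_tlb_info();
          preempt_enable();
  }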

Kernel entry vs execution of the deferred operation
===================================================

There is a non-zero amount of code executed upon kernel entry before the
deferred operation itself can be executed (i.e. before we get into
context_tracking.c proper).

This means one must take extra care about what can happen in the early entry
code, and ensure that <bad things> cannot happen. For instance, we really
don't want to hit instructions that have been modified by a remote
text_poke() while we're on our way to execute a deferred sync_core().
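
To make the hazard concrete, this is the kind of pattern the objtool patches
below are meant to flag (illustrative code; the key and do_thing() are
hypothetical, ct_flush_work() is the flush helper sketched earlier):

  static DEFINE_STATIC_KEY_FALSE(some_key);

  noinstr void early_entry_work(void)
  {
          /*
           * BAD: if some_key was flipped while this CPU sat in userspace,
           * the branch below has been rewritten by text_poke(), and we
           * would execute the patched site *before* the deferred
           * sync_core() run by ct_flush_work() has serialized this CPU
           * against the modification.
           */
          if (static_branch_unlikely(&some_key))
                  do_thing();

          ct_flush_work();   /* deferred callbacks, incl. sync_core() */
  }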

Patches
=======

o Patches 1-5 have been submitted previously and are included for the sake of
  testing

o Patches 6-10 focus on having objtool detect problematic static key usage in
  early entry (a sketch of the resulting fix follows the patch list)

  The context_tracking_key one causes a page_fault_oops() on my RHEL userspace
  due to the KVM module, cf. changelog.
  
o Patch 11 adds the infrastructure for IPI deferral.  
o Patch 12 adds text_poke() IPI deferral.

  This one I'm fairly confident about; we "just" need to do something about
  the __ro_after_init key vs module loading issue.

o Patches 13-14 add IPI deferral for the flush_tlb_kernel_range() calls issued
  by vunmap()

  These ones I'm a lot less confident about, mostly due to lacking
  instrumentation/verification.
  
  The actual deferred callback is also incomplete as it's not properly noinstr:
    vmlinux.o: warning: objtool: __flush_tlb_all_noinstr+0x19: call to native_write_cr4() leaves .noinstr.text section
  and it doesn't support PARAVIRT - it's going to need a pv_ops.mmu entry, but I
  have *no idea* what a sane implementation would be for Xen so I haven't
  touched that yet.
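
As referenced above, the static key fix sketched by patches 8-10 is a
one-liner per key (a sketch of the shape, not the literal patches): once a key
is __ro_after_init it can no longer be flipped after boot, hence the branches
it controls can no longer be text_poke()d behind a deferred sync_core(). The
catch, per the page_fault_oops() note above, is keys that still get flipped
after init, e.g. on module loading.

  -DEFINE_STATIC_KEY_FALSE(context_tracking_key);
  +DEFINE_STATIC_KEY_FALSE_RO(context_tracking_key);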

Patches are also available at:

https://gitlab.com/vschneid/linux.git -b redhat/isolirq/defer/v1

Testing
=======

Xeon Platinum 8380 system with SMToff, NOHZ_FULL, isolated CPUs.
RHEL9 userspace.

The workload is rteval (kernel compilation + hackbench) running on the
housekeeping CPUs and a dummy stay-in-userspace loop on the isolated CPUs. The
main invocation is:

$ trace-cmd record -e "ipi_send_cpumask" -f "cpumask & MASK{$ISOL_CPUS}" \
	           -e "ipi_send_cpu"     -f "cpu & MASK{$ISOL_CPUS}" \
		   rteval --onlyload --loads-cpulist=$HK_CPUS \
		   --hackbench-runlowmem=True --duration=$DURATION

This only records IPIs sent to isolated CPUs, so any event there is interference
(with a bit of fuzz at the start/end of the workload when spawning the
processes). All tests were done with a duration of 20 minutes.
		   
v6.4 (+ cpumask filtering patches):
$ trace-cmd report | grep callback | awk '{ print $NF }' | sort | uniq -c
    236 callback=do_flush_tlb_all+0x0
    576 callback=do_sync_core+0x0
    814 callback=generic_smp_call_function_single_interrupt+0x0
    309 callback=nohz_full_kick_func+0x0

v6.4 + patches:
$ trace-cmd report | grep callback | awk '{ print $NF }' | sort | uniq -c
     22 callback=do_flush_tlb_all+0x0
     24 callback=generic_smp_call_function_single_interrupt+0x0
    307 callback=nohz_full_kick_func+0x0

o IPIs from instruction patching are entirely gone.

o Some TLB flushes remain as I only patched the vunmap cases:

  kworker/2:0-13856 [002]  3517.445719: ipi_send_cpumask:     cpumask=0-1,3-79 callsite=on_each_cpu_cond_mask+0x20 callback=do_flush_tlb_all+0x0
  kworker/2:0-13856 [002]  3517.445722: kernel_stack:         <stack trace >
  => trace_event_raw_event_ipi_send_cpumask (ffffffffa974a050)
  => smp_call_function_many_cond (ffffffffa97fb0a7)
  => on_each_cpu_cond_mask (ffffffffa97fb1f0)
  => pcpu_reclaim_populated (ffffffffa996b451)
  => pcpu_balance_workfn (ffffffffa996c399)
  => process_one_work (ffffffffa9730e14)
  => worker_thread (ffffffffa9731440)
  => kthread (ffffffffa973984e)

o The nohz_full_kick_func() ones seem to come from dev_watchdog(), but they
  are in any case consistent across revisions

  <...>-3734  [042]   392.890491: ipi_send_cpu:         cpu=42 callsite=irq_work_queue_on+0x77 callback=nohz_full_kick_func+0x0
  <...>-3734  [042]   392.890497: kernel_stack:         <stack trace >
  => trace_event_raw_event_ipi_send_cpu (ffffffff901492d8)
  => __irq_work_queue_local (ffffffff902acb3d)
  => irq_work_queue_on (ffffffff902acc47)
  => __mod_timer (ffffffff901dcd81)
  => dev_watchdog (ffffffff90a75310)
  => call_timer_fn (ffffffff901dc174)
  => __run_timers.part.0 (ffffffff901dc47e)
  => run_timer_softirq (ffffffff901dc546)
  => __do_softirq (ffffffff90c52348)
  => __irq_exit_rcu (ffffffff90113329)
  => sysvec_apic_timer_interrupt (ffffffff90c3c895)
  => asm_sysvec_apic_timer_interrupt (ffffffff90e00d86)

Acknowledgements
================

Special thanks to:
o Clark Williams for listening to my ramblings about this and throwing ideas my way
o Josh Poimboeuf for his guidance regarding objtool and hinting at the
  .data..ro_after_init section.

Links
=====

[1]: https://lore.kernel.org/all/20210929151723.162004989@infradead.org/
[2]: https://github.com/vianpl/linux.git -b ct-work-defer-wip
[3]: https://youtu.be/0vjE6fjoVVE

Valentin Schneider (14):
  tracing/filters: Dynamically allocate filter_pred.regex
  tracing/filters: Enable filtering a cpumask field by another cpumask
  tracing/filters: Enable filtering a scalar field by a cpumask
  tracing/filters: Enable filtering the CPU common field by a cpumask
  tracing/filters: Document cpumask filtering
  objtool: Flesh out warning related to pv_ops[] calls
  objtool: Warn about non __ro_after_init static key usage in .noinstr
  BROKEN: context_tracking: Make context_tracking_key __ro_after_init
  x86/kvm: Make kvm_async_pf_enabled __ro_after_init
  x86/sev-es: Make sev_es_enable_key __ro_after_init
  context-tracking: Introduce work deferral infrastructure
  context_tracking,x86: Defer kernel text patching IPIs
  context_tracking,x86: Add infrastructure to defer kernel TLBI
  x86/mm, mm/vmalloc: Defer flush_tlb_kernel_range() targeting NOHZ_FULL
    CPUs

 Documentation/trace/events.rst               |  14 ++
 arch/Kconfig                                 |   9 +
 arch/x86/Kconfig                             |   1 +
 arch/x86/include/asm/context_tracking_work.h |  20 ++
 arch/x86/include/asm/text-patching.h         |   1 +
 arch/x86/include/asm/tlbflush.h              |   2 +
 arch/x86/kernel/alternative.c                |  24 +-
 arch/x86/kernel/kprobes/core.c               |   4 +-
 arch/x86/kernel/kprobes/opt.c                |   4 +-
 arch/x86/kernel/kvm.c                        |   2 +-
 arch/x86/kernel/module.c                     |   2 +-
 arch/x86/kernel/sev.c                        |   2 +-
 arch/x86/mm/tlb.c                            |  40 +++-
 include/linux/context_tracking.h             |   1 +
 include/linux/context_tracking_state.h       |   1 +
 include/linux/context_tracking_work.h        |  30 +++
 include/linux/trace_events.h                 |   1 +
 kernel/context_tracking.c                    |  65 +++++-
 kernel/time/Kconfig                          |   5 +
 kernel/trace/trace_events_filter.c           | 228 ++++++++++++++++---
 mm/vmalloc.c                                 |  15 +-
 tools/objtool/check.c                        |  22 +-
 tools/objtool/include/objtool/check.h        |   1 +
 tools/objtool/include/objtool/special.h      |   2 +
 tools/objtool/special.c                      |   3 +
 25 files changed, 451 insertions(+), 48 deletions(-)
 create mode 100644 arch/x86/include/asm/context_tracking_work.h
 create mode 100644 include/linux/context_tracking_work.h

--
2.31.1

Comments

Nadav Amit July 5, 2023, 6:48 p.m. UTC | #1
> On Jul 5, 2023, at 11:12 AM, Valentin Schneider <vschneid@redhat.com> wrote:
> 
> Deferral approach
> =================
> 
> Storing each and every callback, like a secondary call_single_queue, turned
> out to be a no-go: the whole point of deferral is to keep NOHZ_FULL CPUs in
> userspace for as long as possible, meaning no signal of any form is sent when
> deferring an IPI. Without that signal, any form of queuing for deferred
> callbacks would end up as a convoluted memory leak.
> 
> Deferred IPIs must thus be coalesced, which this series achieves by assigning
> IPIs a "type" and having a mapping of IPI type to callback, leveraged upon
> kernel entry.

I have some experience with a similar optimization. Overall, it can make
sense and, as you show, it can reduce the number of interrupts.

The main problem of such an approach might be in cases where a process
frequently enters and exits the kernel between deferred-IPIs, or even worse -
the IPI is sent while the remote CPU is inside the kernel. In such cases, you
pay the extra cost of synchronization and cache traffic, and might not even
get the benefit of reducing the number of IPIs.

In a sense, it's a more extreme case of the overhead that x86’s lazy-TLB
mechanism introduces while tracking whether a process is running or not. But
lazy-TLB would change is_lazy much less frequently than context tracking,
which means that deferring the IPIs as done in this patch-set has a
greater potential to hurt performance than lazy-TLB.

tl;dr - it would be beneficial to show some performance numbers for both a
“good” case where a process spends most of the time in userspace, and a “bad”
one where a process enters and exits the kernel very frequently. Reducing
the number of IPIs is good but I don’t think it is a goal on its own.

[ BTW: I did not go over the patches in detail. Obviously, there are
  various delicate points that need to be checked, such as avoiding the
  deferral of IPIs when page-tables are freed. ]
Steven Rostedt July 5, 2023, 7:03 p.m. UTC | #2
On Wed,  5 Jul 2023 19:12:42 +0100
Valentin Schneider <vschneid@redhat.com> wrote:

> o Patches 1-5 have been submitted previously and are included for the sake of
>   testing

I should have commented on the previous set, but I did my review on this set ;-)

Anyway, I'm all for the patches. Care to send a new version covering my input?

Thanks,

-- Steve
Valentin Schneider July 6, 2023, 11:29 a.m. UTC | #3
On 05/07/23 18:48, Nadav Amit wrote:
>> On Jul 5, 2023, at 11:12 AM, Valentin Schneider <vschneid@redhat.com> wrote:
>>
>> Deferral approach
>> =================
>>
>> Storing each and every callback, like a secondary call_single_queue, turned
>> out to be a no-go: the whole point of deferral is to keep NOHZ_FULL CPUs in
>> userspace for as long as possible, meaning no signal of any form is sent when
>> deferring an IPI. Without that signal, any form of queuing for deferred
>> callbacks would end up as a convoluted memory leak.
>>
>> Deferred IPIs must thus be coalesced, which this series achieves by assigning
>> IPIs a "type" and having a mapping of IPI type to callback, leveraged upon
>> kernel entry.
>
> I have some experience with a similar optimization. Overall, it can make
> sense and, as you show, it can reduce the number of interrupts.
>
> The main problem of such an approach might be in cases where a process
> frequently enters and exits the kernel between deferred-IPIs, or even worse -
> the IPI is sent while the remote CPU is inside the kernel. In such cases, you
> pay the extra cost of synchronization and cache traffic, and might not even
> get the benefit of reducing the number of IPIs.
>
> In a sense, it's a more extreme case of the overhead that x86’s lazy-TLB
> mechanism introduces while tracking whether a process is running or not. But
> lazy-TLB would change is_lazy much less frequently than context tracking,
> which means that deferring the IPIs as done in this patch-set has a
> greater potential to hurt performance than lazy-TLB.
>
> tl;dr - it would be beneficial to show some performance numbers for both a
> “good” case where a process spends most of the time in userspace, and a “bad”
> one where a process enters and exits the kernel very frequently. Reducing
> the number of IPIs is good but I don’t think it is a goal on its own.
>

There already is a significant overhead incurred on kernel entry for
nohz_full CPUs due to all of the context_tracking faff; now I *am* making it
worse with that extra atomic, but I get the feeling it's not going to stay
:D

nohz_full CPUs that do context transitions very frequently are
unfortunately in the realm of "you shouldn't do that". Due to what's out
there I have to care about *occasional* transitions, but some folks
consider even that to be broken usage, so I don't believe getting numbers
for that case would be very relevant.

> [ BTW: I did not go over the patches in detail. Obviously, there are
>   various delicate points that need to be checked, such as avoiding the
>   deferral of IPIs when page-tables are freed. ]
Valentin Schneider July 6, 2023, 11:30 a.m. UTC | #4
On 05/07/23 15:03, Steven Rostedt wrote:
> On Wed,  5 Jul 2023 19:12:42 +0100
> Valentin Schneider <vschneid@redhat.com> wrote:
>
>> o Patches 1-5 have been submitted previously and are included for the sake of
>>   testing
>
> I should have commented on the previous set, but I did my review on this set ;-)
>

Thanks for having a look!

> Anyway, I'm all for the patches. Care to send a new version covering my input?
>

Sure thing, I'll send a v2 of these patches soonish.

> Thanks,
>
> -- Steve