
[v5,00/7] Drain remote per-cpu directly

Message ID 20220624125423.6126-1-mgorman@techsingularity.net (mailing list archive)

Message

Mel Gorman June 24, 2022, 12:54 p.m. UTC
This replaces the existing version on mm-unstable. While there are
some fixes, this is mostly a refactoring of patch 5 based on Vlastimil's
feedback to reduce churn in the later patches. The level of refactoring
would have made incremental -fix patches excessively complicated.

Changelog since v4
o Fix lockdep issues in patch 7
o Refactor patch 5 to reduce churn in patches 6 and 7
o Rebase to 5.19-rc3

Some setups, notably NOHZ_FULL CPUs, may be running realtime or
latency-sensitive applications that cannot tolerate interference due to
per-cpu drain work queued by __drain_all_pages().  Introduce a new
mechanism to remotely drain the per-cpu lists. It is made possible by
remotely locking the new per-cpu spinlocks in 'struct per_cpu_pages'.
This has two advantages: the time to drain is more predictable, and
other unrelated tasks are not interrupted.

This series has the same intent as Nicolas' series "mm/page_alloc: Remote
per-cpu lists drain support" -- avoiding interference with a high-priority
task due to a workqueue item draining per-cpu page lists.  While many
workloads can tolerate a brief interruption, it may cause a real-time task
running on a NOHZ_FULL CPU to miss a deadline and, at minimum, the
draining is non-deterministic.

Currently an IRQ-safe local_lock protects the page allocator per-cpu
lists. The local_lock on its own prevents migration and the IRQ disabling
protects from corruption due to an interrupt arriving while a page
allocation is in progress.

This series adjusts the locking.  A spinlock is added to struct
per_cpu_pages to protect the list contents, while local_lock_irq is
ultimately replaced by just the spinlock in the final patch.  This allows
a remote CPU to safely drain another CPU's per-cpu lists.  Follow-on work
should allow the spin_lock_irqsave to be converted to spin_lock to avoid
IRQs being disabled/enabled in most cases.  The follow-on patch will come
one kernel release later as it is relatively high risk, and keeping it
separate will make bisection clearer if there are any problems.

Patch 1 is a cosmetic patch to clarify when page->lru is storing buddy pages
	and when it is storing per-cpu pages.

Patch 2 shrinks per_cpu_pages to make room for a spin lock. Strictly speaking
	this is not necessary but it avoids per_cpu_pages consuming another
	cache line.

Patch 3 is a preparation patch to avoid code duplication.

Patch 4 is a minor correction.

Patch 5 uses a spin_lock to protect the per_cpu_pages contents while still
	relying on local_lock to prevent migration, stabilise the pcp
	lookup and prevent IRQ reentrancy.

Patch 6 remote drains per-cpu pages directly instead of using a workqueue.

Patch 7 uses a normal spinlock instead of local_lock for remote draining.

Nicolas Saenz Julienne (1):
  mm/page_alloc: Remotely drain per-cpu lists

 include/linux/mm_types.h |   5 +
 include/linux/mmzone.h   |  12 +-
 mm/page_alloc.c          | 386 ++++++++++++++++++++++++---------------
 3 files changed, 250 insertions(+), 153 deletions(-)

Comments

Andrew Morton July 3, 2022, 11:28 p.m. UTC | #1
On Fri, 24 Jun 2022 13:54:16 +0100 Mel Gorman <mgorman@techsingularity.net> wrote:

> [...]
> 
> This series adjusts the locking.  A spinlock is added to struct
> per_cpu_pages to protect the list contents while local_lock_irq is
> ultimately replaced by just the spinlock in the final patch.  This allows
> a remote CPU to safely drain another CPU's per-cpu lists. Follow-on work should allow the spin_lock_irqsave
> to be converted to spin_lock to avoid IRQs being disabled/enabled in
> most cases. The follow-on patch will be one kernel release later as it
> is relatively high risk and it'll make bisections more clear if there
> are any problems.

I plan to move this and Mel's fix to [7/7] into mm-stable around July 8.

Yu Zhao July 3, 2022, 11:31 p.m. UTC | #2
On Sun, Jul 3, 2022 at 5:28 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Fri, 24 Jun 2022 13:54:16 +0100 Mel Gorman <mgorman@techsingularity.net> wrote:
>
> > [...]
>
> I plan to move this and Mel's fix to [7/7] into mm-stable around July 8.

I've thrown it together with the Maple Tree and passed a series of stress tests.

Andrew Morton July 3, 2022, 11:35 p.m. UTC | #3
On Sun, 3 Jul 2022 17:31:09 -0600 Yu Zhao <yuzhao@google.com> wrote:

> > > [...]
> >
> > I plan to move this and Mel's fix to [7/7] into mm-stable around July 8.
> 
> I've thrown it together with the Maple Tree and passed a series of stress tests.

Cool, thanks.  I added your Tested-by: to everything.