[0/1] Add swappiness argument to memory.reclaim

Message ID: 20231130153658.527556-1-schatzberg.dan@gmail.com

Message

Dan Schatzberg Nov. 30, 2023, 3:36 p.m. UTC
(Sorry for the resend - forgot to cc the mailing lists)

This patch proposes augmenting the memory.reclaim interface with a
swappiness=<val> argument that overrides the swappiness value for that instance
of proactive reclaim.
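
For illustration, a minimal sketch of how a proactive reclaimer might
drive the proposed interface (the cgroup path and request values here
are hypothetical, and error handling is trimmed):

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
                /* Ask the kernel to reclaim 1G from this cgroup, applying
                 * swappiness=0 to this reclaim instance only; the global
                 * vm.swappiness is left untouched. */
                const char *path =
                        "/sys/fs/cgroup/workload.slice/memory.reclaim";
                const char *req = "1G swappiness=0";
                int fd = open(path, O_WRONLY);

                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                if (write(fd, req, strlen(req)) < 0)
                        perror("write");
                close(fd);
                return 0;
        }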

Userspace proactive reclaimers use the memory.reclaim interface to trigger
reclaim. The memory.reclaim interface does not allow for any way to affect the
balance of file vs anon during proactive reclaim. The only approach is to adjust
the vm.swappiness setting. However, there are a few reasons we look to control
the balance of file vs anon during proactive reclaim, separately from reactive
reclaim:

* Swapout should be limited to manage SSD write endurance. In near-OOM
  situations we are fine with lots of swap-out to avoid OOMs. As these are
  typically rare events, they have relatively little impact on write endurance.
  However, proactive reclaim runs continuously and so its impact on SSD write
  endurance is more significant. Therefore it is desirable to control swap-out
  for proactive reclaim separately from reactive reclaim.

* Some userspace OOM killers like systemd-oomd[1] support OOM killing on swap
  exhaustion. This makes sense if the swap exhaustion is triggered due to
  reactive reclaim but less so if it is triggered due to proactive reclaim (e.g.
  one could see OOMs when free memory is ample but anon is just particularly
  cold). Therefore, it's desirable to have proactive reclaim reduce or stop
  swap-out before the threshold at which OOM killing occurs.

In the case of Meta's Senpai proactive reclaimer, we adjust vm.swappiness before
writes to memory.reclaim[2]. This has been in production for nearly two years
and has addressed our needs to control proactive vs reactive reclaim behavior
but is still not ideal for a number of reasons:

* vm.swappiness is a global setting, adjusting it can race/interfere with other
  system administration that wishes to control vm.swappiness. In our case, we
  need to disable Senpai before adjusting vm.swappiness.

* vm.swappiness is stateful - so a crash or restart of Senpai can leave a
  misconfigured setting. This requires some additional management to record the
  "desired" setting and ensure Senpai always adjusts to it.

With this patch, we avoid these downsides of adjusting vm.swappiness globally.
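
To make those downsides concrete, the current workaround amounts to
something like the following sketch (simplified from what Senpai does
in [2]; the values and error handling are illustrative):

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Swap the global knob, reclaim, then restore. Any concurrent
         * writer to vm.swappiness races with this sequence, and a crash
         * before the final write_file() leaves the global setting
         * misconfigured. */
        static void write_file(const char *path, const char *val)
        {
                int fd = open(path, O_WRONLY);

                if (fd < 0)
                        return;
                if (write(fd, val, strlen(val)) < 0)
                        perror(path);
                close(fd);
        }

        int main(void)
        {
                write_file("/proc/sys/vm/swappiness", "0");
                write_file("/sys/fs/cgroup/workload.slice/memory.reclaim",
                           "100M");
                /* restore the recorded "desired" value; 60 is illustrative */
                write_file("/proc/sys/vm/swappiness", "60");
                return 0;
        }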

Previously, this exact interface addition was proposed by Yosry[3]. In response,
Roman proposed instead an interface to specify precise file/anon/slab reclaim
amounts[4]. More recently, Huan proposed this as well[5], and others
similarly questioned whether this was the proper interface.

Previous proposals sought to use this to allow proactive reclaimers to
effectively perform a custom reclaim algorithm by issuing proactive reclaim with
different settings to control file vs anon reclaim (e.g. to only reclaim anon
from some applications). Responses argued that adjusting swappiness is a poor
interface for custom reclaim.

In contrast, I argue in favor of a swappiness setting not as a way to implement
custom reclaim algorithms but rather to bias the balance of anon vs file due to
differences of proactive vs reactive reclaim. In this context, swappiness is the
existing interface for controlling this balance and this patch simply allows for
it to be configured differently for proactive vs reactive reclaim.

Specifying explicit amounts of anon vs file pages to reclaim feels inappropriate
for this purpose. Proactive reclaimers are unaware of the relative age of file
vs anon for a cgroup, which makes it difficult to manage proactive reclaim of
different memory pools. A proactive reclaimer would need to issue some number of
anon reclaim attempts separately from file reclaim attempts, which seems brittle
given that it's difficult to observe the impact.

[1] https://www.freedesktop.org/software/systemd/man/latest/systemd-oomd.service.html
[2] https://github.com/facebookincubator/oomd/blob/main/src/oomd/plugins/Senpai.cpp#L585-L598
[3] https://lore.kernel.org/linux-mm/CAJD7tkbDpyoODveCsnaqBBMZEkDvshXJmNdbk51yKSNgD7aGdg@mail.gmail.com/
[4] https://lore.kernel.org/linux-mm/YoPHtHXzpK51F%2F1Z@carbon/
[5] https://lore.kernel.org/lkml/20231108065818.19932-1-link@vivo.com/

Dan Schatzberg (1):
  mm: add swappiness= arg to memory.reclaim

 include/linux/swap.h |  3 ++-
 mm/memcontrol.c      | 55 +++++++++++++++++++++++++++++++++++---------
 mm/vmscan.c          | 13 +++++++++--
 3 files changed, 57 insertions(+), 14 deletions(-)

Comments

Michal Hocko Nov. 30, 2023, 3:57 p.m. UTC | #1
On Thu 30-11-23 07:36:53, Dan Schatzberg wrote:
[...]
> In contrast, I argue in favor of a swappiness setting not as a way to implement
> custom reclaim algorithms but rather to bias the balance of anon vs file due to
> differences of proactive vs reactive reclaim. In this context, swappiness is the
> existing interface for controlling this balance and this patch simply allows for
> it to be configured differently for proactive vs reactive reclaim.

I do agree that swappiness is a better interface than explicit anon/file
but the problem with swappiness is that it is more of a hint for the reclaim
rather than a real control. Just look at get_scan_count and its history.
Not only has its range been extended; the extent to which it is actually
used has been changing all the time, and I think it is not a stretch to
assume that trend will continue.

Now, if we extend the user interface to trigger reclaim, do we expect
that we always do SCAN_EQUAL if a user specifies swappiness, or are we
OK with the implementation being free to ignore that "hint"?
Johannes Weiner Nov. 30, 2023, 4:56 p.m. UTC | #2
On Thu, Nov 30, 2023 at 04:57:41PM +0100, Michal Hocko wrote:
> On Thu 30-11-23 07:36:53, Dan Schatzberg wrote:
> [...]
> > In contrast, I argue in favor of a swappiness setting not as a way to implement
> > custom reclaim algorithms but rather to bias the balance of anon vs file due to
> > differences of proactive vs reactive reclaim. In this context, swappiness is the
> > existing interface for controlling this balance and this patch simply allows for
> > it to be configured differently for proactive vs reactive reclaim.
> 
> I do agree that swappiness is a better interface than explicit anon/file
> but the problem with swappiness is that it is more of a hint for the reclaim
> rather than a real control. Just look at get_scan_count and its history.
> Not only has its range been extended; the extent to which it is actually
> used has been changing all the time, and I think it is not a stretch to
> assume that trend will continue.

Right, we did tweak the edge behavior of e.g. swappiness=0. And we
extended the range to express "anon is cheaper than file", which
wasn't possible before, to support the compressed memory case.

However, its meaning and impact have been remarkably stable over the
years: it allows userspace to specify the relative cost of paging IO
between file and anon pages. This comment is from 2.6.28:

        /*
         * With swappiness at 100, anonymous and file have the same priority.
         * This scanning priority is essentially the inverse of IO cost.
         */
        anon_prio = sc->swappiness;
        file_prio = 200 - sc->swappiness;

And this is it today:

	/*
	 * Calculate the pressure balance between anon and file pages.
	 *
	 * The amount of pressure we put on each LRU is inversely
	 * proportional to the cost of reclaiming each list, as
	 * determined by the share of pages that are refaulting, times
	 * the relative IO cost of bringing back a swapped out
	 * anonymous page vs reloading a filesystem page (swappiness).
	 *
	 * Although we limit that influence to ensure no list gets
	 * left behind completely: at least a third of the pressure is
	 * applied, before swappiness.
	 *
	 * With swappiness at 100, anon and file have equal IO cost.
	 */
	total_cost = sc->anon_cost + sc->file_cost;
	anon_cost = total_cost + sc->anon_cost;
	file_cost = total_cost + sc->file_cost;
	total_cost = anon_cost + file_cost;

	ap = swappiness * (total_cost + 1);
	ap /= anon_cost + 1;

	fp = (200 - swappiness) * (total_cost + 1);
	fp /= file_cost + 1;
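
To make the arithmetic concrete, here is a standalone toy computation
of that balance with made-up cost numbers (an illustration, not kernel
code):

        #include <stdio.h>

        int main(void)
        {
                /* Hypothetical inputs: file pages have been refaulting 3x
                 * as much as anon, and swappiness is 100 ("anon and file
                 * have equal IO cost"). */
                unsigned long anon_refaults = 100, file_refaults = 300;
                unsigned long swappiness = 100;
                unsigned long total_cost, anon_cost, file_cost, ap, fp;

                total_cost = anon_refaults + file_refaults;  /* 400  */
                anon_cost  = total_cost + anon_refaults;     /* 500  */
                file_cost  = total_cost + file_refaults;     /* 700  */
                total_cost = anon_cost + file_cost;          /* 1200 */

                ap = swappiness * (total_cost + 1) / (anon_cost + 1);
                fp = (200 - swappiness) * (total_cost + 1) / (file_cost + 1);

                /* Prints ap=239 fp=171: more scan pressure goes to anon
                 * because reclaiming file is currently the more expensive
                 * choice. With no refault history (both costs zero), the
                 * split reduces to swappiness : 200 - swappiness. */
                printf("ap=%lu fp=%lu\n", ap, fp);
                return 0;
        }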

So swappiness still means the same as it did 15 years ago. We haven't
changed the default swappiness setting, and we haven't broken any
existing swappiness configurations through VM changes in that time.

There are a few scenarios where swappiness doesn't apply:

- No swap. Oh well, that seems reasonable.

- Priority=0. This applies to near-OOM situations where the MM system
  tries to save itself. This isn't a range in which proactive
  reclaimers (should) operate.

- sc->file_is_tiny. This doesn't apply to cgroup reclaim and thus
  proactive reclaim.

- sc->cache_trim_mode. This implements clean cache dropbehind, and
  applies in the presence of large, non-refaulting inactive cache. The
  assumption there is that this data is reclaimable without involving
  IO to evict, and without the expectation of refault IO in the
  future. Without IO involvement, the relative IO cost isn't a
  factor. This will back off when refaults are observed, and the IO
  cost setting is then taken into account again as expected.

  If you consider swappiness to mean "reclaim what I ask you to", then
  this would override that, yes. But in the definition of relative IO
  cost, this decision making is permissible.

  Note that this applies to the global swappiness setting as well, and
  nobody has complained about it.
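
Putting those cases together, the strategy selection is roughly the
following (a from-memory paraphrase of get_scan_count() for
illustration only; types and names are simplified, not verbatim
kernel code):

        enum scan_balance { SCAN_EQUAL, SCAN_FRACT, SCAN_ANON, SCAN_FILE };

        struct scan_control_sketch {
                int may_swap, priority, file_is_tiny, cache_trim_mode;
        };

        static enum scan_balance pick_balance(struct scan_control_sketch *sc,
                                              int swappiness, int global_reclaim,
                                              long swap_pages_free)
        {
                if (!sc->may_swap || !swap_pages_free)
                        return SCAN_FILE;       /* no swap: oh well */
                if (!sc->priority && swappiness)
                        return SCAN_EQUAL;      /* near-OOM self-rescue */
                if (global_reclaim && sc->file_is_tiny)
                        return SCAN_ANON;       /* n/a to cgroup reclaim */
                if (sc->cache_trim_mode)
                        return SCAN_FILE;       /* clean cache dropbehind */
                return SCAN_FRACT;              /* swappiness-weighted split */
        }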

So I wouldn't say it's merely a reclaim hint. It controls a very
concrete and influential factor in VM decision making. And since the
global swappiness is long-established ABI, I don't expect its meaning
to change significantly any time soon.
Shakeel Butt Nov. 30, 2023, 6:44 p.m. UTC | #3
On Thu, Nov 30, 2023 at 07:36:53AM -0800, Dan Schatzberg wrote:
> (Sorry for the resend - forgot to cc the mailing lists)
> 
> This patch proposes augmenting the memory.reclaim interface with a
> swappiness=<val> argument that overrides the swappiness value for that instance
> of proactive reclaim.
> 
> Userspace proactive reclaimers use the memory.reclaim interface to trigger
> reclaim. The memory.reclaim interface does not allow for any way to affect the
> balance of file vs anon during proactive reclaim. The only approach is to adjust
> the vm.swappiness setting. However, there are a few reasons we look to control
> the balance of file vs anon during proactive reclaim, separately from reactive
> reclaim:
> 
> * Swapout should be limited to manage SSD write endurance. In near-OOM

Is this about swapout to SSD only?

>   situations we are fine with lots of swap-out to avoid OOMs. As these are
>   typically rare events, they have relatively little impact on write endurance.
>   However, proactive reclaim runs continuously and so its impact on SSD write
>   endurance is more significant. Therefore it is desirable to control swap-out
>   for proactive reclaim separately from reactive reclaim.

This is understandable but swapout to zswap should be fine, right?
(Sorry I am not following the discussion on zswap patches from Nhat. Is
the answer there?)
Shakeel Butt Nov. 30, 2023, 6:49 p.m. UTC | #4
On Thu, Nov 30, 2023 at 11:56:42AM -0500, Johannes Weiner wrote:
> On Thu, Nov 30, 2023 at 04:57:41PM +0100, Michal Hocko wrote:
> > On Thu 30-11-23 07:36:53, Dan Schatzberg wrote:
> > [...]
> > > In contrast, I argue in favor of a swappiness setting not as a way to implement
> > > custom reclaim algorithms but rather to bias the balance of anon vs file due to
> > > differences of proactive vs reactive reclaim. In this context, swappiness is the
> > > existing interface for controlling this balance and this patch simply allows for
> > > it to be configured differently for proactive vs reactive reclaim.
> > 
> > I do agree that swappiness is a better interface than explicit anon/file
> > but the problem with swappiness is that it is more of a hint for the reclaim
> > rather than a real control. Just look at get_scan_count and its history.
> > Not only has its range been extended; the extent to which it is actually
> > used has been changing all the time, and I think it is not a stretch to
> > assume that trend will continue.
> 
> Right, we did tweak the edge behavior of e.g. swappiness=0. And we
> extended the range to express "anon is cheaper than file", which
> wasn't possible before, to support the compressed memory case.
> 
> However, its meaning and impact have been remarkably stable over the
> years: it allows userspace to specify the relative cost of paging IO
> between file and anon pages. This comment is from 2.6.28:
> 
>         /*
>          * With swappiness at 100, anonymous and file have the same priority.
>          * This scanning priority is essentially the inverse of IO cost.
>          */
>         anon_prio = sc->swappiness;
>         file_prio = 200 - sc->swappiness;
> 
> And this is it today:
> 
> 	/*
> 	 * Calculate the pressure balance between anon and file pages.
> 	 *
> 	 * The amount of pressure we put on each LRU is inversely
> 	 * proportional to the cost of reclaiming each list, as
> 	 * determined by the share of pages that are refaulting, times
> 	 * the relative IO cost of bringing back a swapped out
> 	 * anonymous page vs reloading a filesystem page (swappiness).
> 	 *
> 	 * Although we limit that influence to ensure no list gets
> 	 * left behind completely: at least a third of the pressure is
> 	 * applied, before swappiness.
> 	 *
> 	 * With swappiness at 100, anon and file have equal IO cost.
> 	 */
> 	total_cost = sc->anon_cost + sc->file_cost;
> 	anon_cost = total_cost + sc->anon_cost;
> 	file_cost = total_cost + sc->file_cost;
> 	total_cost = anon_cost + file_cost;
> 
> 	ap = swappiness * (total_cost + 1);
> 	ap /= anon_cost + 1;
> 
> 	fp = (200 - swappiness) * (total_cost + 1);
> 	fp /= file_cost + 1;
> 
> So swappiness still means the same as it did 15 years ago. We haven't
> changed the default swappiness setting, and we haven't broken any
> existing swappiness configurations through VM changes in that time.
> 
> There are a few scenarios where swappiness doesn't apply:
> 
> - No swap. Oh well, that seems reasonable.
> 
> - Priority=0. This applies to near-OOM situations where the MM system
>   tries to save itself. This isn't a range in which proactive
>   reclaimers (should) operate.
> 
> - sc->file_is_tiny. This doesn't apply to cgroup reclaim and thus
>   proactive reclaim.
> 
> - sc->cache_trim_mode. This implements clean cache dropbehind, and
>   applies in the presence of large, non-refaulting inactive cache. The
>   assumption there is that this data is reclaimable without involving
>   IO to evict, and without the expectation of refault IO in the
>   future. Without IO involvement, the relative IO cost isn't a
>   factor. This will back off when refaults are observed, and the IO
>   cost setting is then taken into account again as expected.
> 
>   If you consider swappiness to mean "reclaim what I ask you to", then
>   this would override that, yes. But in the definition of relative IO
>   cost, this decision making is permissible.
> 
>   Note that this applies to the global swappiness setting as well, and
>   nobody has complained about it.
> 
> So I wouldn't say it's merely a reclaim hint. It controls a very
> concrete and influential factor in VM decision making. And since the
> global swappiness is long-established ABI, I don't expect its meaning
> to change significantly any time soon.

Are you saying the edge case behavior of global swappiness and the user
provided swappiness through memory.reclaim should remain the same?
Matthew Wilcox Nov. 30, 2023, 6:54 p.m. UTC | #5
On Thu, Nov 30, 2023 at 06:44:24PM +0000, Shakeel Butt wrote:
> On Thu, Nov 30, 2023 at 07:36:53AM -0800, Dan Schatzberg wrote:
> > * Swapout should be limited to manage SSD write endurance. In near-OOM
> 
> Is this about swapout to SSD only?

If you're using spinning rust with its limit of, what, 200 seeks per
second, I'd suggest that swapout should also be limited but for
different reasons.  We can set you up with a laptop with insufficient
RAM and a 5400rpm drive if you'd like to be convinced ...
Johannes Weiner Nov. 30, 2023, 7:39 p.m. UTC | #6
On Thu, Nov 30, 2023 at 06:54:37PM +0000, Matthew Wilcox wrote:
> On Thu, Nov 30, 2023 at 06:44:24PM +0000, Shakeel Butt wrote:
> > On Thu, Nov 30, 2023 at 07:36:53AM -0800, Dan Schatzberg wrote:
> > > * Swapout should be limited to manage SSD write endurance. In near-OOM
> > 
> > Is this about swapout to SSD only?
> 
> If you're using spinning rust with its limit of, what, 200 seeks per
> second, I'd suggest that swapout should also be limited but for
> different reasons.  We can set you up with a laptop with insufficient
> RAM and a 5400rpm drive if you'd like to be convinced ...

^_^

Yeah, we don't enable disk swap on the hard drive hosts that are
left. This aspect is only about write endurance management on SSD
during proactive reclaim.
Dan Schatzberg Nov. 30, 2023, 7:47 p.m. UTC | #7
On Thu, Nov 30, 2023 at 11:56:42AM -0500, Johannes Weiner wrote:
> [...]
> So I wouldn't say it's merely a reclaim hint. It controls a very
> concrete and influential factor in VM decision making. And since the
> global swappiness is long-established ABI, I don't expect its meaning
> to change significantly any time soon.

I want to add to this last point. While swappiness does not have
terribly well-defined semantics, it is the (only?) existing mechanism
to control the balance between anon and file reclaim. I'm merely
advocating for the ability to adjust swappiness during proactive
reclaim separately from reactive reclaim. To what degree the behavior
and semantics of swappiness change is a bit orthogonal here.
Johannes Weiner Nov. 30, 2023, 7:49 p.m. UTC | #8
On Thu, Nov 30, 2023 at 06:44:24PM +0000, Shakeel Butt wrote:
> On Thu, Nov 30, 2023 at 07:36:53AM -0800, Dan Schatzberg wrote:
> > (Sorry for the resend - forgot to cc the mailing lists)
> > 
> > This patch proposes augmenting the memory.reclaim interface with a
> > swappiness=<val> argument that overrides the swappiness value for that instance
> > of proactive reclaim.
> > 
> > Userspace proactive reclaimers use the memory.reclaim interface to trigger
> > reclaim. The memory.reclaim interface does not allow for any way to affect the
> > balance of file vs anon during proactive reclaim. The only approach is to adjust
> > the vm.swappiness setting. However, there are a few reasons we look to control
> > the balance of file vs anon during proactive reclaim, separately from reactive
> > reclaim:
> > 
> > * Swapout should be limited to manage SSD write endurance. In near-OOM
> 
> Is this about swapout to SSD only?
> 
> >   situations we are fine with lots of swap-out to avoid OOMs. As these are
> >   typically rare events, they have relatively little impact on write endurance.
> >   However, proactive reclaim runs continuously and so its impact on SSD write
> >   endurance is more significant. Therefore it is desirable to control swap-out
> >   for proactive reclaim separately from reactive reclaim.
> 
> This is understandable but swapout to zswap should be fine, right?
> (Sorry I am not following the discussion on zswap patches from Nhat. Is
> the answer there?)

Memory compression alone would be fine, yes.

However, we don't use zswap in all cgroups. Lower priority things are
forced directly to disk. Some workloads compress poorly and also go
directly to disk for better memory efficiency. On such cgroups, it's
important for proactive reclaim to manage swap rates to avoid burning
out the flash.

Note that zswap also does SSD writes during writeback. I know this
doesn't apply to Google because of the ghost files, but we have SSD
swapfiles behind zswap. And this part will become more relevant with
Nhat's enhanced writeback patches.
Dan Schatzberg Nov. 30, 2023, 7:50 p.m. UTC | #9
On Thu, Nov 30, 2023 at 06:44:24PM +0000, Shakeel Butt wrote:
> [...]
> > * Swapout should be limited to manage SSD write endurance. In near-OOM
> 
> Is this about swapout to SSD only?

Correct

> >   situations we are fine with lots of swap-out to avoid OOMs. As these are
> >   typically rare events, they have relatively little impact on write endurance.
> >   However, proactive reclaim runs continuously and so its impact on SSD write
> >   endurance is more significant. Therefore it is desirable to control swap-out
> >   for proactive reclaim separately from reactive reclaim.
> 
> This is understandable but swapout to zswap should be fine, right?
> (Sorry I am not following the discussion on zswap patches from Nhat. Is
> the answer there?)

You're correct here as well - we're not concerned about swapout to
zswap as that does not impact SSD write endurance. This is not related
to Nhat's patches.
Shakeel Butt Nov. 30, 2023, 8:30 p.m. UTC | #10
On Thu, Nov 30, 2023 at 11:47 AM Dan Schatzberg
<schatzberg.dan@gmail.com> wrote:
>
> On Thu, Nov 30, 2023 at 11:56:42AM -0500, Johannes Weiner wrote:
> > [...]
> > So I wouldn't say it's merely a reclaim hint. It controls a very
> > concrete and influential factor in VM decision making. And since the
> > global swappiness is long-established ABI, I don't expect its meaning
> > to change significantly any time soon.
>
> I want to add to this last point. While swappiness does not have
> terribly well-defined semantics, it is the (only?) existing mechanism
> to control the balance between anon and file reclaim. I'm merely
> advocating for the ability to adjust swappiness during proactive
> reclaim separately from reactive reclaim. To what degree the behavior
> and semantics of swappiness change is a bit orthogonal here.

Let me ask my question in this chain as it might have been missed:

Whatever the semantics of swappiness are (including the edge cases
like no swap, file_is_tiny, trim cache), should the reclaim code treat
the global swappiness and user-provided swappiness differently?
Dan Schatzberg Nov. 30, 2023, 9:37 p.m. UTC | #11
On Thu, Nov 30, 2023 at 12:30:27PM -0800, Shakeel Butt wrote:
> On Thu, Nov 30, 2023 at 11:47 AM Dan Schatzberg
> <schatzberg.dan@gmail.com> wrote:
> >
> > On Thu, Nov 30, 2023 at 11:56:42AM -0500, Johannes Weiner wrote:
> > > [...]
> > > So I wouldn't say it's merely a reclaim hint. It controls a very
> > > concrete and influential factor in VM decision making. And since the
> > > global swappiness is long-established ABI, I don't expect its meaning
> > > to change significantly any time soon.
> >
> > I want to add to this last point. While swappiness does not have
> > terribly well-defined semantics, it is the (only?) existing mechanism
> > to control the balance between anon and file reclaim. I'm merely
> > advocating for the ability to adjust swappiness during proactive
> > reclaim separately from reactive reclaim. To what degree the behavior
> > and semantics of swappiness change is a bit orthogonal here.
> 
> Let me ask my question in this chain as it might have been missed:
> 
> Whatever the semantics of swappiness are (including the edge cases
> like no swap, file_is_tiny, trim cache), should the reclaim code treat
> the global swappiness and user-provided swappiness differently?

I can't think of any reason why we would want swappiness interpreted
differently if it's provided at proactive reclaim time vs
globally. Did you have something in mind here?
Shakeel Butt Nov. 30, 2023, 9:52 p.m. UTC | #12
On Thu, Nov 30, 2023 at 1:37 PM Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
>
> On Thu, Nov 30, 2023 at 12:30:27PM -0800, Shakeel Butt wrote:
> > On Thu, Nov 30, 2023 at 11:47 AM Dan Schatzberg
> > <schatzberg.dan@gmail.com> wrote:
> > >
> > > On Thu, Nov 30, 2023 at 11:56:42AM -0500, Johannes Weiner wrote:
> > > > [...]
> > > > So I wouldn't say it's merely a reclaim hint. It controls a very
> > > > concrete and influential factor in VM decision making. And since the
> > > > global swappiness is long-established ABI, I don't expect its meaning
> > > > to change significantly any time soon.
> > >
> > > I want to add to this last point. While swappiness does not have
> > > terribly well-defined semantics, it is the (only?) existing mechanism
> > > to control the balance between anon and file reclaim. I'm merely
> > > advocating for the ability to adjust swappiness during proactive
> > > reclaim separately from reactive reclaim. To what degree the behavior
> > > and semantics of swappiness change is a bit orthogonal here.
> >
> > Let me ask my question in this chain as it might have been missed:
> >
> > Whatever the semantics of swappiness are (including the edge cases
> > like no swap, file_is_tiny, trim cache), should the reclaim code treat
> > the global swappiness and user-provided swappiness differently?
>
> I can't think of any reason why we would want swappiness interpreted
> differently if it's provided at proactive reclaim time vs
> globally. Did you have something in mind here?

Nah, just wanted to know what you are aiming for.
Michal Hocko Dec. 1, 2023, 9:33 a.m. UTC | #13
On Thu 30-11-23 11:56:42, Johannes Weiner wrote:
[...]
> So I wouldn't say it's merely a reclaim hint. It controls a very
> concrete and influential factor in VM decision making. And since the
> global swappiness is long-established ABI, I don't expect its meaning
> to change significantly any time soon.

As I've said, I am more worried about potential future changes which
would modify existing corner cases, or reduce or add more of them, in
ways that would be seen as a change of behavior from the user space
POV. That means that we would have to be really explicit about the
fact that the reclaim is free to override the swappiness provided by
the user. So essentially a best-effort interface without any actual
guarantees. That surely makes it harder to use. Is it still usable?

Btw. IIRC these concerns were part of the reason why memcg v2 doesn't
have a swappiness interface. If we decide to export swappiness via the
memory.reclaim interface, does it mean we will do so on the per-memcg
level as well?
Dan Schatzberg Dec. 1, 2023, 3:49 p.m. UTC | #14
On Fri, Dec 01, 2023 at 10:33:01AM +0100, Michal Hocko wrote:
> On Thu 30-11-23 11:56:42, Johannes Weiner wrote:
> [...]
> > So I wouldn't say it's merely a reclaim hint. It controls a very
> > concrete and influential factor in VM decision making. And since the
> > global swappiness is long-established ABI, I don't expect its meaning
> > to change significantly any time soon.
> 
> As I've said, I am more worried about potential future changes which
> would modify existing corner cases, or reduce or add more of them, in
> ways that would be seen as a change of behavior from the user space
> POV. That means that we would have to be really explicit about the
> fact that the reclaim is free to override the swappiness provided by
> the user. So essentially a best-effort interface without any actual
> guarantees. That surely makes it harder to use. Is it still usable?

For our needs (limiting swapout and avoiding swap-depletion) we rely
on two semantics of vm.swappiness.

1) Lower swappiness results in less swap-out and higher swappiness
results in more swap-out - for the same workload. Our proactive
reclaimer monitors swap-out and lowers swappiness in response if we
exceed our target swap-out rate (a rough sketch follows below).

2) swappiness = 0 results in no or very little swap-out. We rely on
this to avoid exhausting swap due to proactive reclaim and triggering
OOMs.
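
As a sketch, the feedback loop built on these two semantics looks
roughly like this (names, thresholds, and the stubbed inputs are all
hypothetical):

        #include <stdio.h>

        #define TARGET_SWAPOUT_RATE 1000        /* pages/sec, made up */

        /* Stubs for illustration: a real reclaimer would derive the
         * swap-out rate from /proc/vmstat (pswpout deltas) and issue
         * reclaim by writing "<size> swappiness=<val>" to memory.reclaim. */
        static long swapout_rate(void) { return 1500; }
        static void do_reclaim(const char *size, int swappiness)
        {
                printf("reclaim %s swappiness=%d\n", size, swappiness);
        }

        static int swappiness = 60;

        static void reclaim_tick(void)
        {
                long rate = swapout_rate();

                if (rate > TARGET_SWAPOUT_RATE && swappiness > 0)
                        swappiness--;   /* semantic 1: bias away from swap */
                else if (rate < TARGET_SWAPOUT_RATE / 2 && swappiness < 200)
                        swappiness++;
                do_reclaim("100M", swappiness);
        }

        int main(void)
        {
                reclaim_tick();
                return 0;
        }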

We already depend on these semantics of vm.swappiness *today*. I think
changing either of these would be seen as a behavior change from user
space POV irrespective of this patch. The proposal in this patch only
allows for vm.swappiness (whatever its semantics) to be configured
separately for proactive reclaim.
Johannes Weiner Dec. 1, 2023, 5:09 p.m. UTC | #15
On Fri, Dec 01, 2023 at 10:33:01AM +0100, Michal Hocko wrote:
> On Thu 30-11-23 11:56:42, Johannes Weiner wrote:
> [...]
> > So I wouldn't say it's merely a reclaim hint. It controls a very
> > concrete and influential factor in VM decision making. And since the
> > global swappiness is long-established ABI, I don't expect its meaning
> > to change significantly any time soon.
> 
> As I've said, I am more worried about potential future changes which
> would modify existing corner cases, or reduce or add more of them, in
> ways that would be seen as a change of behavior from the user space
> POV. That means that we would have to be really explicit about the
> fact that the reclaim is free to override the swappiness provided by
> the user. So essentially a best-effort interface without any actual
> guarantees. That surely makes it harder to use. Is it still usable?

But it's not free to override the setting as it pleases. I wrote a
detailed list of the current exceptions, and why the user wouldn't
have strong expectations of swappiness being respected in those
cases. Having reasonable limitations is not the same as everything
being up for grabs.

Again, the swappiness setting is ABI, and people would definitely
complain if we ignored their request in an unexpected situation and
regressed their workloads.

I'm not against documenting the exceptions and limitations. Not just
for proactive reclaim, but for swappiness in general. But I don't
think it's fair to say that there are NO rules and NO userspace
contract around this parameter (and I'm the one who wrote most of the
balancing code that implements the swappiness control).

So considering what swappiness DOES provide, and the definition and
behavior to which we're tied by ABI rules, yes I do think it's useful
to control this from the proactive reclaim context. In fact, we know
it's useful, because we've been doing it for a while in production now
- just in a hacky way, and this patch is merely making it less hacky.

> Btw. IIRC these concerns were part of the reason why memcg v2 doesn't
> have a swappiness interface. If we decide to export swappiness via the
> memory.reclaim interface, does it mean we will do so on the per-memcg
> level as well?

Well I'm the person who wrote the initial cgroup2 memory interface,
and I left it out because there was no clear usecase for why you'd
want to tweak it on a per-container basis.

But Dan did bring up a new and very concrete usecase: controlling for
write endurance. And it's not just a theoretical one, but a proven
real world application.

As far as adding a static memory.swappiness goes, I wouldn't add it
just because, but wait for a concrete usecase for that specifically. I
don't think Dan's rationale extends to it. But if a usecase comes up
and is convincing, I wouldn't be opposed to it.
Michal Hocko Dec. 4, 2023, 3:23 p.m. UTC | #16
On Fri 01-12-23 12:09:55, Johannes Weiner wrote:
> On Fri, Dec 01, 2023 at 10:33:01AM +0100, Michal Hocko wrote:
> > On Thu 30-11-23 11:56:42, Johannes Weiner wrote:
> > [...]
> > > So I wouldn't say it's merely a reclaim hint. It controls a very
> > > concrete and influential factor in VM decision making. And since the
> > > global swappiness is long-established ABI, I don't expect its meaning
> > > to change significantly any time soon.
> > 
> > As I've said, I am more worried about potential future changes which
> > would modify existing corner cases, or reduce or add more of them, in
> > ways that would be seen as a change of behavior from the user space
> > POV. That means that we would have to be really explicit about the
> > fact that the reclaim is free to override the swappiness provided by
> > the user. So essentially a best-effort interface without any actual
> > guarantees. That surely makes it harder to use. Is it still usable?
> 
> But it's not free to override the setting as it pleases. I wrote a
> detailed list of the current exceptions, and why the user wouldn't
> have strong expectations of swappiness being respected in those
> cases. Having reasonable limitations is not the same as everything
> being up for grabs.

Well, I was not suggesting that future changes would intentionally
break swappiness. But look at the history: we've had times when
swappiness was ignored most of the time due to a heavy page cache bias.
Now, it is really hard to predict future reclaim changes, but I can
easily imagine that the IO refault cost used to balance the file vs.
anon LRUs would figure into future reclaim improvements and extensions.

> Again, the swappiness setting is ABI, and people would definitely
> complain if we ignored their request in an unexpected situation and
> regressed their workloads.
> 
> I'm not against documenting the exceptions and limitations. Not just
> for proactive reclaim, but for swappiness in general. But I don't
> think it's fair to say that there are NO rules and NO userspace
> contract around this parameter (and I'm the one who wrote most of the
> balancing code that implements the swappiness control).

Right, but the behavior might change considerably between different
kernel versions, and that is something to be really careful about. One
thing I would really like to avoid is providing any guarantee that
swappiness X and nr_to_reclaim yield an exact number of anon/file pages
reclaimed, or that it is a regression because $VER-1 behaved that way.
There might be very legitimate reasons to use different heuristics in
the memory reclaim.

Another option would be to drop any heuristics when swappiness is
provided for the memory.reclaim interface, which would be much more
predictable, but it would also diverge from the normal reclaim and that
is quite bad IMHO.
Johannes Weiner Dec. 5, 2023, 4:19 p.m. UTC | #17
On Mon, Dec 04, 2023 at 04:23:47PM +0100, Michal Hocko wrote:
> On Fri 01-12-23 12:09:55, Johannes Weiner wrote:
> > On Fri, Dec 01, 2023 at 10:33:01AM +0100, Michal Hocko wrote:
> > > On Thu 30-11-23 11:56:42, Johannes Weiner wrote:
> > > [...]
> > > > So I wouldn't say it's merely a reclaim hint. It controls a very
> > > > concrete and influential factor in VM decision making. And since the
> > > > global swappiness is long-established ABI, I don't expect its meaning
> > > > to change significantly any time soon.
> > > 
> > > As I've said, I am more worried about potential future changes which
> > > would modify existing corner cases, or reduce or add more of them, in
> > > ways that would be seen as a change of behavior from the user space
> > > POV. That means that we would have to be really explicit about the
> > > fact that the reclaim is free to override the swappiness provided by
> > > the user. So essentially a best-effort interface without any actual
> > > guarantees. That surely makes it harder to use. Is it still usable?
> > 
> > But it's not free to override the setting as it pleases. I wrote a
> > detailed list of the current exceptions, and why the user wouldn't
> > have strong expectations of swappiness being respected in those
> > cases. Having reasonable limitations is not the same as everything
> > being up for grabs.
> 
> Well, I was not suggesting that future changes would intentionally
> break swappiness. But look at the history: we've had times when
> swappiness was ignored most of the time due to a heavy page cache bias.
> Now, it is really hard to predict future reclaim changes, but I can
> easily imagine that the IO refault cost used to balance the file vs.
> anon LRUs would figure into future reclaim improvements and extensions.

That's a possibility, but it would be an explicit *replacement* of the
swappiness setting. Since swappiness is already exposed in more than
one way (global, cgroup1), truly overriding it would need to be opt-in.

We could only remove the heavy page cache bias without a switch
because it didn't actually break user expectations. In fact it
followed them more closely, since previously with a swappiness=60 the
kernel might not swap at all even with a heavily thrashing cache.

The thing Linus keeps stressing is not that we cannot change behavior,
but that we cannot break (reasonable) existing setups. So we can make
improvements as long as we don't violate general user expectations or
generate IO patterns that are completely contradictory to the setting.

> > Again, the swappiness setting is ABI, and people would definitely
> > complain if we ignored their request in an unexpected situation and
> > regressed their workloads.
> > 
> > I'm not against documenting the exceptions and limitations. Not just
> > for proactive reclaim, but for swappiness in general. But I don't
> > think it's fair to say that there are NO rules and NO userspace
> > contract around this parameter (and I'm the one who wrote most of the
> > balancing code that implements the swappiness control).
> 
> Right, but the behavior might change considerably between different
> kernel versions, and that is something to be really careful about. One
> thing I would really like to avoid is providing any guarantee that
> swappiness X and nr_to_reclaim yield an exact number of anon/file pages
> reclaimed, or that it is a regression because $VER-1 behaved that way.
> There might be very legitimate reasons to use different heuristics in
> the memory reclaim.

Yes, it shouldn't mean any more than the global swappiness means.

> Another option would be to drop any heuristics when swappiness is
> provided for the memory.reclaim interface, which would be much more
> predictable, but it would also diverge from the normal reclaim and that
> is quite bad IMHO.

I would prefer to keep the semantics for global/reactive and proactive
reclaim the same. Making an existing tunable available in cgroup2 is
much lower risk than providing something new and different under the
same name.
Michal Koutný Dec. 7, 2023, 6:57 p.m. UTC | #18
On Fri, Dec 01, 2023 at 12:09:55PM -0500, Johannes Weiner <hannes@cmpxchg.org> wrote:
> Well I'm the person who wrote the initial cgroup2 memory interface,
> and I left it out because there was no clear usecase for why you'd
> want to tweak it on a per-container basis.

There is also the issue of the default value and its relation to the
global vm.swappiness - an issue which disappeared with v2. The
"stateless" passing of a per-reclaim swappiness looks cleaner in this
regard.

0.02€,
Michal