diff mbox series

[1/2] memcg: expose vmpressure knobs

Message ID 20200413215750.7239-2-lmoiseichuk@magicleap.com (mailing list archive)
State New, archived
Series memcg, vmpressure: expose vmpressure controls

Commit Message

svc_lmoiseichuk@magicleap.com April 13, 2020, 9:57 p.m. UTC
From: Leonid Moiseichuk <lmoiseichuk@magicleap.com>

Populating memcg vmpressure controls with legacy defaults:
- memory.pressure_window (512 or SWAP_CLUSTER_MAX * 16)
- memory.pressure_level_critical_prio (3)
- memory.pressure_level_medium (60)
- memory.pressure_level_critical (95)

Signed-off-by: Leonid Moiseichuk <lmoiseichuk@magicleap.com>
---
 Documentation/admin-guide/cgroup-v1/memory.rst | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
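For context, the knobs this patch adds would be tuned like any other memcg v1 control file. A hypothetical usage sketch, assuming the patch is applied, the v1 memory controller is mounted at /sys/fs/cgroup/memory, and an "app" cgroup (name made up for illustration) exists:

```shell
# Hypothetical sketch: paths and the "app" cgroup name are assumptions,
# defaults shown are the ones listed in the commit message.
cd /sys/fs/cgroup/memory/app
cat memory.pressure_window                     # 512 (SWAP_CLUSTER_MAX * 16)
echo 70 > memory.pressure_level_medium         # raise the medium threshold
echo 98 > memory.pressure_level_critical       # raise the critical threshold
echo 3  > memory.pressure_level_critical_prio  # vmscan priority for critical
```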

Comments

Chris Down April 14, 2020, 10:55 p.m. UTC | #1
svc_lmoiseichuk@magicleap.com writes:
>From: Leonid Moiseichuk <lmoiseichuk@magicleap.com>
>
>Populating memcg vmpressure controls with legacy defaults:
>- memory.pressure_window (512 or SWAP_CLUSTER_MAX * 16)
>- memory.pressure_level_critical_prio (3)
>- memory.pressure_level_medium (60)
>- memory.pressure_level_critical (95)
>
>Signed-off-by: Leonid Moiseichuk <lmoiseichuk@magicleap.com>

I'm against this even in the abstract: cgroup v1 is deprecated and its 
interface frozen, and vmpressure has pretty much been supplanted by PSI, 
which actually works (whereas vmpressure often doesn't, since it mostly ends 
up just measuring reclaim efficiency rather than actual memory pressure).

Without an extremely compelling reason to expose these, this just muddles the 
situation.
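To make the objection concrete, vmpressure's level computation can be sketched as follows. This is not kernel code, just a model of the logic in mm/vmpressure.c: the reported "pressure" is 100 minus the reclaim efficiency over a sampling window, compared against the medium/critical thresholds the patch would expose (defaults 60 and 95).

```python
# Sketch of vmpressure's per-window level decision (model, not kernel code).
MEDIUM = 60    # default memory.pressure_level_medium
CRITICAL = 95  # default memory.pressure_level_critical

def vmpressure_level(scanned: int, reclaimed: int) -> str:
    """Map one window's scan/reclaim counters to a pressure level name."""
    # "Pressure" is just the inverse of reclaim efficiency: if most
    # scanned pages were reclaimed, reclaim was cheap and pressure reads low.
    pressure = 100 - (100 * reclaimed) // scanned
    if pressure >= CRITICAL:
        return "critical"
    if pressure >= MEDIUM:
        return "medium"
    return "low"

# Efficient reclaim reads as "low" even if the workload is thrashing on
# page-cache re-reads, which is the failure mode described above.
print(vmpressure_level(512, 500))  # low
print(vmpressure_level(512, 100))  # medium
print(vmpressure_level(512, 10))   # critical
```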
Leonid Moiseichuk April 14, 2020, 11 p.m. UTC | #2
No problem; since the cgroup v1 interface is frozen, that is a valid stopper.
Let me check what can be done about PSI for swapless cases.

On Tue, Apr 14, 2020 at 6:55 PM Chris Down <chris@chrisdown.name> wrote:

> svc_lmoiseichuk@magicleap.com writes:
> >From: Leonid Moiseichuk <lmoiseichuk@magicleap.com>
> >
> >Populating memcg vmpressure controls with legacy defaults:
> >- memory.pressure_window (512 or SWAP_CLUSTER_MAX * 16)
> >- memory.pressure_level_critical_prio (3)
> >- memory.pressure_level_medium (60)
> >- memory.pressure_level_critical (95)
> >
> >Signed-off-by: Leonid Moiseichuk <lmoiseichuk@magicleap.com>
>
> I'm against this even in the abstract: cgroup v1 is deprecated and its
> interface frozen, and vmpressure has pretty much been supplanted by PSI,
> which actually works (whereas vmpressure often doesn't, since it mostly
> ends up just measuring reclaim efficiency rather than actual memory
> pressure).
>
> Without an extremely compelling reason to expose these, this just
> muddles the situation.
>

Patch

diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 0ae4f564c2d6..42508123a8e1 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -79,6 +79,12 @@  Brief summary of control files.
  memory.use_hierarchy		     set/show hierarchical account enabled
  memory.force_empty		     trigger forced page reclaim
  memory.pressure_level		     set memory pressure notifications
+ memory.pressure_window 	     set window size in scanned pages; best
+				     kept a multiple of SWAP_CLUSTER_MAX,
+				     since vmscan reclaims pages in such chunks
+ memory.pressure_level_critical_prio vmscan priority for critical level
+ memory.pressure_level_medium	     medium level pressure threshold, in percent
+ memory.pressure_level_critical      critical level pressure threshold, in percent
  memory.swappiness		     set/show swappiness parameter of vmscan
 				     (See sysctl's vm.swappiness)
  memory.move_charge_at_immigrate     set/show controls of moving charges
@@ -893,12 +899,16 @@  pressure, the system might be making swap, paging out active file caches,
 etc. Upon this event applications may decide to further analyze
 vmstat/zoneinfo/memcg or internal memory usage statistics and free any
 resources that can be easily reconstructed or re-read from a disk.
+The threshold for this level can be tuned via memory.pressure_level_medium.
 
 The "critical" level means that the system is actively thrashing, it is
 about to out of memory (OOM) or even the in-kernel OOM killer is on its
 way to trigger. Applications should do whatever they can to help the
 system. It might be too late to consult with vmstat or any other
-statistics, so it's advisable to take an immediate action.
+statistics, so it's advisable to take an immediate action. The threshold
+for this level can be tuned via memory.pressure_level_critical. The number
+of scanned pages and the vmscan priority are controlled by
+memory.pressure_window and memory.pressure_level_critical_prio.
 
 By default, events are propagated upward until the event is handled, i.e. the
 events are not pass-through. For example, you have three cgroups: A->B->C. Now