Message ID | 20210709000509.2618345-2-surenb@google.com (mailing list archive) |
---|---|
State | New |
Series | mm, memcg: Optimizations to minimize overhead when memcgs are disabled |
On Thu, Jul 08, 2021 at 05:05:07PM -0700, Suren Baghdasaryan wrote:
> Add mem_cgroup_disabled check in vmpressure, mem_cgroup_uncharge_swap and
> cgroup_throttle_swaprate functions. This minimizes the memcg overhead in
> the pagefault and exit_mmap paths when memcgs are disabled using
> cgroup_disable=memory command-line option.
> This change results in ~2.1% overhead reduction when running PFT test
> comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
> CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
> ARM64 Android device.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
On Thu, Jul 8, 2021 at 5:05 PM Suren Baghdasaryan <surenb@google.com> wrote:
> [...]
> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
>  void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
>  		unsigned long scanned, unsigned long reclaimed)
>  {
> -	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
> +	struct vmpressure *vmpr;
> +
> +	if (mem_cgroup_disabled())
> +		return;
> +
> +	vmpr = memcg_to_vmpressure(memcg);

I was wondering why this was not crashing but realized that we allocate
root_mem_cgroup even in the cgroup_disable=memory case.

Reviewed-by: Shakeel Butt <shakeelb@google.com>
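For readers following along, the reason the pre-patch code survives a disabled controller is visible in the lookup helper itself: memcg_to_vmpressure() falls back to root_mem_cgroup, which is still allocated when booting with cgroup_disable=memory, so the dereference stays valid. A simplified sketch of that helper (paraphrased from mm/memcontrol.c, not a verbatim copy):

	/* Simplified sketch of the fallback Shakeel describes. */
	struct vmpressure *memcg_to_vmpressure(struct mem_cgroup *memcg)
	{
		if (!memcg)
			memcg = root_mem_cgroup;	/* allocated even with cgroup_disable=memory */
		return &memcg->vmpressure;
	}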
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ae1f5d0cb581..a228cd51c4bd 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
+	if (mem_cgroup_disabled())
+		return;
+
 	id = swap_cgroup_record(entry, 0, nr_pages);
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1e07d1c776f2..707fa0481bb4 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 	struct swap_info_struct *si, *next;
 	int nid = page_to_nid(page);
 
+	if (mem_cgroup_disabled())
+		return;
+
 	if (!(gfp_mask & __GFP_IO))
 		return;
 
diff --git a/mm/vmpressure.c b/mm/vmpressure.c
index d69019fc3789..9b172561fded 100644
--- a/mm/vmpressure.c
+++ b/mm/vmpressure.c
@@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
 void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
 		unsigned long scanned, unsigned long reclaimed)
 {
-	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
+	struct vmpressure *vmpr;
+
+	if (mem_cgroup_disabled())
+		return;
+
+	vmpr = memcg_to_vmpressure(memcg);
 
 	/*
 	 * Here we only want to account pressure that userland is able to
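As an aside on the cost of the new early returns: mem_cgroup_disabled() is built on the cgroup subsystem static keys, so when memcg is enabled the extra branch is patched to a no-op, and when the controller is disabled via cgroup_disable=memory these functions bail out before touching any memcg state. A simplified sketch of the helper (paraphrased from include/linux/memcontrol.h; the CONFIG_MEMCG=n build has a stub that simply returns true):

	/* Simplified sketch, CONFIG_MEMCG=y variant. */
	static inline bool mem_cgroup_disabled(void)
	{
		/* cgroup_subsys_enabled() expands to a static-branch test,
		 * so this costs a patched jump/nop rather than a memory load. */
		return !cgroup_subsys_enabled(memory_cgrp_subsys);
	}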
Add mem_cgroup_disabled check in vmpressure, mem_cgroup_uncharge_swap and
cgroup_throttle_swaprate functions. This minimizes the memcg overhead in
the pagefault and exit_mmap paths when memcgs are disabled using
cgroup_disable=memory command-line option.

This change results in ~2.1% overhead reduction when running PFT test
comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
ARM64 Android device.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/memcontrol.c | 3 +++
 mm/swapfile.c   | 3 +++
 mm/vmpressure.c | 7 ++++++-
 3 files changed, 12 insertions(+), 1 deletion(-)
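For completeness, the configuration being measured here is selected at boot rather than at build time. An illustrative way to set it on a GRUB-based system (an assumed example; the exact bootloader syntax varies, and Android devices pass the command line differently):

	# Keep CONFIG_MEMCG=y in the kernel config, but disable the memory
	# controller at runtime by appending to the kernel command line:
	#   cgroup_disable=memory
	# e.g. in /etc/default/grub:
	GRUB_CMDLINE_LINUX="cgroup_disable=memory"
	# then regenerate the grub configuration and reboot.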