[mm-unstable] mm: multi-gen LRU: don't spin during memcg release

Message ID 20230814151636.1639123-1-tjmercier@google.com (mailing list archive)
State New

Commit Message

T.J. Mercier Aug. 14, 2023, 3:16 p.m. UTC
When a memcg is in the process of being released, mem_cgroup_tryget
will fail because its reference count has already reached 0. This can
happen during reclaim if the memcg has already been offlined, and we
reclaim all remaining pages attributed to the offlined memcg.
shrink_many attempts to skip the empty memcg in this case and continue
reclaiming from the remaining memcgs in the old generation. If there is
only one memcg remaining, or if all remaining memcgs are in the process
of being released, then shrink_many will spin until all memcgs have
finished being released. The release occurs through a workqueue, so it
can take a while before kswapd is able to make any further progress.
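
To make the failure mode concrete, here is a hypothetical userspace
sketch (not kernel code; struct fake_memcg, fake_tryget, and the pass
counter are illustrative stand-ins for the real memcg,
mem_cgroup_tryget, and kswapd's reclaim loop). It models a walk that
skips dying memcgs without removing them from the generation's list:
when every memcg on the list is dying, no pass makes progress.

#include <stdbool.h>
#include <stdio.h>

struct fake_memcg {
	int refcount;	/* 0 means the memcg is being released */
};

/* Stand-in for mem_cgroup_tryget(): fails once the refcount hits 0. */
static bool fake_tryget(struct fake_memcg *m)
{
	return m->refcount > 0;
}

int main(void)
{
	struct fake_memcg gen[3] = { {0}, {0}, {0} };	/* all dying */
	long spins = 0;

	/* Pre-fix shrink_many() behavior: skip dying memcgs, retry. */
	for (;;) {
		bool progressed = false;
		int i;

		for (i = 0; i < 3; i++) {
			if (!fake_tryget(&gen[i]))
				continue;	/* skipped, but still on the list */
			progressed = true;	/* ... reclaim here ... */
		}
		if (progressed)
			break;
		if (++spins == 1000000) {	/* a real kswapd has no such cap */
			printf("no progress after %ld passes\n", spins);
			break;
		}
	}
	return 0;
}

In the kernel there is no pass cap: the walk keeps spinning until the
release workqueue finally removes each dying memcg from the list.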

This fix results in reductions in kswapd activity and direct reclaim in
a test where 28 apps (working set size > total memory) are repeatedly
launched in a random sequence (A: before this patch, B: after):

                                       A          B      delta   ratio(%)
           allocstall_movable       5962       3539      -2423     -40.64
            allocstall_normal       2661       2417       -244      -9.17
kswapd_high_wmark_hit_quickly      53152       7594     -45558     -85.71
                   pageoutrun      57365      11750     -45615     -79.52

Fixes: e4dde56cd208 ("mm: multi-gen LRU: per-node lru_gen_folio lists")
Cc: stable@vger.kernel.org
Signed-off-by: T.J. Mercier <tjmercier@google.com>
---
 mm/vmscan.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

Comments

Yu Zhao Aug. 14, 2023, 3:59 p.m. UTC | #1
On Mon, Aug 14, 2023 at 9:16 AM T.J. Mercier <tjmercier@google.com> wrote:
> [...]

Acked-by: Yu Zhao <yuzhao@google.com>

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 157ed68470ee..c7c149cb8d66 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4856,16 +4856,17 @@ void lru_gen_release_memcg(struct mem_cgroup *memcg)
 
 		spin_lock_irq(&pgdat->memcg_lru.lock);
 
-		VM_WARN_ON_ONCE(hlist_nulls_unhashed(&lruvec->lrugen.list));
+		if (hlist_nulls_unhashed(&lruvec->lrugen.list))
+			goto unlock;
 
 		gen = lruvec->lrugen.gen;
 
-		hlist_nulls_del_rcu(&lruvec->lrugen.list);
+		hlist_nulls_del_init_rcu(&lruvec->lrugen.list);
 		pgdat->memcg_lru.nr_memcgs[gen]--;
 
 		if (!pgdat->memcg_lru.nr_memcgs[gen] && gen == get_memcg_gen(pgdat->memcg_lru.seq))
 			WRITE_ONCE(pgdat->memcg_lru.seq, pgdat->memcg_lru.seq + 1);
-
+unlock:
 		spin_unlock_irq(&pgdat->memcg_lru.lock);
 	}
 }
@@ -5447,8 +5448,10 @@ static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
 	rcu_read_lock();
 
 	hlist_nulls_for_each_entry_rcu(lrugen, pos, &pgdat->memcg_lru.fifo[gen][bin], list) {
-		if (op)
+		if (op) {
 			lru_gen_rotate_memcg(lruvec, op);
+			op = 0;
+		}
 
 		mem_cgroup_put(memcg);
 
@@ -5456,7 +5459,7 @@ static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
 		memcg = lruvec_memcg(lruvec);
 
 		if (!mem_cgroup_tryget(memcg)) {
-			op = 0;
+			lru_gen_release_memcg(memcg);
 			memcg = NULL;
 			continue;
 		}
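
Two details of the fix work together. In shrink_many(), op is now
cleared right after lru_gen_rotate_memcg() because the tryget-failure
path no longer resets it; that path instead calls
lru_gen_release_memcg() to delist the dying memcg on the spot rather
than waiting for the workqueue. Since the workqueue's normal release
path will later call lru_gen_release_memcg() again for the same memcg,
the first hunk makes the function idempotent: hlist_nulls_del_init_rcu()
leaves the node detectably unhashed, so a second call takes the new
goto unlock path. Below is a hypothetical, much-simplified userspace
sketch of that idempotence; struct node, unhashed(), and del_init() are
plain-pointer stand-ins for the kernel's RCU-safe hlist_nulls
primitives.

#include <stddef.h>
#include <stdio.h>
#include <stdbool.h>

/* Simplified stand-in for an hlist_nulls node (no RCU, no nulls marker). */
struct node {
	struct node *next;
	struct node **pprev;
};

/* Mirrors hlist_nulls_unhashed(): NULL pprev means "not on any list". */
static bool unhashed(const struct node *n)
{
	return !n->pprev;
}

/*
 * Mirrors hlist_nulls_del_init_rcu(): unlink, then reset pprev so a
 * later deletion attempt can detect that the node is already gone.
 * A plain hlist_nulls_del_rcu() would leave pprev dangling, so a
 * repeat call would corrupt the list instead of being a no-op.
 */
static void del_init(struct node *n)
{
	*n->pprev = n->next;
	if (n->next)
		n->next->pprev = n->pprev;
	n->pprev = NULL;
}

int main(void)
{
	struct node *head = NULL;
	struct node n = { .next = NULL, .pprev = &head };

	head = &n;

	del_init(&n);		/* first release: actually unlinks */
	if (unhashed(&n))	/* second release: detects it and bails out */
		printf("already unhashed: take the 'goto unlock' path\n");
	return 0;
}

The first del_init() does the real unlink; the unhashed() check turns
any second release into a harmless no-op, which is what lets both
shrink_many() and the memcg release path call lru_gen_release_memcg()
for the same memcg safely.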