From patchwork Wed Aug 14 17:42:27 2024
X-Patchwork-Submitter: Kaiyang Zhao
X-Patchwork-Id: 13763787
From: kaiyang2@cs.cmu.edu
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev,
    akpm@linux-foundation.org, mhocko@kernel.org, nehagholkar@meta.com,
    abhishekd@meta.com, hannes@cmpxchg.org, weixugc@google.com,
    rientjes@google.com, Kaiyang Zhao <kaiyang2@cs.cmu.edu>
Subject: [PATCH v3] mm,memcg: provide per-cgroup counters for NUMA balancing operations
Date: Wed, 14 Aug 2024 17:42:27 +0000
Message-ID: <20240814174227.30639-1-kaiyang2@cs.cmu.edu>
X-Mailer: git-send-email 2.43.0

From: Kaiyang Zhao <kaiyang2@cs.cmu.edu>

The ability to observe the demotion and promotion decisions made by the
kernel on a per-cgroup basis is important for monitoring and tuning
containerized workloads on NUMA machines or machines equipped with tiered
memory. Different containers in the system may experience drastically
different memory tiering actions that cannot be distinguished from the
global counters alone.

For example, a container running a workload with much hotter memory
accesses will likely see more promotions and fewer demotions, potentially
depriving a colocated container of top-tier memory to such an extent that
its performance degrades unacceptably.

As another example, some containers may exhibit longer periods between
data reuse, causing many more numa_hint_faults than numa_pages_migrated.
In this case, tuning hot_threshold_ms may be appropriate, but the signal
is easily lost if only global counters are available.

This patch adds seven counters to memory.stat in a cgroup:
numa_pages_migrated, numa_pte_updates, numa_hint_faults, pgdemote_kswapd,
pgdemote_khugepaged, pgdemote_direct and pgpromote_success. pgdemote_*
and pgpromote_success are also available in memory.numa_stat.

count_memcg_events_mm() is added to count multiple event occurrences at
once, and get_mem_cgroup_from_folio() is added because we need to get a
reference to the memcg of a folio before it's migrated to track
numa_pages_migrated. The accounting of PGDEMOTE_* is moved to
shrink_inactive_list() before being changed to per-cgroup.

Signed-off-by: Kaiyang Zhao <kaiyang2@cs.cmu.edu>
---
v3:
- added pgpromote_success as suggested by Wei Xu
v2:
- fixed compilation error when CONFIG_NUMA_BALANCING is off
- fixed doc warning due to missing parameter description in
  get_mem_cgroup_from_folio

 include/linux/memcontrol.h | 24 +++++++++++++++++---
 include/linux/vmstat.h     |  1 +
 mm/memcontrol.c            | 45 ++++++++++++++++++++++++++++++++++++++
 mm/memory.c                |  3 +++
 mm/mempolicy.c             |  4 +++-
 mm/migrate.c               |  7 ++++--
 mm/vmscan.c                |  8 +++----
 7 files changed, 82 insertions(+), 10 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 44f7fb7dc0c8..90ecd2dbca06 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -768,6 +768,8 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
 
 struct mem_cgroup *get_mem_cgroup_from_current(void);
 
+struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio);
+
 struct lruvec *folio_lruvec_lock(struct folio *folio);
 struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
@@ -1012,8 +1014,8 @@ static inline void count_memcg_folio_events(struct folio *folio,
 	count_memcg_events(memcg, idx, nr);
 }
 
-static inline void count_memcg_event_mm(struct mm_struct *mm,
-					enum vm_event_item idx)
+static inline void count_memcg_events_mm(struct mm_struct *mm,
+					enum vm_event_item idx, unsigned long count)
 {
 	struct mem_cgroup *memcg;
 
@@ -1023,10 +1025,16 @@ static inline void count_memcg_event_mm(struct mm_struct *mm,
 	rcu_read_lock();
 	memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
 	if (likely(memcg))
-		count_memcg_events(memcg, idx, 1);
+		count_memcg_events(memcg, idx, count);
 	rcu_read_unlock();
 }
 
+static inline void count_memcg_event_mm(struct mm_struct *mm,
+					enum vm_event_item idx)
+{
+	count_memcg_events_mm(mm, idx, 1);
+}
+
 static inline void memcg_memory_event(struct mem_cgroup *memcg,
 				      enum memcg_memory_event event)
 {
@@ -1246,6 +1254,11 @@ static inline struct mem_cgroup *get_mem_cgroup_from_current(void)
 	return NULL;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css)
 {
 	return NULL;
@@ -1468,6 +1481,11 @@ static inline void count_memcg_folio_events(struct folio *folio,
 {
 }
 
+static inline void count_memcg_events_mm(struct mm_struct *mm,
+					enum vm_event_item idx, unsigned long count)
+{
+}
+
 static inline void count_memcg_event_mm(struct mm_struct *mm,
 					enum vm_event_item idx)
 {
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 9eb77c9007e6..d2761bf8ff32 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -32,6 +32,7 @@ struct reclaim_stat {
 	unsigned nr_ref_keep;
 	unsigned nr_unmap_fail;
 	unsigned nr_lazyfree_fail;
+	unsigned nr_demoted;
 };
 
 /* Stat data for system wide items */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4884629f0ce5..9a338978eeae 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -307,6 +307,12 @@ static const unsigned int memcg_node_stat_items[] = {
 #ifdef CONFIG_SWAP
 	NR_SWAPCACHE,
 #endif
+#ifdef CONFIG_NUMA_BALANCING
+	PGPROMOTE_SUCCESS,
+#endif
+	PGDEMOTE_KSWAPD,
+	PGDEMOTE_DIRECT,
+	PGDEMOTE_KHUGEPAGED,
 };
 
 static const unsigned int memcg_stat_items[] = {
@@ -437,6 +443,11 @@ static const unsigned int memcg_vm_event_stat[] = {
 	THP_SWPOUT,
 	THP_SWPOUT_FALLBACK,
 #endif
+#ifdef CONFIG_NUMA_BALANCING
+	NUMA_PAGE_MIGRATE,
+	NUMA_PTE_UPDATES,
+	NUMA_HINT_FAULTS,
+#endif
 };
 
 #define NR_MEMCG_EVENTS ARRAY_SIZE(memcg_vm_event_stat)
@@ -978,6 +989,24 @@ struct mem_cgroup *get_mem_cgroup_from_current(void)
 	return memcg;
 }
 
+/**
+ * get_mem_cgroup_from_folio - Obtain a reference on a given folio's memcg.
+ * @folio: folio from which memcg should be extracted.
+ */
+struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	struct mem_cgroup *memcg = folio_memcg(folio);
+
+	if (mem_cgroup_disabled())
+		return NULL;
+
+	rcu_read_lock();
+	if (!memcg || WARN_ON_ONCE(!css_tryget(&memcg->css)))
+		memcg = root_mem_cgroup;
+	rcu_read_unlock();
+	return memcg;
+}
+
 /**
  * mem_cgroup_iter - iterate over memory cgroup hierarchy
  * @root: hierarchy root
@@ -1383,6 +1412,13 @@ static const struct memory_stat memory_stats[] = {
 	{ "workingset_restore_anon",	WORKINGSET_RESTORE_ANON },
 	{ "workingset_restore_file",	WORKINGSET_RESTORE_FILE },
 	{ "workingset_nodereclaim",	WORKINGSET_NODERECLAIM },
+
+	{ "pgdemote_kswapd",		PGDEMOTE_KSWAPD },
+	{ "pgdemote_direct",		PGDEMOTE_DIRECT },
+	{ "pgdemote_khugepaged",	PGDEMOTE_KHUGEPAGED },
+#ifdef CONFIG_NUMA_BALANCING
+	{ "pgpromote_success",		PGPROMOTE_SUCCESS },
+#endif
 };
 
 /* The actual unit of the state item, not the same as the output unit */
@@ -1407,6 +1443,9 @@ static int memcg_page_state_output_unit(int item)
 	/*
 	 * Workingset state is actually in pages, but we export it to userspace
 	 * as a scalar count of events, so special case it here.
+	 *
+	 * Demotion and promotion activities are exported in pages, consistent
+	 * with their global counterparts.
 	 */
 	switch (item) {
 	case WORKINGSET_REFAULT_ANON:
@@ -1416,6 +1455,12 @@ static int memcg_page_state_output_unit(int item)
 	case WORKINGSET_RESTORE_ANON:
 	case WORKINGSET_RESTORE_FILE:
 	case WORKINGSET_NODERECLAIM:
+	case PGDEMOTE_KSWAPD:
+	case PGDEMOTE_DIRECT:
+	case PGDEMOTE_KHUGEPAGED:
+#ifdef CONFIG_NUMA_BALANCING
+	case PGPROMOTE_SUCCESS:
+#endif
 		return 1;
 	default:
 		return memcg_page_state_unit(item);
diff --git a/mm/memory.c b/mm/memory.c
index 0ed3603aaf31..13b679ad182c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5400,6 +5400,9 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 	vma_set_access_pid_bit(vma);
 
 	count_vm_numa_event(NUMA_HINT_FAULTS);
+#ifdef CONFIG_NUMA_BALANCING
+	count_memcg_folio_events(folio, NUMA_HINT_FAULTS, 1);
+#endif
 	if (folio_nid(folio) == numa_node_id()) {
 		count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
 		*flags |= TNF_FAULT_LOCAL;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b3b5f376471f..b646fab3e45e 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -676,8 +676,10 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 	tlb_gather_mmu(&tlb, vma->vm_mm);
 
 	nr_updated = change_protection(&tlb, vma, addr, end, MM_CP_PROT_NUMA);
-	if (nr_updated > 0)
+	if (nr_updated > 0) {
 		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
+		count_memcg_events_mm(vma->vm_mm, NUMA_PTE_UPDATES, nr_updated);
+	}
 
 	tlb_finish_mmu(&tlb);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 6e32098ac2dc..dbfa910ec24b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2668,6 +2668,8 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
 	int nr_remaining;
 	unsigned int nr_succeeded;
 	LIST_HEAD(migratepages);
+	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
 	list_add(&folio->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
@@ -2677,12 +2679,13 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
 		putback_movable_pages(&migratepages);
 	if (nr_succeeded) {
 		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
+		count_memcg_events(memcg, NUMA_PAGE_MIGRATE, nr_succeeded);
 		if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
 		    && !node_is_toptier(folio_nid(folio))
 		    && node_is_toptier(node))
-			mod_node_page_state(pgdat, PGPROMOTE_SUCCESS,
-					    nr_succeeded);
+			mod_lruvec_state(lruvec, PGPROMOTE_SUCCESS, nr_succeeded);
 	}
+	mem_cgroup_put(memcg);
 	BUG_ON(!list_empty(&migratepages));
 	return nr_remaining ? -EAGAIN : 0;
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index da6ba3206827..a118a55bbed5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1018,9 +1018,6 @@ static unsigned int demote_folio_list(struct list_head *demote_folios,
 		      (unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
 		      &nr_succeeded);
 
-	mod_node_page_state(pgdat, PGDEMOTE_KSWAPD + reclaimer_offset(),
-			    nr_succeeded);
-
 	return nr_succeeded;
 }
 
@@ -1519,7 +1516,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	/* 'folio_list' is always empty here */
 
 	/* Migrate folios selected for demotion */
-	nr_reclaimed += demote_folio_list(&demote_folios, pgdat);
+	stat->nr_demoted = demote_folio_list(&demote_folios, pgdat);
+	nr_reclaimed += stat->nr_demoted;
 	/* Folios that could not be demoted are still in @demote_folios */
 	if (!list_empty(&demote_folios)) {
 		/* Folios which weren't demoted go back on @folio_list */
@@ -1985,6 +1983,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	spin_lock_irq(&lruvec->lru_lock);
 	move_folios_to_lru(lruvec, &folio_list);
 
+	__mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
+			   stat.nr_demoted);
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	item = PGSTEAL_KSWAPD + reclaimer_offset();
 	if (!cgroup_reclaim(sc))
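
For illustration only (not part of the patch): a minimal userspace sketch
in C that reads the new per-cgroup counters from memory.stat and reports
the ratio of numa_hint_faults to numa_pages_migrated discussed above. The
cgroup path is an example placeholder, and the numa_* entries only appear
when CONFIG_NUMA_BALANCING is enabled.

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Example cgroup path; adjust for the cgroup being monitored. */
	const char *path = "/sys/fs/cgroup/example/memory.stat";
	char name[64];
	unsigned long long val, faults = 0, migrated = 0;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* memory.stat is a flat "name value" list, one counter per line. */
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "numa_hint_faults"))
			faults = val;
		else if (!strcmp(name, "numa_pages_migrated"))
			migrated = val;
	}
	fclose(f);

	/*
	 * A ratio that stays far above 1 means hint faults are rarely
	 * converting into migrations; per the commit message, that is
	 * the situation where tuning hot_threshold_ms may help.
	 */
	printf("numa_hint_faults=%llu numa_pages_migrated=%llu ratio=%.2f\n",
	       faults, migrated, migrated ? (double)faults / migrated : 0.0);
	return 0;
}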