Message ID: 20231212032640.6968-3-cuibixuan@vivo.com (mailing list archive)
State: Superseded
Series: Make memory reclamation measurable
On Mon, 11 Dec 2023 19:26:40 -0800 Bixuan Cui <cuibixuan@vivo.com> wrote:

> -TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
> +TRACE_EVENT(mm_vmscan_lru_shrink_inactive_start,

Current kernels have a call to trace_mm_vmscan_lru_shrink_inactive() in
evict_folios(), so this renaming broke the build.
On 2023/12/13 11:03, Andrew Morton wrote:
>> -TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
>> +TRACE_EVENT(mm_vmscan_lru_shrink_inactive_start,
> Current kernels have a call to trace_mm_vmscan_lru_shrink_inactive() in
> evict_folios(), so this renaming broke the build.

Sorry, I did not enable CONFIG_LRU_GEN when compiling and testing. I will
double-check my patches.

Thanks,
Bixuan Cui
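The missed build coverage Bixuan mentions can be reproduced with standard kbuild targets; the exact commands below are a sketch (not taken from the thread) and assume an already-configured kernel tree:

```shell
# Enable the config the original build test missed, refresh the
# config, then compile just the affected object file.
scripts/config --enable CONFIG_LRU_GEN
make olddefconfig
make mm/vmscan.o
```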
Hi Bixuan,
kernel test robot noticed the following build errors:
[auto build test ERROR on next-20231211]
url: https://github.com/intel-lab-lkp/linux/commits/Bixuan-Cui/mm-shrinker-add-new-event-to-trace-shrink-count/20231212-112824
base: next-20231211
patch link: https://lore.kernel.org/r/20231212032640.6968-3-cuibixuan%40vivo.com
patch subject: [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru
config: i386-buildonly-randconfig-003-20231214 (https://download.01.org/0day-ci/archive/20231214/202312142212.vbSe7CMs-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231214/202312142212.vbSe7CMs-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312142212.vbSe7CMs-lkp@intel.com/
All errors (new ones prefixed by >>):
mm/vmscan.c: In function 'evict_folios':
>> mm/vmscan.c:4533:9: error: implicit declaration of function 'trace_mm_vmscan_lru_shrink_inactive'; did you mean 'trace_mm_vmscan_lru_shrink_inactive_end'? [-Werror=implicit-function-declaration]
4533 | trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| trace_mm_vmscan_lru_shrink_inactive_end
cc1: some warnings being treated as errors
vim +4533 mm/vmscan.c
ac35a490237446 Yu Zhao 2022-09-18 4500
a579086c99ed70 Yu Zhao 2022-12-21 4501 static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
ac35a490237446 Yu Zhao 2022-09-18 4502 {
ac35a490237446 Yu Zhao 2022-09-18 4503 int type;
ac35a490237446 Yu Zhao 2022-09-18 4504 int scanned;
ac35a490237446 Yu Zhao 2022-09-18 4505 int reclaimed;
ac35a490237446 Yu Zhao 2022-09-18 4506 LIST_HEAD(list);
359a5e1416caaf Yu Zhao 2022-11-15 4507 LIST_HEAD(clean);
ac35a490237446 Yu Zhao 2022-09-18 4508 struct folio *folio;
359a5e1416caaf Yu Zhao 2022-11-15 4509 struct folio *next;
ac35a490237446 Yu Zhao 2022-09-18 4510 enum vm_event_item item;
ac35a490237446 Yu Zhao 2022-09-18 4511 struct reclaim_stat stat;
bd74fdaea14602 Yu Zhao 2022-09-18 4512 struct lru_gen_mm_walk *walk;
359a5e1416caaf Yu Zhao 2022-11-15 4513 bool skip_retry = false;
ac35a490237446 Yu Zhao 2022-09-18 4514 struct mem_cgroup *memcg = lruvec_memcg(lruvec);
ac35a490237446 Yu Zhao 2022-09-18 4515 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
ac35a490237446 Yu Zhao 2022-09-18 4516
ac35a490237446 Yu Zhao 2022-09-18 4517 spin_lock_irq(&lruvec->lru_lock);
ac35a490237446 Yu Zhao 2022-09-18 4518
ac35a490237446 Yu Zhao 2022-09-18 4519 scanned = isolate_folios(lruvec, sc, swappiness, &type, &list);
ac35a490237446 Yu Zhao 2022-09-18 4520
ac35a490237446 Yu Zhao 2022-09-18 4521 scanned += try_to_inc_min_seq(lruvec, swappiness);
ac35a490237446 Yu Zhao 2022-09-18 4522
ac35a490237446 Yu Zhao 2022-09-18 4523 if (get_nr_gens(lruvec, !swappiness) == MIN_NR_GENS)
ac35a490237446 Yu Zhao 2022-09-18 4524 scanned = 0;
ac35a490237446 Yu Zhao 2022-09-18 4525
ac35a490237446 Yu Zhao 2022-09-18 4526 spin_unlock_irq(&lruvec->lru_lock);
ac35a490237446 Yu Zhao 2022-09-18 4527
ac35a490237446 Yu Zhao 2022-09-18 4528 if (list_empty(&list))
ac35a490237446 Yu Zhao 2022-09-18 4529 return scanned;
359a5e1416caaf Yu Zhao 2022-11-15 4530 retry:
49fd9b6df54e61 Matthew Wilcox (Oracle) 2022-09-02  4531  	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
359a5e1416caaf Yu Zhao 2022-11-15 4532 sc->nr_reclaimed += reclaimed;
8c2214fc9a470a Jaewon Kim 2023-10-03 @4533 trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
8c2214fc9a470a Jaewon Kim 2023-10-03 4534 scanned, reclaimed, &stat, sc->priority,
8c2214fc9a470a Jaewon Kim 2023-10-03 4535 type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
359a5e1416caaf Yu Zhao 2022-11-15 4536
359a5e1416caaf Yu Zhao 2022-11-15 4537 list_for_each_entry_safe_reverse(folio, next, &list, lru) {
359a5e1416caaf Yu Zhao 2022-11-15 4538 if (!folio_evictable(folio)) {
359a5e1416caaf Yu Zhao 2022-11-15 4539 list_del(&folio->lru);
359a5e1416caaf Yu Zhao 2022-11-15 4540 folio_putback_lru(folio);
359a5e1416caaf Yu Zhao 2022-11-15 4541 continue;
359a5e1416caaf Yu Zhao 2022-11-15 4542 }
ac35a490237446 Yu Zhao 2022-09-18 4543
359a5e1416caaf Yu Zhao 2022-11-15 4544 if (folio_test_reclaim(folio) &&
359a5e1416caaf Yu Zhao 2022-11-15 4545 (folio_test_dirty(folio) || folio_test_writeback(folio))) {
ac35a490237446 Yu Zhao 2022-09-18 4546 /* restore LRU_REFS_FLAGS cleared by isolate_folio() */
ac35a490237446 Yu Zhao 2022-09-18 4547 if (folio_test_workingset(folio))
ac35a490237446 Yu Zhao 2022-09-18 4548 folio_set_referenced(folio);
359a5e1416caaf Yu Zhao 2022-11-15 4549 continue;
359a5e1416caaf Yu Zhao 2022-11-15 4550 }
ac35a490237446 Yu Zhao 2022-09-18 4551
359a5e1416caaf Yu Zhao 2022-11-15 4552 if (skip_retry || folio_test_active(folio) || folio_test_referenced(folio) ||
359a5e1416caaf Yu Zhao 2022-11-15 4553 folio_mapped(folio) || folio_test_locked(folio) ||
359a5e1416caaf Yu Zhao 2022-11-15 4554 folio_test_dirty(folio) || folio_test_writeback(folio)) {
359a5e1416caaf Yu Zhao 2022-11-15 4555 /* don't add rejected folios to the oldest generation */
359a5e1416caaf Yu Zhao 2022-11-15 4556 set_mask_bits(&folio->flags, LRU_REFS_MASK | LRU_REFS_FLAGS,
359a5e1416caaf Yu Zhao 2022-11-15 4557 BIT(PG_active));
359a5e1416caaf Yu Zhao 2022-11-15 4558 continue;
359a5e1416caaf Yu Zhao 2022-11-15 4559 }
359a5e1416caaf Yu Zhao 2022-11-15 4560
359a5e1416caaf Yu Zhao 2022-11-15 4561 /* retry folios that may have missed folio_rotate_reclaimable() */
359a5e1416caaf Yu Zhao 2022-11-15 4562 list_move(&folio->lru, &clean);
359a5e1416caaf Yu Zhao 2022-11-15 4563 sc->nr_scanned -= folio_nr_pages(folio);
ac35a490237446 Yu Zhao 2022-09-18 4564 }
ac35a490237446 Yu Zhao 2022-09-18 4565
ac35a490237446 Yu Zhao 2022-09-18 4566 spin_lock_irq(&lruvec->lru_lock);
ac35a490237446 Yu Zhao 2022-09-18 4567
49fd9b6df54e61 Matthew Wilcox (Oracle) 2022-09-02  4568  	move_folios_to_lru(lruvec, &list);
ac35a490237446 Yu Zhao 2022-09-18 4569
bd74fdaea14602 Yu Zhao 2022-09-18 4570 walk = current->reclaim_state->mm_walk;
bd74fdaea14602 Yu Zhao 2022-09-18 4571 if (walk && walk->batched)
bd74fdaea14602 Yu Zhao 2022-09-18 4572 reset_batch_size(lruvec, walk);
bd74fdaea14602 Yu Zhao 2022-09-18 4573
57e9cc50f4dd92 Johannes Weiner 2022-10-26 4574 item = PGSTEAL_KSWAPD + reclaimer_offset();
ac35a490237446 Yu Zhao 2022-09-18 4575 if (!cgroup_reclaim(sc))
ac35a490237446 Yu Zhao 2022-09-18 4576 __count_vm_events(item, reclaimed);
ac35a490237446 Yu Zhao 2022-09-18 4577 __count_memcg_events(memcg, item, reclaimed);
ac35a490237446 Yu Zhao 2022-09-18 4578 __count_vm_events(PGSTEAL_ANON + type, reclaimed);
ac35a490237446 Yu Zhao 2022-09-18 4579
ac35a490237446 Yu Zhao 2022-09-18 4580 spin_unlock_irq(&lruvec->lru_lock);
ac35a490237446 Yu Zhao 2022-09-18 4581
ac35a490237446 Yu Zhao 2022-09-18 4582 mem_cgroup_uncharge_list(&list);
ac35a490237446 Yu Zhao 2022-09-18 4583 free_unref_page_list(&list);
ac35a490237446 Yu Zhao 2022-09-18 4584
359a5e1416caaf Yu Zhao 2022-11-15 4585 INIT_LIST_HEAD(&list);
359a5e1416caaf Yu Zhao 2022-11-15 4586 list_splice_init(&clean, &list);
359a5e1416caaf Yu Zhao 2022-11-15 4587
359a5e1416caaf Yu Zhao 2022-11-15 4588 if (!list_empty(&list)) {
359a5e1416caaf Yu Zhao 2022-11-15 4589 skip_retry = true;
359a5e1416caaf Yu Zhao 2022-11-15 4590 goto retry;
359a5e1416caaf Yu Zhao 2022-11-15 4591 }
ac35a490237446 Yu Zhao 2022-09-18 4592
ac35a490237446 Yu Zhao 2022-09-18 4593 return scanned;
ac35a490237446 Yu Zhao 2022-09-18 4594 }
ac35a490237446 Yu Zhao 2022-09-18 4595
Hi Bixuan,
kernel test robot noticed the following build errors:
[auto build test ERROR on next-20231211]
url: https://github.com/intel-lab-lkp/linux/commits/Bixuan-Cui/mm-shrinker-add-new-event-to-trace-shrink-count/20231212-112824
base: next-20231211
patch link: https://lore.kernel.org/r/20231212032640.6968-3-cuibixuan%40vivo.com
patch subject: [PATCH -next 2/2] mm: vmscan: add new event to trace shrink lru
config: i386-randconfig-014-20231214 (https://download.01.org/0day-ci/archive/20231215/202312150018.EIE4fkeF-lkp@intel.com/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231215/202312150018.EIE4fkeF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312150018.EIE4fkeF-lkp@intel.com/
All errors (new ones prefixed by >>):
>> mm/vmscan.c:4533:2: error: call to undeclared function 'trace_mm_vmscan_lru_shrink_inactive'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
^
mm/vmscan.c:4533:2: note: did you mean 'trace_mm_vmscan_lru_shrink_inactive_end'?
include/trace/events/vmscan.h:415:1: note: 'trace_mm_vmscan_lru_shrink_inactive_end' declared here
TRACE_EVENT(mm_vmscan_lru_shrink_inactive_end,
^
include/linux/tracepoint.h:566:2: note: expanded from macro 'TRACE_EVENT'
DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))
^
include/linux/tracepoint.h:432:2: note: expanded from macro 'DECLARE_TRACE'
__DECLARE_TRACE(name, PARAMS(proto), PARAMS(args), \
^
include/linux/tracepoint.h:255:21: note: expanded from macro '__DECLARE_TRACE'
static inline void trace_##name(proto) \
^
<scratch space>:60:1: note: expanded from here
trace_mm_vmscan_lru_shrink_inactive_end
^
1 error generated.
vim +/trace_mm_vmscan_lru_shrink_inactive +4533 mm/vmscan.c
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index 406faa5591c1..9809d158f968 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -395,7 +395,24 @@ TRACE_EVENT(mm_vmscan_write_folio,
 		show_reclaim_flags(__entry->reclaim_flags))
 );
 
-TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
+TRACE_EVENT(mm_vmscan_lru_shrink_inactive_start,
+
+	TP_PROTO(int nid),
+
+	TP_ARGS(nid),
+
+	TP_STRUCT__entry(
+		__field(int, nid)
+	),
+
+	TP_fast_assign(
+		__entry->nid = nid;
+	),
+
+	TP_printk("nid=%d", __entry->nid)
+);
+
+TRACE_EVENT(mm_vmscan_lru_shrink_inactive_end,
 
 	TP_PROTO(int nid,
 		unsigned long nr_scanned, unsigned long nr_reclaimed,
@@ -446,7 +463,24 @@ TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
 		show_reclaim_flags(__entry->reclaim_flags))
 );
 
-TRACE_EVENT(mm_vmscan_lru_shrink_active,
+TRACE_EVENT(mm_vmscan_lru_shrink_active_start,
+
+	TP_PROTO(int nid),
+
+	TP_ARGS(nid),
+
+	TP_STRUCT__entry(
+		__field(int, nid)
+	),
+
+	TP_fast_assign(
+		__entry->nid = nid;
+	),
+
+	TP_printk("nid=%d", __entry->nid)
+);
+
+TRACE_EVENT(mm_vmscan_lru_shrink_active_end,
 
 	TP_PROTO(int nid, unsigned long nr_taken,
 		unsigned long nr_active, unsigned long nr_deactivated,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4e3b835c6b4a..73e690b3ce68 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1906,6 +1906,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	bool stalled = false;
 
+	trace_mm_vmscan_lru_shrink_inactive_start(pgdat->node_id);
+
 	while (unlikely(too_many_isolated(pgdat, file, sc))) {
 		if (stalled)
 			return 0;
@@ -1990,7 +1992,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	if (file)
 		sc->nr.file_taken += nr_taken;
 
-	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
+	trace_mm_vmscan_lru_shrink_inactive_end(pgdat->node_id,
 			nr_scanned, nr_reclaimed, &stat, sc->priority, file);
 	return nr_reclaimed;
 }
@@ -2028,6 +2030,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	int file = is_file_lru(lru);
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 
+	trace_mm_vmscan_lru_shrink_active_start(pgdat->node_id);
+
 	lru_add_drain();
 
 	spin_lock_irq(&lruvec->lru_lock);
@@ -2107,7 +2111,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	lru_note_cost(lruvec, file, 0, nr_rotated);
 	mem_cgroup_uncharge_list(&l_active);
 	free_unref_page_list(&l_active);
-	trace_mm_vmscan_lru_shrink_active(pgdat->node_id, nr_taken, nr_activate,
+	trace_mm_vmscan_lru_shrink_active_end(pgdat->node_id, nr_taken, nr_activate,
 			nr_deactivate, nr_rotated, sc->priority, file);
 }
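For a new version of the patch, the remaining CONFIG_LRU_GEN caller would need the same rename. A hypothetical hunk along the lines the compiler note suggests (not part of the posted patch; whether evict_folios() should also gain a _start event is a separate question for the author):

```diff
 	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
 	sc->nr_reclaimed += reclaimed;
-	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
+	trace_mm_vmscan_lru_shrink_inactive_end(pgdat->node_id,
 			scanned, reclaimed, &stat, sc->priority,
 			type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
```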