[v13,00/18] per memcg lru lock

Message ID 1592555636-115095-1-git-send-email-alex.shi@linux.alibaba.com

Message

Alex Shi June 19, 2020, 8:33 a.m. UTC
This is a new version based on linux-next. It incorporates many
suggestions from Hugh Dickins, from a compaction fix to fewer
TestClearPageLRU calls and revised comments. Thanks a lot, Hugh!

Johannes Weiner has suggested:
"So here is a crazy idea that may be worth exploring:

Right now, pgdat->lru_lock protects both PageLRU *and* the lruvec's
linked list.

Can we make PageLRU atomic and use it to stabilize the lru_lock
instead, and then use the lru_lock only to serialize list operations?
..."

With the new memcg charge path and this solution, we can isolate LRU
pages for exclusive access in compaction, page migration, reclaim,
memcg move_account, huge page split and other scenarios, while keeping
each page's memcg stable. That makes it possible to change per-node lru
locking to per-memcg lru locking. As for the pagevec_lru_move_fn
functions, it is safe to let pages remain on the lru list; the lru lock
guards them for list integrity.
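
To make that concrete, here is a minimal sketch of the isolation
precondition (my simplification, not the exact code in the patches):
whichever path clears the lru bit first wins exclusive rights to the
page, and everyone else simply skips it.

	/*
	 * Sketch only: simplified from the real isolation paths in the
	 * series; the actual list manipulation is elided.
	 */
	static bool lru_isolation_sketch(struct page *page)
	{
		if (!TestClearPageLRU(page))
			return false;	/* raced: someone else isolated it */

		/* we now own the page's lru state; its memcg cannot change */
		get_page(page);
		/* ... take the (per-memcg) lru lock and delete from the list ... */
		return true;
	}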

The patchset includes 3 parts:
1, some code cleanup and minimum optimization as preparation.
2, use TestClearPageLRU as page isolation's precondition.
3, replace per-node lru_lock with per-memcg per-node lru_lock.

The 3rd part moves the per-node lru_lock into the lruvec, thus giving
each memcg a lru_lock per node. So on a large machine, memcgs no longer
have to suffer from per-node pgdat->lru_lock contention; each can go
fast with its own lru_lock.
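
As a rough illustration of the data-structure change (a sketch with
simplified names; the real fields and helpers are introduced by the
memcontrol/mmzone patches in this series), the lock moves from the
pgdat into each per-memcg, per-node lruvec:

	/* Sketch of how struct lruvec changes; not the complete definition. */
	struct lruvec {
		struct list_head	lists[NR_LRU_LISTS];
		spinlock_t		lru_lock;	/* replaces pgdat->lru_lock */
		/* ... other existing fields unchanged ... */
	};

	/*
	 * Lock the lruvec a page currently belongs to.  The caller must have
	 * stabilized the page's memcg first, e.g. via TestClearPageLRU,
	 * otherwise the lookup could race with a memcg move.
	 */
	static struct lruvec *page_lruvec_lock_irq_sketch(struct page *page)
	{
		struct lruvec *lruvec;

		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
		spin_lock_irq(&lruvec->lru_lock);
		return lruvec;
	}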

Following Daniel Jordan's suggestion, I have run 208 'dd' tasks in 104
containers on a 2-socket * 26-core * HT box with a modified case:
https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice

With this patchset, the readtwice performance increased about 80%
in concurrent containers.

Thanks to Hugh Dickins and Konstantin Khlebnikov, who both brought up
this idea 8 years ago, and to others who gave comments as well: Daniel
Jordan, Mel Gorman, Shakeel Butt, Matthew Wilcox, etc.

Thanks for testing support from Intel 0day and Rong Chen, Fengguang Wu,
and Yun Wang. Hugh Dickins also shared his kbuild-swap case. Thanks!

Alex Shi (16):
  mm/vmscan: remove unnecessary lruvec adding
  mm/page_idle: no unlikely double check for idle page counting
  mm/compaction: correct the comments of compact_defer_shift
  mm/compaction: rename compact_deferred as compact_should_defer
  mm/thp: move lru_add_page_tail func to huge_memory.c
  mm/thp: clean up lru_add_page_tail
  mm/thp: narrow lru locking
  mm/memcg: add debug checking in lock_page_memcg
  mm/swap: fold vm event PGROTATED into pagevec_move_tail_fn
  mm/lru: introduce TestClearPageLRU
  mm/compaction: do page isolation first in compaction
  mm/mlock: reorder isolation sequence during munlock
  mm/swap: serialize memcg changes during pagevec_lru_move_fn
  mm/lru: replace pgdat lru_lock with lruvec lock
  mm/lru: introduce the relock_page_lruvec function
  mm/pgdat: remove pgdat lru_lock

Hugh Dickins (2):
  mm/vmscan: use relock for move_pages_to_lru
  mm/lru: revise the comments of lru_lock

 Documentation/admin-guide/cgroup-v1/memcg_test.rst |  15 +-
 Documentation/admin-guide/cgroup-v1/memory.rst     |  21 ++-
 Documentation/trace/events-kmem.rst                |   2 +-
 Documentation/vm/unevictable-lru.rst               |  22 +--
 include/linux/compaction.h                         |   4 +-
 include/linux/memcontrol.h                         |  95 +++++++++++
 include/linux/mm_types.h                           |   2 +-
 include/linux/mmzone.h                             |   6 +-
 include/linux/page-flags.h                         |   1 +
 include/linux/swap.h                               |   4 +-
 include/trace/events/compaction.h                  |   2 +-
 mm/compaction.c                                    | 113 ++++++++-----
 mm/filemap.c                                       |   4 +-
 mm/huge_memory.c                                   |  54 +++++--
 mm/memcontrol.c                                    |  56 ++++++-
 mm/mlock.c                                         |  93 +++++------
 mm/mmzone.c                                        |   1 +
 mm/page_alloc.c                                    |   1 -
 mm/page_idle.c                                     |   8 -
 mm/rmap.c                                          |   4 +-
 mm/swap.c                                          | 175 +++++++--------------
 mm/swap_state.c                                    |   5 +-
 mm/vmscan.c                                        | 165 ++++++++++---------
 mm/workingset.c                                    |   4 +-
 24 files changed, 500 insertions(+), 357 deletions(-)

Comments

Andrew Morton June 20, 2020, 11:08 p.m. UTC | #1
On Fri, 19 Jun 2020 16:33:38 +0800 Alex Shi <alex.shi@linux.alibaba.com> wrote:

> This is a new version based on linux-next. It incorporates many
> suggestions from Hugh Dickins, from a compaction fix to fewer
> TestClearPageLRU calls and revised comments. Thanks a lot, Hugh!
> 
> Johannes Weiner has suggested:
> "So here is a crazy idea that may be worth exploring:
> 
> Right now, pgdat->lru_lock protects both PageLRU *and* the lruvec's
> linked list.
> 
> Can we make PageLRU atomic and use it to stabilize the lru_lock
> instead, and then use the lru_lock only to serialize list operations?

I don't understand this sentence.  How can a per-page flag stabilize a
per-pgdat spinlock?  Perhaps some additional description will help.

> ..."
> 
> With the new memcg charge path and this solution, we can isolate LRU
> pages for exclusive access in compaction, page migration, reclaim,
> memcg move_account, huge page split and other scenarios, while keeping
> each page's memcg stable. That makes it possible to change per-node lru
> locking to per-memcg lru locking. As for the pagevec_lru_move_fn
> functions, it is safe to let pages remain on the lru list; the lru lock
> guards them for list integrity.
> 
> The patchset includes 3 parts:
> 1, some code cleanup and minimum optimization as preparation.
> 2, use TestClearPageLRU as page isolation's precondition.
> 3, replace per-node lru_lock with per-memcg per-node lru_lock.
> 
> The 3rd part moves the per-node lru_lock into the lruvec, thus giving
> each memcg a lru_lock per node. So on a large machine, memcgs no longer
> have to suffer from per-node pgdat->lru_lock contention; each can go
> fast with its own lru_lock.
> 
> Following Daniel Jordan's suggestion, I have run 208 'dd' tasks in 104
> containers on a 2-socket * 26-core * HT box with a modified case:
> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
> 
> With this patchset, the readtwice performance increased about 80%
> in concurrent containers.
> 
> Thanks to Hugh Dickins and Konstantin Khlebnikov, who both brought up
> this idea 8 years ago, and to others who gave comments as well: Daniel
> Jordan, Mel Gorman, Shakeel Butt, Matthew Wilcox, etc.
>
> Thanks for testing support from Intel 0day and Rong Chen, Fengguang Wu,
> and Yun Wang. Hugh Dickins also shared his kbuild-swap case. Thanks!
> 
> ...
>
>  24 files changed, 500 insertions(+), 357 deletions(-)

It's a large patchset and afaict the whole point is performance gain. 
80% in one specialized test sounds nice, but is there a plan for more
extensive quantification?

There isn't much sign of completed review activity here, so I'll go
into hiding for a while.
Alex Shi June 21, 2020, 3:44 p.m. UTC | #2
On 2020/6/21 7:08 AM, Andrew Morton wrote:
> On Fri, 19 Jun 2020 16:33:38 +0800 Alex Shi <alex.shi@linux.alibaba.com> wrote:
> 
>> This is a new version based on linux-next. It incorporates many
>> suggestions from Hugh Dickins, from a compaction fix to fewer
>> TestClearPageLRU calls and revised comments. Thanks a lot, Hugh!
>>
>> Johannes Weiner has suggested:
>> "So here is a crazy idea that may be worth exploring:
>>
>> Right now, pgdat->lru_lock protects both PageLRU *and* the lruvec's
>> linked list.
>>
>> Can we make PageLRU atomic and use it to stabilize the lru_lock
>> instead, and then use the lru_lock only to serialize list operations?
> 
> I don't understand this sentence.  How can a per-page flag stabilize a
> per-pgdat spinlock?  Perhaps some additional description will help.

Hi Andrew,

Well, the above comment is missing some context: here lru_lock means the new
lru_lock on each memcg, not the current per-node lru_lock. Sorry!

Currently the lru bit is changed under lru_lock, so isolating a page from the
lru just needs to take lru_lock. The new patches change the lru bit with an
atomic action separate from lru_lock, so isolating a page needs both actions:
TestClearPageLRU and taking the lru_lock, as in the isolate_lru_page() change
below.

The main reason for this comes from isolate_migratepages_block() in
compaction.c: we have to test-and-clear the lru bit before taking the lru
lock, which serializes page isolation against memcg page charge/migration,
since those change the page's lruvec and the new lru_lock embedded in it.
The current isolation just takes the lru lock directly, which fails to guard
against the page's lruvec change (memcg change).

changes in isolate_lru_page():

-	if (PageLRU(page)) {
+	if (TestClearPageLRU(page)) {
 		pg_data_t *pgdat = page_pgdat(page);
 		struct lruvec *lruvec;
+		int lru = page_lru(page);
 
-		spin_lock_irq(&pgdat->lru_lock);
+		get_page(page);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		if (PageLRU(page)) {
-			int lru = page_lru(page);
-			get_page(page);
-			ClearPageLRU(page);
-			del_page_from_lru_list(page, lruvec, lru);
-			ret = 0;
-		}
+		spin_lock_irq(&pgdat->lru_lock);
+		del_page_from_lru_list(page, lruvec, lru);
 		spin_unlock_irq(&pgdat->lru_lock);
+		ret = 0;
 	}
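
For reference, once the later patches move the lock into the lruvec, the
same path would look roughly like the sketch below. This is my illustration
rather than the code from patch 14; the lock_page_lruvec_irq/
unlock_page_lruvec_irq names follow the helpers introduced later in the
series and may differ in detail.

	if (TestClearPageLRU(page)) {
		struct lruvec *lruvec;

		get_page(page);
		/* the page's memcg is stable now, so the lruvec lookup cannot race */
		lruvec = lock_page_lruvec_irq(page);
		del_page_from_lru_list(page, lruvec, page_lru(page));
		unlock_page_lruvec_irq(lruvec);
		ret = 0;
	}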

> 

>>
>> Following Daniel Jordan's suggestion, I have run 208 'dd' tasks in 104
>> containers on a 2-socket * 26-core * HT box with a modified case:
>> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
>>
>> With this patchset, the readtwice performance increased about 80%
>> in concurrent containers.
>>
>> Thanks to Hugh Dickins and Konstantin Khlebnikov, who both brought up
>> this idea 8 years ago, and to others who gave comments as well: Daniel
>> Jordan, Mel Gorman, Shakeel Butt, Matthew Wilcox, etc.
>>
>> Thanks for testing support from Intel 0day and Rong Chen, Fengguang Wu,
>> and Yun Wang. Hugh Dickins also shared his kbuild-swap case. Thanks!
>>
>> ...
>>
>>  24 files changed, 500 insertions(+), 357 deletions(-)
> 
> It's a large patchset and afaict the whole point is performance gain. 
> 80% in one specialized test sounds nice, but is there a plan for more
> extensive quantification?

Once I got a 5% aim7 performance gain on a 16-core machine, and about a 20+%
readtwice performance gain; the gain grows a lot as the core count increases.

Do you have any suggestions for more extensive quantification?

> 
> There isn't much sign of completed review activity here, so I'll go
> into hiding for a while.
> 

Yes, it's relatively big; much of the change also comes from addressing
review comments. :)
Anyway, thanks for looking into it!

Thanks
Alex