Message ID: 20210421070059.69361-1-songmuchun@bytedance.com
Series: Use obj_cgroup APIs to charge the LRU pages
Ping...

Hi Johannes and Roman,

Any suggestions on this patch set? Thanks.

On Wed, Apr 21, 2021 at 3:01 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> This is v3, based on top of the series[1] (memcontrol code cleanup and
> simplification). Roman is working on the generalization of the obj_cgroup
> API, but before that, I hope someone can review these patches for
> correctness.
>
> Since the following patchsets were applied, all kernel memory is charged
> with the new obj_cgroup APIs:
>
>   [v17,00/19] The new cgroup slab memory controller[2]
>   [v5,0/7] Use obj_cgroup APIs to charge kmem pages[3]
>
> But user memory allocations (LRU pages) can pin memcgs for a long time -
> this exists at a larger scale and is causing recurring problems in the
> real world: page cache doesn't get reclaimed for a long time, or is used
> by the second, third, fourth, ... instance of the same job that was
> restarted into a new cgroup every time. Unreclaimable dying cgroups pile
> up, waste memory, and make page reclaim very inefficient.
>
> We can convert LRU pages and most other raw memcg pins to the objcg
> direction to fix this problem; then the LRU pages will no longer pin
> the memcgs.
>
> This patchset makes the LRU pages drop their reference to the memory
> cgroup by using the obj_cgroup APIs. With it applied, the number of
> dying cgroups no longer grows when we run the following test script:
>
> ```bash
> #!/bin/bash
>
> cat /proc/cgroups | grep memory
>
> cd /sys/fs/cgroup/memory
>
> for i in {1..500}
> do
>     mkdir test
>     echo $$ > test/cgroup.procs
>     sleep 60 &
>     echo $$ > cgroup.procs
>     echo `cat test/cgroup.procs` > cgroup.procs
>     rmdir test
> done
>
> cat /proc/cgroups | grep memory
> ```
>
> Thanks.
>
> [1] https://lore.kernel.org/linux-mm/20210417043538.9793-1-songmuchun@bytedance.com/
> [2] https://lore.kernel.org/linux-mm/20200623015846.1141975-1-guro@fb.com/
> [3] https://lore.kernel.org/linux-mm/20210319163821.20704-1-songmuchun@bytedance.com/
>
> Changelogs in RFC v3:
> 1. Drop the code cleanup and simplification patches; gather those patches
>    into a separate series[1].
> 2. Rework patch #1 as suggested by Johannes.
>
> Changelogs in RFC v2:
> 1. Collect Acked-by tags from Johannes. Thanks.
> 2. Rework lruvec_holds_page_lru_lock() as suggested by Johannes. Thanks.
> 3. Fix move_pages_to_lru().
>
> Muchun Song (12):
>   mm: memcontrol: move the objcg infrastructure out of CONFIG_MEMCG_KMEM
>   mm: memcontrol: introduce compact_lock_page_lruvec_irqsave
>   mm: memcontrol: make lruvec lock safe when the LRU pages reparented
>   mm: vmscan: rework move_pages_to_lru()
>   mm: thp: introduce lock/unlock_split_queue{_irqsave}()
>   mm: thp: make deferred split queue lock safe when the LRU pages reparented
>   mm: memcontrol: make all the callers of page_memcg() safe
>   mm: memcontrol: introduce memcg_reparent_ops
>   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
>   mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
>   mm: lru: add VM_BUG_ON_PAGE to lru maintenance function
>   mm: lru: use lruvec lock to serialize memcg changes
>
>  Documentation/admin-guide/cgroup-v1/memory.rst |   2 +-
>  fs/buffer.c                                    |  13 +-
>  fs/fs-writeback.c                              |  23 +-
>  fs/iomap/buffered-io.c                         |   4 +-
>  include/linux/memcontrol.h                     | 182 ++++----
>  include/linux/mm_inline.h                      |   6 +
>  mm/compaction.c                                |  36 +-
>  mm/filemap.c                                   |   2 +-
>  mm/huge_memory.c                               | 171 ++++++--
>  mm/memcontrol.c                                | 562 ++++++++++++++++++-------
>  mm/migrate.c                                   |   4 +
>  mm/page-writeback.c                            |  24 +-
>  mm/page_io.c                                   |   5 +-
>  mm/rmap.c                                      |  14 +-
>  mm/swap.c                                      |  46 +-
>  mm/vmscan.c                                    |  56 ++-
>  16 files changed, 795 insertions(+), 355 deletions(-)
>
> --
> 2.11.0
Hi Muchun!

It looks like the writeback problem will be solved in a different way, which
will not require generalizing the obj_cgroup API to the cgroup level. It's
not fully confirmed yet, though. We still might want to do this
generalization long-term, but as of now I have no objections to you
continuing the work on your patchset. I'm on PTO this week, but I will take
a deeper look at your patches early next week. Sorry for the delay.

Thanks!

Sent from my iPhone

> On May 18, 2021, at 06:50, Muchun Song <songmuchun@bytedance.com> wrote:
>
> Ping...
>
> Hi Johannes and Roman,
>
> Any suggestions on this patch set?
>
> Thanks.
>
>> On Wed, Apr 21, 2021 at 3:01 PM Muchun Song <songmuchun@bytedance.com> wrote:
>>
>> [...]
On Tue, May 18, 2021 at 10:17 PM Roman Gushchin <guro@fb.com> wrote:
>
> Hi Muchun!
>
> It looks like the writeback problem will be solved in a different way, which
> will not require generalizing the obj_cgroup API to the cgroup level. It's
> not fully confirmed yet, though. We still might want to do this
> generalization long-term, but as of now I have no objections to you
> continuing the work on your patchset. I'm on PTO this week, but I will take
> a deeper look at your patches early next week. Sorry for the delay.

Waiting on your review. Thanks, Roman.

> [...]
On Thu, May 20, 2021 at 11:20:47AM +0800, Muchun Song wrote:
> On Tue, May 18, 2021 at 10:17 PM Roman Gushchin <guro@fb.com> wrote:
> >
> > [...]
>
> Waiting on your review. Thanks, Roman.

It looks like the mm tree went ahead and I can't cleanly apply the whole
patchset. Would you mind rebasing it and resending?

Thank you!
On Wed, May 26, 2021 at 1:35 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Thu, May 20, 2021 at 11:20:47AM +0800, Muchun Song wrote:
> > [...]
> >
> > Waiting on your review. Thanks, Roman.
>
> It looks like the mm tree went ahead and I can't cleanly apply the whole
> patchset. Would you mind rebasing it and resending?

Got it. Will do that. Thanks.

> Thank you!