Message ID: 20220621125658.64935-1-songmuchun@bytedance.com (mailing list archive)
Series: Use obj_cgroup APIs to charge the LRU pages
On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> into mm-unstable, which will help to determine whether there is a problem or
> degradation. I am also doing some benchmark tests in parallel.
>
> Since the following patchsets were applied, all kernel memory is charged
> with the new obj_cgroup APIs:
>
> commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
>
> But user memory allocations (LRU pages) still pin memcgs for a long time -
> this exists at a larger scale and is causing recurring problems in the real
> world: page cache doesn't get reclaimed for a long time, or is used by the
> second, third, fourth, ... instance of the same job that was restarted into
> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> and make page reclaim very inefficient.
>
> We can convert LRU pages and most other raw memcg pins to the objcg direction
> to fix this problem, and then the LRU pages will not pin the memcgs.
>
> This patchset makes the LRU pages drop their reference to the memory
> cgroup by using the obj_cgroup APIs. As a result, we can see that the number
> of dying cgroups no longer increases if we run the following test script.

This is amazing work!

Sorry if I came late, I didn't follow the threads of previous versions
so this might be redundant, I just have a couple of questions.

a) If LRU pages keep getting reparented until they reach root_mem_cgroup
(assuming they can), aren't these pages effectively unaccounted at
this point or leaked? Is there protection against this?

b) Since moving charged pages between memcgs is now becoming easier by
using the obj_cgroup APIs, I wonder if this opens the door for future
work to transfer charges to memcgs that are actually using reparented
resources.
For example, let's say cgroup A reads a few pages into page cache, and
then they are no longer used by cgroup A. cgroup B, however, is using
the same pages that are currently charged to cgroup A, so it keeps
taxing cgroup A for its use. When cgroup A dies, and these pages are
reparented to A's parent, can we possibly mark these reparented pages
(maybe in the page tables somewhere) so that next time they get accessed
we recharge them to B instead (possibly asynchronously)?

I don't have much experience with page tables, but I am pretty sure they
are loaded, so maybe there is no room in the PTEs for something like
this. Still, I have always wondered what we can do for this case where a
cgroup is consistently using memory charged to another cgroup. Maybe the
point where this memory is reparented is a good time to decide to
recharge appropriately. It would also fix the reparenting leak to root
problem (if it even exists).

Thanks again for this work and please excuse my ignorance if any part
of what I said doesn't make sense :)

> ```bash
> #!/bin/bash
>
> dd if=/dev/zero of=temp bs=4096 count=1
> cat /proc/cgroups | grep memory
>
> for i in {0..2000}
> do
>   mkdir /sys/fs/cgroup/memory/test$i
>   echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
>   cat temp >> log
>   echo $$ > /sys/fs/cgroup/memory/cgroup.procs
>   rmdir /sys/fs/cgroup/memory/test$i
> done
>
> cat /proc/cgroups | grep memory
>
> rm -f temp log
> ```
>
> v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
>
> v6:
> - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> - Rebase to mm-unstable.
>
> v5:
> - Lots of improvements from Johannes, Roman and Waiman.
> - Fix lockdep warning reported by kernel test robot.
> - Add two new patches to do code cleanup.
> - Collect Acked-by and Reviewed-by from Johannes and Roman.
> - I didn't replace local_irq_disable/enable() with local_lock/unlock_irq() since
>   local_lock/unlock_irq() takes a parameter; it needs more thought to transform
>   it to local_lock. It could be an improvement in the future.
>
> v4:
> - Resend and rebase on v5.18.
>
> v3:
> - Remove the Acked-by tags from Roman since this version is based on
>   the folio work.
>
> v2:
> - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
>   dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, thanks).
> - Rebase to Linux 5.15-rc1.
> - Add a new patch to clean up mem_cgroup_kmem_disabled().
>
> v1:
> - Drop RFC tag.
> - Rebase to linux next-20210811.
>
> RFC v4:
> - Collect Acked-by from Roman.
> - Rebase to linux next-20210525.
> - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> - Convert reparent_ops_head to an array in patch 8.
>
> Thanks for Roman's review and suggestions.
>
> RFC v3:
> - Drop the code cleanup and simplification patches. Gather those patches
>   into a separate series[1].
> - Rework patch #1 suggested by Johannes.
>
> RFC v2:
> - Collect Acked-by tags from Johannes. Thanks.
> - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> - Fix move_pages_to_lru().
>
> Muchun Song (11):
>   mm: memcontrol: remove dead code and comments
>   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
>     lruvec_unlock{_irq, _irqrestore}
>   mm: memcontrol: prepare objcg API for non-kmem usage
>   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
>   mm: vmscan: rework move_pages_to_lru()
>   mm: thp: make split queue lock safe when LRU pages are reparented
>   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
>   mm: memcontrol: introduce memcg_reparent_ops
>   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
>   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
>   mm: lru: use lruvec lock to serialize memcg changes
>
>  fs/buffer.c                      |   4 +-
>  fs/fs-writeback.c                |  23 +-
>  include/linux/memcontrol.h       | 218 +++++++++------
>  include/linux/mm_inline.h        |   6 +
>  include/trace/events/writeback.h |   5 +
>  mm/compaction.c                  |  39 ++-
>  mm/huge_memory.c                 | 153 ++++++++--
>  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
>  mm/migrate.c                     |   4 +
>  mm/mlock.c                       |   2 +-
>  mm/page_io.c                     |   5 +-
>  mm/swap.c                        |  49 ++--
>  mm/vmscan.c                      |  66 ++---
>  13 files changed, 776 insertions(+), 382 deletions(-)
>
> base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> --
> 2.11.0
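The test script above watches the num_cgroups column of /proc/cgroups for the memory controller before and after creating and removing 2000 cgroups; with dying cgroups pinned by LRU pages, the count stays inflated. A minimal sketch of extracting that column from a /proc/cgroups-style line (`memory_num_cgroups` and the sample values are illustrative, not from the thread):

```c
#include <stdio.h>
#include <string.h>

/*
 * /proc/cgroups lines have the form:
 *   subsys_name  hierarchy  num_cgroups  enabled
 * Return num_cgroups for a "memory" line, or -1 if the line does not
 * parse or belongs to a different controller.
 */
long memory_num_cgroups(const char *line)
{
	char name[64];
	int hierarchy, enabled;
	long num;

	if (sscanf(line, "%63s %d %ld %d", name, &hierarchy, &num, &enabled) != 4)
		return -1;
	if (strcmp(name, "memory") != 0)
		return -1;
	return num;
}
```

Comparing the value returned before and after the loop in the script is what reveals the pile-up of dying memory cgroups.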
On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > [...]
>
> This is amazing work!
>
> Sorry if I came late, I didn't follow the threads of previous versions
> so this might be redundant, I just have a couple of questions.
>
> a) If LRU pages keep getting reparented until they reach root_mem_cgroup
> (assuming they can), aren't these pages effectively unaccounted at
> this point or leaked? Is there protection against this?
>

In this case, those pages are accounted at the root memcg level.
Unfortunately, there is no mechanism now to transfer a page's memcg
charge from one cgroup to another.

> b) Since moving charged pages between memcgs is now becoming easier by
> using the obj_cgroup APIs, I wonder if this opens the door for future
> work to transfer charges to memcgs that are actually using reparented
> resources. [...]
>

From my point of view, this is going to be an improvement to the memcg
subsystem in the future. IIUC, most reparented pages are page cache
pages that are not mapped into userspace, so page tables are not a
suitable place to record this information. However, we already have this
information in struct obj_cgroup and struct mem_cgroup: if a page's
obj_cgroup is not equal to the page's obj_cgroup->memcg->objcg, the page
has been reparented. I am wondering whether the place where a page is
mapped (probably the page fault path) or where the page (cache) is
written (usually the vfs write path) is suitable for transferring the
page's memcg from one cgroup to another. But this needs more thought,
e.g. how do we decide whether a reparented page needs to be transferred?
If we need more information to make this decision, where do we store it?
These are my primary thoughts on this question.

Thanks.

> Thanks again for this work and please excuse my ignorance if any part
> of what I said doesn't make sense :)
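Muchun's test for a reparented page (its objcg is no longer its memcg's own objcg) can be sketched as a self-contained userspace model. The struct names mirror the kernel's, but this is a toy illustration of the pointer relationship only, not kernel code (the real reparenting also splices the child's objcg onto the parent's list, takes references, etc.):

```c
#include <stddef.h>

/* Toy model of the objcg indirection described above; not kernel code. */
struct mem_cgroup;

struct obj_cgroup {
	struct mem_cgroup *memcg;	/* memcg currently holding the charge */
};

struct mem_cgroup {
	struct mem_cgroup *parent;
	struct obj_cgroup *objcg;	/* the memcg's own objcg */
};

struct page_model {
	struct obj_cgroup *objcg;	/* set once, at charge time */
};

/*
 * A page has been reparented iff the objcg it was charged to is no
 * longer its memcg's own objcg: reparenting redirects objcg->memcg to
 * the parent, while the parent keeps its own distinct objcg.
 */
int page_reparented(const struct page_model *page)
{
	return page->objcg != page->objcg->memcg->objcg;
}

/* Reparent a dying memcg: point its objcg at its parent. */
void reparent(struct mem_cgroup *memcg)
{
	memcg->objcg->memcg = memcg->parent;
}
```

The key property is that the page itself is never touched: only the shared objcg's memcg pointer moves, which is what lets reparenting run without walking every LRU page.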
On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > [...]
> >
> > a) If LRU pages keep getting reparented until they reach root_mem_cgroup
> > (assuming they can), aren't these pages effectively unaccounted at
> > this point or leaked? Is there protection against this?
> >
>
> In this case, those pages are accounted at the root memcg level.
> Unfortunately, there is no mechanism now to transfer a page's memcg
> from one to another.
>
> > b) [...] I have always wondered about what we can do for this case
> > where a cgroup is consistently using memory charged to another cgroup.
> > Maybe when this memory is reparented is a good point in time to decide
> > to recharge appropriately.
> >
>
> From my point of view, this is going to be an improvement to the memcg
> subsystem in the future. IIUC, most reparented pages are page cache
> pages that are not mapped into userspace, so page tables are not a
> suitable place to record this information. However, we already have
> this information in struct obj_cgroup and struct mem_cgroup. If a
> page's obj_cgroup is not equal to the page's obj_cgroup->memcg->objcg,
> it means this page has been reparented.
> I am thinking whether the place where a page is mapped (probably the
> page fault path) or where the page (cache) is written (usually the vfs
> write path) is suitable to transfer the page's memcg from one to
> another. But need more

Very good point about unmapped pages, I missed this. Page tables will
do us no good here. Such a change would indeed require careful thought
because (like you mentioned) there are multiple points in time where
it might be suitable to consider recharging the page (e.g. when the
page is mapped). This could be an incremental change though. Right now
we have no recharging at all, so maybe we can gradually add recharging
to suitable paths.

> thinking, e.g. How to decide if a reparented page needs to be transferred?

Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
current is not a descendant of page's obj_cgroup->memcg) is a good
place to start?

My rationale is that if the page is charged to root_mem_cgroup through
reparenting and a process in a memcg is using it, then this is probably
an accounting leak. If a page is charged to a memcg A through
reparenting and is used by a memcg B in a different subtree, then
probably memcg B is getting away with using the page for free while A
is being taxed. If B is a descendant of A, it is still getting away
with using the page unaccounted, but at least it makes no difference
for A.

One could argue that we might as well recharge a reparented page
anyway if the process is cheap (or done asynchronously), and the paths
where we do recharging are not very common.

All of this might be moot, I am just thinking out loud. In any way
this would be future work and not part of this work.

> If we need more information to make this decision, where do we store
> it? These are my primary thoughts on this question.
>
> Thanks.
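Yosry's proposed starting condition above (recharge when the page is charged to root, or when the using memcg sits outside the charged memcg's subtree) can be sketched as a toy userspace model. `memcg_model` and `should_recharge` are hypothetical names for illustration; the kernel's real helper for the subtree test is mem_cgroup_is_descendant():

```c
#include <stddef.h>

/* Toy model, not kernel code: just enough to express the heuristic. */
struct memcg_model {
	struct memcg_model *parent;	/* NULL for root */
};

/* Walk up the tree; a memcg counts as a descendant of itself. */
int memcg_is_descendant(const struct memcg_model *memcg,
			const struct memcg_model *ancestor)
{
	for (; memcg; memcg = memcg->parent)
		if (memcg == ancestor)
			return 1;
	return 0;
}

/*
 * Recharge a reparented page when it is charged to root (the charge
 * effectively leaked there), or when the user's memcg is outside the
 * charged memcg's subtree (the charged memcg is being taxed for
 * someone else's usage).
 */
int should_recharge(const struct memcg_model *charged,
		    const struct memcg_model *user,
		    const struct memcg_model *root)
{
	return charged == root || !memcg_is_descendant(user, charged);
}
```

For a tree root -> {A -> C, B -> D}: a page reparented to A and used by D (different subtree) would be recharged, while the same page used by C (inside A's subtree) would not.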
On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote:
> On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > [...]
> >
> > From my point of view, this is going to be an improvement to the memcg
> > subsystem in the future. IIUC, most reparented pages are page cache
> > pages that are not mapped into userspace, so page tables are not a
> > suitable place to record this information.
> > However, we already have this information
> > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is
> > not equal to the page's obj_cgroup->memcg->objcg, it means this page
> > has been reparented.
>
> Very good point about unmapped pages, I missed this. Page tables will
> do us no good here. Such a change would indeed require careful thought
> because (like you mentioned) there are multiple points in time where
> it might be suitable to consider recharging the page (e.g. when the
> page is mapped). This could be an incremental change though. Right now
> we have no recharging at all, so maybe we can gradually add recharging
> to suitable paths.
>

Agree.

> > thinking, e.g. How to decide if a reparented page needs to be transferred?
>
> Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of

This is a good start.

> current is not a descendant of page's obj_cgroup->memcg) is a good

I am not sure about this one, since a page could be shared between
different memcgs.

        root
       /    \
      A      B
     / \      \
    C   E      D

e.g. a page (originally charged to memcg E, with E dying) is reparented
to memcg A, and it is now shared between C and D. Then we need to
consider whether it should be recharged. Yep, we need more thinking
about recharging.

> place to start?
>
> My rationale is that if the page is charged to root_mem_cgroup through

I think the following issue exists not only for root_mem_cgroup but
also for non-root memcgs.

> reparenting and a process in a memcg is using it, then this is probably
> an accounting leak. If a page is charged to a memcg A through
> reparenting and is used by a memcg B in a different subtree, then
> probably memcg B is getting away with using the page for free while A
> is being taxed.
If B is a descendant of A, it is still getting away > with using the page unaccounted, but at least it makes no difference > for A. I agree this case needs to be improved. > > One could argue that we might as well recharge a reparented page > anyway if the process is cheap (or done asynchronously), and the paths > where we do recharging are not very common. > > All of this might be moot, I am just thinking out loud. In any way > this would be future work and not part of this work. > Agree. Thanks. > > > If we need more information to make this decision, where to store those > > information? This is my primary thoughts on this question. > > > > > Thanks. > > > > > Thanks again for this work and please excuse my ignorance if any part > > > of what I said doesn't make sense :) > > > > > > > > > > > ```bash > > > > #!/bin/bash > > > > > > > > dd if=/dev/zero of=temp bs=4096 count=1 > > > > cat /proc/cgroups | grep memory > > > > > > > > for i in {0..2000} > > > > do > > > > mkdir /sys/fs/cgroup/memory/test$i > > > > echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs > > > > cat temp >> log > > > > echo $$ > /sys/fs/cgroup/memory/cgroup.procs > > > > rmdir /sys/fs/cgroup/memory/test$i > > > > done > > > > > > > > cat /proc/cgroups | grep memory > > > > > > > > rm -f temp log > > > > ``` > > > > > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/ > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/ > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/ > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/ > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/ > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/ > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/ > > > > RFC v2: 
https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/ > > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/ > > > > > > > > v6: > > > > - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks. > > > > - Rebase to mm-unstable. > > > > > > > > v5: > > > > - Lots of improvements from Johannes, Roman and Waiman. > > > > - Fix lockdep warning reported by kernel test robot. > > > > - Add two new patches to do code cleanup. > > > > - Collect Acked-by and Reviewed-by from Johannes and Roman. > > > > - I didn't replace local_irq_disable/enable() with local_lock/unlock_irq() since > > > > local_lock/unlock_irq() takes a parameter; it needs more thinking to transform > > > > it to local_lock. It could be an improvement in the future. > > > > > > > > v4: > > > > - Resend and rebased on v5.18. > > > > > > > > v3: > > > > - Removed the Acked-by tags from Roman since this version is based on > > > > the folio relevant. > > > > > > > > v2: > > > > - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the > > > > dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks). > > > > - Rebase to linux 5.15-rc1. > > > > - Add a new patch to clean up mem_cgroup_kmem_disabled(). > > > > > > > > v1: > > > > - Drop RFC tag. > > > > - Rebase to linux next-20210811. > > > > > > > > RFC v4: > > > > - Collect Acked-by from Roman. > > > > - Rebase to linux next-20210525. > > > > - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem(). > > > > - Change the patch 1 title to "prepare objcg API for non-kmem usage". > > > > - Convert reparent_ops_head to an array in patch 8. > > > > > > > > Thanks for Roman's review and suggestions. > > > > > > > > RFC v3: > > > > - Drop the code cleanup and simplification patches. Gather those patches > > > > into a separate series[1]. > > > > - Rework patch #1 suggested by Johannes. > > > > > > > > RFC v2: > > > > - Collect Acked-by tags by Johannes. Thanks.
> > > > - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks. > > > > - Fix move_pages_to_lru(). > > > > > > > > Muchun Song (11): > > > > mm: memcontrol: remove dead code and comments > > > > mm: rename unlock_page_lruvec{_irq, _irqrestore} to > > > > lruvec_unlock{_irq, _irqrestore} > > > > mm: memcontrol: prepare objcg API for non-kmem usage > > > > mm: memcontrol: make lruvec lock safe when LRU pages are reparented > > > > mm: vmscan: rework move_pages_to_lru() > > > > mm: thp: make split queue lock safe when LRU pages are reparented > > > > mm: memcontrol: make all the callers of {folio,page}_memcg() safe > > > > mm: memcontrol: introduce memcg_reparent_ops > > > > mm: memcontrol: use obj_cgroup APIs to charge the LRU pages > > > > mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function > > > > mm: lru: use lruvec lock to serialize memcg changes > > > > > > > > fs/buffer.c | 4 +- > > > > fs/fs-writeback.c | 23 +- > > > > include/linux/memcontrol.h | 218 +++++++++------ > > > > include/linux/mm_inline.h | 6 + > > > > include/trace/events/writeback.h | 5 + > > > > mm/compaction.c | 39 ++- > > > > mm/huge_memory.c | 153 ++++++++-- > > > > mm/memcontrol.c | 584 +++++++++++++++++++++++++++------------ > > > > mm/migrate.c | 4 + > > > > mm/mlock.c | 2 +- > > > > mm/page_io.c | 5 +- > > > > mm/swap.c | 49 ++-- > > > > mm/vmscan.c | 66 ++--- > > > > 13 files changed, 776 insertions(+), 382 deletions(-) > > > > > > > > > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f > > > > -- > > > > 2.11.0 > > > > > > > > > > > >
On 27.6.2022 11.05, Yosry Ahmed wrote: > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: >> >> On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: >>> On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: >>>> >>>> This version is rebased on mm-unstable. Hopefully, Andrew can get this series >>>> into mm-unstable which will help to determine whether there is a problem or >>>> degradation. I am also doing some benchmark tests in parallel. >>>> >>>> Since the following patchsets applied. All the kernel memory are charged >>>> with the new APIs of obj_cgroup. >>>> >>>> commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") >>>> commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") >>>> >>>> But user memory allocations (LRU pages) pinning memcgs for a long time - >>>> it exists at a larger scale and is causing recurring problems in the real >>>> world: page cache doesn't get reclaimed for a long time, or is used by the >>>> second, third, fourth, ... instance of the same job that was restarted into >>>> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, >>>> and make page reclaim very inefficient. >>>> >>>> We can convert LRU pages and most other raw memcg pins to the objcg direction >>>> to fix this problem, and then the LRU pages will not pin the memcgs. >>>> >>>> This patchset aims to make the LRU pages to drop the reference to memory >>>> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number >>>> of the dying cgroups will not increase if we run the following test script. >>> >>> This is amazing work! >>> >>> Sorry if I came late, I didn't follow the threads of previous versions >>> so this might be redundant, I just have a couple of questions. 
>>> >>> a) If LRU pages keep getting parented until they reach root_mem_cgroup >>> (assuming they can), aren't these pages effectively unaccounted at >>> this point or leaked? Is there protection against this? >>> >> >> In this case, those pages are accounted in root memcg level. Unfortunately, >> there is no mechanism now to transfer a page's memcg from one to another. >> >>> b) Since moving charged pages between memcgs is now becoming easier by >>> using the APIs of obj_cgroup, I wonder if this opens the door for >>> future work to transfer charges to memcgs that are actually using >>> reparented resources. For example, let's say cgroup A reads a few >>> pages into page cache, and then they are no longer used by cgroup A. >>> cgroup B, however, is using the same pages that are currently charged >>> to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A >>> dies, and these pages are reparented to A's parent, can we possibly >>> mark these reparented pages (maybe in the page tables somewhere) so >>> that next time they get accessed we recharge them to B instead >>> (possibly asynchronously)? >>> I don't have much experience about page tables but I am pretty sure >>> they are loaded so maybe there is no room in PTEs for something like >>> this, but I have always wondered about what we can do for this case >>> where a cgroup is consistently using memory charged to another cgroup. >>> Maybe when this memory is reparented is a good point in time to decide >>> to recharge appropriately. It would also fix the reparenty leak to >>> root problem (if it even exists). >>> >> >> From my point of view, this is going to be an improvement to the memcg >> subsystem in the future. IIUC, most reparented pages are page cache >> pages without be mapped to users. So page tables are not a suitable >> place to record this information. However, we already have this information >> in struct obj_cgroup and struct mem_cgroup. 
If a page's obj_cgroup is not >> equal to the page's obj_cgroup->memcg->objcg, it means this page has >> been reparented. I am thinking if a place where a page is mapped (probably >> the page fault path) or a page (cache) is written (usually the vfs write path) >> is suitable to transfer a page's memcg from one to another. But need more > > Very good point about unmapped pages, I missed this. Page tables will > do us no good here. Such a change would indeed require careful thought > because (like you mentioned) there are multiple points in time where > it might be suitable to consider recharging the page (e.g. when the > page is mapped). This could be an incremental change though. Right now > we have no recharging at all, so maybe we can gradually add recharging > to suitable paths. > >> thinking, e.g. How to decide if a reparented page needs to be transferred? > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of > current is not a descendant of page's obj_cgroup->memcg) is a good > place to start? > > My rationale is that if the page is charged to root_mem_cgroup through > reparenting and a process in a memcg is using it then this is probably > an accounting leak. If a page is charged to a memcg A through > reparenting and is used by a memcg B in a different subtree, then > probably memcg B is getting away with using the page for free while A > is being taxed. If B is a descendant of A, it is still getting away > with using the page unaccounted, but at least it makes no difference > for A. > > One could argue that we might as well recharge a reparented page > anyway if the process is cheap (or done asynchronously), and the paths > where we do recharging are not very common. > > All of this might be moot, I am just thinking out loud. In any way > this would be future work and not part of this work. > I think you have to uncharge the page from the parent it was reparented to, to keep the balances right (because the parent is hierarchically charged through page_counter).
And maybe recharge after that if appropriate. > [...]
On Mon, Jun 27, 2022 at 3:13 AM Muchun Song <songmuchun@bytedance.com> wrote: > > On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote: > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > > > into mm-unstable which will help to determine whether there is a problem or > > > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > > > > > Since the following patchsets applied. All the kernel memory are charged > > > > > with the new APIs of obj_cgroup. > > > > > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time - > > > > > it exists at a larger scale and is causing recurring problems in the real > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > > > second, third, fourth, ... instance of the same job that was restarted into > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > > > and make page reclaim very inefficient. > > > > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > > > to fix this problem, and then the LRU pages will not pin the memcgs. > > > > > > > > > > This patchset aims to make the LRU pages to drop the reference to memory > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > > > > > of the dying cgroups will not increase if we run the following test script. > > > > > > > > This is amazing work! 
> > > > > > > > Sorry if I came late, I didn't follow the threads of previous versions > > > > so this might be redundant, I just have a couple of questions. > > > > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup > > > > (assuming they can), aren't these pages effectively unaccounted at > > > > this point or leaked? Is there protection against this? > > > > > > > > > > In this case, those pages are accounted in root memcg level. Unfortunately, > > > there is no mechanism now to transfer a page's memcg from one to another. > > > > > > > b) Since moving charged pages between memcgs is now becoming easier by > > > > using the APIs of obj_cgroup, I wonder if this opens the door for > > > > future work to transfer charges to memcgs that are actually using > > > > reparented resources. For example, let's say cgroup A reads a few > > > > pages into page cache, and then they are no longer used by cgroup A. > > > > cgroup B, however, is using the same pages that are currently charged > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A > > > > dies, and these pages are reparented to A's parent, can we possibly > > > > mark these reparented pages (maybe in the page tables somewhere) so > > > > that next time they get accessed we recharge them to B instead > > > > (possibly asynchronously)? > > > > I don't have much experience about page tables but I am pretty sure > > > > they are loaded so maybe there is no room in PTEs for something like > > > > this, but I have always wondered about what we can do for this case > > > > where a cgroup is consistently using memory charged to another cgroup. > > > > Maybe when this memory is reparented is a good point in time to decide > > > > to recharge appropriately. It would also fix the reparenty leak to > > > > root problem (if it even exists). > > > > > > > > > > From my point of view, this is going to be an improvement to the memcg > > > subsystem in the future. 
IIUC, most reparented pages are page cache > > > pages without be mapped to users. So page tables are not a suitable > > > place to record this information. However, we already have this information > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > > > equal to the page's obj_cgroup->memcg->objcg, it means this page have > > > been reparented. I am thinking if a place where a page is mapped (probably > > > page fault patch) or page (cache) is written (usually vfs write path) > > > is suitable to transfer page's memcg from one to another. But need more > > > > Very good point about unmapped pages, I missed this. Page tables will > > do us no good here. Such a change would indeed require careful thought > > because (like you mentioned) there are multiple points in time where > > it might be suitable to consider recharging the page (e.g. when the > > page is mapped). This could be an incremental change though. Right now > > we have no recharging at all, so maybe we can gradually add recharging > > to suitable paths. > > > > Agree. > > > > thinking, e.g. How to decide if a reparented page needs to be transferred? > > > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of > > This is a good start. > > > current is not a descendant of page's obj_cgroup->memcg) is a good > > I am not sure this one since a page could be shared between different > memcg. > > root > / \ > A B > / \ \ > C E D > > e.g. a page (originally, it belongs to memcg E and E is dying) is reparented > to memcg A, and it is shared between C and D now. Then we need to consider > whether it should be recharged. Yep, we need more thinging about recharging. Assuming that we are recharging in the mapping path, and D is mapping a page that was used by E and later reparented to A, I think we should recharge it to D and uncharge from A in all cases: If C is not using the page (not shared), then the page should be accounted to its real user, D, instead of taxing A. 
If C is also using the page (shared), then it is not wrong to have the page accounted to D since it's also a user of the page. Either way, only one of the memcgs using the page will be charged. So I think either way recharging the page to D instead of A would be correct. IMO, whether we want to skip the recharge to D for some cases or not would depend on performance and not correctness, since it should always be correct to recharge the page to D in this scenario. > > > place to start? > > > > My rationale is that if the page is charged to root_mem_cgroup through > > I think the following issue not only exists in root_mem_cgroup but also > in non-root. What's special about root is that every single memcg is a descendant of root, and that accounting user pages to root is usually not something that we want. So if we rely on a heuristic like (memcg of current is not a descendant of page's obj_cgroup->memcg), we need to have a special case for root so that reparented pages to root are always recharged. > > > reparenting and a process in a memcg is using it then this is probably > > an accounting leak. If a page is charged to a memcg A through > > reparenting and is used by a memcg B in a different subtree, then > > probably memcg B is getting away with using the page for free while A > > is being taxed. If B is a descendant of A, it is still getting away > > with using the page unaccounted, but at least it makes no difference > > for A. > > I agree this case needs to be improved. > > > > One could argue that we might as well recharge a reparented page > > anyway if the process is cheap (or done asynchronously), and the paths > > where we do recharging are not very common. > > > > All of this might be moot, I am just thinking out loud. In any way > > this would be future work and not part of this work. > > > > Agree. > > Thanks. > > > > > If we need more information to make this decision, where to store those
This is my primary thoughts on this question. > > > > > > > > Thanks. > > > > > > > Thanks again for this work and please excuse my ignorance if any part > > > > of what I said doesn't make sense :) > > > > > > > > > > > > > > ```bash > > > > > #!/bin/bash > > > > > > > > > > dd if=/dev/zero of=temp bs=4096 count=1 > > > > > cat /proc/cgroups | grep memory > > > > > > > > > > for i in {0..2000} > > > > > do > > > > > mkdir /sys/fs/cgroup/memory/test$i > > > > > echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs > > > > > cat temp >> log > > > > > echo $$ > /sys/fs/cgroup/memory/cgroup.procs > > > > > rmdir /sys/fs/cgroup/memory/test$i > > > > > done > > > > > > > > > > cat /proc/cgroups | grep memory > > > > > > > > > > rm -f temp log > > > > > ``` > > > > > > > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/ > > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/ > > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/ > > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/ > > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/ > > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/ > > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/ > > > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/ > > > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/ > > > > > > > > > > v6: > > > > > - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks. > > > > > - Rebase to mm-unstable. > > > > > > > > > > v5: > > > > > - Lots of improvements from Johannes, Roman and Waiman. > > > > > - Fix lockdep warning reported by kernel test robot. > > > > > - Add two new patches to do code cleanup. 
> > > > > - Collect Acked-by and Reviewed-by from Johannes and Roman. > > > > > - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since > > > > > local_lock/unlock_irq() takes an parameter, it needs more thinking to transform > > > > > it to local_lock. It could be an improvement in the future. > > > > > > > > > > v4: > > > > > - Resend and rebased on v5.18. > > > > > > > > > > v3: > > > > > - Removed the Acked-by tags from Roman since this version is based on > > > > > the folio relevant. > > > > > > > > > > v2: > > > > > - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the > > > > > dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks). > > > > > - Rebase to linux 5.15-rc1. > > > > > - Add a new pacth to cleanup mem_cgroup_kmem_disabled(). > > > > > > > > > > v1: > > > > > - Drop RFC tag. > > > > > - Rebase to linux next-20210811. > > > > > > > > > > RFC v4: > > > > > - Collect Acked-by from Roman. > > > > > - Rebase to linux next-20210525. > > > > > - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem(). > > > > > - Change the patch 1 title to "prepare objcg API for non-kmem usage". > > > > > - Convert reparent_ops_head to an array in patch 8. > > > > > > > > > > Thanks for Roman's review and suggestions. > > > > > > > > > > RFC v3: > > > > > - Drop the code cleanup and simplification patches. Gather those patches > > > > > into a separate series[1]. > > > > > - Rework patch #1 suggested by Johannes. > > > > > > > > > > RFC v2: > > > > > - Collect Acked-by tags by Johannes. Thanks. > > > > > - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks. > > > > > - Fix move_pages_to_lru(). 
> > > > > > > > > > Muchun Song (11): > > > > > mm: memcontrol: remove dead code and comments > > > > > mm: rename unlock_page_lruvec{_irq, _irqrestore} to > > > > > lruvec_unlock{_irq, _irqrestore} > > > > > mm: memcontrol: prepare objcg API for non-kmem usage > > > > > mm: memcontrol: make lruvec lock safe when LRU pages are reparented > > > > > mm: vmscan: rework move_pages_to_lru() > > > > > mm: thp: make split queue lock safe when LRU pages are reparented > > > > > mm: memcontrol: make all the callers of {folio,page}_memcg() safe > > > > > mm: memcontrol: introduce memcg_reparent_ops > > > > > mm: memcontrol: use obj_cgroup APIs to charge the LRU pages > > > > > mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function > > > > > mm: lru: use lruvec lock to serialize memcg changes > > > > > > > > > > fs/buffer.c | 4 +- > > > > > fs/fs-writeback.c | 23 +- > > > > > include/linux/memcontrol.h | 218 +++++++++------ > > > > > include/linux/mm_inline.h | 6 + > > > > > include/trace/events/writeback.h | 5 + > > > > > mm/compaction.c | 39 ++- > > > > > mm/huge_memory.c | 153 ++++++++-- > > > > > mm/memcontrol.c | 584 +++++++++++++++++++++++++++------------ > > > > > mm/migrate.c | 4 + > > > > > mm/mlock.c | 2 +- > > > > > mm/page_io.c | 5 +- > > > > > mm/swap.c | 49 ++-- > > > > > mm/vmscan.c | 66 ++--- > > > > > 13 files changed, 776 insertions(+), 382 deletions(-) > > > > > > > > > > > > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f > > > > > -- > > > > > 2.11.0 > > > > > > > > > > > > > > > >
On Mon, Jun 27, 2022 at 3:43 AM Mika Penttilä <mpenttil@redhat.com> wrote: > > > > On 27.6.2022 11.05, Yosry Ahmed wrote: > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > >> > >> On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > >>> On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > >>>> > >>>> This version is rebased on mm-unstable. Hopefully, Andrew can get this series > >>>> into mm-unstable which will help to determine whether there is a problem or > >>>> degradation. I am also doing some benchmark tests in parallel. > >>>> > >>>> Since the following patchsets applied. All the kernel memory are charged > >>>> with the new APIs of obj_cgroup. > >>>> > >>>> commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > >>>> commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > >>>> > >>>> But user memory allocations (LRU pages) pinning memcgs for a long time - > >>>> it exists at a larger scale and is causing recurring problems in the real > >>>> world: page cache doesn't get reclaimed for a long time, or is used by the > >>>> second, third, fourth, ... instance of the same job that was restarted into > >>>> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > >>>> and make page reclaim very inefficient. > >>>> > >>>> We can convert LRU pages and most other raw memcg pins to the objcg direction > >>>> to fix this problem, and then the LRU pages will not pin the memcgs. > >>>> > >>>> This patchset aims to make the LRU pages to drop the reference to memory > >>>> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > >>>> of the dying cgroups will not increase if we run the following test script. > >>> > >>> This is amazing work! > >>> > >>> Sorry if I came late, I didn't follow the threads of previous versions > >>> so this might be redundant, I just have a couple of questions. 
> >>> > >>> a) If LRU pages keep getting parented until they reach root_mem_cgroup > >>> (assuming they can), aren't these pages effectively unaccounted at > >>> this point or leaked? Is there protection against this? > >>> > >> > >> In this case, those pages are accounted in root memcg level. Unfortunately, > >> there is no mechanism now to transfer a page's memcg from one to another. > >> > >>> b) Since moving charged pages between memcgs is now becoming easier by > >>> using the APIs of obj_cgroup, I wonder if this opens the door for > >>> future work to transfer charges to memcgs that are actually using > >>> reparented resources. For example, let's say cgroup A reads a few > >>> pages into page cache, and then they are no longer used by cgroup A. > >>> cgroup B, however, is using the same pages that are currently charged > >>> to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A > >>> dies, and these pages are reparented to A's parent, can we possibly > >>> mark these reparented pages (maybe in the page tables somewhere) so > >>> that next time they get accessed we recharge them to B instead > >>> (possibly asynchronously)? > >>> I don't have much experience about page tables but I am pretty sure > >>> they are loaded so maybe there is no room in PTEs for something like > >>> this, but I have always wondered about what we can do for this case > >>> where a cgroup is consistently using memory charged to another cgroup. > >>> Maybe when this memory is reparented is a good point in time to decide > >>> to recharge appropriately. It would also fix the reparenty leak to > >>> root problem (if it even exists). > >>> > >> > >> From my point of view, this is going to be an improvement to the memcg > >> subsystem in the future. IIUC, most reparented pages are page cache > >> pages without be mapped to users. So page tables are not a suitable > >> place to record this information. 
However, we already have this information > >> in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > >> equal to the page's obj_cgroup->memcg->objcg, it means this page have > >> been reparented. I am thinking if a place where a page is mapped (probably > >> page fault patch) or page (cache) is written (usually vfs write path) > >> is suitable to transfer page's memcg from one to another. But need more > > > > Very good point about unmapped pages, I missed this. Page tables will > > do us no good here. Such a change would indeed require careful thought > > because (like you mentioned) there are multiple points in time where > > it might be suitable to consider recharging the page (e.g. when the > > page is mapped). This could be an incremental change though. Right now > > we have no recharging at all, so maybe we can gradually add recharging > > to suitable paths. > > > >> thinking, e.g. How to decide if a reparented page needs to be transferred? > > > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of > > current is not a descendant of page's obj_cgroup->memcg) is a good > > place to start? > > > > My rationale is that if the page is charged to root_mem_cgroup through > > reparenting and a process in a memcg is using it then this is probably > > an accounting leak. If a page is charged to a memcg A through > > reparenting and is used by a memcg B in a different subtree, then > > probably memcg B is getting away with using the page for free while A > > is being taxed. If B is a descendant of A, it is still getting away > > with using the page unaccounted, but at least it makes no difference > > for A. > > > > One could argue that we might as well recharge a reparented page > > anyway if the process is cheap (or done asynchronously), and the paths > > where we do recharging are not very common. > > > > All of this might be moot, I am just thinking out loud. In any way > > this would be future work and not part of this work. 
> > > > > I think you have to uncharge at the reparented parent to keep balances > right (because parent is hierarchically charged thru page_counter). And > maybe recharge after that if appropriate. > Yeah when I say "recharge" I mean transferring the accounting from one memcg to another. I think every page should end up accounted to one memcg after all. Thanks for pointing that out. > > > > > > > >> If we need more information to make this decision, where to store those > >> information? This is my primary thoughts on this question. > > > >> > >> Thanks. > >> > >>> Thanks again for this work and please excuse my ignorance if any part > >>> of what I said doesn't make sense :) > >>> > >>>> > >>>> ```bash > >>>> #!/bin/bash > >>>> > >>>> dd if=/dev/zero of=temp bs=4096 count=1 > >>>> cat /proc/cgroups | grep memory > >>>> > >>>> for i in {0..2000} > >>>> do > >>>> mkdir /sys/fs/cgroup/memory/test$i > >>>> echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs > >>>> cat temp >> log > >>>> echo $$ > /sys/fs/cgroup/memory/cgroup.procs > >>>> rmdir /sys/fs/cgroup/memory/test$i > >>>> done > >>>> > >>>> cat /proc/cgroups | grep memory > >>>> > >>>> rm -f temp log > >>>> ``` > >>>> > >>>> v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/ > >>>> v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/ > >>>> v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/ > >>>> v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/ > >>>> v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/ > >>>> RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/ > >>>> RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/ > >>>> RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/ > >>>> RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/ > 
>>>> > >>>> v6: > >>>> - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks. > >>>> - Rebase to mm-unstable. > >>>> > >>>> v5: > >>>> - Lots of improvements from Johannes, Roman and Waiman. > >>>> - Fix lockdep warning reported by kernel test robot. > >>>> - Add two new patches to do code cleanup. > >>>> - Collect Acked-by and Reviewed-by from Johannes and Roman. > >>>> - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since > >>>> local_lock/unlock_irq() takes an parameter, it needs more thinking to transform > >>>> it to local_lock. It could be an improvement in the future. > >>>> > >>>> v4: > >>>> - Resend and rebased on v5.18. > >>>> > >>>> v3: > >>>> - Removed the Acked-by tags from Roman since this version is based on > >>>> the folio relevant. > >>>> > >>>> v2: > >>>> - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the > >>>> dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks). > >>>> - Rebase to linux 5.15-rc1. > >>>> - Add a new pacth to cleanup mem_cgroup_kmem_disabled(). > >>>> > >>>> v1: > >>>> - Drop RFC tag. > >>>> - Rebase to linux next-20210811. > >>>> > >>>> RFC v4: > >>>> - Collect Acked-by from Roman. > >>>> - Rebase to linux next-20210525. > >>>> - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem(). > >>>> - Change the patch 1 title to "prepare objcg API for non-kmem usage". > >>>> - Convert reparent_ops_head to an array in patch 8. > >>>> > >>>> Thanks for Roman's review and suggestions. > >>>> > >>>> RFC v3: > >>>> - Drop the code cleanup and simplification patches. Gather those patches > >>>> into a separate series[1]. > >>>> - Rework patch #1 suggested by Johannes. > >>>> > >>>> RFC v2: > >>>> - Collect Acked-by tags by Johannes. Thanks. > >>>> - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks. > >>>> - Fix move_pages_to_lru(). 
> >>>> > >>>> Muchun Song (11): > >>>> mm: memcontrol: remove dead code and comments > >>>> mm: rename unlock_page_lruvec{_irq, _irqrestore} to > >>>> lruvec_unlock{_irq, _irqrestore} > >>>> mm: memcontrol: prepare objcg API for non-kmem usage > >>>> mm: memcontrol: make lruvec lock safe when LRU pages are reparented > >>>> mm: vmscan: rework move_pages_to_lru() > >>>> mm: thp: make split queue lock safe when LRU pages are reparented > >>>> mm: memcontrol: make all the callers of {folio,page}_memcg() safe > >>>> mm: memcontrol: introduce memcg_reparent_ops > >>>> mm: memcontrol: use obj_cgroup APIs to charge the LRU pages > >>>> mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function > >>>> mm: lru: use lruvec lock to serialize memcg changes > >>>> > >>>> fs/buffer.c | 4 +- > >>>> fs/fs-writeback.c | 23 +- > >>>> include/linux/memcontrol.h | 218 +++++++++------ > >>>> include/linux/mm_inline.h | 6 + > >>>> include/trace/events/writeback.h | 5 + > >>>> mm/compaction.c | 39 ++- > >>>> mm/huge_memory.c | 153 ++++++++-- > >>>> mm/memcontrol.c | 584 +++++++++++++++++++++++++++------------ > >>>> mm/migrate.c | 4 + > >>>> mm/mlock.c | 2 +- > >>>> mm/page_io.c | 5 +- > >>>> mm/swap.c | 49 ++-- > >>>> mm/vmscan.c | 66 ++--- > >>>> 13 files changed, 776 insertions(+), 382 deletions(-) > >>>> > >>>> > >>>> base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f > >>>> -- > >>>> 2.11.0 > >>>> > >>>> > >>> > > >
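The objcg indirection discussed in this thread (a page pointing at an obj_cgroup instead of directly at a memcg, with reparenting redirecting the objcg to the parent) can be modeled in a few lines. This is a hedged userspace sketch, not kernel code: the structs are stripped down to the two pointers the check needs (the real `struct obj_cgroup` and `struct mem_cgroup` carry much more), and `folio_is_reparented()`/`reparent_objcg()` are hypothetical helper names coined here for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Reduced models of the kernel structures under discussion. */
struct mem_cgroup;

struct obj_cgroup {
	struct mem_cgroup *memcg;	/* memcg this objcg currently belongs to */
};

struct mem_cgroup {
	struct obj_cgroup *objcg;	/* the memcg's own (current) objcg */
	struct mem_cgroup *parent;
};

struct folio {
	struct obj_cgroup *objcg;	/* replaces the old direct memcg pointer */
};

/*
 * The check Muchun describes: a folio has been reparented iff its objcg
 * is no longer the objcg of the memcg it now points to.
 */
static int folio_is_reparented(const struct folio *folio)
{
	return folio->objcg != folio->objcg->memcg->objcg;
}

/* Reparenting: the dying child's objcg is redirected to the parent. */
static void reparent_objcg(struct obj_cgroup *objcg, struct mem_cgroup *parent)
{
	objcg->memcg = parent;
}
```

The point of the sketch is that no per-page state is needed to detect reparenting; the two-pointer comparison is enough.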
On Mon, Jun 27, 2022 at 06:13:48PM +0800, Muchun Song wrote: > On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote: > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > > > into mm-unstable which will help to determine whether there is a problem or > > > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > > > > > Since the following patchsets applied. All the kernel memory are charged > > > > > with the new APIs of obj_cgroup. > > > > > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time - > > > > > it exists at a larger scale and is causing recurring problems in the real > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > > > second, third, fourth, ... instance of the same job that was restarted into > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > > > and make page reclaim very inefficient. > > > > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > > > to fix this problem, and then the LRU pages will not pin the memcgs. > > > > > > > > > > This patchset aims to make the LRU pages to drop the reference to memory > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > > > > > of the dying cgroups will not increase if we run the following test script. > > > > > > > > This is amazing work! 
> > > > > > > > Sorry if I came late, I didn't follow the threads of previous versions > > > > so this might be redundant, I just have a couple of questions. > > > > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup > > > > (assuming they can), aren't these pages effectively unaccounted at > > > > this point or leaked? Is there protection against this? > > > > > > > > > > In this case, those pages are accounted in root memcg level. Unfortunately, > > > there is no mechanism now to transfer a page's memcg from one to another. > > > > > > > b) Since moving charged pages between memcgs is now becoming easier by > > > > using the APIs of obj_cgroup, I wonder if this opens the door for > > > > future work to transfer charges to memcgs that are actually using > > > > reparented resources. For example, let's say cgroup A reads a few > > > > pages into page cache, and then they are no longer used by cgroup A. > > > > cgroup B, however, is using the same pages that are currently charged > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A > > > > dies, and these pages are reparented to A's parent, can we possibly > > > > mark these reparented pages (maybe in the page tables somewhere) so > > > > that next time they get accessed we recharge them to B instead > > > > (possibly asynchronously)? > > > > I don't have much experience about page tables but I am pretty sure > > > > they are loaded so maybe there is no room in PTEs for something like > > > > this, but I have always wondered about what we can do for this case > > > > where a cgroup is consistently using memory charged to another cgroup. > > > > Maybe when this memory is reparented is a good point in time to decide > > > > to recharge appropriately. It would also fix the reparenty leak to > > > > root problem (if it even exists). > > > > > > > > > > From my point of view, this is going to be an improvement to the memcg > > > subsystem in the future. 
IIUC, most reparented pages are page cache > > > pages without be mapped to users. So page tables are not a suitable > > > place to record this information. However, we already have this information > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > > > equal to the page's obj_cgroup->memcg->objcg, it means this page have > > > been reparented. I am thinking if a place where a page is mapped (probably > > > page fault patch) or page (cache) is written (usually vfs write path) > > > is suitable to transfer page's memcg from one to another. But need more > > > > Very good point about unmapped pages, I missed this. Page tables will > > do us no good here. Such a change would indeed require careful thought > > because (like you mentioned) there are multiple points in time where > > it might be suitable to consider recharging the page (e.g. when the > > page is mapped). This could be an incremental change though. Right now > > we have no recharging at all, so maybe we can gradually add recharging > > to suitable paths. > > > > Agree. > > > > thinking, e.g. How to decide if a reparented page needs to be transferred? > > > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of > > This is a good start. > > > current is not a descendant of page's obj_cgroup->memcg) is a good > > I am not sure this one since a page could be shared between different > memcg. No way :) > > root > / \ > A B > / \ \ > C E D > > e.g. a page (originally, it belongs to memcg E and E is dying) is reparented > to memcg A, and it is shared between C and D now. Then we need to consider > whether it should be recharged. Yep, we need more thinking about recharging. This is why I wasn't sure that objcg-based reparenting is the best approach. Instead (or maybe even _with_ the reparenting) we can recharge pages on, say, page activation and/or rotation (inactive->inactive). Pagefaults/reads are probably too hot to do it there. 
But the reclaim path should be more accessible in terms of the performance overhead. Just some ideas. Thanks!
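Yosry's proposed starting heuristic, and Muchun's shared-page counterexample tree, can be made concrete with a small sketch. Assumptions are flagged: the structs are toy models, `should_recharge()` is a hypothetical name, and a plain parent walk stands in for the kernel's real descendant check.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct mem_cgroup {
	struct mem_cgroup *parent;
};

static struct mem_cgroup root_memcg;
static struct mem_cgroup *root_mem_cgroup = &root_memcg;

/* Toy stand-in for the kernel's descendant test: walk up the tree. */
static bool memcg_is_descendant(struct mem_cgroup *memcg,
				struct mem_cgroup *ancestor)
{
	for (; memcg; memcg = memcg->parent)
		if (memcg == ancestor)
			return true;
	return false;
}

/*
 * The heuristic as proposed above: recharge a reparented page if it
 * ended up charged to root, or if the task touching it runs in a memcg
 * outside the charged memcg's subtree.
 */
static bool should_recharge(struct mem_cgroup *page_memcg,
			    struct mem_cgroup *current_memcg)
{
	if (page_memcg == root_mem_cgroup)
		return true;
	return !memcg_is_descendant(current_memcg, page_memcg);
}
```

In Muchun's tree (root -> {A, B}, A -> {C, E}, B -> D), a page reparented from dying E to A would not be flagged when C touches it, but would be when D does, which is exactly the ambiguity being debated for shared pages.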
On Mon, Jun 27, 2022 at 6:24 PM Roman Gushchin <roman.gushchin@linux.dev> wrote: > > On Mon, Jun 27, 2022 at 06:13:48PM +0800, Muchun Song wrote: > > On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote: > > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > > > > into mm-unstable which will help to determine whether there is a problem or > > > > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > > > > > > > Since the following patchsets applied. All the kernel memory are charged > > > > > > with the new APIs of obj_cgroup. > > > > > > > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time - > > > > > > it exists at a larger scale and is causing recurring problems in the real > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > > > > second, third, fourth, ... instance of the same job that was restarted into > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > > > > and make page reclaim very inefficient. > > > > > > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs. > > > > > > > > > > > > This patchset aims to make the LRU pages to drop the reference to memory > > > > > > cgroup by using the APIs of obj_cgroup. 
Finally, we can see that the number > > > > > > of the dying cgroups will not increase if we run the following test script. > > > > > > > > > > This is amazing work! > > > > > > > > > > Sorry if I came late, I didn't follow the threads of previous versions > > > > > so this might be redundant, I just have a couple of questions. > > > > > > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup > > > > > (assuming they can), aren't these pages effectively unaccounted at > > > > > this point or leaked? Is there protection against this? > > > > > > > > > > > > > In this case, those pages are accounted in root memcg level. Unfortunately, > > > > there is no mechanism now to transfer a page's memcg from one to another. > > > > > > > > > b) Since moving charged pages between memcgs is now becoming easier by > > > > > using the APIs of obj_cgroup, I wonder if this opens the door for > > > > > future work to transfer charges to memcgs that are actually using > > > > > reparented resources. For example, let's say cgroup A reads a few > > > > > pages into page cache, and then they are no longer used by cgroup A. > > > > > cgroup B, however, is using the same pages that are currently charged > > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A > > > > > dies, and these pages are reparented to A's parent, can we possibly > > > > > mark these reparented pages (maybe in the page tables somewhere) so > > > > > that next time they get accessed we recharge them to B instead > > > > > (possibly asynchronously)? > > > > > I don't have much experience about page tables but I am pretty sure > > > > > they are loaded so maybe there is no room in PTEs for something like > > > > > this, but I have always wondered about what we can do for this case > > > > > where a cgroup is consistently using memory charged to another cgroup. 
> > > > > Maybe when this memory is reparented is a good point in time to decide > > > > > to recharge appropriately. It would also fix the reparenty leak to > > > > > root problem (if it even exists). > > > > > > > > > > > > > From my point of view, this is going to be an improvement to the memcg > > > > subsystem in the future. IIUC, most reparented pages are page cache > > > > pages without be mapped to users. So page tables are not a suitable > > > > place to record this information. However, we already have this information > > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > > > > equal to the page's obj_cgroup->memcg->objcg, it means this page have > > > > been reparented. I am thinking if a place where a page is mapped (probably > > > > page fault patch) or page (cache) is written (usually vfs write path) > > > > is suitable to transfer page's memcg from one to another. But need more > > > > > > Very good point about unmapped pages, I missed this. Page tables will > > > do us no good here. Such a change would indeed require careful thought > > > because (like you mentioned) there are multiple points in time where > > > it might be suitable to consider recharging the page (e.g. when the > > > page is mapped). This could be an incremental change though. Right now > > > we have no recharging at all, so maybe we can gradually add recharging > > > to suitable paths. > > > > > > > Agree. > > > > > > thinking, e.g. How to decide if a reparented page needs to be transferred? > > > > > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of > > > > This is a good start. > > > > > current is not a descendant of page's obj_cgroup->memcg) is a good > > > > I am not sure this one since a page could be shared between different > > memcg. > > No way :) No way in terms of charging or usage? AFAIU a page is only charged to one memcg, but can be used by multiple memcgs if it exists in the page cache for example. 
Am I missing something here? > > > > > root > > / \ > > A B > > / \ \ > > C E D > > > > e.g. a page (originally, it belongs to memcg E and E is dying) is reparented > > to memcg A, and it is shared between C and D now. Then we need to consider > > whether it should be recharged. Yep, we need more thinging about recharging. > > This is why I wasn't sure that objcg-based reparenting is the best approach. > Instead (or maybe even _with_ the reparenting) we can recharge pages on, say, > page activation and/or rotation (inactive->inactive). Pagefaults/reads are > probably to hot to do it there. But the reclaim path should be more accessible > in terms of the performance overhead. Just some ideas. Thanks for chipping in, Roman! I am honestly not sure on what paths the recharge should occur, but I know that we will probably need a recharge mechanism at some point. We can start adding recharging gradually to paths that don't affect performance, reclaim is a very good place. Maybe we sort LRUs such that reparented pages are scanned first, and possibly recharged under memcg pressure. > > Thanks!
On Mon, Jun 27, 2022 at 06:31:14PM -0700, Yosry Ahmed wrote: > On Mon, Jun 27, 2022 at 6:24 PM Roman Gushchin <roman.gushchin@linux.dev> wrote: > > > > On Mon, Jun 27, 2022 at 06:13:48PM +0800, Muchun Song wrote: > > > On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote: > > > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > > > > > into mm-unstable which will help to determine whether there is a problem or > > > > > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > > > > > > > > > Since the following patchsets applied. All the kernel memory are charged > > > > > > > with the new APIs of obj_cgroup. > > > > > > > > > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > > > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > > > > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time - > > > > > > > it exists at a larger scale and is causing recurring problems in the real > > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > > > > > second, third, fourth, ... instance of the same job that was restarted into > > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > > > > > and make page reclaim very inefficient. > > > > > > > > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs. 
> > > > > > > > > > > > > > This patchset aims to make the LRU pages to drop the reference to memory > > > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > > > > > > > of the dying cgroups will not increase if we run the following test script. > > > > > > > > > > > > This is amazing work! > > > > > > > > > > > > Sorry if I came late, I didn't follow the threads of previous versions > > > > > > so this might be redundant, I just have a couple of questions. > > > > > > > > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup > > > > > > (assuming they can), aren't these pages effectively unaccounted at > > > > > > this point or leaked? Is there protection against this? > > > > > > > > > > > > > > > > In this case, those pages are accounted in root memcg level. Unfortunately, > > > > > there is no mechanism now to transfer a page's memcg from one to another. > > > > > > > > > > > b) Since moving charged pages between memcgs is now becoming easier by > > > > > > using the APIs of obj_cgroup, I wonder if this opens the door for > > > > > > future work to transfer charges to memcgs that are actually using > > > > > > reparented resources. For example, let's say cgroup A reads a few > > > > > > pages into page cache, and then they are no longer used by cgroup A. > > > > > > cgroup B, however, is using the same pages that are currently charged > > > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A > > > > > > dies, and these pages are reparented to A's parent, can we possibly > > > > > > mark these reparented pages (maybe in the page tables somewhere) so > > > > > > that next time they get accessed we recharge them to B instead > > > > > > (possibly asynchronously)? 
> > > > > > I don't have much experience about page tables but I am pretty sure > > > > > > they are loaded so maybe there is no room in PTEs for something like > > > > > > this, but I have always wondered about what we can do for this case > > > > > > where a cgroup is consistently using memory charged to another cgroup. > > > > > > Maybe when this memory is reparented is a good point in time to decide > > > > > > to recharge appropriately. It would also fix the reparenty leak to > > > > > > root problem (if it even exists). > > > > > > > > > > > > > > > > From my point of view, this is going to be an improvement to the memcg > > > > > subsystem in the future. IIUC, most reparented pages are page cache > > > > > pages without be mapped to users. So page tables are not a suitable > > > > > place to record this information. However, we already have this information > > > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > > > > > equal to the page's obj_cgroup->memcg->objcg, it means this page have > > > > > been reparented. I am thinking if a place where a page is mapped (probably > > > > > page fault patch) or page (cache) is written (usually vfs write path) > > > > > is suitable to transfer page's memcg from one to another. But need more > > > > > > > > Very good point about unmapped pages, I missed this. Page tables will > > > > do us no good here. Such a change would indeed require careful thought > > > > because (like you mentioned) there are multiple points in time where > > > > it might be suitable to consider recharging the page (e.g. when the > > > > page is mapped). This could be an incremental change though. Right now > > > > we have no recharging at all, so maybe we can gradually add recharging > > > > to suitable paths. > > > > > > > > > > Agree. > > > > > > > > thinking, e.g. How to decide if a reparented page needs to be transferred? 
> > > > > > > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of > > > > > > This is a good start. > > > > > > > current is not a descendant of page's obj_cgroup->memcg) is a good > > > > > > I am not sure this one since a page could be shared between different > > > memcg. > > > > No way :) > > No way in terms of charging or usage? AFAIU a page is only charged to > one memcg, but can be used by multiple memcgs if it exists in the page > cache for example. Am I missing something here? Charging of course. I mean we can't realistically precisely account for shared use of a page between multiple cgroups, at least not at 4k granularity. > > > > > > > > > root > > > / \ > > > A B > > > / \ \ > > > C E D > > > > > > e.g. a page (originally, it belongs to memcg E and E is dying) is reparented > > > to memcg A, and it is shared between C and D now. Then we need to consider > > > whether it should be recharged. Yep, we need more thinging about recharging. > > > > This is why I wasn't sure that objcg-based reparenting is the best approach. > > Instead (or maybe even _with_ the reparenting) we can recharge pages on, say, > > page activation and/or rotation (inactive->inactive). Pagefaults/reads are > > probably to hot to do it there. But the reclaim path should be more accessible > > in terms of the performance overhead. Just some ideas. > > Thanks for chipping in, Roman! I am honestly not sure on what paths > the recharge should occur, but I know that we will probably need a > recharge mechanism at some point. We can start adding recharging > gradually to paths that don't affect performance, reclaim is a very > good place. Maybe we sort LRUs such that reparented pages are scanned > first, and possibly recharged under memcg pressure. I think the activation path is a good place to start because we know for sure that a page is actively used and we know who is using it.
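Michal's earlier note that a recharge must first uncharge the reparented parent (because the parent is hierarchically charged through page_counter) can be illustrated with a toy counter model. This is a sketch under stated assumptions: `usage` stands in for a page_counter, and `charge()`/`uncharge()`/`recharge()` are simplified stand-ins coined here, not kernel APIs.

```c
#include <assert.h>
#include <stddef.h>

/* page_counter-like model: each level accumulates its subtree's usage. */
struct mem_cgroup {
	struct mem_cgroup *parent;
	long usage;		/* hierarchical usage, in pages */
};

static void charge(struct mem_cgroup *memcg, long nr_pages)
{
	for (; memcg; memcg = memcg->parent)
		memcg->usage += nr_pages;
}

static void uncharge(struct mem_cgroup *memcg, long nr_pages)
{
	for (; memcg; memcg = memcg->parent)
		memcg->usage -= nr_pages;
}

/*
 * Transfer a charge: uncharge the old owner all the way up, then charge
 * the new owner, so every level's balance stays right. Levels shared by
 * both paths (e.g. root) see no net change.
 */
static void recharge(struct mem_cgroup *from, struct mem_cgroup *to,
		     long nr_pages)
{
	uncharge(from, nr_pages);
	charge(to, nr_pages);
}
```

Doing the transfer on the activation path, as suggested above, would just mean calling something like `recharge()` there once the old and new owners are known.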
On Mon, Jun 27, 2022 at 6:38 PM Roman Gushchin <roman.gushchin@linux.dev> wrote: > > On Mon, Jun 27, 2022 at 06:31:14PM -0700, Yosry Ahmed wrote: > > On Mon, Jun 27, 2022 at 6:24 PM Roman Gushchin <roman.gushchin@linux.dev> wrote: > > > > > > On Mon, Jun 27, 2022 at 06:13:48PM +0800, Muchun Song wrote: > > > > On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote: > > > > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > > > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > > > > > > into mm-unstable which will help to determine whether there is a problem or > > > > > > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > > > > > > > > > > > Since the following patchsets applied. All the kernel memory are charged > > > > > > > > with the new APIs of obj_cgroup. > > > > > > > > > > > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > > > > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > > > > > > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time - > > > > > > > > it exists at a larger scale and is causing recurring problems in the real > > > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > > > > > > second, third, fourth, ... instance of the same job that was restarted into > > > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > > > > > > and make page reclaim very inefficient. 
> > > > > > > > > > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs. > > > > > > > > > > > > > > > > This patchset aims to make the LRU pages to drop the reference to memory > > > > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > > > > > > > > of the dying cgroups will not increase if we run the following test script. > > > > > > > > > > > > > > This is amazing work! > > > > > > > > > > > > > > Sorry if I came late, I didn't follow the threads of previous versions > > > > > > > so this might be redundant, I just have a couple of questions. > > > > > > > > > > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup > > > > > > > (assuming they can), aren't these pages effectively unaccounted at > > > > > > > this point or leaked? Is there protection against this? > > > > > > > > > > > > > > > > > > > In this case, those pages are accounted in root memcg level. Unfortunately, > > > > > > there is no mechanism now to transfer a page's memcg from one to another. > > > > > > > > > > > > > b) Since moving charged pages between memcgs is now becoming easier by > > > > > > > using the APIs of obj_cgroup, I wonder if this opens the door for > > > > > > > future work to transfer charges to memcgs that are actually using > > > > > > > reparented resources. For example, let's say cgroup A reads a few > > > > > > > pages into page cache, and then they are no longer used by cgroup A. > > > > > > > cgroup B, however, is using the same pages that are currently charged > > > > > > > to cgroup A, so it keeps taxing cgroup A for its use. 
When cgroup A > > > > > > > dies, and these pages are reparented to A's parent, can we possibly > > > > > > > mark these reparented pages (maybe in the page tables somewhere) so > > > > > > > that next time they get accessed we recharge them to B instead > > > > > > > (possibly asynchronously)? > > > > > > > I don't have much experience about page tables but I am pretty sure > > > > > > > they are loaded so maybe there is no room in PTEs for something like > > > > > > > this, but I have always wondered about what we can do for this case > > > > > > > where a cgroup is consistently using memory charged to another cgroup. > > > > > > > Maybe when this memory is reparented is a good point in time to decide > > > > > > > to recharge appropriately. It would also fix the reparenty leak to > > > > > > > root problem (if it even exists). > > > > > > > > > > > > > > > > > > > From my point of view, this is going to be an improvement to the memcg > > > > > > subsystem in the future. IIUC, most reparented pages are page cache > > > > > > pages without be mapped to users. So page tables are not a suitable > > > > > > place to record this information. However, we already have this information > > > > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > > > > > > equal to the page's obj_cgroup->memcg->objcg, it means this page have > > > > > > been reparented. I am thinking if a place where a page is mapped (probably > > > > > > page fault patch) or page (cache) is written (usually vfs write path) > > > > > > is suitable to transfer page's memcg from one to another. But need more > > > > > > > > > > Very good point about unmapped pages, I missed this. Page tables will > > > > > do us no good here. Such a change would indeed require careful thought > > > > > because (like you mentioned) there are multiple points in time where > > > > > it might be suitable to consider recharging the page (e.g. when the > > > > > page is mapped). 
This could be an incremental change though. Right now > > > > > we have no recharging at all, so maybe we can gradually add recharging > > > > > to suitable paths. > > > > > > > > > > > > > Agree. > > > > > > > > > > thinking, e.g. How to decide if a reparented page needs to be transferred? > > > > > > > > > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of > > > > > > > > This is a good start. > > > > > > > > > current is not a descendant of page's obj_cgroup->memcg) is a good > > > > > > > > I am not sure this one since a page could be shared between different > > > > memcg. > > > > > > No way :) > > > > No way in terms of charging or usage? AFAIU a page is only charged to > > one memcg, but can be used by multiple memcgs if it exists in the page > > cache for example. Am I missing something here? > > Charging of course. I mean we can't realistically precisely account for > shared use of a page between multiple cgroups, at least not at 4k granularity. > > > > > > > > > > > > > > root > > > > / \ > > > > A B > > > > / \ \ > > > > C E D > > > > > > > > e.g. a page (originally, it belongs to memcg E and E is dying) is reparented > > > > to memcg A, and it is shared between C and D now. Then we need to consider > > > > whether it should be recharged. Yep, we need more thinging about recharging. > > > > > > This is why I wasn't sure that objcg-based reparenting is the best approach. > > > Instead (or maybe even _with_ the reparenting) we can recharge pages on, say, > > > page activation and/or rotation (inactive->inactive). Pagefaults/reads are > > > probably to hot to do it there. But the reclaim path should be more accessible > > > in terms of the performance overhead. Just some ideas. > > > > Thanks for chipping in, Roman! I am honestly not sure on what paths > > the recharge should occur, but I know that we will probably need a > > recharge mechanism at some point. 
> > We can start adding recharging
> > gradually to paths that don't affect performance; reclaim is a very
> > good place. Maybe we sort LRUs such that reparented pages are scanned
> > first, and possibly recharged under memcg pressure.
>
> I think the activation path is a good place to start because we know for sure
> that a page is actively used and we know who is using it.

I agree. What I am suggesting is to additionally scan reparented pages first under memory pressure. These pages were used by a dead descendant, so there is a big chance they aren't being used anymore or are used by a different memcg; in this case, recharging these pages (if possible) might put the memcg back below its limit.

If a memcg reaches its limit and undergoes reclaim because of reparented pages it isn't using, that is bad. If during reclaim we keep those pages and instead reclaim other pages that are actually being used by the memcg (even if they are colder), that is arguably worse. WDYT?
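The obj_cgroup indirection this subthread keeps referring to can be sketched in a toy model (illustrative Python, not kernel code; the field names mirror struct obj_cgroup and struct mem_cgroup, but `offline()`, `is_reparented()`, and the hierarchy are made up for the sketch):

```python
class ObjCgroup:
    def __init__(self, memcg):
        self.memcg = memcg            # like obj_cgroup->memcg

class MemCgroup:
    def __init__(self, parent=None):
        self.parent = parent
        self.objcg = ObjCgroup(self)  # the memcg's own (primary) objcg

def offline(memcg):
    # Reparenting reduced to its essence: the dying memcg's objcg is
    # redirected to the parent. (The kernel also splices descendant
    # objcgs onto the parent's list, under the appropriate locking.)
    memcg.objcg.memcg = memcg.parent

def is_reparented(page_objcg):
    # The check Muchun describes: a page has been reparented iff its
    # objcg is no longer the primary objcg of the memcg it resolves to.
    return page_objcg is not page_objcg.memcg.objcg

root = MemCgroup()
A = MemCgroup(root)
E = MemCgroup(A)              # hypothetical hierarchy: root -> A -> E

page_objcg = E.objcg          # a page charged while E was alive
assert not is_reparented(page_objcg)

offline(E)                    # E dies, its pages are reparented
assert is_reparented(page_objcg)
assert page_objcg.memcg is A  # the page now resolves to A
```

Under memory pressure, an LRU scan could test such a predicate per page and prioritize reparented pages for recharge or reclaim, along the lines suggested above.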
On Tue, 21 Jun 2022 20:56:47 +0800 Muchun Song <songmuchun@bytedance.com> wrote: > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > into mm-unstable which will help to determine whether there is a problem or > degradation. I am also doing some benchmark tests in parallel. > > Since the following patchsets applied. All the kernel memory are charged > with the new APIs of obj_cgroup. > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > But user memory allocations (LRU pages) pinning memcgs for a long time - > it exists at a larger scale and is causing recurring problems in the real > world: page cache doesn't get reclaimed for a long time, or is used by the > second, third, fourth, ... instance of the same job that was restarted into > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > and make page reclaim very inefficient. > > We can convert LRU pages and most other raw memcg pins to the objcg direction > to fix this problem, and then the LRU pages will not pin the memcgs. > > This patchset aims to make the LRU pages to drop the reference to memory > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > of the dying cgroups will not increase if we run the following test script. > > ... > I don't have reviewer or acker tags on a couple of these, but there is still time - I plan to push this series into mm-stable around July 8.
On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > into mm-unstable which will help to determine whether there is a problem or > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > Since the following patchsets applied. All the kernel memory are charged > > > with the new APIs of obj_cgroup. > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time - > > > it exists at a larger scale and is causing recurring problems in the real > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > second, third, fourth, ... instance of the same job that was restarted into > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > and make page reclaim very inefficient. > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > to fix this problem, and then the LRU pages will not pin the memcgs. > > > > > > This patchset aims to make the LRU pages to drop the reference to memory > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > > > of the dying cgroups will not increase if we run the following test script. > > > > This is amazing work! > > > > Sorry if I came late, I didn't follow the threads of previous versions > > so this might be redundant, I just have a couple of questions. > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup > > (assuming they can), aren't these pages effectively unaccounted at > > this point or leaked? 
> > Is there protection against this?
> >
>
> In this case, those pages are accounted at the root memcg level. Unfortunately,
> there is no mechanism now to transfer a page's memcg from one to another.
>

Hey Muchun,

Quick question regarding the behavior of this change on cgroup v1 (I know .. I know .. sorry):

When a memcg dies, its LRU pages are reparented, but what happens to the charge? IIUC we don't do anything because the pages are already hierarchically charged to the parent. Is this correct?

In cgroup v1, we have non-hierarchical stats as well, so I am trying to understand if the reparented memory will appear in the non-hierarchical stats of the parent (my understanding is that they will not). I am also particularly interested in the charging behavior of pages that get reparented to root_mem_cgroup.

The main reason I am asking is that (hierarchical_usage - non-hierarchical_usage - children_hierarchical_usage) is *roughly* something that we use, especially at the root level, to estimate zombie memory usage. I am trying to see if this change will break such calculations. Thanks!

> > b) Since moving charged pages between memcgs is now becoming easier by
> > using the APIs of obj_cgroup, I wonder if this opens the door for
> > future work to transfer charges to memcgs that are actually using
> > reparented resources. For example, let's say cgroup A reads a few
> > pages into page cache, and then they are no longer used by cgroup A.
> > cgroup B, however, is using the same pages that are currently charged
> > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > dies, and these pages are reparented to A's parent, can we possibly
> > mark these reparented pages (maybe in the page tables somewhere) so
> > that next time they get accessed we recharge them to B instead
> > (possibly asynchronously)?
> > I don't have much experience about page tables but I am pretty sure > > they are loaded so maybe there is no room in PTEs for something like > > this, but I have always wondered about what we can do for this case > > where a cgroup is consistently using memory charged to another cgroup. > > Maybe when this memory is reparented is a good point in time to decide > > to recharge appropriately. It would also fix the reparenty leak to > > root problem (if it even exists). > > > > From my point of view, this is going to be an improvement to the memcg > subsystem in the future. IIUC, most reparented pages are page cache > pages without be mapped to users. So page tables are not a suitable > place to record this information. However, we already have this information > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > equal to the page's obj_cgroup->memcg->objcg, it means this page have > been reparented. I am thinking if a place where a page is mapped (probably > page fault patch) or page (cache) is written (usually vfs write path) > is suitable to transfer page's memcg from one to another. But need more > thinking, e.g. How to decide if a reparented page needs to be transferred? > If we need more information to make this decision, where to store those > information? This is my primary thoughts on this question. > > Thanks. 
> > > Thanks again for this work and please excuse my ignorance if any part > > of what I said doesn't make sense :) > > > > > > > > ```bash > > > #!/bin/bash > > > > > > dd if=/dev/zero of=temp bs=4096 count=1 > > > cat /proc/cgroups | grep memory > > > > > > for i in {0..2000} > > > do > > > mkdir /sys/fs/cgroup/memory/test$i > > > echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs > > > cat temp >> log > > > echo $$ > /sys/fs/cgroup/memory/cgroup.procs > > > rmdir /sys/fs/cgroup/memory/test$i > > > done > > > > > > cat /proc/cgroups | grep memory > > > > > > rm -f temp log > > > ``` > > > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/ > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/ > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/ > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/ > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/ > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/ > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/ > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/ > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/ > > > > > > v6: > > > - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks. > > > - Rebase to mm-unstable. > > > > > > v5: > > > - Lots of improvements from Johannes, Roman and Waiman. > > > - Fix lockdep warning reported by kernel test robot. > > > - Add two new patches to do code cleanup. > > > - Collect Acked-by and Reviewed-by from Johannes and Roman. > > > - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since > > > local_lock/unlock_irq() takes an parameter, it needs more thinking to transform > > > it to local_lock. 
It could be an improvement in the future.
> > >
> > > v4:
> > > - Resend and rebased on v5.18.
> > >
> > > v3:
> > > - Removed the Acked-by tags from Roman since this version is rebased on
> > >   the folio-related changes.
> > >
> > > v2:
> > > - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > >   dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > > - Rebase to linux 5.15-rc1.
> > > - Add a new patch to cleanup mem_cgroup_kmem_disabled().
> > >
> > > v1:
> > > - Drop RFC tag.
> > > - Rebase to linux next-20210811.
> > >
> > > RFC v4:
> > > - Collect Acked-by from Roman.
> > > - Rebase to linux next-20210525.
> > > - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > > - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > > - Convert reparent_ops_head to an array in patch 8.
> > >
> > > Thanks for Roman's review and suggestions.
> > >
> > > RFC v3:
> > > - Drop the code cleanup and simplification patches. Gather those patches
> > >   into a separate series[1].
> > > - Rework patch #1 suggested by Johannes.
> > >
> > > RFC v2:
> > > - Collect Acked-by tags by Johannes. Thanks.
> > > - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > - Fix move_pages_to_lru().
> > > > > > Muchun Song (11): > > > mm: memcontrol: remove dead code and comments > > > mm: rename unlock_page_lruvec{_irq, _irqrestore} to > > > lruvec_unlock{_irq, _irqrestore} > > > mm: memcontrol: prepare objcg API for non-kmem usage > > > mm: memcontrol: make lruvec lock safe when LRU pages are reparented > > > mm: vmscan: rework move_pages_to_lru() > > > mm: thp: make split queue lock safe when LRU pages are reparented > > > mm: memcontrol: make all the callers of {folio,page}_memcg() safe > > > mm: memcontrol: introduce memcg_reparent_ops > > > mm: memcontrol: use obj_cgroup APIs to charge the LRU pages > > > mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function > > > mm: lru: use lruvec lock to serialize memcg changes > > > > > > fs/buffer.c | 4 +- > > > fs/fs-writeback.c | 23 +- > > > include/linux/memcontrol.h | 218 +++++++++------ > > > include/linux/mm_inline.h | 6 + > > > include/trace/events/writeback.h | 5 + > > > mm/compaction.c | 39 ++- > > > mm/huge_memory.c | 153 ++++++++-- > > > mm/memcontrol.c | 584 +++++++++++++++++++++++++++------------ > > > mm/migrate.c | 4 + > > > mm/mlock.c | 2 +- > > > mm/page_io.c | 5 +- > > > mm/swap.c | 49 ++-- > > > mm/vmscan.c | 66 ++--- > > > 13 files changed, 776 insertions(+), 382 deletions(-) > > > > > > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f > > > -- > > > 2.11.0 > > > > > > > >
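The zombie estimate Yosry mentions upthread (hierarchical_usage - non-hierarchical_usage - children_hierarchical_usage) is plain arithmetic; a tiny sketch with made-up numbers, just to make the convention explicit:

```python
def zombie_estimate(hierarchical, non_hierarchical, children_hierarchical):
    """Rough zombie (dying-cgroup) usage at one level of the tree:
    memory charged to this subtree that is attributable neither to the
    cgroup's own non-hierarchical usage nor to any live child subtree."""
    return hierarchical - non_hierarchical - sum(children_hierarchical)

# Hypothetical numbers in MB: the parent itself uses 100, its two live
# children use 200 and 100 hierarchically, and 50 MB is pinned by dying
# descendants that still show up in the parent's hierarchical usage.
assert zombie_estimate(450, 100, [200, 100]) == 50
```

The estimate only works if reparenting does not silently shift usage between the hierarchical and non-hierarchical buckets, which is exactly what the question above probes.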
On Thu, Jul 07, 2022 at 03:14:26PM -0700, Yosry Ahmed wrote: > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > > into mm-unstable which will help to determine whether there is a problem or > > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > > > Since the following patchsets applied. All the kernel memory are charged > > > > with the new APIs of obj_cgroup. > > > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time - > > > > it exists at a larger scale and is causing recurring problems in the real > > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > > second, third, fourth, ... instance of the same job that was restarted into > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > > and make page reclaim very inefficient. > > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > > to fix this problem, and then the LRU pages will not pin the memcgs. > > > > > > > > This patchset aims to make the LRU pages to drop the reference to memory > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > > > > of the dying cgroups will not increase if we run the following test script. > > > > > > This is amazing work! > > > > > > Sorry if I came late, I didn't follow the threads of previous versions > > > so this might be redundant, I just have a couple of questions. 
> > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup > > > (assuming they can), aren't these pages effectively unaccounted at > > > this point or leaked? Is there protection against this? > > > > > > > In this case, those pages are accounted in root memcg level. Unfortunately, > > there is no mechanism now to transfer a page's memcg from one to another. > > > > Hey Muchun, > > Quick question regarding the behavior of this change on cgroup v1 (I > know .. I know .. sorry): > > When a memcg dies, its LRU pages are reparented, but what happens to > the charge? IIUC we don't do anything because the pages are already > hierarchically charged to the parent. Is this correct? > Correct. > In cgroup v1, we have non-hierarchical stats as well, so I am trying > to understand if the reparented memory will appear in the > non-hierarchical stats of the parent (my understanding is that the > will not). I am also particularly interested in the charging behavior > of pages that get reparented to root_mem_cgroup. > I didn't change any memory stats when reparenting. > The main reason I am asking is that (hierarchical_usage - > non-hierarchical_usage - children_hierarchical_usage) is *roughly* > something that we use, especially at the root level, to estimate > zombie memory usage. I am trying to see if this change will break such > calculations. Thanks! > So I think your calculations will still be correct. If you have any unexpected result, please let me know. Thanks. > > > b) Since moving charged pages between memcgs is now becoming easier by > > > using the APIs of obj_cgroup, I wonder if this opens the door for > > > future work to transfer charges to memcgs that are actually using > > > reparented resources. For example, let's say cgroup A reads a few > > > pages into page cache, and then they are no longer used by cgroup A. 
> > > cgroup B, however, is using the same pages that are currently charged > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A > > > dies, and these pages are reparented to A's parent, can we possibly > > > mark these reparented pages (maybe in the page tables somewhere) so > > > that next time they get accessed we recharge them to B instead > > > (possibly asynchronously)? > > > I don't have much experience about page tables but I am pretty sure > > > they are loaded so maybe there is no room in PTEs for something like > > > this, but I have always wondered about what we can do for this case > > > where a cgroup is consistently using memory charged to another cgroup. > > > Maybe when this memory is reparented is a good point in time to decide > > > to recharge appropriately. It would also fix the reparenty leak to > > > root problem (if it even exists). > > > > > > > From my point of view, this is going to be an improvement to the memcg > > subsystem in the future. IIUC, most reparented pages are page cache > > pages without be mapped to users. So page tables are not a suitable > > place to record this information. However, we already have this information > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > > equal to the page's obj_cgroup->memcg->objcg, it means this page have > > been reparented. I am thinking if a place where a page is mapped (probably > > page fault patch) or page (cache) is written (usually vfs write path) > > is suitable to transfer page's memcg from one to another. But need more > > thinking, e.g. How to decide if a reparented page needs to be transferred? > > If we need more information to make this decision, where to store those > > information? This is my primary thoughts on this question. > > > > Thanks. 
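Muchun's "Correct." above rests on charges being hierarchical to begin with; a toy page_counter-style model (illustrative Python, not the kernel's page_counter) shows why reparenting LRU pages moves no charge:

```python
class Counter:
    """Toy model of hierarchical charging (loosely like the kernel's
    page_counter): every charge propagates to all ancestors."""
    def __init__(self, parent=None):
        self.parent = parent
        self.usage = 0

    def charge(self, n):
        c = self
        while c is not None:
            c.usage += n
            c = c.parent

    def uncharge(self, n):
        self.charge(-n)

root = Counter()
A = Counter(root)        # hypothetical hierarchy: root -> A -> E
E = Counter(A)

E.charge(10)             # pages charged while E is alive
assert (root.usage, A.usage, E.usage) == (10, 10, 10)

# When E dies and its LRU pages are reparented to A, no charge moves:
# the pages were already counted in A's and root's hierarchical usage.
# Later uncharges are simply issued against A instead of E.
A.uncharge(10)
assert (root.usage, A.usage) == (0, 0)
```

This is why the hierarchical usage numbers are expected to stay correct across reparenting; the open question in the next email is what happens to the non-hierarchical counters.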
On Thu, Jul 7, 2022 at 11:52 PM Muchun Song <songmuchun@bytedance.com> wrote: > > On Thu, Jul 07, 2022 at 03:14:26PM -0700, Yosry Ahmed wrote: > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > > > into mm-unstable which will help to determine whether there is a problem or > > > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > > > > > Since the following patchsets applied. All the kernel memory are charged > > > > > with the new APIs of obj_cgroup. > > > > > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time - > > > > > it exists at a larger scale and is causing recurring problems in the real > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > > > second, third, fourth, ... instance of the same job that was restarted into > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > > > and make page reclaim very inefficient. > > > > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > > > to fix this problem, and then the LRU pages will not pin the memcgs. > > > > > > > > > > This patchset aims to make the LRU pages to drop the reference to memory > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > > > > > of the dying cgroups will not increase if we run the following test script. > > > > > > > > This is amazing work! 
> > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > so this might be redundant, I just have a couple of questions.
> > > >
> > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > this point or leaked? Is there protection against this?
> > > >
> > >
> > > In this case, those pages are accounted in root memcg level. Unfortunately,
> > > there is no mechanism now to transfer a page's memcg from one to another.
> > >
> >
> > Hey Muchun,
> >
> > Quick question regarding the behavior of this change on cgroup v1 (I
> > know .. I know .. sorry):
> >
> > When a memcg dies, its LRU pages are reparented, but what happens to
> > the charge? IIUC we don't do anything because the pages are already
> > hierarchically charged to the parent. Is this correct?
> >
>
> Correct.
>
> > In cgroup v1, we have non-hierarchical stats as well, so I am trying
> > to understand if the reparented memory will appear in the
> > non-hierarchical stats of the parent (my understanding is that the
> > will not). I am also particularly interested in the charging behavior
> > of pages that get reparented to root_mem_cgroup.
> >
>
> I didn't change any memory stats when reparenting.
>
> > The main reason I am asking is that (hierarchical_usage -
> > non-hierarchical_usage - children_hierarchical_usage) is *roughly*
> > something that we use, especially at the root level, to estimate
> > zombie memory usage. I am trying to see if this change will break such
> > calculations. Thanks!
> >
>
> So I think your calculations will still be correct. If you have
> any unexpected result, please let me know. Thanks.

I have been looking at the code and the patchset and I think there might be a problem with the stats, at least for cgroup v1.

Let's say we have a parent memcg P, which has a child memcg C. When processes in memcg C allocate memory, the stats (e.g.
NR_ANON_MAPPED) are updated for C (non-hierarchical per-cpu counters, memcg->vmstats_percpu), and for P (aggregated stats, memcg->vmstats). When memcg C is offlined, its pages are reparented to memcg P; so far P->vmstats (hierarchical) still has those pages, and P->vmstats_percpu (non-hierarchical) doesn't. So far so good.

Now those reparented pages get uncharged, but their memcg is P now, so they get subtracted from P's *non-hierarchical* stats (and eventually hierarchical stats as well). So now P->vmstats (hierarchical) decreases, which is correct, but P->vmstats_percpu (non-hierarchical) also decreases, which is wrong, as those stats were never added to P->vmstats_percpu to begin with.

From a cgroup v2 perspective *maybe* everything continues to work, but this breaks cgroup v1 non-hierarchical stats. In fact, if the reparented memory exceeds the original non-hierarchical memory in P, we can underflow those stats because we are subtracting stats that were never added in the first place.

Please let me know if I am misunderstanding something and there is actually no problem with the non-hierarchical stats (you can stop reading here if this is all in my head and there's actually no problem).

Off the top of my head, we can handle stats modifications of reparented memory separately. We should not update the local per-cpu counters; maybe we should instead update memcg->vmstat.state_pending directly so that the changes appear as if they come from a child memcg. Two problems come with such an approach:

1) memcg->vmstat.state_pending is shared between cpus, and so far is only modified by mem_cgroup_css_rstat_flush() in locked context. A solution would be to add reparented state to memcg->vmstat.state_percpu instead and treat it like memcg->vmstat.state_pending in mem_cgroup_css_rstat_flush(). Keep in mind that this adds a tiny bit of memory overhead (roughly 8 bytes*num_cpus for each memcg).

2) Identifying that we are updating stats of reparented memory.
This should be easy if we have a pointer to the page so we can compare page->objcg with page->objcg->memcg->objcg, but AFAICT the memcg stats are updated in __mod_memcg_state() and __mod_memcg_lruvec_state(), and in each of these we have no idea which page(s) the stats update is associated with. They are called from many different places, so it would be troublesome to pass such information down from all call sites.

I have nothing off the top of my head to fix this problem except passing the necessary info through all code paths to __mod_memcg_state() and __mod_memcg_lruvec_state(), which is far from ideal.

Again, I am sorry if these discussions are late, I didn't have time to look at previous versions of this patchset.

> > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > future work to transfer charges to memcgs that are actually using
> > > > reparented resources. For example, let's say cgroup A reads a few
> > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > cgroup B, however, is using the same pages that are currently charged
> > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > that next time they get accessed we recharge them to B instead
> > > > (possibly asynchronously)?
> > > > I don't have much experience about page tables but I am pretty sure
> > > > they are loaded so maybe there is no room in PTEs for something like
> > > > this, but I have always wondered about what we can do for this case
> > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > Maybe when this memory is reparented is a good point in time to decide
> > > > to recharge appropriately.
It would also fix the reparenty leak to > > > > root problem (if it even exists). > > > > > > > > > > From my point of view, this is going to be an improvement to the memcg > > > subsystem in the future. IIUC, most reparented pages are page cache > > > pages without be mapped to users. So page tables are not a suitable > > > place to record this information. However, we already have this information > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > > > equal to the page's obj_cgroup->memcg->objcg, it means this page have > > > been reparented. I am thinking if a place where a page is mapped (probably > > > page fault patch) or page (cache) is written (usually vfs write path) > > > is suitable to transfer page's memcg from one to another. But need more > > > thinking, e.g. How to decide if a reparented page needs to be transferred? > > > If we need more information to make this decision, where to store those > > > information? This is my primary thoughts on this question. > > > > > > Thanks. 
> > > > > > > Thanks again for this work and please excuse my ignorance if any part > > > > of what I said doesn't make sense :) > > > > > > > > > > > > > > ```bash > > > > > #!/bin/bash > > > > > > > > > > dd if=/dev/zero of=temp bs=4096 count=1 > > > > > cat /proc/cgroups | grep memory > > > > > > > > > > for i in {0..2000} > > > > > do > > > > > mkdir /sys/fs/cgroup/memory/test$i > > > > > echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs > > > > > cat temp >> log > > > > > echo $$ > /sys/fs/cgroup/memory/cgroup.procs > > > > > rmdir /sys/fs/cgroup/memory/test$i > > > > > done > > > > > > > > > > cat /proc/cgroups | grep memory > > > > > > > > > > rm -f temp log > > > > > ``` > > > > > > > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/ > > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/ > > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/ > > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/ > > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/ > > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/ > > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/ > > > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/ > > > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/ > > > > > > > > > > v6: > > > > > - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks. > > > > > - Rebase to mm-unstable. > > > > > > > > > > v5: > > > > > - Lots of improvements from Johannes, Roman and Waiman. > > > > > - Fix lockdep warning reported by kernel test robot. > > > > > - Add two new patches to do code cleanup. > > > > > - Collect Acked-by and Reviewed-by from Johannes and Roman. 
> > > > > - I didn't replace local_irq_disable/enable() with local_lock/unlock_irq() since > > > > > local_lock/unlock_irq() takes a parameter; it needs more thinking to transform > > > > > it to local_lock. It could be an improvement in the future. > > > > > > > > > > v4: > > > > > - Resend and rebased on v5.18. > > > > > > > > > > v3: > > > > > - Removed the Acked-by tags from Roman since this version is based on > > > > > the folio-related changes. > > > > > > > > > > v2: > > > > > - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the > > > > > dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks). > > > > > - Rebase to linux 5.15-rc1. > > > > > - Add a new patch to cleanup mem_cgroup_kmem_disabled(). > > > > > > > > > > v1: > > > > > - Drop RFC tag. > > > > > - Rebase to linux next-20210811. > > > > > > > > > > RFC v4: > > > > > - Collect Acked-by from Roman. > > > > > - Rebase to linux next-20210525. > > > > > - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem(). > > > > > - Change the patch 1 title to "prepare objcg API for non-kmem usage". > > > > > - Convert reparent_ops_head to an array in patch 8. > > > > > > > > > > Thanks for Roman's review and suggestions. > > > > > > > > > > RFC v3: > > > > > - Drop the code cleanup and simplification patches. Gather those patches > > > > > into a separate series[1]. > > > > > - Rework patch #1 suggested by Johannes. > > > > > > > > > > RFC v2: > > > > > - Collect Acked-by tags from Johannes. Thanks. > > > > > - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks. > > > > > - Fix move_pages_to_lru(). 
> > > > > > > > > > Muchun Song (11): > > > > > mm: memcontrol: remove dead code and comments > > > > > mm: rename unlock_page_lruvec{_irq, _irqrestore} to > > > > > lruvec_unlock{_irq, _irqrestore} > > > > > mm: memcontrol: prepare objcg API for non-kmem usage > > > > > mm: memcontrol: make lruvec lock safe when LRU pages are reparented > > > > > mm: vmscan: rework move_pages_to_lru() > > > > > mm: thp: make split queue lock safe when LRU pages are reparented > > > > > mm: memcontrol: make all the callers of {folio,page}_memcg() safe > > > > > mm: memcontrol: introduce memcg_reparent_ops > > > > > mm: memcontrol: use obj_cgroup APIs to charge the LRU pages > > > > > mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function > > > > > mm: lru: use lruvec lock to serialize memcg changes > > > > > > > > > > fs/buffer.c | 4 +- > > > > > fs/fs-writeback.c | 23 +- > > > > > include/linux/memcontrol.h | 218 +++++++++------ > > > > > include/linux/mm_inline.h | 6 + > > > > > include/trace/events/writeback.h | 5 + > > > > > mm/compaction.c | 39 ++- > > > > > mm/huge_memory.c | 153 ++++++++-- > > > > > mm/memcontrol.c | 584 +++++++++++++++++++++++++++------------ > > > > > mm/migrate.c | 4 + > > > > > mm/mlock.c | 2 +- > > > > > mm/page_io.c | 5 +- > > > > > mm/swap.c | 49 ++-- > > > > > mm/vmscan.c | 66 ++--- > > > > > 13 files changed, 776 insertions(+), 382 deletions(-) > > > > > > > > > > > > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f > > > > > -- > > > > > 2.11.0 > > > > > > > > > > > > > > > >
On Fri, Jul 08, 2022 at 02:26:08AM -0700, Yosry Ahmed wrote: > On Thu, Jul 7, 2022 at 11:52 PM Muchun Song <songmuchun@bytedance.com> wrote: > > > > On Thu, Jul 07, 2022 at 03:14:26PM -0700, Yosry Ahmed wrote: > > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > > > > into mm-unstable which will help to determine whether there is a problem or > > > > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > > > > > > > Since the following patchsets were applied, all the kernel memory is charged > > > > > > with the new APIs of obj_cgroup. > > > > > > > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > > > > > > > But user memory allocations (LRU pages) can pin memcgs for a long time - > > > > > > this exists at a larger scale and is causing recurring problems in the real > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > > > > second, third, fourth, ... instance of the same job that was restarted into > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > > > > and make page reclaim very inefficient. > > > > > > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs. > > > > > > > > > > > > This patchset aims to make the LRU pages drop their reference to the memory > > > > > > cgroup by using the APIs of obj_cgroup. 
Finally, we can see that the number > > > > > > of the dying cgroups will not increase if we run the following test script. > > > > > > > > > > This is amazing work! > > > > > > > > > > Sorry if I came late, I didn't follow the threads of previous versions > > > > > so this might be redundant, I just have a couple of questions. > > > > > > > > > > a) If LRU pages keep getting reparented until they reach root_mem_cgroup > > > > > (assuming they can), aren't these pages effectively unaccounted at > > > > > this point or leaked? Is there protection against this? > > > > > > > > > > > > > In this case, those pages are accounted at the root memcg level. Unfortunately, > > > > there is no mechanism now to transfer a page's memcg from one to another. > > > > > > > > > > Hey Muchun, > > > > > > Quick question regarding the behavior of this change on cgroup v1 (I > > > know .. I know .. sorry): > > > > > > When a memcg dies, its LRU pages are reparented, but what happens to > > > the charge? IIUC we don't do anything because the pages are already > > > hierarchically charged to the parent. Is this correct? > > > > > > > Correct. > > > > > In cgroup v1, we have non-hierarchical stats as well, so I am trying > > > to understand if the reparented memory will appear in the > > > non-hierarchical stats of the parent (my understanding is that they > > > will not). I am also particularly interested in the charging behavior > > > of pages that get reparented to root_mem_cgroup. > > > > > > > I didn't change any memory stats when reparenting. > > > > > The main reason I am asking is that (hierarchical_usage - > > > non-hierarchical_usage - children_hierarchical_usage) is *roughly* > > > something that we use, especially at the root level, to estimate > > > zombie memory usage. I am trying to see if this change will break such > > > calculations. Thanks! > > > > > > > So I think your calculations will still be correct. If you have > > any unexpected result, please let me know. Thanks. 
> > I have been looking at the code and the patchset and I think there > might be a problem with the stats, at least for cgroup v1. Let's say we > have a parent memcg P, which has a child memcg C. When processes in > memcg C allocate memory, the stats (e.g. NR_ANON_MAPPED) are updated > for C (non-hierarchical per-cpu counters, memcg->vmstats_percpu), and > for P (aggregated stats, memcg->vmstats). > > When memcg C is offlined, its pages are reparented to memcg P, so far > P->vmstats (hierarchical) still has those pages, and > P->vmstats_percpu (non-hierarchical) doesn't. So far so good. > > Now those reparented pages get uncharged, but their memcg is P now, so > they get subtracted from P's *non-hierarchical* stats (and eventually > hierarchical stats as well). So now P->vmstats (hierarchical) > decreases, which is correct, but P->vmstats_percpu (non-hierarchical) > also decreases, which is wrong, as those stats were never added to > P->vmstats_percpu to begin with. > > From a cgroup v2 perspective *maybe* everything continues to work, but > this breaks cgroup v1 non-hierarchical stats. In fact, if the > reparented memory exceeds the original non-hierarchical memory in P, > we can underflow those stats because we are subtracting stats that > were never added in the first place. > > Please let me know if I am misunderstanding something and there is > actually no problem with the non-hierarchical stats (you can stop > reading here if this is all in my head and there's actually no > problem). > Thanks for the patient explanation. Now I got your point. > Off the top of my mind we can handle stats modifications of reparented > memory separately. We should not update local per-cpu counters; maybe > we should rather update memcg->vmstat.state_pending directly so that > the changes appear as if they come from a child memcg. 
Two problems > come with such an approach: > Instead of avoiding updates to the local per-cpu counters for reparented pages, how about propagating the child memcg's local per-cpu counters to its parent when the LRU pages are reparented? We would not need to propagate all vmstats, just the ones exposed to cgroup v1 users (like memcg1_stats, memcg1_events and the lru list pages). I think a reparented page differs only a little from other non-reparented pages, so propagating the local per-cpu counters may be acceptable. What do you think? > 1) memcg->vmstat.state_pending is shared between cpus, and so far is > only modified by mem_cgroup_css_rstat_flush() in locked context. A > solution would be to add reparented state to > memcg->vmstat.state_percpu instead and treat it like > memcg->vmstat.state_pending in mem_cgroup_css_rstat_flush(). Keep in > mind that this adds a tiny bit of memory overhead (roughly 8 > bytes*num_cpus for each memcg). > > 2) Identifying that we are updating stats of reparented memory. This > should be easy if we have a pointer to the page to compare page->objcg > with page->objcg->memcg->objcg, but AFAICT the memcg stats are updated > in __mod_memcg_state() and __mod_memcg_lruvec_state(), and we have no > idea in each of these what page(s) the stats update is associated > with. They are called from many different places; it would be > troublesome to pass such information down from all call sites. I have > nothing off the top of my head to fix this problem except passing the > necessary info through all code paths to __mod_memcg_state() and > __mod_memcg_lruvec_state(), which is far from ideal. > > Again, I am sorry if these discussions are late, I didn't have time to > look at previous versions of this patchset. > Not late, thanks for your feedback. 
> > > > > > > b) Since moving charged pages between memcgs is now becoming easier by > > > > > using the APIs of obj_cgroup, I wonder if this opens the door for > > > > > future work to transfer charges to memcgs that are actually using > > > > > reparented resources. For example, let's say cgroup A reads a few > > > > > pages into page cache, and then they are no longer used by cgroup A. > > > > > cgroup B, however, is using the same pages that are currently charged > > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A > > > > > dies, and these pages are reparented to A's parent, can we possibly > > > > > mark these reparented pages (maybe in the page tables somewhere) so > > > > > that next time they get accessed we recharge them to B instead > > > > > (possibly asynchronously)? > > > > > I don't have much experience with page tables but I am pretty sure > > > > > they are loaded so maybe there is no room in PTEs for something like > > > > > this, but I have always wondered about what we can do for this case > > > > > where a cgroup is consistently using memory charged to another cgroup. > > > > > Maybe when this memory is reparented is a good point in time to decide > > > > > to recharge appropriately. It would also fix the reparenting leak to > > > > > root problem (if it even exists). > > > > > > > > > > > > > From my point of view, this is going to be an improvement to the memcg > > > > subsystem in the future. IIUC, most reparented pages are page cache > > > > pages without being mapped to user space. So page tables are not a suitable > > > > place to record this information. However, we already have this information > > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > > > > equal to the page's obj_cgroup->memcg->objcg, it means this page has > > > > been reparented. 
I am thinking whether a place where a page is mapped (probably > > > > the page fault path) or a page (cache) is written (usually the vfs write path) > > > > is suitable to transfer a page's memcg from one to another. But this needs more > > > > thought, e.g. how to decide if a reparented page needs to be transferred? > > > > If we need more information to make this decision, where do we store that > > > > information? These are my primary thoughts on this question. > > > > > > > > Thanks. > > > > > > > > > Thanks again for this work and please excuse my ignorance if any part > > > > > of what I said doesn't make sense :) > > > > > > > > > > > > > > > > > ```bash > > > > > > #!/bin/bash > > > > > > > > > > > > dd if=/dev/zero of=temp bs=4096 count=1 > > > > > > cat /proc/cgroups | grep memory > > > > > > > > > > > > for i in {0..2000} > > > > > > do > > > > > > mkdir /sys/fs/cgroup/memory/test$i > > > > > > echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs > > > > > > cat temp >> log > > > > > > echo $$ > /sys/fs/cgroup/memory/cgroup.procs > > > > > > rmdir /sys/fs/cgroup/memory/test$i > > > > > > done > > > > > > > > > > > > cat /proc/cgroups | grep memory > > > > > > > > > > > > rm -f temp log > > > > > > ``` > > > > > > > > > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/ > > > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/ > > > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/ > > > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/ > > > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/ > > > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/ > > > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/ > > > > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/ > > > > > > RFC v1: 
https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/ > > > > > > > > > > > > v6: > > > > > > - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks. > > > > > > - Rebase to mm-unstable. > > > > > > > > > > > > v5: > > > > > > - Lots of improvements from Johannes, Roman and Waiman. > > > > > > - Fix lockdep warning reported by kernel test robot. > > > > > > - Add two new patches to do code cleanup. > > > > > > - Collect Acked-by and Reviewed-by from Johannes and Roman. > > > > > > - I didn't replace local_irq_disable/enable() with local_lock/unlock_irq() since > > > > > > local_lock/unlock_irq() takes a parameter; it needs more thinking to transform > > > > > > it to local_lock. It could be an improvement in the future. > > > > > > > > > > > > v4: > > > > > > - Resend and rebased on v5.18. > > > > > > > > > > > > v3: > > > > > > - Removed the Acked-by tags from Roman since this version is based on > > > > > > the folio-related changes. > > > > > > > > > > > > v2: > > > > > > - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the > > > > > > dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks). > > > > > > - Rebase to linux 5.15-rc1. > > > > > > - Add a new patch to cleanup mem_cgroup_kmem_disabled(). > > > > > > > > > > > > v1: > > > > > > - Drop RFC tag. > > > > > > - Rebase to linux next-20210811. > > > > > > > > > > > > RFC v4: > > > > > > - Collect Acked-by from Roman. > > > > > > - Rebase to linux next-20210525. > > > > > > - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem(). > > > > > > - Change the patch 1 title to "prepare objcg API for non-kmem usage". > > > > > > - Convert reparent_ops_head to an array in patch 8. > > > > > > > > > > > > Thanks for Roman's review and suggestions. > > > > > > > > > > > > RFC v3: > > > > > > - Drop the code cleanup and simplification patches. Gather those patches > > > > > > into a separate series[1]. 
> > > > > > - Rework patch #1 suggested by Johannes. > > > > > > > > > > > > RFC v2: > > > > > > - Collect Acked-by tags by Johannes. Thanks. > > > > > > - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks. > > > > > > - Fix move_pages_to_lru(). > > > > > > > > > > > > Muchun Song (11): > > > > > > mm: memcontrol: remove dead code and comments > > > > > > mm: rename unlock_page_lruvec{_irq, _irqrestore} to > > > > > > lruvec_unlock{_irq, _irqrestore} > > > > > > mm: memcontrol: prepare objcg API for non-kmem usage > > > > > > mm: memcontrol: make lruvec lock safe when LRU pages are reparented > > > > > > mm: vmscan: rework move_pages_to_lru() > > > > > > mm: thp: make split queue lock safe when LRU pages are reparented > > > > > > mm: memcontrol: make all the callers of {folio,page}_memcg() safe > > > > > > mm: memcontrol: introduce memcg_reparent_ops > > > > > > mm: memcontrol: use obj_cgroup APIs to charge the LRU pages > > > > > > mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function > > > > > > mm: lru: use lruvec lock to serialize memcg changes > > > > > > > > > > > > fs/buffer.c | 4 +- > > > > > > fs/fs-writeback.c | 23 +- > > > > > > include/linux/memcontrol.h | 218 +++++++++------ > > > > > > include/linux/mm_inline.h | 6 + > > > > > > include/trace/events/writeback.h | 5 + > > > > > > mm/compaction.c | 39 ++- > > > > > > mm/huge_memory.c | 153 ++++++++-- > > > > > > mm/memcontrol.c | 584 +++++++++++++++++++++++++++------------ > > > > > > mm/migrate.c | 4 + > > > > > > mm/mlock.c | 2 +- > > > > > > mm/page_io.c | 5 +- > > > > > > mm/swap.c | 49 ++-- > > > > > > mm/vmscan.c | 66 ++--- > > > > > > 13 files changed, 776 insertions(+), 382 deletions(-) > > > > > > > > > > > > > > > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f > > > > > > -- > > > > > > 2.11.0 > > > > > > > > > > > > > > > > > > > > >
On Fri, Jul 8, 2022 at 10:51 PM Muchun Song <songmuchun@bytedance.com> wrote: > > On Fri, Jul 08, 2022 at 02:26:08AM -0700, Yosry Ahmed wrote: > > On Thu, Jul 7, 2022 at 11:52 PM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > On Thu, Jul 07, 2022 at 03:14:26PM -0700, Yosry Ahmed wrote: > > > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote: > > > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote: > > > > > > > > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series > > > > > > > into mm-unstable which will help to determine whether there is a problem or > > > > > > > degradation. I am also doing some benchmark tests in parallel. > > > > > > > > > > > > > > Since the following patchsets were applied, all the kernel memory is charged > > > > > > > with the new APIs of obj_cgroup. > > > > > > > > > > > > > > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages") > > > > > > > commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages") > > > > > > > > > > > > > > But user memory allocations (LRU pages) can pin memcgs for a long time - > > > > > > > this exists at a larger scale and is causing recurring problems in the real > > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the > > > > > > > second, third, fourth, ... instance of the same job that was restarted into > > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, > > > > > > > and make page reclaim very inefficient. > > > > > > > > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction > > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs. 
> > > > > > > > > > > > > > > This patchset aims to make the LRU pages drop their reference to the memory > > > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number > > > > > > > of the dying cgroups will not increase if we run the following test script. > > > > > > > > > > > > This is amazing work! > > > > > > > > > > > > Sorry if I came late, I didn't follow the threads of previous versions > > > > > > so this might be redundant, I just have a couple of questions. > > > > > > > > > > > > a) If LRU pages keep getting reparented until they reach root_mem_cgroup > > > > > > (assuming they can), aren't these pages effectively unaccounted at > > > > > > this point or leaked? Is there protection against this? > > > > > > > > > > > > > > > > In this case, those pages are accounted at the root memcg level. Unfortunately, > > > > > there is no mechanism now to transfer a page's memcg from one to another. > > > > > > > > > > > > > Hey Muchun, > > > > > > > > Quick question regarding the behavior of this change on cgroup v1 (I > > > > know .. I know .. sorry): > > > > > > > > When a memcg dies, its LRU pages are reparented, but what happens to > > > > the charge? IIUC we don't do anything because the pages are already > > > > hierarchically charged to the parent. Is this correct? > > > > > > > > > > Correct. > > > > > > > In cgroup v1, we have non-hierarchical stats as well, so I am trying > > > > to understand if the reparented memory will appear in the > > > > non-hierarchical stats of the parent (my understanding is that they > > > > will not). I am also particularly interested in the charging behavior > > > > of pages that get reparented to root_mem_cgroup. > > > > > > > > > > I didn't change any memory stats when reparenting. 
> > > > > > > The main reason I am asking is that (hierarchical_usage - > > > > non-hierarchical_usage - children_hierarchical_usage) is *roughly* > > > > something that we use, especially at the root level, to estimate > > > > zombie memory usage. I am trying to see if this change will break such > > > > calculations. Thanks! > > > > > > > > > > So I think your calculations will still be correct. If you have > > > any unexpected result, please let me know. Thanks. > > > > I have been looking at the code and the patchset and I think there > > might be a problem with the stats, at least for cgroup v1. Let's say we > > have a parent memcg P, which has a child memcg C. When processes in > > memcg C allocate memory, the stats (e.g. NR_ANON_MAPPED) are updated > > for C (non-hierarchical per-cpu counters, memcg->vmstats_percpu), and > > for P (aggregated stats, memcg->vmstats). > > > > When memcg C is offlined, its pages are reparented to memcg P, so far > > P->vmstats (hierarchical) still has those pages, and > > P->vmstats_percpu (non-hierarchical) doesn't. So far so good. > > > > Now those reparented pages get uncharged, but their memcg is P now, so > > they get subtracted from P's *non-hierarchical* stats (and eventually > > hierarchical stats as well). So now P->vmstats (hierarchical) > > decreases, which is correct, but P->vmstats_percpu (non-hierarchical) > > also decreases, which is wrong, as those stats were never added to > > P->vmstats_percpu to begin with. > > > > From a cgroup v2 perspective *maybe* everything continues to work, but > > this breaks cgroup v1 non-hierarchical stats. In fact, if the > > reparented memory exceeds the original non-hierarchical memory in P, > > we can underflow those stats because we are subtracting stats that > > were never added in the first place. 
> > > > Please let me know if I am misunderstanding something and there is > > actually no problem with the non-hierarchical stats (you can stop > > reading here if this is all in my head and there's actually no > > problem). > > > > Thanks for the patient explanation. Now I got your point. > > > Off the top of my mind we can handle stats modifications of reparented > > memory separately. We should not update local per-cpu counters; maybe > > we should rather update memcg->vmstat.state_pending directly so that > > the changes appear as if they come from a child memcg. Two problems > > come with such an approach: > > > > Instead of avoiding updates to the local per-cpu counters for reparented pages, > how about propagating the child memcg's local per-cpu > counters to its parent when the LRU pages are reparented? We would not need to > propagate all vmstats, just the ones exposed to cgroup v1 users (like > memcg1_stats, memcg1_events and the lru list pages). I think a reparented page > differs only a little from other non-reparented pages, so propagating the > local per-cpu counters may be acceptable. What do you think? > I think this introduces another problem. Now the non-hierarchical stats of a parent memcg (P in the above example) would include reparented memory. This hides zombie memory usage. As I elaborated earlier, parent_hierarchical_usage - parent_non_hierarchical_usage - SUM(children_hierarchical_usage) should give an estimate of the zombie memory under parent. If we propagate reparented memory stats (aka zombies) to the parent's non-hierarchical stats, then we have no way of finding out how much zombie memory lives in a memcg. This problem becomes more significant when we are reparenting to root, where zombie memory is part of unaccounted system overhead. Actually there is a different problem even in cgroup v2. 
At root level there will be no way of finding out whether unaccounted system overhead (root_usage - SUM(top_level_memcgs_usage)) comes from zombie memcgs or not, because zombie memcgs will no longer exist and reparented/zombie memory can be indistinguishable from memory that has always lived in root. This makes debugging high system overhead even harder, but that's a problem with the reparenting approach in general, unrelated to the non-hierarchical stats problem. > > 1) memcg->vmstat.state_pending is shared between cpus, and so far is > > only modified by mem_cgroup_css_rstat_flush() in locked context. A > > solution would be to add reparented state to > > memcg->vmstat.state_percpu instead and treat it like > > memcg->vmstat.state_pending in mem_cgroup_css_rstat_flush(). Keep in > > mind that this adds a tiny bit of memory overhead (roughly 8 > > bytes*num_cpus for each memcg). > > > > 2) Identifying that we are updating stats of reparented memory. This > > should be easy if we have a pointer to the page to compare page->objcg > > with page->objcg->memcg->objcg, but AFAICT the memcg stats are updated > > in __mod_memcg_state() and __mod_memcg_lruvec_state(), and we have no > > idea in each of these what page(s) is the stats update associated > > with. They are called from many different places, it would be > > troublesome to pass such information down from all call sites. I have > > nothing off the top of my head to fix this problem except passing the > > necessary info through all code paths to __mod_memcg_state() and > > __mod_memcg_lruvec_state(), which is far from ideal. > > > > Again, I am sorry if these discussions are late, I didn't have time to > > look at previous versions of this patchset. > > > > Not late, thanks for your feedback. 
> > > > > > > > > > b) Since moving charged pages between memcgs is now becoming easier by > > > > > > using the APIs of obj_cgroup, I wonder if this opens the door for > > > > > > future work to transfer charges to memcgs that are actually using > > > > > > reparented resources. For example, let's say cgroup A reads a few > > > > > > pages into page cache, and then they are no longer used by cgroup A. > > > > > > cgroup B, however, is using the same pages that are currently charged > > > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A > > > > > > dies, and these pages are reparented to A's parent, can we possibly > > > > > > mark these reparented pages (maybe in the page tables somewhere) so > > > > > > that next time they get accessed we recharge them to B instead > > > > > > (possibly asynchronously)? > > > > > > I don't have much experience with page tables but I am pretty sure > > > > > > they are loaded so maybe there is no room in PTEs for something like > > > > > > this, but I have always wondered about what we can do for this case > > > > > > where a cgroup is consistently using memory charged to another cgroup. > > > > > > Maybe when this memory is reparented is a good point in time to decide > > > > > > to recharge appropriately. It would also fix the reparenting leak to > > > > > > root problem (if it even exists). > > > > > > > > > > > > > > > > From my point of view, this is going to be an improvement to the memcg > > > > > subsystem in the future. IIUC, most reparented pages are page cache > > > > > pages without being mapped to user space. So page tables are not a suitable > > > > > place to record this information. However, we already have this information > > > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not > > > > > equal to the page's obj_cgroup->memcg->objcg, it means this page has > > > > > been reparented. 
I am thinking whether a place where a page is mapped (probably > > > > > the page fault path) or a page (cache) is written (usually the vfs write path) > > > > > is suitable to transfer a page's memcg from one to another. But this needs more > > > > > thought, e.g. how to decide if a reparented page needs to be transferred? > > > > > If we need more information to make this decision, where do we store that > > > > > information? These are my primary thoughts on this question. > > > > > > > > > > Thanks. > > > > > > > > > > > Thanks again for this work and please excuse my ignorance if any part > > > > > > of what I said doesn't make sense :) > > > > > > > > > > > > > > ```bash > > > > > > > #!/bin/bash > > > > > > > > > > > > > > dd if=/dev/zero of=temp bs=4096 count=1 > > > > > > > cat /proc/cgroups | grep memory > > > > > > > > > > > > > > for i in {0..2000} > > > > > > > do > > > > > > > mkdir /sys/fs/cgroup/memory/test$i > > > > > > > echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs > > > > > > > cat temp >> log > > > > > > > echo $$ > /sys/fs/cgroup/memory/cgroup.procs > > > > > > > rmdir /sys/fs/cgroup/memory/test$i > > > > > > > done > > > > > > > > > > > > > > cat /proc/cgroups | grep memory > > > > > > > > > > > > > > rm -f temp log > > > > > > > ``` > > > > > > > > > > > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/ > > > > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/ > > > > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/ > > > > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/ > > > > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/ > > > > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/ > > > > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/ > > > > > > > RFC v2: 
https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/ > > > > > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/ > > > > > > > > > > > > > > v6: > > > > > > > - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks. > > > > > > > - Rebase to mm-unstable. > > > > > > > > > > > > > > v5: > > > > > > > - Lots of improvements from Johannes, Roman and Waiman. > > > > > > > - Fix lockdep warning reported by kernel test robot. > > > > > > > - Add two new patches to do code cleanup. > > > > > > > - Collect Acked-by and Reviewed-by from Johannes and Roman. > > > > > > > - I didn't replace local_irq_disable/enable() with local_lock/unlock_irq() since > > > > > > > local_lock/unlock_irq() takes a parameter; it needs more thinking to transform > > > > > > > it to local_lock. It could be an improvement in the future. > > > > > > > > > > > > > > v4: > > > > > > > - Resend and rebased on v5.18. > > > > > > > > > > > > > > v3: > > > > > > > - Removed the Acked-by tags from Roman since this version is based on > > > > > > > the folio-related changes. > > > > > > > > > > > > > > v2: > > > > > > > - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the > > > > > > > dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks). > > > > > > > - Rebase to linux 5.15-rc1. > > > > > > > - Add a new patch to cleanup mem_cgroup_kmem_disabled(). > > > > > > > > > > > > > > v1: > > > > > > > - Drop RFC tag. > > > > > > > - Rebase to linux next-20210811. > > > > > > > > > > > > > > RFC v4: > > > > > > > - Collect Acked-by from Roman. > > > > > > > - Rebase to linux next-20210525. > > > > > > > - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem(). > > > > > > > - Change the patch 1 title to "prepare objcg API for non-kmem usage". > > > > > > > - Convert reparent_ops_head to an array in patch 8. > > > > > > > > > > > > > > Thanks for Roman's review and suggestions. 
> > > > > > >
> > > > > > > RFC v3:
> > > > > > > - Drop the code cleanup and simplification patches. Gather those patches
> > > > > > >   into a separate series[1].
> > > > > > > - Rework patch #1 suggested by Johannes.
> > > > > > >
> > > > > > > RFC v2:
> > > > > > > - Collect Acked-by tags from Johannes. Thanks.
> > > > > > > - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > > > > > - Fix move_pages_to_lru().
> > > > > > >
> > > > > > > Muchun Song (11):
> > > > > > >   mm: memcontrol: remove dead code and comments
> > > > > > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > > > > > >     lruvec_unlock{_irq, _irqrestore}
> > > > > > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > > > > > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > > > > > >   mm: vmscan: rework move_pages_to_lru()
> > > > > > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > > > > > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > > > > > >   mm: memcontrol: introduce memcg_reparent_ops
> > > > > > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > > > > > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > > > > > >   mm: lru: use lruvec lock to serialize memcg changes
> > > > > > >
> > > > > > >  fs/buffer.c                      |   4 +-
> > > > > > >  fs/fs-writeback.c                |  23 +-
> > > > > > >  include/linux/memcontrol.h       | 218 +++++++++------
> > > > > > >  include/linux/mm_inline.h        |   6 +
> > > > > > >  include/trace/events/writeback.h |   5 +
> > > > > > >  mm/compaction.c                  |  39 ++-
> > > > > > >  mm/huge_memory.c                 | 153 ++++++++--
> > > > > > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > > > > > >  mm/migrate.c                     |   4 +
> > > > > > >  mm/mlock.c                       |   2 +-
> > > > > > >  mm/page_io.c                     |   5 +-
> > > > > > >  mm/swap.c                        |  49 ++--
> > > > > > >  mm/vmscan.c                      |  66 ++---
> > > > > > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > > > > > >
> > > > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > > > > > --
> > > > > > > 2.11.0
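[Editor's note: the quoted test script above detects leaked dying cgroups by printing the "memory" line of /proc/cgroups before and after the create/rmdir loop; the third column, num_cgroups, is the figure to compare. A minimal sketch of that comparison is below — the two sample lines are made up for illustration, not taken from a real run:]

```shell
# Hypothetical before/after copies of the "memory" row of /proc/cgroups.
# Columns: subsys_name  hierarchy  num_cgroups  enabled
before="memory  2  65  1"
after="memory  2  2067  1"

# num_cgroups is the third whitespace-separated field.
n_before=$(echo "$before" | awk '{print $3}')
n_after=$(echo "$after" | awk '{print $3}')

# Cgroups created by the loop that never went away, i.e. dying cgroups
# still pinned (by LRU pages, pre-series) after rmdir.
echo "dying cgroups: $((n_after - n_before))"
```

With the series applied, the expectation stated in the cover letter is that this difference no longer grows with the number of loop iterations.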