Message ID | 20200205223348.880610-2-dschatzberg@fb.com (mailing list archive)
---|---
State | New, archived
Series | [1/2] mm: Charge current memcg when no mm is set
On Wed, Feb 05, 2020 at 02:33:47PM -0800, Dan Schatzberg wrote:
> This modifies the shmem and mm charge logic so that now if there is no
> mm set (as in the case of tmpfs backed loop device), we charge the
> current memcg, if set.
>
> Signed-off-by: Dan Schatzberg <dschatzberg@fb.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

It's a dependency for 2/2, but it's also an overdue cleanup IMO: it's
always been a bit weird that memalloc_use_memcg() worked for kernel
allocations but was silently ignored for user pages.

This patch establishes a precedence order for who gets charged:

1. If there is a memcg associated with the page already, that memcg is
   charged. This happens during swapin.

2. If an explicit mm is passed, mm->memcg is charged. This happens
   during page faults, which can be triggered in remote VMs (eg gup).

3. Otherwise consult the current process context. If it has configured
   a current->active_memcg, use that. Otherwise, current->mm->memcg.

Thanks
Dan
On Wed, Feb 05, 2020 at 02:33:47PM -0800, Dan Schatzberg wrote:
> This modifies the shmem and mm charge logic so that now if there is no
> mm set (as in the case of tmpfs backed loop device), we charge the
> current memcg, if set.
>
> Signed-off-by: Dan Schatzberg <dschatzberg@fb.com>

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c5b5f74cfd4d..5a6ab3183525 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6354,7 +6354,8 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
  * @compound: charge the page as compound or small page
  *
  * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary.
+ * pages according to @gfp_mask if necessary. If @mm is NULL, try to
+ * charge to the active memcg.
  *
  * Returns 0 on success, with *@memcgp pointing to the charged memcg.
  * Otherwise, an error code is returned.
@@ -6398,8 +6399,12 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
 		}
 	}
 
-	if (!memcg)
-		memcg = get_mem_cgroup_from_mm(mm);
+	if (!memcg) {
+		if (!mm)
+			memcg = get_mem_cgroup_from_current();
+		else
+			memcg = get_mem_cgroup_from_mm(mm);
+	}
 
 	ret = try_charge(memcg, gfp_mask, nr_pages);
 
diff --git a/mm/shmem.c b/mm/shmem.c
index 165fa6332993..014e576a617b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1766,7 +1766,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	}
 
 	sbinfo = SHMEM_SB(inode->i_sb);
-	charge_mm = vma ? vma->vm_mm : current->mm;
+	charge_mm = vma ? vma->vm_mm : NULL;
 
 	page = find_lock_entry(mapping, index);
 	if (xa_is_value(page)) {
This modifies the shmem and mm charge logic so that now if there is no
mm set (as in the case of tmpfs backed loop device), we charge the
current memcg, if set.

Signed-off-by: Dan Schatzberg <dschatzberg@fb.com>
---
 mm/memcontrol.c | 11 ++++++++---
 mm/shmem.c      |  2 +-
 2 files changed, 9 insertions(+), 4 deletions(-)