Message ID: 20210105225817.1036378-8-shy828301@gmail.com
State:      New, archived
Series:     Make shrinker's nr_deferred memcg aware
On 06.01.2021 01:58, Yang Shi wrote:
> Currently the number of deferred objects is per shrinker, but some slabs,
> for example, the vfs inode/dentry caches, are per memcg. This results in
> poor isolation among memcgs.
[...]
> @@ -209,9 +211,18 @@ static int memcg_expand_one_shrinker_info(struct mem_cgroup *memcg,
>  		if (!new)
>  			return -ENOMEM;
>  
> -		/* Set all old bits, clear all new bits */
> -		memset(new->map, (int)0xff, old_size);
> -		memset((void *)new->map + old_size, 0, size - old_size);
> +		new->map = (unsigned long *)((unsigned long)new + sizeof(*new));
> +		new->nr_deferred = (atomic_long_t *)((unsigned long)new +
> +					sizeof(*new) + m_size);

Can't we write this more compactly?

	new->map = (unsigned long *)(new + 1);
	new->nr_deferred = (atomic_long_t *)(new->map + 1);

> +
> +		/* map: set all old bits, clear all new bits */
> +		memset(new->map, (int)0xff, old_m_size);
> +		memset((void *)new->map + old_m_size, 0, m_size - old_m_size);
> +		/* nr_deferred: copy old values, clear all new values */
> +		memcpy((void *)new->nr_deferred, (void *)old->nr_deferred,
> +				old_d_size);

Why not

	memcpy(new->nr_deferred, old->nr_deferred, old_d_size);

?

[...]
> -	size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
> -	old_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long);
> +	m_size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
> +	d_size = (new_id + 1) * sizeof(atomic_long_t);
> +	size = m_size + d_size;
> +	old_m_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long);
> +	old_d_size = shrinker_nr_max * sizeof(atomic_long_t);
> +	old_size = old_m_size + old_d_size;
>  	if (size <= old_size)
>  		return 0;

This replication of patch [4/11] looks awkward. Please try to incorporate
the same changes to nr_deferred as I requested for shrinker_map in [4/11].
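A quick way to check the sizing arithmetic commented on above is a minimal
userspace sketch of the m_size/d_size computation. BITS_PER_LONG,
DIV_ROUND_UP and atomic_long_t are re-declared here as stand-ins for
illustration; this is not kernel code:

#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
typedef struct { long counter; } atomic_long_t;

int main(void)
{
	/* Mirror the patch's m_size/d_size computation for a few counts. */
	for (int nr = 1; nr <= 129; nr += 64) {
		size_t m_size = DIV_ROUND_UP(nr, BITS_PER_LONG) *
				sizeof(unsigned long);
		size_t d_size = nr * sizeof(atomic_long_t);

		printf("shrinker_nr_max=%3d map=%3zu bytes nr_deferred=%4zu bytes\n",
		       nr, m_size, d_size);
	}
	return 0;
}

Note how the map grows in whole unsigned-long steps while the nr_deferred
array grows linearly with the shrinker count.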
On Wed, Jan 6, 2021 at 3:07 AM Kirill Tkhai <ktkhai@virtuozzo.com> wrote:
>
> On 06.01.2021 01:58, Yang Shi wrote:
> > Currently the number of deferred objects is per shrinker, but some slabs,
> > for example, the vfs inode/dentry caches, are per memcg. This results in
> > poor isolation among memcgs.
[...]
> Can't we write this more compactly?
>
> 	new->map = (unsigned long *)(new + 1);
> 	new->nr_deferred = (atomic_long_t *)(new->map + 1);

Thanks for the suggestion, will incorporate it in v4.

[...]
> Why not
>
> 	memcpy(new->nr_deferred, old->nr_deferred, old_d_size);
>
> ?

Will fix in v4.

[...]
> This replication of patch [4/11] looks awkward. Please try to incorporate
> the same changes to nr_deferred as I requested for shrinker_map in [4/11].

Sure. Thanks.
On Wed, Jan 6, 2021 at 3:07 AM Kirill Tkhai <ktkhai@virtuozzo.com> wrote:
>
[...]
> Can't we write this more compactly?
>
> 	new->map = (unsigned long *)(new + 1);
> 	new->nr_deferred = (atomic_long_t *)(new->map + 1);

Looking at this again, the second line looks wrong. The layout should be:

	-------------------------
	| struct shrinker_info  |
	-------------------------
	| map array             |
	-------------------------
	| nr_deferred array     |
	-------------------------

new->map is the pointer to the map array and its type is "unsigned long *",
so "new->map + 1" only advances by one unsigned long (8 bytes on 64-bit).
But the map array may occupy more than one unsigned long, so nr_deferred
would point into the middle of the map array and the two arrays would
corrupt each other.

I think we could use "new->map + (shrinker_nr_max / BITS_PER_LONG) + 1".
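The layout question can be checked with a small userspace sketch of the
single-allocation scheme (the struct shape and the alloc_info() helper are
illustrative assumptions, not the kernel code). It shows why (new + 1) is a
correct start for the map while new->map + 1 only works when the bitmap
fits in a single unsigned long:

#include <assert.h>
#include <stdlib.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
typedef struct { long counter; } atomic_long_t;

struct info {
	unsigned long *map;		/* bitmap, map_longs elements */
	atomic_long_t *nr_deferred;	/* counters, nr_items elements */
};

/* One allocation: header, then the bitmap, then the counter array. */
static struct info *alloc_info(int nr_items)
{
	size_t map_longs = DIV_ROUND_UP(nr_items, BITS_PER_LONG);
	size_t m_size = map_longs * sizeof(unsigned long);
	size_t d_size = nr_items * sizeof(atomic_long_t);
	struct info *info = calloc(1, sizeof(*info) + m_size + d_size);

	if (!info)
		return NULL;
	/* (info + 1) lands right past the header: start of the bitmap. */
	info->map = (unsigned long *)(info + 1);
	/*
	 * Advance by the number of longs in the bitmap, not by 1;
	 * "info->map + 1" is only correct while the bitmap is one long.
	 */
	info->nr_deferred = (atomic_long_t *)(info->map + map_longs);
	return info;
}

int main(void)
{
	struct info *info = alloc_info(200);	/* 200 > BITS_PER_LONG */

	assert(info);
	/* nr_deferred must start at or after the end of the bitmap. */
	assert((char *)info->nr_deferred >=
	       (char *)(info->map + DIV_ROUND_UP(200, BITS_PER_LONG)));
	free(info);
	return 0;
}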
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e05bbe8277cc..5599082df623 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -93,12 +93,13 @@ struct lruvec_stat {
 };
 
 /*
- * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
- * which have elements charged to this memcg.
+ * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
+ * shrinkers, which have elements charged to this memcg.
  */
 struct memcg_shrinker_info {
 	struct rcu_head rcu;
-	unsigned long map[];
+	unsigned long *map;
+	atomic_long_t *nr_deferred;
 };
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0033659abf9e..72259253e414 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -193,10 +193,12 @@ static void memcg_free_shrinker_info_rcu(struct rcu_head *head)
 }
 
 static int memcg_expand_one_shrinker_info(struct mem_cgroup *memcg,
-					  int size, int old_size)
+					  int m_size, int d_size,
+					  int old_m_size, int old_d_size)
 {
 	struct memcg_shrinker_info *new, *old;
 	int nid;
+	int size = m_size + d_size;
 
 	for_each_node(nid) {
 		old = rcu_dereference_protected(
@@ -209,9 +211,18 @@ static int memcg_expand_one_shrinker_info(struct mem_cgroup *memcg,
 		if (!new)
 			return -ENOMEM;
 
-		/* Set all old bits, clear all new bits */
-		memset(new->map, (int)0xff, old_size);
-		memset((void *)new->map + old_size, 0, size - old_size);
+		new->map = (unsigned long *)((unsigned long)new + sizeof(*new));
+		new->nr_deferred = (atomic_long_t *)((unsigned long)new +
+					sizeof(*new) + m_size);
+
+		/* map: set all old bits, clear all new bits */
+		memset(new->map, (int)0xff, old_m_size);
+		memset((void *)new->map + old_m_size, 0, m_size - old_m_size);
+		/* nr_deferred: copy old values, clear all new values */
+		memcpy((void *)new->nr_deferred, (void *)old->nr_deferred,
+				old_d_size);
+		memset((void *)new->nr_deferred + old_d_size, 0,
+				d_size - old_d_size);
 
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
 		call_rcu(&old->rcu, memcg_free_shrinker_info_rcu);
@@ -226,9 +237,6 @@ void memcg_free_shrinker_info(struct mem_cgroup *memcg)
 	struct memcg_shrinker_info *info;
 	int nid;
 
-	if (mem_cgroup_is_root(memcg))
-		return;
-
 	for_each_node(nid) {
 		pn = mem_cgroup_nodeinfo(memcg, nid);
 		info = rcu_dereference_protected(pn->shrinker_info, true);
@@ -242,12 +250,13 @@ int memcg_alloc_shrinker_info(struct mem_cgroup *memcg)
 {
 	struct memcg_shrinker_info *info;
 	int nid, size, ret = 0;
-
-	if (mem_cgroup_is_root(memcg))
-		return 0;
+	int m_size, d_size = 0;
 
 	down_read(&shrinker_rwsem);
-	size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long);
+	m_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long);
+	d_size = shrinker_nr_max * sizeof(atomic_long_t);
+	size = m_size + d_size;
+
 	for_each_node(nid) {
 		info = kvzalloc(sizeof(*info) + size, GFP_KERNEL);
 		if (!info) {
@@ -255,6 +264,9 @@ int memcg_alloc_shrinker_info(struct mem_cgroup *memcg)
 			ret = -ENOMEM;
 			break;
 		}
+		info->map = (unsigned long *)((unsigned long)info + sizeof(*info));
+		info->nr_deferred = (atomic_long_t *)((unsigned long)info +
+					sizeof(*info) + m_size);
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
 	}
 	up_read(&shrinker_rwsem);
@@ -265,10 +277,16 @@ int memcg_alloc_shrinker_info(struct mem_cgroup *memcg)
 static int memcg_expand_shrinker_info(int new_id)
 {
 	int size, old_size, ret = 0;
+	int m_size, d_size = 0;
+	int old_m_size, old_d_size = 0;
 	struct mem_cgroup *memcg;
 
-	size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
-	old_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long);
+	m_size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
+	d_size = (new_id + 1) * sizeof(atomic_long_t);
+	size = m_size + d_size;
+	old_m_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long);
+	old_d_size = shrinker_nr_max * sizeof(atomic_long_t);
+	old_size = old_m_size + old_d_size;
 	if (size <= old_size)
 		return 0;
 
@@ -277,9 +295,8 @@ static int memcg_expand_shrinker_info(int new_id)
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		if (mem_cgroup_is_root(memcg))
-			continue;
-		ret = memcg_expand_one_shrinker_info(memcg, size, old_size);
+		ret = memcg_expand_one_shrinker_info(memcg, m_size, d_size,
+						     old_m_size, old_d_size);
 		if (ret) {
 			mem_cgroup_iter_break(NULL, memcg);
 			goto out;
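The expand path above must preserve the old per-memcg state while zeroing
only the newly added tail of each array. Below is a minimal userspace model
of just that copy/zero step, assuming a flat [header][map][nr_deferred]
layout; the names are illustrative, not the kernel implementation:

#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Grow both arrays: keep old map bits set, carry old counters over. */
static void *expand(const void *old, size_t hdr,
		    size_t old_m, size_t old_d, size_t m, size_t d)
{
	char *new = calloc(1, hdr + m + d);	/* new bits/slots start at 0 */
	const char *o = old;

	if (!new)
		return NULL;
	/* map: set all old bits (mirrors the patch's memset 0xff); the
	 * freshly added map bytes stay zero thanks to calloc. */
	memset(new + hdr, 0xff, old_m);
	/* nr_deferred: copy the old counters over unchanged. */
	memcpy(new + hdr + m, o + hdr + old_m, old_d);
	return new;
}

int main(void)
{
	size_t hdr = 16, old_m = 8, old_d = 64, m = 16, d = 128;
	char *old = calloc(1, hdr + old_m + old_d);
	char *new;

	assert(old);
	old[hdr + old_m] = 42;			/* pretend deferred count */
	new = expand(old, hdr, old_m, old_d, m, d);
	assert(new && new[hdr + m] == 42);	/* old counter survived */
	free(old);
	free(new);
	return 0;
}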
Currently the number of deferred objects is per shrinker, but some slabs,
for example, the vfs inode/dentry caches, are per memcg. This results in
poor isolation among memcgs.

The deferred objects typically are generated by __GFP_NOFS allocations; one
memcg with excessive __GFP_NOFS allocations may blow up deferred objects,
and other innocent memcgs may then suffer from over-shrinking, excessive
reclaim latency, etc.

For example, two workloads run in memcgA and memcgB respectively, and the
workload in B is vfs-heavy. If the workload in A generates excessive
deferred objects, B's vfs cache might be hit heavily (dropping half of the
cache) by B's limit reclaim or by global reclaim.

We observed this hit in our production environment, which was running a
vfs-heavy workload, as shown in the tracing log below:

<...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
cache items 246404277 delta 31345 total_scan 123202138
<...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
last shrinker return val 123186855

The ratio of vfs cache to page cache was 10:1 on this machine, and half of
the caches were dropped. This also caused a significant amount of page
cache to be dropped due to inode eviction.

Making nr_deferred per memcg for memcg-aware shrinkers would solve the
unfairness and bring better isolation.

When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the shrinker's
own nr_deferred is used. And non-memcg-aware shrinkers use the shrinker's
nr_deferred all the time.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/memcontrol.h |  7 +++---
 mm/vmscan.c                | 49 +++++++++++++++++++++++++-------------
 2 files changed, 37 insertions(+), 19 deletions(-)
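The fallback rule in the last two paragraphs (per-memcg nr_deferred only
for memcg-aware shrinkers with memcg enabled, the shrinker's own
nr_deferred otherwise) could be sketched roughly as below. The names and
struct shapes here are assumptions for illustration only; the series'
actual helpers may differ:

typedef struct { long counter; } atomic_long_t;

#define SHRINKER_MEMCG_AWARE	0x1

struct mem_cgroup_per_node {
	atomic_long_t *nr_deferred;	/* indexed by shrinker id */
};

struct mem_cgroup {
	struct mem_cgroup_per_node *nodeinfo[1];	/* one node here */
};

struct shrinker {
	unsigned int flags;
	int id;
	atomic_long_t *nr_deferred;	/* per-shrinker, indexed by nid */
};

/* memcg enabled + memcg-aware shrinker -> per-memcg slot; otherwise
 * fall back to the shrinker's own counter. */
static atomic_long_t *select_nr_deferred(struct shrinker *s,
					 struct mem_cgroup *memcg, int nid)
{
	if (memcg && (s->flags & SHRINKER_MEMCG_AWARE))
		return &memcg->nodeinfo[nid]->nr_deferred[s->id];
	return &s->nr_deferred[nid];
}

int main(void)
{
	atomic_long_t global[1] = { { 0 } };
	struct shrinker s = { .flags = 0, .id = 0, .nr_deferred = global };

	/* No memcg (or a non-aware shrinker): the shrinker's own slot. */
	return select_nr_deferred(&s, 0, 0) == &global[0] ? 0 : 1;
}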