Message ID: 20210303055917.66054-2-songmuchun@bytedance.com (mailing list archive)
State: New, archived
Series: Use obj_cgroup APIs to charge kmem pages
On Wed, Mar 03, 2021 at 01:59:13PM +0800, Muchun Song wrote:
> We know that the unit of slab object charging is bytes, and the unit of
> kmem page charging is PAGE_SIZE. If we want to reuse the obj_cgroup APIs
> to charge kmem pages, we should pass PAGE_SIZE (as the third parameter)
> to obj_cgroup_charge(). Because the size is already PAGE_SIZE, we can
> skip touching the objcg stock. obj_cgroup_{un}charge_page() are
> introduced to charge in units of pages.
>
> In a later patch, we can also reuse these two helpers to charge or
> uncharge a number of kernel pages to an object cgroup. This is just
> code movement without any functional changes.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

This patch looks good to me, even as a standalone refactoring.
Please rename obj_cgroup_charge_page() to obj_cgroup_charge_pages(),
and do the same with uncharge. The _page suffix usually means we are
dealing with a physical page (e.g. a struct page * argument), which is
not the case here.

Please add my Acked-by: Roman Gushchin <guro@fb.com>
after the renaming.

Thank you!
On Sat, Mar 6, 2021 at 2:56 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Wed, Mar 03, 2021 at 01:59:13PM +0800, Muchun Song wrote:
> > [...]
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> This patch looks good to me, even as a standalone refactoring.
> Please rename obj_cgroup_charge_page() to obj_cgroup_charge_pages(),
> and do the same with uncharge. The _page suffix usually means we are
> dealing with a physical page (e.g. a struct page * argument), which is
> not the case here.

Makes sense.

> Please add my Acked-by: Roman Gushchin <guro@fb.com>
> after the renaming.

Will do. Thanks for your review.

> Thank you!
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 845eec01ef9d..faae16def127 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3056,6 +3056,34 @@ static void memcg_free_cache_id(int id)
 	ida_simple_remove(&memcg_cache_ida, id);
 }
 
+static inline void obj_cgroup_uncharge_page(struct obj_cgroup *objcg,
+					    unsigned int nr_pages)
+{
+	rcu_read_lock();
+	__memcg_kmem_uncharge(obj_cgroup_memcg(objcg), nr_pages);
+	rcu_read_unlock();
+}
+
+static int obj_cgroup_charge_page(struct obj_cgroup *objcg, gfp_t gfp,
+				  unsigned int nr_pages)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	rcu_read_lock();
+retry:
+	memcg = obj_cgroup_memcg(objcg);
+	if (unlikely(!css_tryget(&memcg->css)))
+		goto retry;
+	rcu_read_unlock();
+
+	ret = __memcg_kmem_charge(memcg, gfp, nr_pages);
+
+	css_put(&memcg->css);
+
+	return ret;
+}
+
 /**
  * __memcg_kmem_charge: charge a number of kernel pages to a memcg
  * @memcg: memory cgroup to charge
@@ -3180,11 +3208,8 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
 		unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;
 		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
 
-		if (nr_pages) {
-			rcu_read_lock();
-			__memcg_kmem_uncharge(obj_cgroup_memcg(old), nr_pages);
-			rcu_read_unlock();
-		}
+		if (nr_pages)
+			obj_cgroup_uncharge_page(old, nr_pages);
 
 		/*
 		 * The leftover is flushed to the centralized per-memcg value.
@@ -3242,7 +3267,6 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
 
 int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
 {
-	struct mem_cgroup *memcg;
 	unsigned int nr_pages, nr_bytes;
 	int ret;
 
@@ -3259,24 +3283,16 @@ int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
 	 * refill_obj_stock(), called from this function or
 	 * independently later.
 	 */
-	rcu_read_lock();
-retry:
-	memcg = obj_cgroup_memcg(objcg);
-	if (unlikely(!css_tryget(&memcg->css)))
-		goto retry;
-	rcu_read_unlock();
-
 	nr_pages = size >> PAGE_SHIFT;
 	nr_bytes = size & (PAGE_SIZE - 1);
 
 	if (nr_bytes)
 		nr_pages += 1;
 
-	ret = __memcg_kmem_charge(memcg, gfp, nr_pages);
+	ret = obj_cgroup_charge_page(objcg, gfp, nr_pages);
 	if (!ret && nr_bytes)
 		refill_obj_stock(objcg, PAGE_SIZE - nr_bytes);
 
-	css_put(&memcg->css);
 	return ret;
 }
We know that the unit of slab object charging is bytes, and the unit of
kmem page charging is PAGE_SIZE. If we want to reuse the obj_cgroup APIs
to charge kmem pages, we should pass PAGE_SIZE (as the third parameter)
to obj_cgroup_charge(). Because the size is already PAGE_SIZE, we can
skip touching the objcg stock. obj_cgroup_{un}charge_page() are
introduced to charge in units of pages.

In a later patch, we can also reuse these two helpers to charge or
uncharge a number of kernel pages to an object cgroup. This is just
code movement without any functional changes.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memcontrol.c | 46 +++++++++++++++++++++++++++++++---------------
 1 file changed, 31 insertions(+), 15 deletions(-)