Field | Value
---|---
Message ID | 20230507170631.89607-1-hanjinke.666@bytedance.com (mailing list archive)
State | New, archived
Series | [v3] blk-throttle: Fix io statistics for cgroup v1
On Mon, May 08, 2023 at 01:06:31AM +0800, Jinke Han wrote:
> From: Jinke Han <hanjinke.666@bytedance.com>
>
> After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
> blkio.throttle.io_serviced and blkio.throttle.io_service_bytes became
> the only stable io stats interface of cgroup v1, and these statistics
> are done in the blk-throttle code. But the current code only counts
> bios that are actually throttled. When the user has not set a throttle
> limit, the io stats for cgroup v1 show nothing. Fix it by following the
> statistical method of v2 so that all ios are counted accurately.
>
> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
> Tested-by: Andrea Righi <andrea.righi@canonical.com>
> Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.
On Mon, May 8, 2023 at 1:07 AM Jinke Han <hanjinke.666@bytedance.com> wrote:
>
> From: Jinke Han <hanjinke.666@bytedance.com>
>
> [...]
>
> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
> Tested-by: Andrea Righi <andrea.righi@canonical.com>
> Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>

Good catch.

Acked-by: Muchun Song <songmuchun@bytedance.com>

Thanks.
Hi Jens,

Could you consider merging this patch? I think this is a fix. It has
also been acked by Tejun Heo and Muchun Song, and tested by Andrea
Righi.

On 2023/5/8 1:06 AM, Jinke Han wrote:
> From: Jinke Han <hanjinke.666@bytedance.com>
>
> [...]
>
> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
> Tested-by: Andrea Righi <andrea.righi@canonical.com>
> Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
> ---
>  block/blk-cgroup.c   | 6 ++++--
>  block/blk-throttle.c | 6 ------
>  block/blk-throttle.h | 9 +++++++++
>  3 files changed, 13 insertions(+), 8 deletions(-)
>
> [...]
On Mon, 08 May 2023 01:06:31 +0800, Jinke Han wrote:
> After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
> blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
> the only stable io stats interface of cgroup v1, and these statistics
> are done in the blk-throttle code.
>
> [...]

Applied, thanks!

[1/1] blk-throttle: Fix io statistics for cgroup v1
      commit: ad7c3b41e86b59943a903d23c7b037d820e6270c

Best regards,
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 0ce64dd73cfe..5b29912a0ee2 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -2048,6 +2048,9 @@ void blk_cgroup_bio_start(struct bio *bio)
 	struct blkg_iostat_set *bis;
 	unsigned long flags;
 
+	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
+		return;
+
 	/* Root-level stats are sourced from system-wide IO stats */
 	if (!cgroup_parent(blkcg->css.cgroup))
 		return;
@@ -2079,8 +2082,7 @@ void blk_cgroup_bio_start(struct bio *bio)
 	}
 
 	u64_stats_update_end_irqrestore(&bis->sync, flags);
-	if (cgroup_subsys_on_dfl(io_cgrp_subsys))
-		cgroup_rstat_updated(blkcg->css.cgroup, cpu);
+	cgroup_rstat_updated(blkcg->css.cgroup, cpu);
 	put_cpu();
 }
 
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 9d010d867fbf..7397ff199d66 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2178,12 +2178,6 @@ bool __blk_throtl_bio(struct bio *bio)
 
 	rcu_read_lock();
 
-	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
-		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
-				bio->bi_iter.bi_size);
-		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
-	}
-
 	spin_lock_irq(&q->queue_lock);
 
 	throtl_update_latency_buckets(td);
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index ef4b7a4de987..d1ccbfe9f797 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -185,6 +185,15 @@ static inline bool blk_should_throtl(struct bio *bio)
 	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
 	int rw = bio_data_dir(bio);
 
+	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
+		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
+			bio_set_flag(bio, BIO_CGROUP_ACCT);
+			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
+					bio->bi_iter.bi_size);
+		}
+		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
+	}
+
 	/* iops limit is always counted */
 	if (tg->has_rules_iops[rw])
 		return true;