| Message ID | 8e8a0c4644d5eb01b7f79ec9b67c2b240f4a6434.1611798287.git.baolin.wang@linux.alibaba.com (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | [v2] blk-cgroup: Use cond_resched() when destroy blkgs |
On Thu, Jan 28, 2021 at 11:22:00AM +0800, Baolin Wang wrote:
> On a !PREEMPT kernel, we can hit the softlockup below when stress
> testing repeatedly creates and destroys block cgroups. The reason is
> that the loop in blkcg_destroy_blkgs() may take a long time to acquire
> the queue's lock, or the system can accumulate a huge number of blkgs
> in pathological cases. To avoid this, add a need_resched() check to
> each loop iteration and, if it returns true, release the locks and
> call cond_resched(), since blkcg_destroy_blkgs() is not called from
> atomic context.
>
> [ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
> [ 4757.010698] Call trace:
> [ 4757.010700] blkcg_destroy_blkgs+0x68/0x150
> [ 4757.010701] cgwb_release_workfn+0x104/0x158
> [ 4757.010702] process_one_work+0x1bc/0x3f0
> [ 4757.010704] worker_thread+0x164/0x468
> [ 4757.010705] kthread+0x108/0x138
>
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.
On 1/27/21 8:22 PM, Baolin Wang wrote:
> On a !PREEMPT kernel, we can hit the softlockup below when stress
> testing repeatedly creates and destroys block cgroups. The reason is
> that the loop in blkcg_destroy_blkgs() may take a long time to acquire
> the queue's lock, or the system can accumulate a huge number of blkgs
> in pathological cases. To avoid this, add a need_resched() check to
> each loop iteration and, if it returns true, release the locks and
> call cond_resched(), since blkcg_destroy_blkgs() is not called from
> atomic context.
>
> [ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
> [ 4757.010698] Call trace:
> [ 4757.010700] blkcg_destroy_blkgs+0x68/0x150
> [ 4757.010701] cgwb_release_workfn+0x104/0x158
> [ 4757.010702] process_one_work+0x1bc/0x3f0
> [ 4757.010704] worker_thread+0x164/0x468
> [ 4757.010705] kthread+0x108/0x138

Kind of ugly with the two clauses for dropping the blkcg lock, one
being a cpu_relax() and the other a resched. How about something
like this:

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 031114d454a6..4221a1539391 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1016,6 +1016,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
  */
 void blkcg_destroy_blkgs(struct blkcg *blkcg)
 {
+	might_sleep();
+
 	spin_lock_irq(&blkcg->lock);
 
 	while (!hlist_empty(&blkcg->blkg_list)) {
@@ -1023,14 +1025,20 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
 						struct blkcg_gq, blkcg_node);
 		struct request_queue *q = blkg->q;
 
-		if (spin_trylock(&q->queue_lock)) {
-			blkg_destroy(blkg);
-			spin_unlock(&q->queue_lock);
-		} else {
+		if (need_resched() || !spin_trylock(&q->queue_lock)) {
+			/*
+			 * Given that the system can accumulate a huge number
+			 * of blkgs in pathological cases, check to see if we
+			 * need to reschedule to avoid softlockup.
+			 */
 			spin_unlock_irq(&blkcg->lock);
-			cpu_relax();
+			cond_resched();
 			spin_lock_irq(&blkcg->lock);
+			continue;
 		}
+
+		blkg_destroy(blkg);
+		spin_unlock(&q->queue_lock);
 	}
 
 	spin_unlock_irq(&blkcg->lock);
On 2021/1/28 11:41, Jens Axboe wrote:
> On 1/27/21 8:22 PM, Baolin Wang wrote:
>> On a !PREEMPT kernel, we can hit the softlockup below when stress
>> testing repeatedly creates and destroys block cgroups. The reason is
>> that the loop in blkcg_destroy_blkgs() may take a long time to acquire
>> the queue's lock, or the system can accumulate a huge number of blkgs
>> in pathological cases. To avoid this, add a need_resched() check to
>> each loop iteration and, if it returns true, release the locks and
>> call cond_resched(), since blkcg_destroy_blkgs() is not called from
>> atomic context.
>>
>> [ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
>> [ 4757.010698] Call trace:
>> [ 4757.010700] blkcg_destroy_blkgs+0x68/0x150
>> [ 4757.010701] cgwb_release_workfn+0x104/0x158
>> [ 4757.010702] process_one_work+0x1bc/0x3f0
>> [ 4757.010704] worker_thread+0x164/0x468
>> [ 4757.010705] kthread+0x108/0x138
>
> Kind of ugly with the two clauses for dropping the blkcg lock, one
> being a cpu_relax() and the other a resched. How about something
> like this:
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index 031114d454a6..4221a1539391 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -1016,6 +1016,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
>   */
>  void blkcg_destroy_blkgs(struct blkcg *blkcg)
>  {
> +	might_sleep();
> +
>  	spin_lock_irq(&blkcg->lock);
>  
>  	while (!hlist_empty(&blkcg->blkg_list)) {
> @@ -1023,14 +1025,20 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
>  						struct blkcg_gq, blkcg_node);
>  		struct request_queue *q = blkg->q;
>  
> -		if (spin_trylock(&q->queue_lock)) {
> -			blkg_destroy(blkg);
> -			spin_unlock(&q->queue_lock);
> -		} else {
> +		if (need_resched() || !spin_trylock(&q->queue_lock)) {
> +			/*
> +			 * Given that the system can accumulate a huge number
> +			 * of blkgs in pathological cases, check to see if we
> +			 * need to reschedule to avoid softlockup.
> +			 */
>  			spin_unlock_irq(&blkcg->lock);
> -			cpu_relax();
> +			cond_resched();
>  			spin_lock_irq(&blkcg->lock);
> +			continue;
>  		}
> +
> +		blkg_destroy(blkg);
> +		spin_unlock(&q->queue_lock);
>  	}
>  
>  	spin_unlock_irq(&blkcg->lock);
>

Looks better to me. Do I need to resend with your suggestion? Thanks.
On 1/27/21 8:49 PM, Baolin Wang wrote:
> On 2021/1/28 11:41, Jens Axboe wrote:
>> On 1/27/21 8:22 PM, Baolin Wang wrote:
>>> On a !PREEMPT kernel, we can hit the softlockup below when stress
>>> testing repeatedly creates and destroys block cgroups. The reason is
>>> that the loop in blkcg_destroy_blkgs() may take a long time to acquire
>>> the queue's lock, or the system can accumulate a huge number of blkgs
>>> in pathological cases. To avoid this, add a need_resched() check to
>>> each loop iteration and, if it returns true, release the locks and
>>> call cond_resched(), since blkcg_destroy_blkgs() is not called from
>>> atomic context.
>>>
>>> [ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
>>> [ 4757.010698] Call trace:
>>> [ 4757.010700] blkcg_destroy_blkgs+0x68/0x150
>>> [ 4757.010701] cgwb_release_workfn+0x104/0x158
>>> [ 4757.010702] process_one_work+0x1bc/0x3f0
>>> [ 4757.010704] worker_thread+0x164/0x468
>>> [ 4757.010705] kthread+0x108/0x138
>>
>> Kind of ugly with the two clauses for dropping the blkcg lock, one
>> being a cpu_relax() and the other a resched. How about something
>> like this:
>>
>> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
>> index 031114d454a6..4221a1539391 100644
>> --- a/block/blk-cgroup.c
>> +++ b/block/blk-cgroup.c
>> @@ -1016,6 +1016,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
>>   */
>>  void blkcg_destroy_blkgs(struct blkcg *blkcg)
>>  {
>> +	might_sleep();
>> +
>>  	spin_lock_irq(&blkcg->lock);
>>  
>>  	while (!hlist_empty(&blkcg->blkg_list)) {
>> @@ -1023,14 +1025,20 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
>>  						struct blkcg_gq, blkcg_node);
>>  		struct request_queue *q = blkg->q;
>>  
>> -		if (spin_trylock(&q->queue_lock)) {
>> -			blkg_destroy(blkg);
>> -			spin_unlock(&q->queue_lock);
>> -		} else {
>> +		if (need_resched() || !spin_trylock(&q->queue_lock)) {
>> +			/*
>> +			 * Given that the system can accumulate a huge number
>> +			 * of blkgs in pathological cases, check to see if we
>> +			 * need to reschedule to avoid softlockup.
>> +			 */
>>  			spin_unlock_irq(&blkcg->lock);
>> -			cpu_relax();
>> +			cond_resched();
>>  			spin_lock_irq(&blkcg->lock);
>> +			continue;
>>  		}
>> +
>> +		blkg_destroy(blkg);
>> +		spin_unlock(&q->queue_lock);
>>  	}
>>  
>>  	spin_unlock_irq(&blkcg->lock);
>>
>
> Looks better to me. Do I need to resend with your suggestion? Thanks.

Probably best, gives Tejun another chance to sign off on it :-)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 3465d6e..94eeed7 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1016,6 +1016,8 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
  */
 void blkcg_destroy_blkgs(struct blkcg *blkcg)
 {
+	might_sleep();
+
 	spin_lock_irq(&blkcg->lock);
 
 	while (!hlist_empty(&blkcg->blkg_list)) {
@@ -1031,6 +1033,17 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
 			cpu_relax();
 			spin_lock_irq(&blkcg->lock);
 		}
+
+		/*
+		 * Given that the system can accumulate a huge number
+		 * of blkgs in pathological cases, check to see if we
+		 * need to reschedule to avoid softlockup.
+		 */
+		if (need_resched()) {
+			spin_unlock_irq(&blkcg->lock);
+			cond_resched();
+			spin_lock_irq(&blkcg->lock);
+		}
 	}
 
 	spin_unlock_irq(&blkcg->lock);
On a !PREEMPT kernel, we can hit the softlockup below when stress
testing repeatedly creates and destroys block cgroups. The reason is
that the loop in blkcg_destroy_blkgs() may take a long time to acquire
the queue's lock, or the system can accumulate a huge number of blkgs
in pathological cases. To avoid this, add a need_resched() check to
each loop iteration and, if it returns true, release the locks and
call cond_resched(), since blkcg_destroy_blkgs() is not called from
atomic context.

[ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
[ 4757.010698] Call trace:
[ 4757.010700] blkcg_destroy_blkgs+0x68/0x150
[ 4757.010701] cgwb_release_workfn+0x104/0x158
[ 4757.010702] process_one_work+0x1bc/0x3f0
[ 4757.010704] worker_thread+0x164/0x468
[ 4757.010705] kthread+0x108/0x138

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Changes from v1:
 - Add might_sleep() in blkcg_destroy_blkgs().
 - Add an explicit need_resched() check before releasing the lock.
 - Add some comments.
---
 block/blk-cgroup.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)
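[Editor's note] For quick reference, here is a sketch of how the whole of
blkcg_destroy_blkgs() reads once the v2 hunks above are applied. The lines
between the hunks are reconstructed from the diff context, not copied
verbatim from block/blk-cgroup.c, so treat this as an approximation of the
resulting function rather than the exact tree state:

void blkcg_destroy_blkgs(struct blkcg *blkcg)
{
	/* The loop below may sleep in cond_resched(); warn in atomic context. */
	might_sleep();

	spin_lock_irq(&blkcg->lock);

	while (!hlist_empty(&blkcg->blkg_list)) {
		struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first,
						    struct blkcg_gq,
						    blkcg_node);
		struct request_queue *q = blkg->q;

		/*
		 * Destroy the blkg only if the queue lock can be taken
		 * without spinning; otherwise drop blkcg->lock, let the
		 * lock holder make progress, and retry.
		 */
		if (spin_trylock(&q->queue_lock)) {
			blkg_destroy(blkg);
			spin_unlock(&q->queue_lock);
		} else {
			spin_unlock_irq(&blkcg->lock);
			cpu_relax();
			spin_lock_irq(&blkcg->lock);
		}

		/*
		 * Given that the system can accumulate a huge number of
		 * blkgs in pathological cases, check to see if we need
		 * to reschedule to avoid softlockup.
		 */
		if (need_resched()) {
			spin_unlock_irq(&blkcg->lock);
			cond_resched();
			spin_lock_irq(&blkcg->lock);
		}
	}

	spin_unlock_irq(&blkcg->lock);
}

Note that the need_resched() check sits inside the while loop, so even a
long run of successful trylocks over a huge blkg list yields the CPU
periodically, which is exactly what the softlockup trace above was missing.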