Message ID | 20230223031947.3717433-1-yangerkun@huaweicloud.com (mailing list archive) |
---|---|
State | Superseded, archived |
Delegated to: | Mike Snitzer |
Series | dm-crypt: fix softlockup in dmcrypt_write |
On 2/22/23 19:19, yangerkun wrote:
> @@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
>
>  		BUG_ON(rb_parent(write_tree.rb_node));
>
> +		if (time_is_before_jiffies(start_time + HZ)) {
> +			schedule();
> +			start_time = jiffies;
> +		}

Why schedule() instead of cond_resched()?

Thanks,

Bart.
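Concretely, the cond_resched() variant being asked about here would look roughly like this against the hunk quoted above (a sketch only, not a tested change):

    /* Same once-per-second rate limit, but only yield if the scheduler
     * actually wants to run something else. */
    if (time_is_before_jiffies(start_time + HZ)) {
        cond_resched();
        start_time = jiffies;
    }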
On 2023/2/26 10:01, Bart Van Assche wrote:
> On 2/22/23 19:19, yangerkun wrote:
>> @@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
>>  		BUG_ON(rb_parent(write_tree.rb_node));
>> +		if (time_is_before_jiffies(start_time + HZ)) {
>> +			schedule();
>> +			start_time = jiffies;
>> +		}
>
> Why schedule() instead of cond_resched()?

cond_resched() may not really schedule, which could trigger the problem
too; but after 1 second, it seems it may never happen?

Thanks,
Kun.

> Thanks,
>
> Bart.
On Sun, Feb 26 2023 at  8:31P -0500,
yangerkun <yangerkun@huaweicloud.com> wrote:

> On 2023/2/26 10:01, Bart Van Assche wrote:
> > On 2/22/23 19:19, yangerkun wrote:
> > > @@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
> > >  		BUG_ON(rb_parent(write_tree.rb_node));
> > > +		if (time_is_before_jiffies(start_time + HZ)) {
> > > +			schedule();
> > > +			start_time = jiffies;
> > > +		}
> >
> > Why schedule() instead of cond_resched()?
>
> cond_resched() may not really schedule, which could trigger the problem
> too; but after 1 second, it seems it may never happen?

I had the same question as Bart when reviewing your homegrown
conditional schedule().  Hopefully you can reproduce this issue?  If
so, please see if simply using cond_resched() fixes the issue.

Thanks,
Mike
On Mon, Feb 27 2023 at 12:55P -0500,
Mike Snitzer <snitzer@kernel.org> wrote:

> On Sun, Feb 26 2023 at  8:31P -0500,
> yangerkun <yangerkun@huaweicloud.com> wrote:
>
> > On 2023/2/26 10:01, Bart Van Assche wrote:
> > > On 2/22/23 19:19, yangerkun wrote:
> > > > @@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
> > > >  		BUG_ON(rb_parent(write_tree.rb_node));
> > > > +		if (time_is_before_jiffies(start_time + HZ)) {
> > > > +			schedule();
> > > > +			start_time = jiffies;
> > > > +		}
> > >
> > > Why schedule() instead of cond_resched()?
> >
> > cond_resched() may not really schedule, which could trigger the problem
> > too; but after 1 second, it seems it may never happen?
>
> I had the same question as Bart when reviewing your homegrown
> conditional schedule().  Hopefully you can reproduce this issue?  If
> so, please see if simply using cond_resched() fixes the issue.

This seems like a more appropriate patch:

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 87c5706131f2..faba1be572f9 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1937,6 +1937,7 @@ static int dmcrypt_write(void *data)
 			io = crypt_io_from_node(rb_first(&write_tree));
 			rb_erase(&io->rb_node, &write_tree);
 			kcryptd_io_write(io);
+			cond_resched();
 		} while (!RB_EMPTY_ROOT(&write_tree));
 		blk_finish_plug(&plug);
 	}
On Mon, Feb 27 2023 at  1:03P -0500,
Mike Snitzer <snitzer@kernel.org> wrote:

> On Mon, Feb 27 2023 at 12:55P -0500,
> Mike Snitzer <snitzer@kernel.org> wrote:
>
> > On Sun, Feb 26 2023 at  8:31P -0500,
> > yangerkun <yangerkun@huaweicloud.com> wrote:
> >
> > > On 2023/2/26 10:01, Bart Van Assche wrote:
> > > > On 2/22/23 19:19, yangerkun wrote:
> > > > > @@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
> > > > >  		BUG_ON(rb_parent(write_tree.rb_node));
> > > > > +		if (time_is_before_jiffies(start_time + HZ)) {
> > > > > +			schedule();
> > > > > +			start_time = jiffies;
> > > > > +		}
> > > >
> > > > Why schedule() instead of cond_resched()?
> > >
> > > cond_resched() may not really schedule, which could trigger the problem
> > > too; but after 1 second, it seems it may never happen?
> >
> > I had the same question as Bart when reviewing your homegrown
> > conditional schedule().  Hopefully you can reproduce this issue?  If
> > so, please see if simply using cond_resched() fixes the issue.
>
> This seems like a more appropriate patch:
>
> diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
> index 87c5706131f2..faba1be572f9 100644
> --- a/drivers/md/dm-crypt.c
> +++ b/drivers/md/dm-crypt.c
> @@ -1937,6 +1937,7 @@ static int dmcrypt_write(void *data)
>  			io = crypt_io_from_node(rb_first(&write_tree));
>  			rb_erase(&io->rb_node, &write_tree);
>  			kcryptd_io_write(io);
> +			cond_resched();
>  		} while (!RB_EMPTY_ROOT(&write_tree));
>  		blk_finish_plug(&plug);
>  	}

or:

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 87c5706131f2..3ba2fd3e4358 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1934,6 +1934,7 @@ static int dmcrypt_write(void *data)
 		 */
 		blk_start_plug(&plug);
 		do {
+			cond_resched();
 			io = crypt_io_from_node(rb_first(&write_tree));
 			rb_erase(&io->rb_node, &write_tree);
 			kcryptd_io_write(io);
On 2023/2/28 1:55, Mike Snitzer wrote:
> On Sun, Feb 26 2023 at  8:31P -0500,
> yangerkun <yangerkun@huaweicloud.com> wrote:
>
>> On 2023/2/26 10:01, Bart Van Assche wrote:
>>> On 2/22/23 19:19, yangerkun wrote:
>>>> @@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
>>>>  		BUG_ON(rb_parent(write_tree.rb_node));
>>>> +		if (time_is_before_jiffies(start_time + HZ)) {
>>>> +			schedule();
>>>> +			start_time = jiffies;
>>>> +		}
>>>
>>> Why schedule() instead of cond_resched()?
>>
>> cond_resched() may not really schedule, which could trigger the problem
>> too; but after 1 second, it seems it may never happen?
>
> I had the same question as Bart when reviewing your homegrown
> conditional schedule().  Hopefully you can reproduce this issue?  If
> so, please see if simply using cond_resched() fixes the issue.

Yes, our test case can trigger the issue; we will retest with
cond_resched().

Thanks,
Kun.

>
> Thanks,
> Mike
On 2023/2/28 2:06, Mike Snitzer wrote:
> On Mon, Feb 27 2023 at  1:03P -0500,
> Mike Snitzer <snitzer@kernel.org> wrote:
>
>> On Mon, Feb 27 2023 at 12:55P -0500,
>> Mike Snitzer <snitzer@kernel.org> wrote:
>>
>>> On Sun, Feb 26 2023 at  8:31P -0500,
>>> yangerkun <yangerkun@huaweicloud.com> wrote:
>>>
>>>> On 2023/2/26 10:01, Bart Van Assche wrote:
>>>>> On 2/22/23 19:19, yangerkun wrote:
>>>>>> @@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
>>>>>>  		BUG_ON(rb_parent(write_tree.rb_node));
>>>>>> +		if (time_is_before_jiffies(start_time + HZ)) {
>>>>>> +			schedule();
>>>>>> +			start_time = jiffies;
>>>>>> +		}
>>>>>
>>>>> Why schedule() instead of cond_resched()?
>>>>
>>>> cond_resched() may not really schedule, which could trigger the problem
>>>> too; but after 1 second, it seems it may never happen?
>>>
>>> I had the same question as Bart when reviewing your homegrown
>>> conditional schedule().  Hopefully you can reproduce this issue?  If
>>> so, please see if simply using cond_resched() fixes the issue.
>>
>> This seems like a more appropriate patch:
>>
>> diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
>> index 87c5706131f2..faba1be572f9 100644
>> --- a/drivers/md/dm-crypt.c
>> +++ b/drivers/md/dm-crypt.c
>> @@ -1937,6 +1937,7 @@ static int dmcrypt_write(void *data)
>>  			io = crypt_io_from_node(rb_first(&write_tree));
>>  			rb_erase(&io->rb_node, &write_tree);
>>  			kcryptd_io_write(io);
>> +			cond_resched();
>>  		} while (!RB_EMPTY_ROOT(&write_tree));
>>  		blk_finish_plug(&plug);
>>  	}
>
> or:
>
> diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
> index 87c5706131f2..3ba2fd3e4358 100644
> --- a/drivers/md/dm-crypt.c
> +++ b/drivers/md/dm-crypt.c
> @@ -1934,6 +1934,7 @@ static int dmcrypt_write(void *data)
>  		 */
>  		blk_start_plug(&plug);
>  		do {
> +			cond_resched();
>  			io = crypt_io_from_node(rb_first(&write_tree));
>  			rb_erase(&io->rb_node, &write_tree);
>  			kcryptd_io_write(io);

Hi,

Thanks a lot for your review!

It's OK to fix the softlockup, but for async write encryption,
kcryptd_crypt_write_io_submit() adds bios to write_tree, and if we call
cond_resched() before every kcryptd_io_write(), write performance may
suffer under high CPU usage.

kcryptd_crypt_write_io_submit() wakes up write_thread whenever it finds
an empty write_tree, and dmcrypt_write() peels off the old write_tree to
submit its bios, so there cannot be too many bios in write_tree. That is
why I chose to yield the CPU before the 'while' loop that submits the
bios...

Thanks,
Kun.
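A minimal sketch (simplified, not the actual dm-crypt code) of the placement described above: yield at most once per peeled write_tree batch rather than before every bio.

    while (1) {
        /* ... sleep until woken, then detach the whole write_tree
         * under cc->write_thread_lock ... */

        cond_resched();                 /* at most once per batch */

        blk_start_plug(&plug);
        do {
            io = crypt_io_from_node(rb_first(&write_tree));
            rb_erase(&io->rb_node, &write_tree);
            kcryptd_io_write(io);       /* no reschedule between bios */
        } while (!RB_EMPTY_ROOT(&write_tree));
        blk_finish_plug(&plug);
    }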
On 2023/2/28 9:25, yangerkun wrote:
>
> On 2023/2/28 1:55, Mike Snitzer wrote:
>> On Sun, Feb 26 2023 at  8:31P -0500,
>> yangerkun <yangerkun@huaweicloud.com> wrote:
>>
>>> On 2023/2/26 10:01, Bart Van Assche wrote:
>>>> On 2/22/23 19:19, yangerkun wrote:
>>>>> @@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
>>>>>  		BUG_ON(rb_parent(write_tree.rb_node));
>>>>> +		if (time_is_before_jiffies(start_time + HZ)) {
>>>>> +			schedule();
>>>>> +			start_time = jiffies;
>>>>> +		}
>>>>
>>>> Why schedule() instead of cond_resched()?
>>>
>>> cond_resched() may not really schedule, which could trigger the problem
>>> too; but after 1 second, it seems it may never happen?
>>
>> I had the same question as Bart when reviewing your homegrown
>> conditional schedule().  Hopefully you can reproduce this issue?  If
>> so, please see if simply using cond_resched() fixes the issue.
>
> Yes, our test case can trigger the issue; we will retest with
> cond_resched().

Without this patch the softlockup triggers quickly; with this patch,
whether we use cond_resched() or schedule(), the softlockup no longer
triggers after a two-hour test.

Thanks,
Kun.

>> Thanks,
>> Mike
On Tue, 28 Feb 2023, yangerkun wrote:

> On 2023/2/28 2:06, Mike Snitzer wrote:
> > On Mon, Feb 27 2023 at  1:03P -0500,
> > Mike Snitzer <snitzer@kernel.org> wrote:
> >
> >> On Mon, Feb 27 2023 at 12:55P -0500,
> >> Mike Snitzer <snitzer@kernel.org> wrote:
> >>
> >>> On Sun, Feb 26 2023 at  8:31P -0500,
> >>> yangerkun <yangerkun@huaweicloud.com> wrote:
> >>>
> >>>> On 2023/2/26 10:01, Bart Van Assche wrote:
> >>>>> On 2/22/23 19:19, yangerkun wrote:
> >>>>>> @@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
> >>>>>>  		BUG_ON(rb_parent(write_tree.rb_node));
> >>>>>> +		if (time_is_before_jiffies(start_time + HZ)) {
> >>>>>> +			schedule();
> >>>>>> +			start_time = jiffies;
> >>>>>> +		}
> >>>>>
> >>>>> Why schedule() instead of cond_resched()?
> >>>>
> >>>> cond_resched() may not really schedule, which could trigger the
> >>>> problem too; but after 1 second, it seems it may never happen?
> >>>
> >>> I had the same question as Bart when reviewing your homegrown
> >>> conditional schedule().  Hopefully you can reproduce this issue?  If
> >>> so, please see if simply using cond_resched() fixes the issue.
> >>
> >> This seems like a more appropriate patch:
> >>
> >> diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
> >> index 87c5706131f2..faba1be572f9 100644
> >> --- a/drivers/md/dm-crypt.c
> >> +++ b/drivers/md/dm-crypt.c
> >> @@ -1937,6 +1937,7 @@ static int dmcrypt_write(void *data)
> >>  			io = crypt_io_from_node(rb_first(&write_tree));
> >>  			rb_erase(&io->rb_node, &write_tree);
> >>  			kcryptd_io_write(io);
> >> +			cond_resched();
> >>  		} while (!RB_EMPTY_ROOT(&write_tree));
> >>  		blk_finish_plug(&plug);
> >>  	}
> >
> > or:
> >
> > diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
> > index 87c5706131f2..3ba2fd3e4358 100644
> > --- a/drivers/md/dm-crypt.c
> > +++ b/drivers/md/dm-crypt.c
> > @@ -1934,6 +1934,7 @@ static int dmcrypt_write(void *data)
> >  		 */
> >  		blk_start_plug(&plug);
> >  		do {
> > +			cond_resched();
> >  			io = crypt_io_from_node(rb_first(&write_tree));
> >  			rb_erase(&io->rb_node, &write_tree);
> >  			kcryptd_io_write(io);
>
> Hi,
>
> Thanks a lot for your review!
>
> It's OK to fix the softlockup, but for async write encryption,
> kcryptd_crypt_write_io_submit() adds bios to write_tree, and if we call
> cond_resched() before every kcryptd_io_write(), write performance may
> suffer under high CPU usage.

Hi

To fix this problem, find the PID of the process "dmcrypt_write" and
change its priority to -20, for example "renice -n -20 -p 34748".

This is the proper way to fix it; locking up the process for one second
is not.

We used to have high-priority workqueues by default, but it caused audio
playback skipping, so we had to revert it - see
f612b2132db529feac4f965f28a1b9258ea7c22b.

Perhaps we should add an option to have high-priority kernel threads?

Mikulas

> kcryptd_crypt_write_io_submit() wakes up write_thread whenever it finds
> an empty write_tree, and dmcrypt_write() peels off the old write_tree to
> submit its bios, so there cannot be too many bios in write_tree. That is
> why I chose to yield the CPU before the 'while' loop that submits the
> bios...
>
> Thanks,
> Kun.
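To make the last suggestion concrete, a purely hypothetical sketch of what a built-in default could look like: set_user_nice() and MIN_NICE are the existing kernel helpers, while the function name and the hard-coded priority are made up here for illustration, not a proposed patch.

    #include <linux/sched.h>
    #include <linux/sched/prio.h>

    /*
     * Hypothetical illustration only: the write kthread lowers its own
     * nice value at startup, with the same effect as the manual
     * "renice -n -20" above, before entering the existing submit loop.
     */
    static int dmcrypt_write_highpri(void *data)
    {
    	set_user_nice(current, MIN_NICE);	/* MIN_NICE == -20 */

    	/* ... the existing dmcrypt_write() loop would run here ... */
    	return 0;
    }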
On 3/6/23 10:02, Mikulas Patocka wrote:
> On Tue, 28 Feb 2023, yangerkun wrote:
>> It's OK to fix the softlockup, but for async write encryption,
>> kcryptd_crypt_write_io_submit() adds bios to write_tree, and if we call
>> cond_resched() before every kcryptd_io_write(), write performance may
>> suffer under high CPU usage.
>
> Hi
>
> To fix this problem, find the PID of the process "dmcrypt_write" and
> change its priority to -20, for example "renice -n -20 -p 34748".
>
> This is the proper way to fix it; locking up the process for one second
> is not.
>
> We used to have high-priority workqueues by default, but it caused audio
> playback skipping, so we had to revert it - see
> f612b2132db529feac4f965f28a1b9258ea7c22b.
>
> Perhaps we should add an option to have high-priority kernel threads?

Would calling cond_resched() every n iterations instead of every
iteration help? From mm/swapfile.c:

	if (unlikely(--latency_ration < 0)) {
		cond_resched();
		latency_ration = LATENCY_LIMIT;
	}

Thanks,

Bart.
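Applied to the submission loop under discussion, that batching pattern might look like the following sketch (illustrative only, not a tested patch; DMCRYPT_LATENCY_LIMIT is a name invented for this sketch and 256 is an arbitrary batch size):

    #define DMCRYPT_LATENCY_LIMIT	256	/* arbitrary batch size for the example */

    	int latency_ration = DMCRYPT_LATENCY_LIMIT;

    	blk_start_plug(&plug);
    	do {
    		io = crypt_io_from_node(rb_first(&write_tree));
    		rb_erase(&io->rb_node, &write_tree);
    		kcryptd_io_write(io);

    		/* reschedule only every DMCRYPT_LATENCY_LIMIT bios */
    		if (unlikely(--latency_ration < 0)) {
    			cond_resched();
    			latency_ration = DMCRYPT_LATENCY_LIMIT;
    		}
    	} while (!RB_EMPTY_ROOT(&write_tree));
    	blk_finish_plug(&plug);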
On Mon, 6 Mar 2023, Bart Van Assche wrote:

> On 3/6/23 10:02, Mikulas Patocka wrote:
> > On Tue, 28 Feb 2023, yangerkun wrote:
> > > It's OK to fix the softlockup, but for async write encryption,
> > > kcryptd_crypt_write_io_submit() adds bios to write_tree, and if we call
> > > cond_resched() before every kcryptd_io_write(), write performance may
> > > suffer under high CPU usage.
> >
> > Hi
> >
> > To fix this problem, find the PID of the process "dmcrypt_write" and
> > change its priority to -20, for example "renice -n -20 -p 34748".
> >
> > This is the proper way to fix it; locking up the process for one second
> > is not.
> >
> > We used to have high-priority workqueues by default, but it caused audio
> > playback skipping, so we had to revert it - see
> > f612b2132db529feac4f965f28a1b9258ea7c22b.
> >
> > Perhaps we should add an option to have high-priority kernel threads?
>
> Would calling cond_resched() every n iterations instead of every
> iteration help? From mm/swapfile.c:
>
> 	if (unlikely(--latency_ration < 0)) {
> 		cond_resched();
> 		latency_ration = LATENCY_LIMIT;
> 	}
>
> Thanks,
>
> Bart.

I think that if this helps, it is really a bug in the scheduler... It
shouldn't switch tasks so often.

Mikulas
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 2653516bcdef..755a01d72cdb 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1891,6 +1891,7 @@ static int dmcrypt_write(void *data)
 {
 	struct crypt_config *cc = data;
 	struct dm_crypt_io *io;
+	unsigned long start_time = jiffies;
 
 	while (1) {
 		struct rb_root write_tree;
@@ -1913,6 +1914,7 @@ static int dmcrypt_write(void *data)
 
 			schedule();
 
+			start_time = jiffies;
 			set_current_state(TASK_RUNNING);
 			spin_lock_irq(&cc->write_thread_lock);
 			goto continue_locked;
@@ -1924,6 +1926,10 @@ static int dmcrypt_write(void *data)
 
 		BUG_ON(rb_parent(write_tree.rb_node));
 
+		if (time_is_before_jiffies(start_time + HZ)) {
+			schedule();
+			start_time = jiffies;
+		}
 		/*
 		 * Note: we cannot walk the tree here with rb_next because
 		 * the structures may be freed when kcryptd_io_write is called.