Message ID: 20190321214512.11524-5-longman@redhat.com (mailing list archive)
State: Not Applicable
Series: Signal: Fix hard lockup problem in flush_sigqueue()
On Thu, Mar 21, 2019 at 05:45:12PM -0400, Waiman Long wrote:
> If the freeing queue has many objects, freeing all of them consecutively
> may cause soft lockup especially on a debug kernel. So kmem_free_up_q()
> is modified to call cond_resched() if running in the process context.
>
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
>  mm/slab_common.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index dba20b4208f1..633a1d0f6d20 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1622,11 +1622,14 @@ EXPORT_SYMBOL_GPL(kmem_free_q_add);
>   * kmem_free_up_q - free all the objects in the freeing queue
>   * @head: freeing queue head
>   *
> - * Free all the objects in the freeing queue.
> + * Free all the objects in the freeing queue. The caller cannot hold any
> + * non-sleeping locks.
>   */
>  void kmem_free_up_q(struct kmem_free_q_head *head)
>  {
>  	struct kmem_free_q_node *node, *next;
> +	bool do_resched = !in_irq();
> +	int cnt = 0;
>
>  	for (node = head->first; node; node = next) {
>  		next = node->next;
> @@ -1634,6 +1637,12 @@ void kmem_free_up_q(struct kmem_free_q_head *head)
>  			kmem_cache_free(node->cachep, node);
>  		else
>  			kfree(node);
> +		/*
> +		 * Call cond_resched() every 256 objects freed when in
> +		 * process context.
> +		 */
> +		if (do_resched && !(++cnt & 0xff))
> +			cond_resched();

Why not just: cond_resched() ?

>  	}
>  }
>  EXPORT_SYMBOL_GPL(kmem_free_up_q);
> --
> 2.18.1
>
On 03/21/2019 06:00 PM, Peter Zijlstra wrote:
> On Thu, Mar 21, 2019 at 05:45:12PM -0400, Waiman Long wrote:
>> If the freeing queue has many objects, freeing all of them consecutively
>> may cause soft lockup especially on a debug kernel. So kmem_free_up_q()
>> is modified to call cond_resched() if running in the process context.
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> ---
>>  mm/slab_common.c | 11 ++++++++++-
>>  1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/slab_common.c b/mm/slab_common.c
>> index dba20b4208f1..633a1d0f6d20 100644
>> --- a/mm/slab_common.c
>> +++ b/mm/slab_common.c
>> @@ -1622,11 +1622,14 @@ EXPORT_SYMBOL_GPL(kmem_free_q_add);
>>   * kmem_free_up_q - free all the objects in the freeing queue
>>   * @head: freeing queue head
>>   *
>> - * Free all the objects in the freeing queue.
>> + * Free all the objects in the freeing queue. The caller cannot hold any
>> + * non-sleeping locks.
>>   */
>>  void kmem_free_up_q(struct kmem_free_q_head *head)
>>  {
>>  	struct kmem_free_q_node *node, *next;
>> +	bool do_resched = !in_irq();
>> +	int cnt = 0;
>>
>>  	for (node = head->first; node; node = next) {
>>  		next = node->next;
>> @@ -1634,6 +1637,12 @@ void kmem_free_up_q(struct kmem_free_q_head *head)
>>  			kmem_cache_free(node->cachep, node);
>>  		else
>>  			kfree(node);
>> +		/*
>> +		 * Call cond_resched() every 256 objects freed when in
>> +		 * process context.
>> +		 */
>> +		if (do_resched && !(++cnt & 0xff))
>> +			cond_resched();
> Why not just: cond_resched() ?

cond_resched() calls ___might_sleep(). So it is prudent to check for
process context first to avoid an erroneous message. Yes, I can call
cond_resched() after every free. I added the count just to not call it
too frequently.

Cheers,
Longman