Message ID: 20240913131720.1762188-1-chenridong@huawei.com
Series: Fix deadlock caused by cgroup_mutex and cpu_hotplug_lock
On 2024/9/13 21:17, Chen Ridong wrote:
> Patch 1 has been reviewed by Michal Koutný. Two patches are added, as follows.
>
> v4:
> - add a patch to document that saturating the system_wq is not permitted.
> - add a patch to adjust WQ_MAX_ACTIVE from 512 to 2048.
>
> v3:
> - optimize the commit message.
>
> Link v1: https://lore.kernel.org/cgroups/20240607110313.2230669-1-chenridong@huawei.com/
> Link v2: https://lore.kernel.org/cgroups/20240719025232.2143638-1-chenridong@huawei.com/
> Link v3: https://lore.kernel.org/cgroups/20240817093334.6062-1-chenridong@huawei.com/
>
>
> Chen Ridong (3):
>   cgroup: fix deadlock caused by cgroup_mutex and cpu_hotplug_lock
>   workqueue: doc: Add a note saturating the system_wq is not permitted
>   workqueue: Adjust WQ_MAX_ACTIVE from 512 to 2048
>
>  Documentation/core-api/workqueue.rst | 8 ++++++--
>  include/linux/workqueue.h            | 2 +-
>  kernel/bpf/cgroup.c                  | 2 +-
>  kernel/cgroup/cgroup-internal.h      | 1 +
>  kernel/cgroup/cgroup.c               | 2 +-
>  5 files changed, 10 insertions(+), 5 deletions(-)
>

Friendly ping.
On Wed, Sep 18, 2024 at 05:44:47PM +0800, Chen Ridong wrote:
...
> >   cgroup: fix deadlock caused by cgroup_mutex and cpu_hotplug_lock
> >   workqueue: doc: Add a note saturating the system_wq is not permitted
> >   workqueue: Adjust WQ_MAX_ACTIVE from 512 to 2048
> >
> >  Documentation/core-api/workqueue.rst | 8 ++++++--
> >  include/linux/workqueue.h            | 2 +-
> >  kernel/bpf/cgroup.c                  | 2 +-
> >  kernel/cgroup/cgroup-internal.h      | 1 +
> >  kernel/cgroup/cgroup.c               | 2 +-
> >  5 files changed, 10 insertions(+), 5 deletions(-)
> >
> Friendly ping.

I don't know why, but this series isn't in my inbox for some reason. Here is some feedback after looking at the thread from lore:

- Can you create a separate workqueue for cgrp->bpf.release_work instead of exporting cgroup_destroy_wq? Workqueues aren't that expensive. No reason to share like this.

- The patch title is rather misleading. The deadlock isn't really caused between cgroup_mutex and cpu_hotplug_lock. They're just victims of system_wq concurrency depletion. Can you please update accordingly?

- Can you add a new line before the note paragraph? Also, I'd say "deadlock" rather than "block" to properly convey the impact of such saturation events. I don't think "e.g. cgroup release work" is adding much.

Thanks.
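[Editor's note: Tejun's first suggestion — a dedicated workqueue for cgrp->bpf.release_work rather than exporting cgroup_destroy_wq — could look roughly like the sketch below. This is kernel-side C and only an illustration; the names cgroup_bpf_destroy_wq and cgroup_bpf_wq_init are hypothetical, not taken from this thread or the posted series.]

```c
/* kernel/bpf/cgroup.c — illustrative sketch, not the actual patch */
#include <linux/workqueue.h>

/*
 * A dedicated workqueue so cgroup-bpf release work neither starves on,
 * nor contributes to, system_wq concurrency depletion.
 */
static struct workqueue_struct *cgroup_bpf_destroy_wq;

static int __init cgroup_bpf_wq_init(void)
{
	/* max_active = 1: release work is infrequent and need not run concurrently */
	cgroup_bpf_destroy_wq = alloc_workqueue("cgroup_bpf_destroy", 0, 1);
	if (!cgroup_bpf_destroy_wq)
		panic("Failed to allocate the cgroup bpf destroy workqueue.\n");
	return 0;
}
core_initcall(cgroup_bpf_wq_init);

/* At release time, queue onto the dedicated workqueue instead of system_wq. */
static void cgroup_bpf_release_fn(struct percpu_ref *ref)
{
	struct cgroup *cgrp = container_of(ref, struct cgroup, bpf.refcnt);

	INIT_WORK(&cgrp->bpf.release_work, cgroup_bpf_release);
	queue_work(cgroup_bpf_destroy_wq, &cgrp->bpf.release_work);
}
```

The point of the design is isolation: work items that may block behind cgroup teardown no longer compete for system_wq's shared concurrency budget, so saturating system_wq can no longer deadlock the release path.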
On 2024/9/21 3:23, Tejun Heo wrote:
> On Wed, Sep 18, 2024 at 05:44:47PM +0800, Chen Ridong wrote:
> ...
>>> cgroup: fix deadlock caused by cgroup_mutex and cpu_hotplug_lock
>>> workqueue: doc: Add a note saturating the system_wq is not permitted
>>> workqueue: Adjust WQ_MAX_ACTIVE from 512 to 2048
>>>
>>> Documentation/core-api/workqueue.rst | 8 ++++++--
>>> include/linux/workqueue.h            | 2 +-
>>> kernel/bpf/cgroup.c                  | 2 +-
>>> kernel/cgroup/cgroup-internal.h      | 1 +
>>> kernel/cgroup/cgroup.c               | 2 +-
>>> 5 files changed, 10 insertions(+), 5 deletions(-)
>>>
>> Friendly ping.
>
> I don't know why, but this series isn't in my inbox for some reason. Here is
> some feedback after looking at the thread from lore:
>
> - Can you create a separate workqueue for cgrp->bpf.release_work instead of
>   exporting cgroup_destroy_wq? Workqueues aren't that expensive. No reason
>   to share like this.
>
> - The patch title is rather misleading. The deadlock isn't really caused
>   between cgroup_mutex and cpu_hotplug_lock. They're just victims of
>   system_wq concurrency depletion. Can you please update accordingly?
>
> - Can you add a new line before the note paragraph? Also, I'd say "deadlock"
>   rather than "block" to properly convey the impact of such saturation
>   events. I don't think "e.g. cgroup release work" is adding much.
>
> Thanks.
>

Thanks, TJ, I will do that.

Best regards,
Ridong