
[v4,0/5] cgroup/cpuset: Improve CPU isolation in isolated partitions

Message ID: 20231116033405.185166-1-longman@redhat.com

Message

Waiman Long Nov. 16, 2023, 3:34 a.m. UTC
v4:
 - Update patch 1 to move apply_wqattrs_lock() and apply_wqattrs_unlock()
   down into the CONFIG_SYSFS block to avoid compilation warnings.

v3:
 - Break out a separate patch to make workqueue_set_unbound_cpumask()
   static and move it down to the CONFIG_SYSFS section.
 - Remove the "__DEBUG__." prefix and the CFTYPE_DEBUG flag from the
   new root-only cpuset.cpus.isolated control files and update the
   test accordingly.

v2:
 - Add 2 read-only workqueue sysfs files to expose the user-requested
   cpumask as well as the isolated CPUs to be excluded from
   wq_unbound_cpumask.
 - Ensure that callers of the new workqueue_unbound_exclude_cpumask()
   hold cpus_read_lock.
 - Update the cpuset code to make sure the cpus_read_lock is held
   whenever workqueue_unbound_exclude_cpumask() may be called.

An isolated cpuset partition can currently be created to contain an
exclusive set of CPUs that are not used in other cgroups and have load
balancing disabled to reduce interference from the scheduler.
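
As a minimal sketch (assuming cgroup v2 is mounted at /sys/fs/cgroup,
CPUs 2-3 are free to be isolated, and the cgroup name "isol" is
arbitrary), such a partition can be set up as follows:

  # Enable the cpuset controller for child cgroups.
  echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control

  # Create a child cgroup and give it an exclusive set of CPUs.
  mkdir /sys/fs/cgroup/isol
  echo 2-3 > /sys/fs/cgroup/isol/cpuset.cpus

  # Turn the cgroup into an isolated partition; the scheduler stops
  # load balancing CPUs 2-3.
  echo isolated > /sys/fs/cgroup/isol/cpuset.cpus.partition

  # Reads back "isolated" if the transition was valid.
  cat /sys/fs/cgroup/isol/cpuset.cpus.partition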

The main purpose of this isolated partition type is to dynamically
emulate what can be done via the "isolcpus" boot command line option,
specifically the default domain flag. One effect of the "isolcpus" option
is to remove the isolated CPUs from the cpumasks of unbound workqueues,
since running work functions on an isolated CPU can be a major source
of interference. The unbound workqueue cpumasks can be changed at
run time by writing an appropriate cpumask without the isolated CPUs to
/sys/devices/virtual/workqueue/cpumask. So one can set up an isolated
cpuset partition and then write to the cpumask sysfs file to achieve a
similar level of CPU isolation. However, this manual process can be
error-prone.
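
For example, a sketch of that manual step (assuming an 8-CPU system
with CPUs 2-3 in the isolated partition; the CPU numbers are only
illustrative):

  # Exclude CPUs 2-3 from unbound workqueues by writing only the
  # remaining CPUs to the workqueue cpumask sysfs file.
  echo 0-1,4-7 > /sys/devices/virtual/workqueue/cpumask

  # Confirm the new unbound workqueue cpumask.
  cat /sys/devices/virtual/workqueue/cpumask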

This patch series implements automatic exclusion of isolated CPUs from
unbound workqueue cpumasks when an isolated cpuset partition is created
and then adds those CPUs back when the isolated partition is destroyed.
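
With this series applied, the manual sysfs write shown above should no
longer be needed. A sketch of how the result might be inspected (the
root-only cpuset.cpus.isolated file is added by this series; the exact
output format may differ):

  # All CPUs currently in isolated partitions.
  cat /sys/fs/cgroup/cpuset.cpus.isolated

  # The unbound workqueue cpumask should now exclude those CPUs
  # automatically.
  cat /sys/devices/virtual/workqueue/cpumask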

There are also other places in the kernel that look at the HK_FLAG_DOMAIN
cpumask or other HK_FLAG_* cpumasks and exclude the isolated CPUs from
certain actions to further reduce interference. CPUs in an isolated
cpuset partition cannot yet avoid that interference. That may change
in the future as the need arises.

Waiman Long (5):
  workqueue: Make workqueue_set_unbound_cpumask() static
  workqueue: Add workqueue_unbound_exclude_cpumask() to exclude CPUs
    from wq_unbound_cpumask
  selftests/cgroup: Minor code cleanup and reorganization of
    test_cpuset_prs.sh
  cgroup/cpuset: Keep track of CPUs in isolated partitions
  cgroup/cpuset: Take isolated CPUs out of workqueue unbound cpumask

 Documentation/admin-guide/cgroup-v2.rst       |  10 +-
 include/linux/workqueue.h                     |   2 +-
 kernel/cgroup/cpuset.c                        | 286 +++++++++++++-----
 kernel/workqueue.c                            | 165 +++++++---
 .../selftests/cgroup/test_cpuset_prs.sh       | 216 ++++++++-----
 5 files changed, 475 insertions(+), 204 deletions(-)

Comments

Tejun Heo Nov. 19, 2023, 3:23 p.m. UTC | #1
On Wed, Nov 15, 2023 at 10:34:00PM -0500, Waiman Long wrote:
> v4:
>  - Update patch 1 to move apply_wqattrs_lock() and apply_wqattrs_unlock()
>    down into CONFIG_SYSFS block to avoid compilation warnings.

I already applied v3 to cgroup/for-6.8. Can you please send the fixup patch
against that branch?

Thanks.
Waiman Long Nov. 20, 2023, 5:52 p.m. UTC | #2
On 11/19/23 10:23, Tejun Heo wrote:
> On Wed, Nov 15, 2023 at 10:34:00PM -0500, Waiman Long wrote:
>> v4:
>>   - Update patch 1 to move apply_wqattrs_lock() and apply_wqattrs_unlock()
>>     down into CONFIG_SYSFS block to avoid compilation warnings.
> I already applied v3 to cgroup/for-6.8. Can you please send the fixup patch
> against that branch?
>
Sure. I will post another fixup patch.

Thanks,
Longman