
[v12,3/6] sched/core: uclamp: Propagate system defaults to root group

Message ID 20190718181748.28446-4-patrick.bellasi@arm.com (mailing list archive)
State Not Applicable, archived
Series: Add utilization clamping support (CGroups API)

Commit Message

Patrick Bellasi July 18, 2019, 6:17 p.m. UTC
The clamp values are not tunable at the level of the root task group.
That's for two main reasons:

 - the root group represents "system resources" which are always
   entirely available from the cgroup standpoint.

 - when tuning/restricting "system resources" makes sense, tuning must
   be done using a system wide API which should also be available when
   control groups are not.

When a system wide restriction is available, cgroups should be aware of
its value in order to know exactly how much "system resources" are
available for the subgroups.

Utilization clamping already supports the concepts of:

 - system defaults: which define the maximum possible clamp values
   usable by tasks.

 - effective clamps: which allow a parent cgroup to constrain (possibly
   temporarily) its descendants without losing the information about
   the values they "requested".

Exploit these two concepts and bind them together in such a way that,
whenever the system defaults are tuned, the new values are propagated
to (possibly) restrict or relax the "effective" value of nested cgroups.
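
To make the rule explicit, the restriction applied at each level of the
hierarchy is, conceptually, the following (an illustrative helper, not
the actual kernel code):

  /*
   * Illustrative only: a group's effective clamp never exceeds its
   * parent's effective clamp. Since the root group's "request" now
   * tracks the system defaults, tuning them restricts or relaxes the
   * whole hierarchy.
   */
  static unsigned int uclamp_effective_value(unsigned int requested,
                                             unsigned int parent_effective)
  {
          return requested < parent_effective ? requested : parent_effective;
  }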

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>

---
Changes in v12:
 Message-ID: <20190716143417.us3xhksrsaxsl2ok@e110439-lin>
 - add missing RCU read locks across cpu_util_update_eff() call from
   uclamp_update_root_tg()
---
 kernel/sched/core.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

Comments

Michal Koutný July 25, 2019, 11:41 a.m. UTC | #1
On Thu, Jul 18, 2019 at 07:17:45PM +0100, Patrick Bellasi <patrick.bellasi@arm.com> wrote:
> The clamp values are not tunable at the level of the root task group.
> That's for two main reasons:
> 
>  - the root group represents "system resources" which are always
>    entirely available from the cgroup standpoint.
> 
>  - when tuning/restricting "system resources" makes sense, tuning must
>    be done using a system wide API which should also be available when
>    control groups are not.
> 
> When a system wide restriction is available, cgroups should be aware of
> its value in order to know exactly how much "system resources" are
> available for the subgroups.
IIUC, the global default would apply in uclamp_eff_get(), so this
propagation isn't strictly necessary in order to apply to tasks (that's
how it works under !CONFIG_UCLAMP_TASK_GROUP).
The reason is that the effective value (which isn't currently exposed)
of a group takes this global restriction into account, right?
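
(For reference, the path I have in mind -- a simplified sketch of
uclamp_eff_get(), not the exact code:)

  static inline struct uclamp_se
  uclamp_eff_get(struct task_struct *p, unsigned int clamp_id)
  {
          /* With !CONFIG_UCLAMP_TASK_GROUP this is just p->uclamp_req[] */
          struct uclamp_se uc_req = uclamp_tg_restrict(p, clamp_id);
          struct uclamp_se uc_max = uclamp_default[clamp_id];

          /* System default restrictions always apply */
          if (uc_req.value > uc_max.value)
                  return uc_max;

          return uc_req;
  }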


> @@ -1043,12 +1063,17 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
> [...]
> +	if (update_root_tg)
> +		uclamp_update_root_tg();
> +
>  	/*
>  	 * Updating all the RUNNABLE task is expensive, keep it simple and do
>  	 * just a lazy update at each next enqueue time.
Since uclamp_update_root_tg() traverses down to
uclamp_update_active_tasks(), is this comment half true now?
Patrick Bellasi Aug. 1, 2019, 11:10 a.m. UTC | #2
On 25-Jul 13:41, Michal Koutný wrote:
> On Thu, Jul 18, 2019 at 07:17:45PM +0100, Patrick Bellasi <patrick.bellasi@arm.com> wrote:
> > The clamp values are not tunable at the level of the root task group.
> > That's for two main reasons:
> > 
> >  - the root group represents "system resources" which are always
> >    entirely available from the cgroup standpoint.
> > 
> >  - when tuning/restricting "system resources" makes sense, tuning must
> >    be done using a system wide API which should also be available when
> >    control groups are not.
> > 
> > When a system wide restriction is available, cgroups should be aware of
> > its value in order to know exactly how much "system resources" are
> > available for the subgroups.
> IIUC, the global default would apply in uclamp_eff_get(), so this
> propagation isn't strictly necessary in order to apply to tasks (that's
> how it works under !CONFIG_UCLAMP_TASK_GROUP).

That's right.

> The reason is that the effective value (which isn't currently exposed)
> of a group takes this global restriction into account, right?

Yep; admittedly, things in this area changed in a slightly confusing way.

Up to v10:
 - effective values were exposed to userspace
 - system defaults were enforced only at enqueue time

Now instead:
 - effective values are not exposed anymore (following Tejun's request)
 - system defaults are applied to the root group and propagated down
   the hierarchy to all effective values

Both solutions are functionally correct but, in the first case, the
cgroup's effective values did not really reflect what a task would
get, while in the current solution we force-update all the effective
values without exposing them anymore.

However, I think this solution keeps information more consistent and
should create less confusion if, in the future, we decide to expose
effective values to user-space.
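
To make the propagation concrete, this is roughly what
cpu_util_update_eff() does down the hierarchy (a simplified sketch:
early-bailout paths omitted and the uclamp_se updates folded into
plain value assignments):

  css_for_each_descendant_pre(css, top_css) {
          struct task_group *tg = css_tg(css);
          struct uclamp_se *uc_parent = tg->parent ?
                  tg->parent->uclamp : NULL;
          unsigned int clamp_id;

          for_each_clamp_id(clamp_id) {
                  /* Start from the value "requested" by the group... */
                  unsigned int eff = tg->uclamp_req[clamp_id].value;

                  /* ...and cap it with the parent's effective value */
                  if (uc_parent && eff > uc_parent[clamp_id].value)
                          eff = uc_parent[clamp_id].value;

                  tg->uclamp[clamp_id].value = eff;
          }
  }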

Thoughts?

> > @@ -1043,12 +1063,17 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
> > [...]
> > +	if (update_root_tg)
> > +		uclamp_update_root_tg();
> > +
> >  	/*
> >  	 * Updating all the RUNNABLE task is expensive, keep it simple and do
> >  	 * just a lazy update at each next enqueue time.
> Since uclamp_update_root_tg() traverses down to
> uclamp_update_active_tasks(), is this comment half true now?

Right, this comment is now wrong: we update all RUNNABLE tasks on
system default changes. However, despite the above comment, it's
difficult to say how expensive that operation can be.

It really depends on how many RUNNABLE tasks we have, on the number of
CPUs, and on how many tasks are not already clamped by a more
restrictive "effective" value. Thus, for the time being, we can
consider the above statement speculative and add a simple change if,
in the future, this is reported as a real issue justifying a lazy
update.

The upside is that with the current implementation we have stricter
control on tasks: even long-running tasks can be clamped on sysadmin
demand without waiting for them to sleep.
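
For context, the expensive part is the task walk below, whose cost
scales with the number of tasks attached to the hierarchy (a
simplified sketch of uclamp_update_active_tasks(), per-clamp-id
details omitted):

  static void uclamp_update_active_tasks(struct cgroup_subsys_state *css)
  {
          struct css_task_iter it;
          struct task_struct *p;

          /*
           * Visit every task attached to this css and refresh its
           * clamps; only RUNNABLE tasks need their rq counters fixed up.
           */
          css_task_iter_start(css, 0, &it);
          while ((p = css_task_iter_next(&it)))
                  uclamp_update_active(p);
          css_task_iter_end(&it);
  }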

Does that make sense?

If it does, I'll drop the above comment in v13.

Cheers, Patrick

Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 08f5a0c205c6..e9231b089d5c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1017,10 +1017,30 @@  static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
 		uclamp_rq_dec_id(rq, p, clamp_id);
 }
 
+#ifdef CONFIG_UCLAMP_TASK_GROUP
+static void cpu_util_update_eff(struct cgroup_subsys_state *css);
+static void uclamp_update_root_tg(void)
+{
+	struct task_group *tg = &root_task_group;
+
+	uclamp_se_set(&tg->uclamp_req[UCLAMP_MIN],
+		      sysctl_sched_uclamp_util_min, false);
+	uclamp_se_set(&tg->uclamp_req[UCLAMP_MAX],
+		      sysctl_sched_uclamp_util_max, false);
+
+	rcu_read_lock();
+	cpu_util_update_eff(&root_task_group.css);
+	rcu_read_unlock();
+}
+#else
+static void uclamp_update_root_tg(void) { }
+#endif
+
 int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
 				void __user *buffer, size_t *lenp,
 				loff_t *ppos)
 {
+	bool update_root_tg = false;
 	int old_min, old_max;
 	int result;
 
@@ -1043,12 +1063,17 @@  int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
 	if (old_min != sysctl_sched_uclamp_util_min) {
 		uclamp_se_set(&uclamp_default[UCLAMP_MIN],
 			      sysctl_sched_uclamp_util_min, false);
+		update_root_tg = true;
 	}
 	if (old_max != sysctl_sched_uclamp_util_max) {
 		uclamp_se_set(&uclamp_default[UCLAMP_MAX],
 			      sysctl_sched_uclamp_util_max, false);
+		update_root_tg = true;
 	}
 
+	if (update_root_tg)
+		uclamp_update_root_tg();
+
 	/*
 	 * Updating all the RUNNABLE task is expensive, keep it simple and do
 	 * just a lazy update at each next enqueue time.