Message ID | 20240711130004.2157737-7-vschneid@redhat.com (mailing list archive)
---|---
State | New, archived
Series | sched/fair: Defer CFS throttle to user entry
On Thu, Jul 11, 2024 at 03:00:00PM +0200, Valentin Schneider wrote:
> Later commits will change CFS bandwidth control throttling from a
> per-cfs_rq basis to a per-task basis. This means special care needs to be
> taken around any transition a task can have into and out of a cfs_rq.
>
> To ease reviewing, the transitions are patched with dummy-helpers that are
> implemented later on.
>
> Add helpers to switched_from_fair() and switched_to_fair() to cover class
> changes. If switching from CFS, a task may need to be unthrottled. If
> switching to CFS, a task may need to be throttled.
>
> Signed-off-by: Valentin Schneider <vschneid@redhat.com>
> ---
>  kernel/sched/fair.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 095357bd17f0e..acac0829c71f3 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5694,6 +5694,10 @@ static inline int throttled_hierarchy(struct cfs_rq *cfs_rq)
>  	return cfs_bandwidth_used() && cfs_rq->throttle_count;
>  }
>
> +static inline bool task_needs_throttling(struct task_struct *p) { return false; }
> +static inline void task_throttle_setup(struct task_struct *p) { }
> +static inline void task_throttle_cancel(struct task_struct *p) { }
> +
>  /*
>   * Ensure that neither of the group entities corresponding to src_cpu or
>   * dest_cpu are members of a throttled hierarchy when performing group
> @@ -6622,6 +6626,10 @@ static inline int throttled_lb_pair(struct task_group *tg,
>  	return 0;
>  }
>
> +static inline bool task_needs_throttling(struct task_struct *p) { return false; }
> +static inline void task_throttle_setup(struct task_struct *p) { }
> +static inline void task_throttle_cancel(struct task_struct *p) { }
> +
>  #ifdef CONFIG_FAIR_GROUP_SCHED
>  void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b, struct cfs_bandwidth *parent) {}
>  static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq) {}
> @@ -12847,11 +12855,15 @@ static void attach_task_cfs_rq(struct task_struct *p)
>  static void switched_from_fair(struct rq *rq, struct task_struct *p)
>  {
>  	detach_task_cfs_rq(p);
> +	if (cfs_bandwidth_used())
> +		task_throttle_cancel(p);
>  }
>
>  static void switched_to_fair(struct rq *rq, struct task_struct *p)
>  {
>  	attach_task_cfs_rq(p);
> +	if (cfs_bandwidth_used() && task_needs_throttling(p))
> +		task_throttle_setup(p);
>
>  	set_task_max_allowed_capacity(p);

Other functions seem to have cfs_bandwidth_used() inside them, and not
bother the caller with this detail.
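As a sketch of the alternative hinted at above, the cfs_bandwidth_used()
check could move inside the helpers so the call sites need no guard. The
helper bodies below are still the dummy placeholders from this patch, not
the eventual implementation, and folding task_needs_throttling() into the
setup helper is an extra assumption on top of the comment:

static inline void task_throttle_cancel(struct task_struct *p)
{
	if (!cfs_bandwidth_used())
		return;
	/* Real unthrottle logic lands in a later patch of the series. */
}

static inline void task_throttle_setup(struct task_struct *p)
{
	if (!cfs_bandwidth_used() || !task_needs_throttling(p))
		return;
	/* Real throttle setup lands in a later patch of the series. */
}

/* The class-change hooks then stay oblivious to bandwidth control: */
static void switched_from_fair(struct rq *rq, struct task_struct *p)
{
	detach_task_cfs_rq(p);
	task_throttle_cancel(p);
}

static void switched_to_fair(struct rq *rq, struct task_struct *p)
{
	attach_task_cfs_rq(p);
	task_throttle_setup(p);

	set_task_max_allowed_capacity(p);
}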