
[RFCv2,2/6] sched: Introduce switch to enable TurboSched mode

Message ID 20190515135322.19393-3-parth@linux.ibm.com (mailing list archive)
State RFC, archived
Series TurboSched: A scheduler for sustaining Turbo Frequencies for longer durations

Commit Message

Parth Shah May 15, 2019, 1:53 p.m. UTC
This patch adds a static key which allows the TurboSched feature to be
enabled or disabled at runtime.

The key is added so that the feature can be turned on only when needed.
Using a static key keeps the scheduler fast path optimized while TurboSched
is disabled, since the check compiles down to a static branch.

The patch also provides get/put methods to keep a count of the cgroups using
the TurboSched feature. This allows the feature to be enabled when the first
cgroup classified as jitter is added, and disabled again when the last such
cgroup is removed.

Signed-off-by: Parth Shah <parth@linux.ibm.com>
---
 kernel/sched/core.c  | 20 ++++++++++++++++++++
 kernel/sched/sched.h |  7 +++++++
 2 files changed, 27 insertions(+)
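
(Not part of this patch: for illustration, a minimal sketch of how the key and
the get/put helpers are intended to be used. The caller names below are
hypothetical; the actual jitter classification and task-packing logic arrive in
later patches of the series.)

	/*
	 * Hot path: with the static key disabled the branch is patched out,
	 * so non-TurboSched systems pay essentially nothing.
	 * Hypothetical caller, not in this patch.
	 */
	static int select_cpu_sketch(struct task_struct *p, int prev_cpu)
	{
		if (is_turbosched_enabled())
			return find_non_idle_core(p, prev_cpu); /* hypothetical */
		return prev_cpu;
	}

	/*
	 * Cgroup side: the first cgroup marked as jitter enables the key,
	 * removal of the last one disables it.
	 * Hypothetical caller, not in this patch.
	 */
	static void sched_group_set_jitter_sketch(struct task_group *tg, bool jitter)
	{
		if (jitter)
			turbo_sched_get();
		else
			turbo_sched_put();
	}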

Comments

Peter Zijlstra May 15, 2019, 4:30 p.m. UTC | #1
On Wed, May 15, 2019 at 07:23:18PM +0530, Parth Shah wrote:
> +void turbo_sched_get(void)
> +{
> +	spin_lock(&turbo_sched_lock);
> +	if (!turbo_sched_count++)
> +		static_branch_enable(&__turbo_sched_enabled);
> +	spin_unlock(&turbo_sched_lock);
> +}

Muwhahaha, you didn't test this code, did you?
Parth Shah May 16, 2019, 4:15 p.m. UTC | #2
On 5/15/19 10:00 PM, Peter Zijlstra wrote:
> On Wed, May 15, 2019 at 07:23:18PM +0530, Parth Shah wrote:
>> +void turbo_sched_get(void)
>> +{
>> +	spin_lock(&turbo_sched_lock);
>> +	if (!turbo_sched_count++)
>> +		static_branch_enable(&__turbo_sched_enabled);
>> +	spin_unlock(&turbo_sched_lock);
>> +}
> 
> Muwhahaha, you didn't test this code, did you?
> 

Yeah, I didn't see the task-sleep problem coming (static_branch_enable() can
sleep, so it must not be called under a spinlock).
I will change it to a mutex.

Thanks for pointing it out.
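
(For reference, a minimal sketch of what the mutex-based variant might look
like, keeping the same function names; static_branch_enable() and
static_branch_disable() may sleep, so they cannot be called with a spinlock
held.)

	static DEFINE_MUTEX(turbo_sched_mutex);
	static int turbo_sched_count;

	void turbo_sched_get(void)
	{
		mutex_lock(&turbo_sched_mutex);
		/* Enable the key when the first user appears. */
		if (!turbo_sched_count++)
			static_branch_enable(&__turbo_sched_enabled);
		mutex_unlock(&turbo_sched_mutex);
	}

	void turbo_sched_put(void)
	{
		mutex_lock(&turbo_sched_mutex);
		/* Disable the key when the last user goes away. */
		if (!--turbo_sched_count)
			static_branch_disable(&__turbo_sched_enabled);
		mutex_unlock(&turbo_sched_mutex);
	}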

Patch

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 77aa4aee4478..facbedd2554e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -60,6 +60,26 @@  __read_mostly int scheduler_running;
  */
 int sysctl_sched_rt_runtime = 950000;
 
+DEFINE_STATIC_KEY_FALSE(__turbo_sched_enabled);
+static DEFINE_SPINLOCK(turbo_sched_lock);
+static int turbo_sched_count;
+
+void turbo_sched_get(void)
+{
+	spin_lock(&turbo_sched_lock);
+	if (!turbo_sched_count++)
+		static_branch_enable(&__turbo_sched_enabled);
+	spin_unlock(&turbo_sched_lock);
+}
+
+void turbo_sched_put(void)
+{
+	spin_lock(&turbo_sched_lock);
+	if (!--turbo_sched_count)
+		static_branch_disable(&__turbo_sched_enabled);
+	spin_unlock(&turbo_sched_lock);
+}
+
 /*
  * __task_rq_lock - lock the rq @p resides on.
  */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e75ffaf3ff34..0339964cdf43 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2437,3 +2437,10 @@  static inline bool sched_energy_enabled(void)
 static inline bool sched_energy_enabled(void) { return false; }
 
 #endif /* CONFIG_ENERGY_MODEL && CONFIG_CPU_FREQ_GOV_SCHEDUTIL */
+
+DECLARE_STATIC_KEY_FALSE(__turbo_sched_enabled);
+
+static inline bool is_turbosched_enabled(void)
+{
+	return static_branch_unlikely(&__turbo_sched_enabled);
+}