| Message ID | 20201207091516.24683-2-mgorman@techsingularity.net (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Reduce worst-case scanning of runqueues in select_idle_sibling |
On Mon, 7 Dec 2020 at 10:15, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> SIS_AVG_CPU was introduced as a means of avoiding a search when the
> average search cost indicated that the search would likely fail. It
> was a blunt instrument and disabled by 4c77b18cf8b7 ("sched/fair: Make
> select_idle_cpu() more aggressive") and later replaced with a proportional
> search depth by 1ad3aaf3fcd2 ("sched/core: Implement new approach to
> scale select_idle_cpu()").
>
> While there are corner cases where SIS_AVG_CPU is better, it has now been
> disabled for almost three years. As the intent of SIS_PROP is to reduce
> the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus
> on SIS_PROP as a throttling mechanism.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Let's see if someone complains, but this looks reasonable.

> ---
>  kernel/sched/fair.c     | 3 ---
>  kernel/sched/features.h | 1 -
>  2 files changed, 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 98075f9ea9a8..23934dbac635 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6161,9 +6161,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>          avg_idle = this_rq()->avg_idle / 512;
>          avg_cost = this_sd->avg_scan_cost + 1;
>
> -        if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
> -                return -1;
> -
>          if (sched_feat(SIS_PROP)) {
>                  u64 span_avg = sd->span_weight * avg_idle;
>                  if (span_avg > 4*avg_cost)
> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> index 68d369cba9e4..e875eabb6600 100644
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -54,7 +54,6 @@ SCHED_FEAT(TTWU_QUEUE, true)
>  /*
>   * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
>   */
> -SCHED_FEAT(SIS_AVG_CPU, false)
>  SCHED_FEAT(SIS_PROP, true)
>
>  /*
> --
> 2.26.2
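An aside for readers skimming the thread: the SIS_PROP heuristic being kept here scales scan depth with how idle the waking CPU has recently been, relative to the recent cost of a scan. Below is a minimal standalone model of that arithmetic in plain C; the function name and standalone framing are illustrative assumptions, not code from the series, though the calculation mirrors the hunk above.

#include <stdint.h>

/*
 * Illustrative model of the SIS_PROP scan-depth heuristic that this
 * patch keeps. The name sis_prop_depth() is hypothetical; the
 * arithmetic mirrors select_idle_cpu() in the diff above.
 */
static unsigned int sis_prop_depth(uint64_t rq_avg_idle_ns,
                                   uint64_t sd_avg_scan_cost_ns,
                                   unsigned int span_weight)
{
        uint64_t avg_idle = rq_avg_idle_ns / 512;    /* large fuzz factor */
        uint64_t avg_cost = sd_avg_scan_cost_ns + 1; /* +1 avoids division by zero */
        uint64_t span_avg = (uint64_t)span_weight * avg_idle;

        if (span_avg > 4 * avg_cost)
                return (unsigned int)(span_avg / avg_cost);
        return 4; /* minimum scan depth when SIS_PROP is enabled */
}

For example (invented numbers), with a 64-CPU LLC, an avg_idle of 500000ns and an average scan cost of 4000ns, this yields (64 * 976) / 4001, i.e. about 15, so roughly 15 runqueues are scanned instead of all 64. When SIS_PROP is disabled, nr stays at INT_MAX and the scan is unthrottled.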
On 07/12/2020 10:15, Mel Gorman wrote:
> SIS_AVG_CPU was introduced as a means of avoiding a search when the
> average search cost indicated that the search would likely fail. It
> was a blunt instrument and disabled by 4c77b18cf8b7 ("sched/fair: Make
> select_idle_cpu() more aggressive") and later replaced with a proportional
> search depth by 1ad3aaf3fcd2 ("sched/core: Implement new approach to
> scale select_idle_cpu()").
>
> While there are corner cases where SIS_AVG_CPU is better, it has now been
> disabled for almost three years. As the intent of SIS_PROP is to reduce
> the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus
> on SIS_PROP as a throttling mechanism.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
>  kernel/sched/fair.c     | 3 ---
>  kernel/sched/features.h | 1 -
>  2 files changed, 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 98075f9ea9a8..23934dbac635 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6161,9 +6161,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>          avg_idle = this_rq()->avg_idle / 512;
>          avg_cost = this_sd->avg_scan_cost + 1;
>
> -        if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
> -                return -1;
> -
>          if (sched_feat(SIS_PROP)) {
>                  u64 span_avg = sd->span_weight * avg_idle;
>                  if (span_avg > 4*avg_cost)

Nitpick:

Since avg_cost and avg_idle are now only used with SIS_PROP, they could go
completely into the SIS_PROP if condition:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 09f6f0edead4..fce9457cccb9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6121,7 +6121,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 {
         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
         struct sched_domain *this_sd;
-        u64 avg_cost, avg_idle;
         u64 time;
         int this = smp_processor_id();
         int cpu, nr = INT_MAX;
@@ -6130,14 +6129,13 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
         if (!this_sd)
                 return -1;
 
-        /*
-         * Due to large variance we need a large fuzz factor; hackbench in
-         * particularly is sensitive here.
-         */
-        avg_idle = this_rq()->avg_idle / 512;
-        avg_cost = this_sd->avg_scan_cost + 1;
-
         if (sched_feat(SIS_PROP)) {
+                /*
+                 * Due to large variance we need a large fuzz factor; hackbench in
+                 * particularly is sensitive here.
+                 */
+                u64 avg_idle = this_rq()->avg_idle / 512;
+                u64 avg_cost = this_sd->avg_scan_cost + 1;
                 u64 span_avg = sd->span_weight * avg_idle;
                 if (span_avg > 4*avg_cost)
                         nr = div_u64(span_avg, avg_cost);
On Tue, Dec 08, 2020 at 11:07:19AM +0100, Dietmar Eggemann wrote:
> On 07/12/2020 10:15, Mel Gorman wrote:
> > SIS_AVG_CPU was introduced as a means of avoiding a search when the
> > average search cost indicated that the search would likely fail. It
> > was a blunt instrument and disabled by 4c77b18cf8b7 ("sched/fair: Make
> > select_idle_cpu() more aggressive") and later replaced with a proportional
> > search depth by 1ad3aaf3fcd2 ("sched/core: Implement new approach to
> > scale select_idle_cpu()").
> >
> > While there are corner cases where SIS_AVG_CPU is better, it has now been
> > disabled for almost three years. As the intent of SIS_PROP is to reduce
> > the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus
> > on SIS_PROP as a throttling mechanism.
> >
> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> > ---
> >  kernel/sched/fair.c     | 3 ---
> >  kernel/sched/features.h | 1 -
> >  2 files changed, 4 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 98075f9ea9a8..23934dbac635 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6161,9 +6161,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> >          avg_idle = this_rq()->avg_idle / 512;
> >          avg_cost = this_sd->avg_scan_cost + 1;
> >
> > -        if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
> > -                return -1;
> > -
> >          if (sched_feat(SIS_PROP)) {
> >                  u64 span_avg = sd->span_weight * avg_idle;
> >                  if (span_avg > 4*avg_cost)
>
> Nitpick:
>
> Since avg_cost and avg_idle are now only used with SIS_PROP, they could go
> completely into the SIS_PROP if condition.
>

Yeah, I can do that. In the initial prototype, that happened in a
separate patch that split out SIS_PROP into a helper function and I
never merged it back. It's a trivial change.

Thanks.
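For concreteness, the split-out helper Mel describes might look roughly like the sketch below, assembled from the hunks already quoted in this thread. The helper name and exact signature are assumptions for illustration, not code from the unposted prototype.

/*
 * Hypothetical shape of the helper Mel mentions; not code from the
 * posted series. Returns how many runqueues select_idle_cpu() may
 * scan: unlimited when SIS_PROP is off, otherwise proportional to
 * recent idle time over recent scan cost, with a floor of four.
 */
static int sis_prop_scan_limit(struct sched_domain *sd,
                               struct sched_domain *this_sd)
{
        u64 avg_idle, avg_cost, span_avg;

        if (!sched_feat(SIS_PROP))
                return INT_MAX;

        /* Large variance needs a large fuzz factor; hackbench is sensitive. */
        avg_idle = this_rq()->avg_idle / 512;
        avg_cost = this_sd->avg_scan_cost + 1;

        span_avg = sd->span_weight * avg_idle;
        if (span_avg > 4 * avg_cost)
                return div_u64(span_avg, avg_cost);

        return 4;
}

The call site in select_idle_cpu() would then reduce to nr = sis_prop_scan_limit(sd, this_sd); ahead of the scan loop.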
On Tue, 8 Dec 2020 at 11:59, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Tue, Dec 08, 2020 at 11:07:19AM +0100, Dietmar Eggemann wrote:
> > On 07/12/2020 10:15, Mel Gorman wrote:
> > > SIS_AVG_CPU was introduced as a means of avoiding a search when the
> > > average search cost indicated that the search would likely fail. It
> > > was a blunt instrument and disabled by 4c77b18cf8b7 ("sched/fair: Make
> > > select_idle_cpu() more aggressive") and later replaced with a proportional
> > > search depth by 1ad3aaf3fcd2 ("sched/core: Implement new approach to
> > > scale select_idle_cpu()").
> > >
> > > While there are corner cases where SIS_AVG_CPU is better, it has now been
> > > disabled for almost three years. As the intent of SIS_PROP is to reduce
> > > the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus
> > > on SIS_PROP as a throttling mechanism.
> > >
> > > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> > > ---
> > >  kernel/sched/fair.c     | 3 ---
> > >  kernel/sched/features.h | 1 -
> > >  2 files changed, 4 deletions(-)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 98075f9ea9a8..23934dbac635 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -6161,9 +6161,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> > >          avg_idle = this_rq()->avg_idle / 512;
> > >          avg_cost = this_sd->avg_scan_cost + 1;
> > >
> > > -        if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
> > > -                return -1;
> > > -
> > >          if (sched_feat(SIS_PROP)) {
> > >                  u64 span_avg = sd->span_weight * avg_idle;
> > >                  if (span_avg > 4*avg_cost)
> >
> > Nitpick:
> >
> > Since avg_cost and avg_idle are now only used with SIS_PROP, they could go
> > completely into the SIS_PROP if condition.
> >
>
> Yeah, I can do that. In the initial prototype, that happened in a
> separate patch that split out SIS_PROP into a helper function and I
> never merged it back. It's a trivial change.

While doing this, should you also put the update of
this_sd->avg_scan_cost under the SIS_PROP feature?

>
> Thanks.
>
> --
> Mel Gorman
> SUSE Labs
On Tue, Dec 08, 2020 at 02:24:32PM +0100, Vincent Guittot wrote:
> > > Nitpick:
> > >
> > > Since avg_cost and avg_idle are now only used with SIS_PROP, they could go
> > > completely into the SIS_PROP if condition.
> > >
> >
> > Yeah, I can do that. In the initial prototype, that happened in a
> > separate patch that split out SIS_PROP into a helper function and I
> > never merged it back. It's a trivial change.
>
> While doing this, should you also put the update of
> this_sd->avg_scan_cost under the SIS_PROP feature?
>

It's outside the scope of the series but why not. This?

--8<--
sched/fair: Move avg_scan_cost calculations under SIS_PROP

As noted by Vincent Guittot, avg_scan_costs are calculated for SIS_PROP
even if SIS_PROP is disabled. Move the time calculations under a SIS_PROP
check and while we are at it, exclude the cost of initialising the CPU
mask from the average scan cost.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 19ca0265f8aa..0fee53b1aae4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6176,10 +6176,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
                 nr = 4;
         }
 
-        time = cpu_clock(this);
-
         cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
 
+        if (sched_feat(SIS_PROP))
+                time = cpu_clock(this);
         for_each_cpu_wrap(cpu, cpus, target) {
                 if (!--nr)
                         return -1;
@@ -6187,8 +6187,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
                 break;
         }
 
-        time = cpu_clock(this) - time;
-        update_avg(&this_sd->avg_scan_cost, time);
+        if (sched_feat(SIS_PROP)) {
+                time = cpu_clock(this) - time;
+                update_avg(&this_sd->avg_scan_cost, time);
+        }
 
         return cpu;
 }
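Context on update_avg(): the value being gated here feeds a simple running average of scan time. A standalone model is sketched below; the 1/8 sample weight is an assumption about the scheduler helper of this era rather than quoted kernel code, but it conveys the intent.

#include <stdint.h>

/*
 * Standalone model of the scan-cost running average. Assumes an
 * exponential moving average with a 1/8 weight for new samples, so
 * a single expensive scan only nudges avg_scan_cost rather than
 * dominating it.
 */
static void model_update_avg(uint64_t *avg, uint64_t sample)
{
        int64_t diff = (int64_t)(sample - *avg);
        *avg += (uint64_t)(diff / 8);
}

This also shows why gating the measurement behind sched_feat(SIS_PROP) is safe: with the feature off, no samples are fed in and the average simply goes stale until the feature is re-enabled.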
On Tue, 8 Dec 2020 at 14:36, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Tue, Dec 08, 2020 at 02:24:32PM +0100, Vincent Guittot wrote:
> > > > Nitpick:
> > > >
> > > > Since avg_cost and avg_idle are now only used with SIS_PROP, they could go
> > > > completely into the SIS_PROP if condition.
> > > >
> > >
> > > Yeah, I can do that. In the initial prototype, that happened in a
> > > separate patch that split out SIS_PROP into a helper function and I
> > > never merged it back. It's a trivial change.
> >
> > While doing this, should you also put the update of
> > this_sd->avg_scan_cost under the SIS_PROP feature?
> >
>
> It's outside the scope of the series but why not. This?
>
> --8<--
> sched/fair: Move avg_scan_cost calculations under SIS_PROP
>
> As noted by Vincent Guittot, avg_scan_costs are calculated for SIS_PROP
> even if SIS_PROP is disabled. Move the time calculations under a SIS_PROP
> check and while we are at it, exclude the cost of initialising the CPU
> mask from the average scan cost.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 19ca0265f8aa..0fee53b1aae4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6176,10 +6176,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>                  nr = 4;
>          }
>
> -        time = cpu_clock(this);

I would move it into the if (sched_feat(SIS_PROP)) above.

> -
>          cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
>
> +        if (sched_feat(SIS_PROP))
> +                time = cpu_clock(this);
>          for_each_cpu_wrap(cpu, cpus, target) {
>                  if (!--nr)
>                          return -1;
> @@ -6187,8 +6187,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>                  break;
>          }
>
> -        time = cpu_clock(this) - time;
> -        update_avg(&this_sd->avg_scan_cost, time);
> +        if (sched_feat(SIS_PROP)) {
> +                time = cpu_clock(this) - time;
> +                update_avg(&this_sd->avg_scan_cost, time);
> +        }
>
>          return cpu;
>  }
On Tue, Dec 08, 2020 at 02:43:10PM +0100, Vincent Guittot wrote:
> On Tue, 8 Dec 2020 at 14:36, Mel Gorman <mgorman@techsingularity.net> wrote:
> >
> > On Tue, Dec 08, 2020 at 02:24:32PM +0100, Vincent Guittot wrote:
> > > > > Nitpick:
> > > > >
> > > > > Since avg_cost and avg_idle are now only used with SIS_PROP, they could go
> > > > > completely into the SIS_PROP if condition.
> > > > >
> > > >
> > > > Yeah, I can do that. In the initial prototype, that happened in a
> > > > separate patch that split out SIS_PROP into a helper function and I
> > > > never merged it back. It's a trivial change.
> > >
> > > While doing this, should you also put the update of
> > > this_sd->avg_scan_cost under the SIS_PROP feature?
> > >
> >
> > It's outside the scope of the series but why not. This?
> >
> > --8<--
> > sched/fair: Move avg_scan_cost calculations under SIS_PROP
> >
> > As noted by Vincent Guittot, avg_scan_costs are calculated for SIS_PROP
> > even if SIS_PROP is disabled. Move the time calculations under a SIS_PROP
> > check and while we are at it, exclude the cost of initialising the CPU
> > mask from the average scan cost.
> >
> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 19ca0265f8aa..0fee53b1aae4 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6176,10 +6176,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> >                  nr = 4;
> >          }
> >
> > -        time = cpu_clock(this);
>
> I would move it into the if (sched_feat(SIS_PROP)) above.
>

I considered it but made the choice to exclude the cost of cpumask_and()
from the avg_scan_cost instead. It's minor, but when doing the original
prototype I didn't think it was appropriate to count the cpumask
clearing as part of the scan cost as it's not directly related.
On Tue, 8 Dec 2020 at 14:54, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Tue, Dec 08, 2020 at 02:43:10PM +0100, Vincent Guittot wrote:
> > On Tue, 8 Dec 2020 at 14:36, Mel Gorman <mgorman@techsingularity.net> wrote:
> > >
> > > On Tue, Dec 08, 2020 at 02:24:32PM +0100, Vincent Guittot wrote:
> > > > > > Nitpick:
> > > > > >
> > > > > > Since avg_cost and avg_idle are now only used with SIS_PROP, they could go
> > > > > > completely into the SIS_PROP if condition.
> > > > > >
> > > > >
> > > > > Yeah, I can do that. In the initial prototype, that happened in a
> > > > > separate patch that split out SIS_PROP into a helper function and I
> > > > > never merged it back. It's a trivial change.
> > > >
> > > > While doing this, should you also put the update of
> > > > this_sd->avg_scan_cost under the SIS_PROP feature?
> > > >
> > >
> > > It's outside the scope of the series but why not. This?
> > >
> > > --8<--
> > > sched/fair: Move avg_scan_cost calculations under SIS_PROP
> > >
> > > As noted by Vincent Guittot, avg_scan_costs are calculated for SIS_PROP
> > > even if SIS_PROP is disabled. Move the time calculations under a SIS_PROP
> > > check and while we are at it, exclude the cost of initialising the CPU
> > > mask from the average scan cost.
> > >
> > > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 19ca0265f8aa..0fee53b1aae4 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -6176,10 +6176,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> > >                  nr = 4;
> > >          }
> > >
> > > -        time = cpu_clock(this);
> >
> > I would move it into the if (sched_feat(SIS_PROP)) above.
> >
>
> I considered it but made the choice to exclude the cost of cpumask_and()
> from the avg_scan_cost instead. It's minor, but when doing the original

At the cost of less readable code.

> prototype I didn't think it was appropriate to count the cpumask
> clearing as part of the scan cost as it's not directly related.

Hmm... I think it is, because the number of loop iterations is directly
related to the allowed cpus.

>
> --
> Mel Gorman
> SUSE Labs
On Tue, Dec 08, 2020 at 03:47:40PM +0100, Vincent Guittot wrote:
> > I considered it but made the choice to exclude the cost of cpumask_and()
> > from the avg_scan_cost instead. It's minor, but when doing the original
>
> At the cost of less readable code.
>

Slightly less readable, yes.

> > prototype I didn't think it was appropriate to count the cpumask
> > clearing as part of the scan cost as it's not directly related.
>
> Hmm... I think it is, because the number of loop iterations is directly
> related to the allowed cpus.
>

While that is true, the cost of initialising the map is constant and
what is most important is tracking the scan cost, which is variable.
Without SIS_AVG_CPU, the cpumask init can go before SIS_PROP without any
penalty, so is this version preferable?

--8<--
sched/fair: Move avg_scan_cost calculations under SIS_PROP

As noted by Vincent Guittot, avg_scan_costs are calculated for SIS_PROP
even if SIS_PROP is disabled. Move the time calculations under a SIS_PROP
check and while we are at it, exclude the cost of initialising the CPU
mask from the average scan cost.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/fair.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ac7b34e7372b..5c41875aec23 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6153,6 +6153,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
         if (!this_sd)
                 return -1;
 
+        cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+
         if (sched_feat(SIS_PROP)) {
                 u64 avg_cost, avg_idle, span_avg;
 
@@ -6168,11 +6170,9 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
                         nr = div_u64(span_avg, avg_cost);
                 else
                         nr = 4;
-        }
-
-        time = cpu_clock(this);
 
-        cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+                time = cpu_clock(this);
+        }
 
         for_each_cpu_wrap(cpu, cpus, target) {
                 if (!--nr)
@@ -6181,8 +6181,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
                 break;
         }
 
-        time = cpu_clock(this) - time;
-        update_avg(&this_sd->avg_scan_cost, time);
+        if (sched_feat(SIS_PROP)) {
+                time = cpu_clock(this) - time;
+                update_avg(&this_sd->avg_scan_cost, time);
+        }
 
         return cpu;
 }
On Tue, 8 Dec 2020 at 16:12, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Tue, Dec 08, 2020 at 03:47:40PM +0100, Vincent Guittot wrote:
> > > I considered it but made the choice to exclude the cost of cpumask_and()
> > > from the avg_scan_cost instead. It's minor, but when doing the original
> >
> > At the cost of less readable code.
> >
>
> Slightly less readable, yes.
>
> > > prototype I didn't think it was appropriate to count the cpumask
> > > clearing as part of the scan cost as it's not directly related.
> >
> > Hmm... I think it is, because the number of loop iterations is directly
> > related to the allowed cpus.
> >
>
> While that is true, the cost of initialising the map is constant and
> what is most important is tracking the scan cost, which is variable.
> Without SIS_AVG_CPU, the cpumask init can go before SIS_PROP without any
> penalty, so is this version preferable?

Yes, looks good to me.

> --8<--
> sched/fair: Move avg_scan_cost calculations under SIS_PROP
>
> As noted by Vincent Guittot, avg_scan_costs are calculated for SIS_PROP
> even if SIS_PROP is disabled. Move the time calculations under a SIS_PROP
> check and while we are at it, exclude the cost of initialising the CPU
> mask from the average scan cost.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
>  kernel/sched/fair.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ac7b34e7372b..5c41875aec23 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6153,6 +6153,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>          if (!this_sd)
>                  return -1;
>
> +        cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +
>          if (sched_feat(SIS_PROP)) {
>                  u64 avg_cost, avg_idle, span_avg;
>
> @@ -6168,11 +6170,9 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>                          nr = div_u64(span_avg, avg_cost);
>                  else
>                          nr = 4;
> -        }
> -
> -        time = cpu_clock(this);
>
> -        cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +                time = cpu_clock(this);
> +        }
>
>          for_each_cpu_wrap(cpu, cpus, target) {
>                  if (!--nr)
> @@ -6181,8 +6181,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>                  break;
>          }
>
> -        time = cpu_clock(this) - time;
> -        update_avg(&this_sd->avg_scan_cost, time);
> +        if (sched_feat(SIS_PROP)) {
> +                time = cpu_clock(this) - time;
> +                update_avg(&this_sd->avg_scan_cost, time);
> +        }
>
>          return cpu;
>  }
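Putting the thread together, the resulting shape of select_idle_cpu() after both the SIS_AVG_CPU removal and the follow-up above is roughly as follows. This is a reconstruction from the quoted hunks, not a verbatim copy of the final tree; context the diffs elide, such as the sd_llc lookup and the idle test inside the loop, is filled in as an assumption about kernels of this era and may differ in detail.

static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
{
        struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
        struct sched_domain *this_sd;
        u64 time;
        int this = smp_processor_id();
        int cpu, nr = INT_MAX;

        this_sd = rcu_dereference(per_cpu(sd_llc, target));
        if (!this_sd)
                return -1;

        /* Constant-cost setup, deliberately outside the timed region */
        cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

        if (sched_feat(SIS_PROP)) {
                u64 avg_cost, avg_idle, span_avg;

                /* Large variance needs a large fuzz factor */
                avg_idle = this_rq()->avg_idle / 512;
                avg_cost = this_sd->avg_scan_cost + 1;

                span_avg = sd->span_weight * avg_idle;
                if (span_avg > 4*avg_cost)
                        nr = div_u64(span_avg, avg_cost);
                else
                        nr = 4;

                time = cpu_clock(this);
        }

        for_each_cpu_wrap(cpu, cpus, target) {
                if (!--nr)
                        return -1;
                if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
                        break;
        }

        if (sched_feat(SIS_PROP)) {
                time = cpu_clock(this) - time;
                update_avg(&this_sd->avg_scan_cost, time);
        }

        return cpu;
}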
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 98075f9ea9a8..23934dbac635 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6161,9 +6161,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
         avg_idle = this_rq()->avg_idle / 512;
         avg_cost = this_sd->avg_scan_cost + 1;
 
-        if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
-                return -1;
-
         if (sched_feat(SIS_PROP)) {
                 u64 span_avg = sd->span_weight * avg_idle;
                 if (span_avg > 4*avg_cost)
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 68d369cba9e4..e875eabb6600 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -54,7 +54,6 @@ SCHED_FEAT(TTWU_QUEUE, true)
 /*
  * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
  */
-SCHED_FEAT(SIS_AVG_CPU, false)
 SCHED_FEAT(SIS_PROP, true)
 
 /*
SIS_AVG_CPU was introduced as a means of avoiding a search when the
average search cost indicated that the search would likely fail. It
was a blunt instrument and disabled by 4c77b18cf8b7 ("sched/fair: Make
select_idle_cpu() more aggressive") and later replaced with a proportional
search depth by 1ad3aaf3fcd2 ("sched/core: Implement new approach to
scale select_idle_cpu()").

While there are corner cases where SIS_AVG_CPU is better, it has now been
disabled for almost three years. As the intent of SIS_PROP is to reduce
the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus
on SIS_PROP as a throttling mechanism.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/fair.c     | 3 ---
 kernel/sched/features.h | 1 -
 2 files changed, 4 deletions(-)