
[RFC,v6,3/4] scheduler: scan idle cpu in cluster for tasks within one LLC

Message ID 20210420001844.9116-4-song.bao.hua@hisilicon.com (mailing list archive)
State New, archived
Series scheduler: expose the topology of clusters and add cluster scheduler

Commit Message

Song Bao Hua (Barry Song) April 20, 2021, 12:18 a.m. UTC
On kunpeng920, cpus within one cluster can communicate with each other
much faster than cpus across different clusters. A simple hackbench
can demonstrate this.
hackbench running on 4 cpus within a single cluster and on 4 cpus across
different clusters shows a large contrast:
(1) within a cluster:
root@ubuntu:~# taskset -c 0,1,2,3 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 4.285

(2) across clusters:
root@ubuntu:~# taskset -c 0,4,8,12 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 5.524

This inspires us to change the wake_affine path to scan the cluster to
pack related tasks. Ideally, a two-level packing vs. spreading heuristic
could be introduced to distinguish between llc-packing and even narrower
(cluster or MC-L2) packing, but doing that properly would not be trivial.
So this patch begins with tasks running in the same LLC, which is
actually quite common in real use cases when tasks are bound within one
NUMA node.

If users use "numactl -N 0" to bind tasks, this patch will scan the
cluster rather than the llc to select an idle cpu. A hackbench run with
several groups of monogamous sender-receiver pairs shows a major
improvement.

To evaluate the performance impact on related tasks talking with each
other, we ran the below hackbench with the -g parameter varying from 6
to 32 on a NUMA node with 24 cores; for each g, we ran the command 20
times and took the average time:
$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g $1
As -f is set to 1, all threads pair up and talk with each other
monogamously.

hackbench reports the time needed to complete a certain number of
message transmissions between a certain number of tasks, for example:
$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g 6
Running in threaded mode with 6 groups using 2 file descriptors each (== 12 tasks)
Each sender will pass 1000000 messages of 100 bytes

The below is the result of hackbench:
g                =    6      12      18     24    28    32
w/o                 1.2474 1.5635 1.5133 1.4796 1.6177 1.7898
w/domain            1.1458 1.3309 1.3416 1.4990 1.9212 2.3411
w/domain+affine     0.9500 1.0728 1.1756 1.2201 1.4166 1.5464

w/o: without any change
w/domain: added cluster domain without changing wake_affine
w/domain+affine: added cluster domain, changed wake_affine

With g=6, if we use top -H to show which cpus the tasks are running on,
we can easily see that couples run in the same CCL (CPU cluster).

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 -v6:
  * emulated a two-level spreading/packing heuristic by only scanning the
    cluster in the wake_affine path for tasks running in the same LLC
    (also the same NUMA node).

    This partially addressed Dietmar's comment in RFC v3:
    "In case we would like to further distinguish between llc-packing and
     even narrower (cluster or MC-L2)-packing, we would introduce a 2. level
     packing vs. spreading heuristic further down in sis().
   
     IMHO, Barry's current implementation doesn't do this right now. Instead
     he's trying to pack on cluster first and if not successful look further
     among the remaining llc CPUs for an idle CPU."

  * adjusted the hackbench parameters to produce both relatively low and
    high load. Previous patchsets with "-f 10" ran under an extremely high
    load with hundreds of threads, which does not seem representative of
    real use cases.

    This also addressed Vincent's question in RFC v4:
    "In particular, I'm still not convinced that the modification of the wakeup
    path is the root of the hackbench improvement; especially with g=14 where
    there should not be much idle CPUs with 14*40 tasks on at most 32 CPUs."

 block/blk-mq.c                 |  2 +-
 include/linux/sched/topology.h |  5 +++--
 kernel/sched/core.c            |  9 +++++---
 kernel/sched/fair.c            | 47 +++++++++++++++++++++++++-----------------
 kernel/sched/sched.h           |  3 +++
 kernel/sched/topology.c        | 12 +++++++++++
 6 files changed, 53 insertions(+), 25 deletions(-)

Comments

Dietmar Eggemann April 27, 2021, 11:35 a.m. UTC | #1
On 20/04/2021 02:18, Barry Song wrote:

[...]

> @@ -5786,11 +5786,12 @@ static void record_wakee(struct task_struct *p)
>   * whatever is irrelevant, spread criteria is apparent partner count exceeds
>   * socket size.
>   */
> -static int wake_wide(struct task_struct *p)
> +static int wake_wide(struct task_struct *p, int cluster)
>  {
>  	unsigned int master = current->wakee_flips;
>  	unsigned int slave = p->wakee_flips;
> -	int factor = __this_cpu_read(sd_llc_size);
> +	int factor = cluster ? __this_cpu_read(sd_cluster_size) :
> +		__this_cpu_read(sd_llc_size);

I don't see that the wake_wide() change has any effect here. None of the
sched domains has SD_BALANCE_WAKE set so a wakeup (WF_TTWU) can never
end up in the slow path.
Have you seen a diff when running your `lmbench stream` workload in what
wake_wide() returns when you use `sd cluster size` instead of `sd llc
size` as factor?

I guess for you,  wakeups are now subdivided into faster (cluster = 4
CPUs) and fast (llc = 24 CPUs) via sis(), not into fast (sis()) and slow
(find_idlest_cpu()).

>  
>  	if (master < slave)
>  		swap(master, slave);

[...]

> @@ -6745,6 +6748,12 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  	int want_affine = 0;
>  	/* SD_flags and WF_flags share the first nibble */
>  	int sd_flag = wake_flags & 0xF;
> +	/*
> +	 * if cpu and prev_cpu share LLC, consider cluster sibling rather
> +	 * than llc. this is typically true while tasks are bound within
> +	 * one numa
> +	 */
> +	int cluster = sched_cluster_active() && cpus_share_cache(cpu, prev_cpu, 0);

So you changed from scanning cluster before LLC to scan either cluster
or LLC.

And this is based on whether `this_cpu` and `prev_cpu` are sharing LLC
or not. So you only see an effect when running the workload with
`numactl -N X ...`.

>  
>  	if (wake_flags & WF_TTWU) {
>  		record_wakee(p);
> @@ -6756,7 +6765,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  			new_cpu = prev_cpu;
>  		}
>  
> -		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
> +		want_affine = !wake_wide(p, cluster) && cpumask_test_cpu(cpu, p->cpus_ptr);
>  	}
>  
>  	rcu_read_lock();
> @@ -6768,7 +6777,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
>  		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
>  			if (cpu != prev_cpu)
> -				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> +				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync, cluster);
>  
>  			sd = NULL; /* Prefer wake_affine over balance flags */
>  			break;
> @@ -6785,7 +6794,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
>  	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
>  		/* Fast path */
> -		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
> +		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu, cluster);
>  
>  		if (want_affine)
>  			current->recent_used_cpu = cpu;

[...]
Song Bao Hua (Barry Song) April 28, 2021, 9:51 a.m. UTC | #2
> -----Original Message-----
> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> Sent: Tuesday, April 27, 2021 11:36 PM
> Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> On 20/04/2021 02:18, Barry Song wrote:
> 
> [...]
> 
> > @@ -5786,11 +5786,12 @@ static void record_wakee(struct task_struct *p)
> >   * whatever is irrelevant, spread criteria is apparent partner count exceeds
> >   * socket size.
> >   */
> > -static int wake_wide(struct task_struct *p)
> > +static int wake_wide(struct task_struct *p, int cluster)
> >  {
> >  	unsigned int master = current->wakee_flips;
> >  	unsigned int slave = p->wakee_flips;
> > -	int factor = __this_cpu_read(sd_llc_size);
> > +	int factor = cluster ? __this_cpu_read(sd_cluster_size) :
> > +		__this_cpu_read(sd_llc_size);
> 
> I don't see that the wake_wide() change has any effect here. None of the
> sched domains has SD_BALANCE_WAKE set so a wakeup (WF_TTWU) can never
> end up in the slow path.

I am really confused. The code has only checked whether wake_flags has
WF_TTWU; it has never checked whether the sched domain has the
SD_BALANCE_WAKE flag.

static int
select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
{
	...

	if (wake_flags & WF_TTWU) {
		record_wakee(p);

		if (sched_energy_enabled()) {
			new_cpu = find_energy_efficient_cpu(p, prev_cpu);
			if (new_cpu >= 0)
				return new_cpu;
			new_cpu = prev_cpu;
		}

		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
	}
}

And try_to_wake_up() always sets WF_TTWU:
static int
try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) 
{
	cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU);
	...
}

So the change in wake_wide will actually affect the value of want_affine.
And I did also see code entered slow path during my benchmark.

One issue I mentioned during the linaro open discussion is that since I
have moved to using cluster size to decide the value of wake_wide,
relatively fewer tasks will make wake_wide() decide to go to the slow
path; thus, tasks begin to spread to other NUMA nodes, while llc_size
might actually be able to contain those tasks. So a possible model might
be:
static int wake_wide(struct task_struct *p)
{
	tasksize < cluster : scan cluster
	tasksize > llc      : slow path
	tasksize > cluster && tasksize < llc: scan llc
}
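Concretely, the model might look something like the sketch below (a
hypothetical three-way variant only; mainline wake_wide() returns 0/1,
and the sd_cluster_size per-cpu variable comes from this series, not
mainline):

/*
 * 0: few partners, pack into one cluster;
 * 1: medium, scan the whole LLC;
 * 2: heavy, go wide via the slow path.
 */
static int wake_wide_level(struct task_struct *p)
{
	unsigned int master = current->wakee_flips;
	unsigned int slave = p->wakee_flips;
	int cluster_size = __this_cpu_read(sd_cluster_size);
	int llc_size = __this_cpu_read(sd_llc_size);

	if (master < slave)
		swap(master, slave);

	/* few flips: the partners fit in one cluster */
	if (slave < cluster_size || master < slave * cluster_size)
		return 0;

	/* medium flips: the partners still fit in the LLC */
	if (slave < llc_size || master < slave * llc_size)
		return 1;

	/* heavy flips: spread beyond the LLC */
	return 2;
}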

thoughts?

> Have you seen a diff when running your `lmbench stream` workload in what
> wake_wide() returns when you use `sd cluster size` instead of `sd llc
> size` as factor?
> 
> I guess for you,  wakeups are now subdivided into faster (cluster = 4
> CPUs) and fast (llc = 24 CPUs) via sis(), not into fast (sis()) and slow
> (find_idlest_cpu()).
>
> [...]
> 
> So you changed from scanning cluster before LLC to scan either cluster
> or LLC.

Yes, I have seen two ugly things with scanning the cluster before
scanning the LLC in select_idle_cpu():
1. avg_scan_cost is actually measuring the scanning time. But if we scan
the cluster before scanning the LLC, in the gap between these two
different domains we need a huge bit operation, and this bit operation
is not a scanning operation at all. This makes the scan cost quite
untrustworthy, particularly as "nr" can sometimes be smaller than the
cluster size and sometimes larger.

2. select_idle_cpu() is actually the last step of wake_affine; before
that, the wake_affine code depends entirely on cpus_share_cache() to
decide the target to scan from. When waker and wakee are already in one
LLC and we only change select_idle_cpu(), the decision has already been
made by that point, and we may lose the chance to choose the right
target to scan from. So it seems more sensible to let cpus_share_cache()
check the cluster when related tasks are already in the same LLC.
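For reference, a simplified sketch of what the cluster-aware
cpus_share_cache() could look like (hypothetical; sd_cluster_id would
mirror the existing per-cpu sd_llc_id and is introduced by this series,
not mainline):

bool cpus_share_cache(int this_cpu, int that_cpu, int cluster)
{
	/* cluster != 0: require sharing a cluster, not just an LLC */
	if (cluster)
		return per_cpu(sd_cluster_id, this_cpu) ==
		       per_cpu(sd_cluster_id, that_cpu);

	/* default: share the last-level cache */
	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
}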

> 
> And this is based on whether `this_cpu` and `prev_cpu` are sharing LLC
> or not. So you only see an effect when running the workload with
> `numactl -N X ...`.

Ideally, I'd expect this to also positively affect tasks located in
different LLCs.
For example, if taskA and taskB are in different NUMA nodes (a NUMA node
is also an LLC on both kunpeng920 and Tim's hardware) at the very
beginning, a two-stage packing might let them take advantage of the
cluster:
on the first wake-up, taskA and taskB are put in one LLC by scanning the
LLC;
on the second wake-up, they might be put in one cluster by scanning the
cluster.
So ideally, scanning LLC and scanning cluster can work in two stages
for related tasks and pack them step by step. Practically, this could
happen. But LB between NUMA nodes might pull in the opposite direction.
Actually, for a kernel completely *without* the cluster patch, I have
seen some serious ping-pong of tasks between two NUMA nodes due to the
conflict between wake_affine and LB, and this kind of ping-pong can
seriously affect performance.
For example, for g=6,12,18,24,28,32, I found that running the same
workload on 2 NUMA nodes shows much worse latency than doing that on a
single NUMA node (each node has 24 cpus).
1numa command: numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g $1
2numa command: numactl -N 0-1 hackbench -p -T -l 1000000 -f 1 -g $1

I measured the average latency over 20 runs for each command.

*)without cluster scheduler, 2numa vs 1numa:
g      =     6     12     18    24      28     32
1numa      1.2474 1.5635 1.5133 1.4796 1.6177 1.7898
2numa      4.1997 5.8172 6.0159 7.2343 6.8264 6.5419

BTW, my cluster patch actually also improves 2numa:
*)with cluster scheduler 2numa vs 1numa:
g      =     6     12     18    24      28     32
1numa      0.9500 1.0728 1.1756 1.2201 1.4166 1.5464
2numa      3.5404 4.3744 4.3988 4.6994 5.3117 5.4908

*) 2numa  w/ and w/o cluster:
g          =     6     12     18    24      28     32
2numa w/o      4.1997 5.8172 6.0159 7.2343 6.8264 6.5419
2numa w/       3.5404 4.3744 4.3988 4.6994 5.3117 5.4908

Ideally, load balancing should try to pull unmarried tasks rather than
married tasks. I mean, if we have
groupA: task1+task2 as a couple, task3 as a bachelor
groupB: task4,
groupB should try to pull task3. But I feel it is extremely hard to make
LB understand who is married and who is unmarried.

I assume 2numa being worse than 1numa is a different topic which might
be worth more investigation.

On the other hand, the use cases I'm now targeting really do use
"numactl -N x" to bind processes in one NUMA node. If we ignore other
NUMA nodes (also other LLCs) and treat one NUMA node as a whole system,
the cluster would be the last-level topology the scheduler can use. And
the code can be quite clean if we directly leverage the existing
select_idle_sibling() code for LLC by simply changing cpus_share_cache()
to the cluster level.

> [...]

Thanks
Barry
Vincent Guittot April 28, 2021, 1:04 p.m. UTC | #3
On Wed, 28 Apr 2021 at 11:51, Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
> > -----Original Message-----
> > From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >
> > [...]
> >
> > I don't see that the wake_wide() change has any effect here. None of the
> > sched domains has SD_BALANCE_WAKE set so a wakeup (WF_TTWU) can never
> > end up in the slow path.
>
> I am really confused. The code has only checked whether wake_flags has
> WF_TTWU; it has never checked whether the sched domain has the
> SD_BALANCE_WAKE flag.

look at:
#define WF_TTWU     0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */

so when wake_wide() returns false, we use the wake_affine mechanism; but
if want_affine ends up false, then we fall back to the default mode which
looks for:
if (tmp->flags & sd_flag)

This means looking for SD_BALANCE_WAKE, which is never set,

so sd will stay NULL and you will end up calling select_idle_sibling() anyway

>
> [...]
>
> Thanks
> Barry
Dietmar Eggemann April 28, 2021, 4:47 p.m. UTC | #4
On 28/04/2021 15:04, Vincent Guittot wrote:
> On Wed, 28 Apr 2021 at 11:51, Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
>>
>> [...]
>>
>> So the change in wake_wide() will actually affect the value of want_affine.
>> And I did also see the code enter the slow path during my benchmark.

Yes, this is happening but IMHO not for wakeups. Check wake_flags for
the tasks which go through `slow path` on your machine. They should have
WF_EXEC or WF_FORK, not WF_TTWU (& WF_SYNC).
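One way to check this (a debug-only sketch, not part of any patch): log
the wakeups which actually take the slow path in select_task_rq_fair()
and read /sys/kernel/debug/tracing/trace afterwards:

	if (unlikely(sd)) {
		/* Slow path: record which wake_flags brought us here */
		trace_printk("slow path: comm=%s wake_flags=0x%x\n",
			     p->comm, wake_flags);
		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
	}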

>> One issue I mentioned during the linaro open discussion is that since
>> I have moved to using cluster size to decide the value of wake_wide,
>> relatively fewer tasks will make wake_wide() decide to go to the slow
>> path [...]
>>
>> thoughts?

Like Vincent explained, the return value of wake_wide() doesn't matter.
For wakeups you always end up in sis().

[...]
Dietmar Eggemann April 30, 2021, 10:42 a.m. UTC | #5
On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
>
> [...]
>
> Though we will never go to the slow path, wake_wide() will affect
> want_affine, and so eventually affect the "new_cpu"?

yes.

> 
> 	for_each_domain(cpu, tmp) {
> 		/*
> 		 * If both 'cpu' and 'prev_cpu' are part of this domain,
> 		 * cpu is a valid SD_WAKE_AFFINE target.
> 		 */
> 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
> 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
> 			if (cpu != prev_cpu)
> 				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> 
> 			sd = NULL; /* Prefer wake_affine over balance flags */
> 			break;
> 		}
> 
> 		if (tmp->flags & sd_flag)
> 			sd = tmp;
> 		else if (!want_affine)
> 			break;
> 	}
> 
> If want_affine is false, the above won't execute and new_cpu (target) will
> always be "prev_cpu"? So when task size > cluster size in wake_wide(),
> this means we won't pull the wakee to the cluster of the waker? It seems
> sensible.

What is `task size` here?

The criterion is `!(slave < factor || master < slave * factor)` or
`slave >= factor && master >= slave * factor` to wake wide.
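(A concrete illustration with made-up flip counts: master = 60,
slave = 12. With the LLC factor 24, slave < factor, so wake_wide()
returns 0 and wake_affine is tried; with the cluster factor 4,
slave >= 4 and master >= 12 * 4 = 48, so it returns 1.)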

I see that since you effectively change the sched domain size from LLC
to CLUSTER (e.g. 24->4) for wakeups with cpu and prev_cpu sharing LLC
(hence the `numactl -N 0` in your workload), wake_wide() has to take
the CLUSTER size into consideration.

I was wondering if you saw wake_wide() returning 1 with your use cases:

numactl -N 0 /usr/lib/lmbench/bin/stream -P [6,12] -M 1024M -N 5
Song Bao Hua (Barry Song) May 3, 2021, 6:19 a.m. UTC | #6
> -----Original Message-----
> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> Sent: Friday, April 30, 2021 10:43 PM
> Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
>
> [...]
>
> What is `task size` here?
> 
> The criterion is `!(slave < factor || master < slave * factor)` or
> `slave >= factor && master >= slave * factor` to wake wide.
> 

Yes. For "task size", I actually mean a bundle of waker-wakee tasks
which can make "slave >= factor && master >= slave * factor" either
true or false, then change the target cpu where we are going to scan
from.
Now since I have moved to cluster level when tasks have been in same
LLC level, it seems it would be more sensible to use "cluster_size" as
factor?

> [...]
> 
> I was wondering if you saw wake_wide() returning 1 with your use cases:
> 
> numactl -N 0 /usr/lib/lmbench/bin/stream -P [6,12] -M 1024M -N 5

I couldn't make wake_wide() return 1 with the above stream command,
and I can't reproduce it with a 1:1 (monogamous) hackbench ("-f 1").

But I am able to reproduce this issue with an M:N hackbench, for example:

numactl -N 0 hackbench -p -T -f 10 -l 20000 -g 1

hackbench will create 10 senders which will send messages to 10
receivers. (Each sender can send messages to all 10 receivers.)

I've often seen flips like:
waker wakee
1501  39
1509  17
11   1320
13   2016

11, 13 and 17 are smaller than the LLC size but larger than the cluster
size. So wake_wide() using the cluster factor will return 1; on the
other hand, if we always use llc_size as the factor, it will return 0.
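(Checking the third pair against the criterion: after the swap,
master = 1320 and slave = 11. With the cluster factor 4, slave >= 4 and
master >= 11 * 4 = 44, so wake_wide() returns 1; with the LLC factor 24,
slave = 11 < 24, so it returns 0.)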

However, it seems the change in wake_wide() can negatively affect the
M:N relationship (-f 10), according to tests made today with:

numactl -N 0 hackbench -p -T -f 10 -l 20000 -g $1

g             =      1     2       3       4
cluster_size     0.5768 0.6578  0.8117 1.0119
LLC_size         0.5479 0.6162  0.6922 0.7754

Always using llc_size as the factor in wake_wide() still shows better
results in the 10:10 polygamous hackbench.

So it seems `slave >= factor && master >= slave * factor` isn't
a suitable criterion for the cluster size?

Thanks
Barry
Song Bao Hua (Barry Song) May 3, 2021, 11:35 a.m. UTC | #7
> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Monday, May 3, 2021 6:12 PM
> Subject: RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
>
> [...]
>
> So it seems `slave >= factor && master >= slave * factor` isn't
> a suitable criterion for the cluster size?

On the other hand, according to "sched: Implement smarter wake-affine logic"
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62470419

a proper factor in wake_wide() mainly benefits 1:n workloads like
postgresql/pgbench. So using the smaller cluster size as the factor
might help make want_affine false and so improve pgbench.

From the commit log, the commit made the biggest improvement while
clients = 2*cpus. In my case, that should be clients=48 for a machine
whose LLC size is 24.

In Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
under two different scenarios:
1. the page cache always hits, so no real I/O for database reads
2. echo 3 > /proc/sys/vm/drop_caches

For case 1, using cluster_size and using llc_size result in a similar
tps = ~108000, with all 24 cpus at 100% utilization.

For case 2, using llc_size still shows better performance.

tps for each test round(cluster size as factor in wake_wide):
1398.450887 1275.020401 1632.542437 1412.241627 1611.095692 1381.354294 1539.877146
avg tps = 1464

tps for each test round(llc size as factor in wake_wide):
1718.402983 1443.169823 1502.353823 1607.415861 1597.396924 1745.651814 1876.802168
avg tps = 1641  (+12%)

so it seems using cluster_size as the factor in "slave >= factor &&
master >= slave * factor" isn't a good choice for my machine, at least.

Thanks
Barry
Dietmar Eggemann May 5, 2021, 12:29 p.m. UTC | #8
On 03/05/2021 13:35, Song Bao Hua (Barry Song) wrote:

[...]

> In Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
> under two different scenarios:
>
> [...]
>
> so it seems using cluster_size as the factor in "slave >= factor &&
> master >= slave * factor" isn't a good choice for my machine, at least.

So SD size = 4 (instead of 24) seems to be too small for `-c 48`.

Just curious, have you seen the benefit of using wake wide on SD size =
24 (LLC) compared to not using it at all?
Song Bao Hua (Barry Song) May 7, 2021, 1:07 p.m. UTC | #9
> -----Original Message-----
> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> Sent: Thursday, May 6, 2021 12:30 AM
> Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> On 03/05/2021 13:35, Song Bao Hua (Barry Song) wrote:
>
> [...]
>
> So SD size = 4 (instead of 24) seems to be too small for `-c 48`.
> 
> Just curious, have you seen the benefit of using wake wide on SD size =
> 24 (LLC) compared to not using it at all?

At least in my benchmarks today, I have not seen any benefit from using
llc_size. Always returning 0 in wake_wide() seems to be much better.

postgres@ubuntu:$ pgbench -i pgbench
postgres@pgbench:$ pgbench -T 120 -c 48 pgbench

using llc_size, it got to 123 tps
always returning 0 in wake_wide(), it got to 158 tps

Actually, I really couldn't reproduce the performance improvement
the commit "sched: Implement smarter wake-affine logic" mentioned.
On the other hand, the commit log didn't give the pgbench command
parameters used. I guess the benchmark result depends heavily on
the command parameters and disk I/O speed.

Thanks
Barry
Dietmar Eggemann May 13, 2021, 12:32 p.m. UTC | #10
On 07/05/2021 15:07, Song Bao Hua (Barry Song) wrote:
>
> [...]
>
> At least in my benchmarks today, I have not seen any benefit from using
> llc_size. Always returning 0 in wake_wide() seems to be much better.
>
> [...]
>
> I guess the benchmark result depends heavily on the command parameters
> and disk I/O speed.

I see. And it was a way smaller machine (12 CPUs) back then.

You could run pgbench via mmtests https://github.com/gormanm/mmtests.

I.e the `timed-ro-medium` test.

mmtests# ./run-mmtests.sh --config
./configs/config-db-pgbench-timed-ro-medium test_tag

/shellpacks/shellpack-bench-pgbench contains all the individual test
steps. Something you could use as a template for your pgbench standalone
tests as well.

I ran this test on an Intel Xeon E5-2690 v2 with 40 CPUs and 64GB of
memory on v5.12 vanilla and w/o wakewide.
The test uses `scale_factor = 2570` on this machine. I guess this
relates to ~41GB? At least this was the size of the
mmtests/work/testdisk/data/pgdata directory when the test started.


mmtests/work/log# ../../compare-kernels.sh --baseline base --compare
wo_wakewide | grep ^Hmean


      #clients  v5.12 vanilla          v5.12 w/o wakewide

Hmean     1     10903.88 (   0.00%)    10792.59 *  -1.02%*
Hmean     6     28480.60 (   0.00%)    27954.97 *  -1.85%*
Hmean     12    49197.55 (   0.00%)    47758.16 *  -2.93%*
Hmean     22    72902.37 (   0.00%)    71314.01 *  -2.18%*
Hmean     30    75468.16 (   0.00%)    75929.17 *   0.61%*
Hmean     48    60155.58 (   0.00%)    60471.91 *   0.53%*
Hmean     80    62202.38 (   0.00%)    60814.76 *  -2.23%*


So there are some improvements w/ wakewide, but nothing of the scale
shown in the original wakewide patch.

I'm not an expert on how to set up these pgbench tests though. So maybe
other pgbench related mmtests configs or some more fine-grained tuning
can produce bigger diffs?
Song Bao Hua (Barry Song) May 25, 2021, 8:14 a.m. UTC | #11
> -----Original Message-----
> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> Sent: Friday, May 14, 2021 12:32 AM
> Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> On 07/05/2021 15:07, Song Bao Hua (Barry Song) wrote:
> >
> >
> >> -----Original Message-----
> >> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> 
> [...]
> 
> >> On 03/05/2021 13:35, Song Bao Hua (Barry Song) wrote:
> >>
> >> [...]
> >>
> >>>> From: Song Bao Hua (Barry Song)
> >>
> >> [...]
> >>
> >>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >>
> >> [...]
> >>
> >>>>> On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
> >>>>>>
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >>>>>
> >>>>> [...]
> >>>>>
> >>>>>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >>>>>>>
> >>>>>>> [...]
> >>>>>>>
> >>>>>>>>>> On 20/04/2021 02:18, Barry Song wrote:
> >>
> >> [...]
> >>
> >>>
> >>> On the other hand, according to "sched: Implement smarter wake-affine logic"
> >>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62470419
> >>>
> >>> Proper factor in wake_wide is mainly beneficial for 1:n tasks like
> >>> postgresql/pgbench.
> >>> So using the smaller cluster size as factor might help make
> >>> wake_affine false and so improve pgbench.
> >>>
> >>> From the commit log, when clients = 2*cpus, the commit made the biggest
> >>> improvement. In my case, it should be clients=48 for a machine whose LLC
> >>> size is 24.
> >>>
> >>> In Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
> >>> under two different scenarios:
> >>> 1. page cache always hit, so no real I/O for database read
> >>> 2. echo 3 > /proc/sys/vm/drop_caches
> >>>
> >>> For case 1, using cluster_size and using llc_size result in similar
> >>> tps (~108000); all 24 cpus have 100% cpu utilization.
> >>>
> >>> For case 2, using llc_size still shows better performance.
> >>>
> >>> tps for each test round (cluster size as factor in wake_wide):
> >>> 1398.450887 1275.020401 1632.542437 1412.241627 1611.095692 1381.354294
> >>> 1539.877146
> >>> avg tps = 1464
> >>>
> >>> tps for each test round (llc size as factor in wake_wide):
> >>> 1718.402983 1443.169823 1502.353823 1607.415861 1597.396924 1745.651814
> >>> 1876.802168
> >>> avg tps = 1641  (+12%)
> >>>
> >>> so it seems using cluster_size as factor in "slave >= factor &&
> >>> master >= slave * factor" isn't a good choice for my machine at least.
> >>
> >> So SD size = 4 (instead of 24) seems to be too small for `-c 48`.
> >>
> >> Just curious, have you seen the benefit of using wake wide on SD size =
> >> 24 (LLC) compared to not using it at all?
> >
> > At least in my benchmark made today, I have not seen any benefit in
> > using llc_size. Always returning 0 in wake_wide() seems to be much better.
> >
> > postgres@ubuntu:$ pgbench -i pgbench
> > postgres@pgbench:$ pgbench -T 120 -c 48 pgbench
> >
> > using llc_size, it got to 123 tps
> > always returning 0 in wake_wide(), it got to 158 tps
> >
> > Actually, I really couldn't reproduce the performance improvement
> > the commit "sched: Implement smarter wake-affine logic" mentioned.
> > On the other hand, the commit log didn't present the pgbench command
> > parameters used. I guess the benchmark result will highly depend on
> > the command parameters and disk I/O speed.
> 
> I see. And it was a way smaller machine (12 CPUs) back then.
> 
> You could run pgbench via mmtests https://github.com/gormanm/mmtests.
> 
> I.e. the `timed-ro-medium` test.
> 
> mmtests# ./run-mmtests.sh --config
> ./configs/config-db-pgbench-timed-ro-medium test_tag
> 
> /shellpacks/shellpack-bench-pgbench contains all the individual test
> steps. Something you could use as a template for your pgbench standalone
> tests as well.
> 
> I ran this test on an Intel Xeon E5-2690 v2 with 40 CPUs and 64GB of
> memory on v5.12 vanilla and w/o wakewide.
> The test uses `scale_factor = 2570` on this machine. I guess this
> relates to ~41GB? At least this was the size of the:

Thanks, Dietmar. Sorry for the slow response; I was on sick leave
for the whole of last week.

I feel it makes much more sense to use mmtests, which sets
scale_factor according to total memory size and thus accounts for
the impact of the page cache. It also warms the database up for
30 minutes.

I will get more data and compare three cases:
1. use cluster size as the wake_wide() factor
2. use llc size as the wake_wide() factor
3. always return 0 in wake_wide()

and post the results afterwards.
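
To make the three cases concrete, here is a rough sketch of the only
thing that differs between them, modelled on the wake_wide() in this
series; the "variant" switch is purely illustrative and not part of
any patch:

static int wake_wide(struct task_struct *p, int variant)
{
	unsigned int master = current->wakee_flips;
	unsigned int slave = p->wakee_flips;
	int factor;

	switch (variant) {
	case 0:		/* case 1: cluster size as factor */
		factor = __this_cpu_read(sd_cluster_size);
		break;
	case 1:		/* case 2: llc size as factor */
		factor = __this_cpu_read(sd_llc_size);
		break;
	default:	/* case 3: never wake wide */
		return 0;
	}

	if (master < slave)
		swap(master, slave);
	if (slave < factor || master < slave * factor)
		return 0;
	return 1;
}

A larger factor makes wake_wide() return 1 less often, so more wakeups
keep want_affine and take the fast path; case 3 is the extreme where
every wakeup does.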

> 
> #mmtests/work/testdisk/data/pgdata directory when the test started.
> 
> [...]

Thanks
Barry
Song Bao Hua (Barry Song) May 26, 2021, 9:54 a.m. UTC | #12
> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Tuesday, May 25, 2021 8:07 PM
> To: 'Dietmar Eggemann' <dietmar.eggemann@arm.com>; Vincent Guittot
> <vincent.guittot@linaro.org>
> Cc: [...]
> Subject: RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> 
> 
> > -----Original Message-----
> > From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >
> > [...]
> 
> Thanks, Dietmar. Sorry for the slow response; I was on sick leave
> for the whole of last week.
> 
> I feel it makes much more sense to use mmtests, which sets
> scale_factor according to total memory size and thus accounts for
> the impact of the page cache. It also warms the database up for
> 30 minutes.
> 
> I will get more data and compare three cases:
> 1. use cluster size as the wake_wide() factor
> 2. use llc size as the wake_wide() factor
> 3. always return 0 in wake_wide()
> 
> and post the results afterwards.

I used only one NUMA node with 24 cpus and 60GB of memory
(scale factor: 2392) for the test. As mentioned
before, each NUMA node shares one LLC, so waker and
wakee are in the same LLC domain.

Basically, the difference between using cluster size and using
llc size as the factor in wake_wide() is just noise for
1/48, 8/48, 12/48, 24/48 and 32/48 threads. But for
48/48 threads (system is busy), using llc size as the
factor shows a 4%+ pgbench improvement.

                 cluster_as_factor     llc_as_factor
Hmean     1     10779.67 (   0.00%)    10869.27 *   0.83%*
Hmean     8     19595.09 (   0.00%)    19580.59 *  -0.07%*
Hmean     12    29553.06 (   0.00%)    29643.56 *   0.31%*
Hmean     24    43368.55 (   0.00%)    43194.47 *  -0.40%*
Hmean     32    40258.08 (   0.00%)    40163.23 *  -0.24%*
Hmean     48    40450.42 (   0.00%)    42249.29 *   4.45%*

I can further see a 14%+ improvement in the 48/48 threads case
if I don't depend on wake_wide() at all, that is, wake_wide()
always returns 0.

                llc_as_factor          don't_use_wake_wide
Hmean     1     10869.27 (   0.00%)    10723.08 *  -1.34%*
Hmean     8     19580.59 (   0.00%)    19469.34 *  -0.57%*
Hmean     12    29643.56 (   0.00%)    29520.16 *  -0.42%*
Hmean     24    43194.47 (   0.00%)    43774.78 *   1.34%*
Hmean     32    40163.23 (   0.00%)    40742.93 *   1.44%*
Hmean     48    42249.29 (   0.00%)    48329.00 *  14.39%*

I am beginning to believe wake_wide() is useless when waker and wakee
are already in the same LLC. So I sent another patch to address this
generic issue:
[PATCH] sched: fair: don't depend on wake_wide if waker and wakee are already in same LLC
https://lore.kernel.org/lkml/20210526091057.1800-1-song.bao.hua@hisilicon.com/
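
Not having that patch in this thread, the idea is presumably along the
lines of the fragment below: an illustrative sketch of the want_affine
computation in select_task_rq_fair() against mainline, where
cpus_share_cache() still takes two arguments. See the lore link above
for the actual change.

	/*
	 * Sketch: only let wake_wide() veto an affine wakeup when
	 * waker and wakee do not already share the LLC; for same-LLC
	 * wakeups its spreading heuristic has nothing to win.
	 */
	want_affine = (cpus_share_cache(cpu, prev_cpu) || !wake_wide(p)) &&
		      cpumask_test_cpu(cpu, p->cpus_ptr);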

> > [...]

Thanks
Barry
diff mbox series

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d4d7c1c..1418981 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -611,7 +611,7 @@  static inline bool blk_mq_complete_need_ipi(struct request *rq)
 	/* same CPU or cache domain?  Complete locally */
 	if (cpu == rq->mq_ctx->cpu ||
 	    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
-	     cpus_share_cache(cpu, rq->mq_ctx->cpu)))
+	     cpus_share_cache(cpu, rq->mq_ctx->cpu, 0)))
 		return false;
 
 	/* don't try to IPI to an offline CPU */
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 846fcac..d63d6b8 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -176,7 +176,8 @@  extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
-bool cpus_share_cache(int this_cpu, int that_cpu);
+/* return true if cpus share the cluster (when cluster=1) or the llc cache */
+bool cpus_share_cache(int this_cpu, int that_cpu, int cluster);
 
 typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
 typedef int (*sched_domain_flags_f)(void);
@@ -225,7 +226,7 @@  struct sched_domain_topology_level {
 {
 }
 
-static inline bool cpus_share_cache(int this_cpu, int that_cpu)
+static inline bool cpus_share_cache(int this_cpu, int that_cpu, int cluster)
 {
 	return true;
 }
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 30c300c..c74812a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3126,9 +3126,12 @@  void wake_up_if_idle(int cpu)
 	rcu_read_unlock();
 }
 
-bool cpus_share_cache(int this_cpu, int that_cpu)
+bool cpus_share_cache(int this_cpu, int that_cpu, int cluster)
 {
-	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
+	if (cluster)
+		return per_cpu(sd_cluster_id, this_cpu) == per_cpu(sd_cluster_id, that_cpu);
+	else
+		return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
 }
 
 static inline bool ttwu_queue_cond(int cpu, int wake_flags)
@@ -3144,7 +3147,7 @@  static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 	 * If the CPU does not share cache, then queue the task on the
 	 * remote rqs wakelist to avoid accessing remote data.
 	 */
-	if (!cpus_share_cache(smp_processor_id(), cpu))
+	if (!cpus_share_cache(smp_processor_id(), cpu, 0))
 		return true;
 
 	/*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a327746..69a1704 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -718,7 +718,7 @@  static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 #include "pelt.h"
 #ifdef CONFIG_SMP
 
-static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
+static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu, int cluster);
 static unsigned long task_h_load(struct task_struct *p);
 static unsigned long capacity_of(int cpu);
 
@@ -5786,11 +5786,12 @@  static void record_wakee(struct task_struct *p)
  * whatever is irrelevant, spread criteria is apparent partner count exceeds
  * socket size.
  */
-static int wake_wide(struct task_struct *p)
+static int wake_wide(struct task_struct *p, int cluster)
 {
 	unsigned int master = current->wakee_flips;
 	unsigned int slave = p->wakee_flips;
-	int factor = __this_cpu_read(sd_llc_size);
+	int factor = cluster ? __this_cpu_read(sd_cluster_size) :
+		__this_cpu_read(sd_llc_size);
 
 	if (master < slave)
 		swap(master, slave);
@@ -5812,7 +5813,7 @@  static int wake_wide(struct task_struct *p)
  *			  for the overloaded case.
  */
 static int
-wake_affine_idle(int this_cpu, int prev_cpu, int sync)
+wake_affine_idle(int this_cpu, int prev_cpu, int sync, int cluster)
 {
 	/*
 	 * If this_cpu is idle, it implies the wakeup is from interrupt
@@ -5826,7 +5827,7 @@  static int wake_wide(struct task_struct *p)
 	 * a cpufreq perspective, it's better to have higher utilisation
 	 * on one CPU.
 	 */
-	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
+	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu, cluster))
 		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
 
 	if (sync && cpu_rq(this_cpu)->nr_running == 1)
@@ -5882,12 +5883,12 @@  static int wake_wide(struct task_struct *p)
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p,
-		       int this_cpu, int prev_cpu, int sync)
+		       int this_cpu, int prev_cpu, int sync, int cluster)
 {
 	int target = nr_cpumask_bits;
 
 	if (sched_feat(WA_IDLE))
-		target = wake_affine_idle(this_cpu, prev_cpu, sync);
+		target = wake_affine_idle(this_cpu, prev_cpu, sync, cluster);
 
 	if (sched_feat(WA_WEIGHT) && target == nr_cpumask_bits)
 		target = wake_affine_weight(sd, p, this_cpu, prev_cpu, sync);
@@ -6139,7 +6140,8 @@  static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
  * average idle time for this rq (as found in rq->avg_idle).
  */
-static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
+static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target,
+	int cluster)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	int i, cpu, idle_cpu = -1, nr = INT_MAX;
@@ -6154,7 +6156,8 @@  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
 
-	if (sched_feat(SIS_PROP) && !smt) {
+	/* a cluster is usually quite small, e.g. 4 cpus, so SIS_PROP is not needed */
+	if (sched_feat(SIS_PROP) && !smt && !cluster) {
 		u64 avg_cost, avg_idle, span_avg;
 
 		/*
@@ -6191,7 +6194,7 @@  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	if (smt)
 		set_idle_cores(this, false);
 
-	if (sched_feat(SIS_PROP) && !smt) {
+	if (sched_feat(SIS_PROP) && !smt && !cluster) {
 		time = cpu_clock(this) - time;
 		update_avg(&this_sd->avg_scan_cost, time);
 	}
@@ -6244,7 +6247,7 @@  static inline bool asym_fits_capacity(int task_util, int cpu)
 /*
  * Try and locate an idle core/thread in the LLC cache domain.
  */
-static int select_idle_sibling(struct task_struct *p, int prev, int target)
+static int select_idle_sibling(struct task_struct *p, int prev, int target, int cluster)
 {
 	struct sched_domain *sd;
 	unsigned long task_util;
@@ -6266,7 +6269,7 @@  static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	/*
 	 * If the previous CPU is cache affine and idle, don't be stupid:
 	 */
-	if (prev != target && cpus_share_cache(prev, target) &&
+	if (prev != target && cpus_share_cache(prev, target, cluster) &&
 	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
 	    asym_fits_capacity(task_util, prev))
 		return prev;
@@ -6289,7 +6292,7 @@  static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	recent_used_cpu = p->recent_used_cpu;
 	if (recent_used_cpu != prev &&
 	    recent_used_cpu != target &&
-	    cpus_share_cache(recent_used_cpu, target) &&
+	    cpus_share_cache(recent_used_cpu, target, cluster) &&
 	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
 	    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
 	    asym_fits_capacity(task_util, recent_used_cpu)) {
@@ -6321,11 +6324,11 @@  static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		}
 	}
 
-	sd = rcu_dereference(per_cpu(sd_llc, target));
+	sd = cluster ? rcu_dereference(per_cpu(sd_cluster, target)) :
+			rcu_dereference(per_cpu(sd_llc, target));
 	if (!sd)
 		return target;
-
-	i = select_idle_cpu(p, sd, target);
+	i = select_idle_cpu(p, sd, target, cluster);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
@@ -6745,6 +6748,12 @@  static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	int want_affine = 0;
 	/* SD_flags and WF_flags share the first nibble */
 	int sd_flag = wake_flags & 0xF;
+	/*
+	 * if cpu and prev_cpu share the LLC, scan the cluster sibling
+	 * rather than the whole llc. this is typically true while tasks
+	 * are bound within one numa node
+	 */
+	int cluster = sched_cluster_active() && cpus_share_cache(cpu, prev_cpu, 0);
 
 	if (wake_flags & WF_TTWU) {
 		record_wakee(p);
@@ -6756,7 +6765,7 @@  static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 			new_cpu = prev_cpu;
 		}
 
-		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
+		want_affine = !wake_wide(p, cluster) && cpumask_test_cpu(cpu, p->cpus_ptr);
 	}
 
 	rcu_read_lock();
@@ -6768,7 +6777,7 @@  static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
 			if (cpu != prev_cpu)
-				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
+				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync, cluster);
 
 			sd = NULL; /* Prefer wake_affine over balance flags */
 			break;
@@ -6785,7 +6794,7 @@  static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
 		/* Fast path */
-		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
+		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu, cluster);
 
 		if (want_affine)
 			current->recent_used_cpu = cpu;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4e938ba..b4b7d95 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1487,6 +1487,9 @@  static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DECLARE_PER_CPU(int, sd_llc_size);
 DECLARE_PER_CPU(int, sd_llc_id);
+DECLARE_PER_CPU(struct sched_domain __rcu *, sd_cluster);
+DECLARE_PER_CPU(int, sd_cluster_size);
+DECLARE_PER_CPU(int, sd_cluster_id);
 DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 829ac9d..28a2032 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -644,6 +644,9 @@  static void destroy_sched_domains(struct sched_domain *sd)
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DEFINE_PER_CPU(int, sd_llc_size);
 DEFINE_PER_CPU(int, sd_llc_id);
+DEFINE_PER_CPU(struct sched_domain __rcu *, sd_cluster);
+DEFINE_PER_CPU(int, sd_cluster_size);
+DEFINE_PER_CPU(int, sd_cluster_id);
 DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
@@ -657,6 +660,15 @@  static void update_top_cache_domain(int cpu)
 	int id = cpu;
 	int size = 1;
 
+	sd = highest_flag_domain(cpu, SD_SHARE_CLS_RESOURCES);
+	if (sd) {
+		id = cpumask_first(sched_domain_span(sd));
+		size = cpumask_weight(sched_domain_span(sd));
+	}
+	rcu_assign_pointer(per_cpu(sd_cluster, cpu), sd);
+	per_cpu(sd_cluster_size, cpu) = size;
+	per_cpu(sd_cluster_id, cpu) = id;
+
 	sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
 	if (sd) {
 		id = cpumask_first(sched_domain_span(sd));