
arm64: topology: Avoid checking numa mask for scheduler MC selection

Message ID 20180605190837.493505-1-jeremy.linton@arm.com (mailing list archive)
State Not Applicable, archived

Commit Message

Jeremy Linton June 5, 2018, 7:08 p.m. UTC
The numa mask subset check has problems if !CONFIG_NUMA, over hotplug
operations or during early boot. Lets disable the NUMA siblings checks
for the time being, as NUMA in socket machines have LLC's that will
assure that the scheduler topology isn't "borken".

Futher, as a defensive mechanism during hotplug, lets assure that the
LLC siblings are also masked.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/topology.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

Comments

Sudeep Holla June 6, 2018, 10:05 a.m. UTC | #1
On 05/06/18 20:08, Jeremy Linton wrote:
> The numa mask subset check has problems if !CONFIG_NUMA, over hotplug
> operations or during early boot. Lets disable the NUMA siblings checks
> for the time being, as NUMA in socket machines have LLC's that will
> assure that the scheduler topology isn't "borken".
> 

^ broken ? (not sure if usage of borken is intentional :))

> Futher, as a defensive mechanism during hotplug, lets assure that the

^ Further

> LLC siblings are also masked.
>

Also add the symptoms of the issue we saw, as Geert suggested to me.
Something like:
" This often leads to system hang or crash during CPU hotplug and system
suspend operation. This is mostly observed on HMP systems where the
CPU compute capacities are different and ends up in different scheduler
domains. Since cpumask_of_node is returned instead core_sibling, the
scheduler is confused with incorrect cpumasks(e.g. one CPU in two
different sched domains at the same time) on CPU hotplug."

You can add Reported-by: Geert... ?

> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
> ---
>  arch/arm64/kernel/topology.c | 11 ++++-------
>  1 file changed, 4 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> index 7415c166281f..f845a8617812 100644
> --- a/arch/arm64/kernel/topology.c
> +++ b/arch/arm64/kernel/topology.c
> @@ -215,13 +215,8 @@ EXPORT_SYMBOL_GPL(cpu_topology);
>  
>  const struct cpumask *cpu_coregroup_mask(int cpu)
>  {
> -	const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));
> +	const cpumask_t *core_mask = &cpu_topology[cpu].core_sibling;
>  
> -	/* Find the smaller of NUMA, core or LLC siblings */
> -	if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask)) {
> -		/* not numa in package, lets use the package siblings */
> -		core_mask = &cpu_topology[cpu].core_sibling;
> -	}
>  	if (cpu_topology[cpu].llc_id != -1) {
>  		if (cpumask_subset(&cpu_topology[cpu].llc_siblings, core_mask))
>  			core_mask = &cpu_topology[cpu].llc_siblings;
> @@ -239,8 +234,10 @@ static void update_siblings_masks(unsigned int cpuid)
>  	for_each_possible_cpu(cpu) {
>  		cpu_topo = &cpu_topology[cpu];
>  
> -		if (cpuid_topo->llc_id == cpu_topo->llc_id)
> +		if (cpuid_topo->llc_id == cpu_topo->llc_id) {
>  			cpumask_set_cpu(cpu, &cpuid_topo->llc_siblings);
> +			cpumask_set_cpu(cpuid, &cpu_topo->llc_siblings);
> +		}
>  
>  		if (cpuid_topo->package_id != cpu_topo->package_id)
>  			continue;
> 

Looks good to me for now. I might need to tweak it a bit when I add the
support to update the topology on hotplug. But that's for later. For now,

Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
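
A minimal, hypothetical sketch of the kind of hotplug update alluded to
above: clearing a departing CPU out of every other CPU's sibling masks on
hot-remove so that no stale bits survive a hotplug cycle. The helper name
and exact hook point are assumptions, not part of this patch:

/*
 * Hypothetical hot-remove counterpart (not part of this patch): clear
 * the departing CPU from its siblings' masks so nothing stale is left
 * behind when the CPU is later plugged back in.
 */
static void clear_cpu_topology_masks(unsigned int cpu)
{
	struct cpu_topology *cpu_topo = &cpu_topology[cpu];
	int sibling;

	for_each_cpu(sibling, &cpu_topo->core_sibling)
		cpumask_clear_cpu(cpu, &cpu_topology[sibling].core_sibling);
	for_each_cpu(sibling, &cpu_topo->thread_sibling)
		cpumask_clear_cpu(cpu, &cpu_topology[sibling].thread_sibling);
	for_each_cpu(sibling, &cpu_topo->llc_siblings)
		cpumask_clear_cpu(cpu, &cpu_topology[sibling].llc_siblings);
}

Something along these lines would pair with the symmetric cpumask_set_cpu()
added to update_siblings_masks() by this patch.
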
Jeremy Linton June 6, 2018, 2:36 p.m. UTC | #2
On 06/06/2018 05:05 AM, Sudeep Holla wrote:
> 
> 
> On 05/06/18 20:08, Jeremy Linton wrote:
>> The numa mask subset check has problems if !CONFIG_NUMA, over hotplug
>> operations or during early boot. Lets disable the NUMA siblings checks
>> for the time being, as NUMA in socket machines have LLC's that will
>> assure that the scheduler topology isn't "borken".
>>
> 
> ^ broken ? (not sure if usage of borken is intentional :))

Well, that is what the scheduler says when it doesn't like the topology.

> 
>> Futher, as a defensive mechanism during hotplug, lets assure that the
> 
> ^ Further

Sure.

> 
>> LLC siblings are also masked.
>>
> 
> Also add the symptoms of the issue we saw, as Geert suggested to me.
> Something like:
> " This often leads to system hang or crash during CPU hotplug and system
> suspend operation. This is mostly observed on HMP systems where the
> CPU compute capacities are different and ends up in different scheduler
> domains. Since cpumask_of_node is returned instead core_sibling, the
> scheduler is confused with incorrect cpumasks(e.g. one CPU in two
> different sched domains at the same time) on CPU hotplug."
> 
> You can add Reported-by: Geert... ?

ok.

> 
>> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>> ---
>>   arch/arm64/kernel/topology.c | 11 ++++-------
>>   1 file changed, 4 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
>> index 7415c166281f..f845a8617812 100644
>> --- a/arch/arm64/kernel/topology.c
>> +++ b/arch/arm64/kernel/topology.c
>> @@ -215,13 +215,8 @@ EXPORT_SYMBOL_GPL(cpu_topology);
>>   
>>   const struct cpumask *cpu_coregroup_mask(int cpu)
>>   {
>> -	const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));
>> +	const cpumask_t *core_mask = &cpu_topology[cpu].core_sibling;
>>   
>> -	/* Find the smaller of NUMA, core or LLC siblings */
>> -	if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask)) {
>> -		/* not numa in package, lets use the package siblings */
>> -		core_mask = &cpu_topology[cpu].core_sibling;
>> -	}
>>   	if (cpu_topology[cpu].llc_id != -1) {
>>   		if (cpumask_subset(&cpu_topology[cpu].llc_siblings, core_mask))
>>   			core_mask = &cpu_topology[cpu].llc_siblings;
>> @@ -239,8 +234,10 @@ static void update_siblings_masks(unsigned int cpuid)
>>   	for_each_possible_cpu(cpu) {
>>   		cpu_topo = &cpu_topology[cpu];
>>   
>> -		if (cpuid_topo->llc_id == cpu_topo->llc_id)
>> +		if (cpuid_topo->llc_id == cpu_topo->llc_id) {
>>   			cpumask_set_cpu(cpu, &cpuid_topo->llc_siblings);
>> +			cpumask_set_cpu(cpuid, &cpu_topo->llc_siblings);
>> +		}
>>   
>>   		if (cpuid_topo->package_id != cpu_topo->package_id)
>>   			continue;
>>
> 
> Looks good to me for now. I might need to tweak it a bit when I add the
> support to update the topology on hotplug. But that's for later. For now,
> 
> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
> 

Morten Rasmussen June 6, 2018, 2:44 p.m. UTC | #3
On Tue, Jun 05, 2018 at 02:08:37PM -0500, Jeremy Linton wrote:
> The numa mask subset check has problems if !CONFIG_NUMA, over hotplug
> operations or during early boot. Lets disable the NUMA siblings checks
> for the time being, as NUMA in socket machines have LLC's that will
> assure that the scheduler topology isn't "borken".

Could we add an explanation why the numa node mask check is needed in
the first place?

IIUC, we have the check in case the LLC is shared across numa nodes as
this would cause core_siblings > cpumask_of_node() which breaks the
scheduler topology.

While sharing LLC across numa nodes seems quite unusual, I think it is
allowed by ACPI. Those systems might already have been broken before, so
this might not change anything. It is just worth noting why the check should be
added back later.
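
For illustration only, a minimal sketch of one way the check could be
re-added later, guarded so that an unpopulated node mask (e.g. !CONFIG_NUMA
or early boot) cannot shrink the core mask. The guard and structure are
assumptions, not part of this patch:

const struct cpumask *cpu_coregroup_mask(int cpu)
{
	/* Package siblings are always valid, start from those */
	const cpumask_t *core_mask = &cpu_topology[cpu].core_sibling;

#ifdef CONFIG_NUMA
	const cpumask_t *node_mask = cpumask_of_node(cpu_to_node(cpu));

	/* Only narrow to the NUMA node once its mask is populated */
	if (!cpumask_empty(node_mask) && cpumask_subset(node_mask, core_mask))
		core_mask = node_mask;
#endif

	/* An LLC that is smaller still takes precedence */
	if (cpu_topology[cpu].llc_id != -1 &&
	    cpumask_subset(&cpu_topology[cpu].llc_siblings, core_mask))
		core_mask = &cpu_topology[cpu].llc_siblings;

	return core_mask;
}
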

Morten
Jeremy Linton June 6, 2018, 2:53 p.m. UTC | #4
On 06/06/2018 09:44 AM, Morten Rasmussen wrote:
> On Tue, Jun 05, 2018 at 02:08:37PM -0500, Jeremy Linton wrote:
>> The numa mask subset check has problems if !CONFIG_NUMA, over hotplug
>> operations or during early boot. Lets disable the NUMA siblings checks
>> for the time being, as NUMA in socket machines have LLC's that will
>> assure that the scheduler topology isn't "borken".
> 
> Could we add an explanation why the numa node mask check is needed in
> the first place?
>
> IIUC, we have the check in case the LLC is shared across numa nodes as
> this would cause core_siblings > cpumask_of_node() which breaks the
> scheduler topology.

Yes, that sounds like a good idea, my comments probably assume that the 
reader has been part of these conversations.


> 
> While sharing LLC across numa nodes seems quite unusual, I think it is
> allowed by ACPI. Those systems might already have been broken before, so
> this might not change anything. It is just worth noting why the check should be
> added back later.

Right, there isn't anything in ACPI that dictates a system topology 
restriction like this. Given that other architectures have built 
machines with large directory caches that span numa nodes, the check was 
a safety measure.


Morten Rasmussen June 6, 2018, 3:07 p.m. UTC | #5
On Wed, Jun 06, 2018 at 09:53:39AM -0500, Jeremy Linton wrote:
> On 06/06/2018 09:44 AM, Morten Rasmussen wrote:
> >On Tue, Jun 05, 2018 at 02:08:37PM -0500, Jeremy Linton wrote:
> >>The numa mask subset check has problems if !CONFIG_NUMA, over hotplug
> >>operations or during early boot. Lets disable the NUMA siblings checks
> >>for the time being, as NUMA in socket machines have LLC's that will
> >>assure that the scheduler topology isn't "borken".
> >
> >Could we add an explanation why the numa node mask check is needed in
> >the first place?
> >
> >IIUC, we have the check in case the LLC is shared across numa nodes as
> >this would cause core_siblings > cpumask_of_node() which breaks the
> >scheduler topology.
> 
> Yes, that sounds like a good idea, my comments probably assume that the
> reader has been part of these conversations.
> 
> >
> >While sharing LLC across numa nodes seems quite unusual, I think it is
> >allowed by ACPI. Those systems might already have been broken before, so
> >this might not change anything. It is just worth noting why the check should be
> >added back later.
> 
> Right, there isn't anything in ACPI that dictates a system topology
> restriction like this. Given that other architectures have built machines
> with large directory caches that span numa nodes, the check was a safety
> measure.

Agreed, it seems that another architecture has recently merged support
for that: 1340ccfa9a9a

Patch

diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 7415c166281f..f845a8617812 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -215,13 +215,8 @@  EXPORT_SYMBOL_GPL(cpu_topology);
 
 const struct cpumask *cpu_coregroup_mask(int cpu)
 {
-	const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));
+	const cpumask_t *core_mask = &cpu_topology[cpu].core_sibling;
 
-	/* Find the smaller of NUMA, core or LLC siblings */
-	if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask)) {
-		/* not numa in package, lets use the package siblings */
-		core_mask = &cpu_topology[cpu].core_sibling;
-	}
 	if (cpu_topology[cpu].llc_id != -1) {
 		if (cpumask_subset(&cpu_topology[cpu].llc_siblings, core_mask))
 			core_mask = &cpu_topology[cpu].llc_siblings;
@@ -239,8 +234,10 @@  static void update_siblings_masks(unsigned int cpuid)
 	for_each_possible_cpu(cpu) {
 		cpu_topo = &cpu_topology[cpu];
 
-		if (cpuid_topo->llc_id == cpu_topo->llc_id)
+		if (cpuid_topo->llc_id == cpu_topo->llc_id) {
 			cpumask_set_cpu(cpu, &cpuid_topo->llc_siblings);
+			cpumask_set_cpu(cpuid, &cpu_topo->llc_siblings);
+		}
 
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;