
[v2] irqchip/gic-v3-its: fix ITS queue timeout

Message ID 1528252824-15144-1-git-send-email-yangyingliang@huawei.com
State New, archived

Commit Message

Yang Yingliang June 6, 2018, 2:40 a.m. UTC
When the kernel is booted with maxcpus=x, where 'x' is smaller
than the actual number of CPUs, the TAs of the offline CPUs won't
be set in its->collection.

If an LPI is bound to an offline CPU, the sync command will use a
zero TA, which leads to an ITS queue timeout.  Fix this by choosing
an online CPU if there is no online CPU in cpu_mask.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/irqchip/irq-gic-v3-its.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
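
To make the failure mode and the proposed fallback concrete, here is a
minimal user-space model of the affected logic. It is not the driver
code: the masks are plain integers, the CPU counts are invented, and
first_and()/first() merely mimic the kernel's cpumask_first_and() and
cpumask_first().

#include <stdio.h>

#define NR_CPUS 8			/* model: 8 possible CPUs */

/* Mimics cpumask_first_and(): first CPU set in both masks, or NR_CPUS. */
static int first_and(unsigned int a, unsigned int b)
{
	unsigned int both = a & b;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (both & (1u << cpu))
			return cpu;
	return NR_CPUS;			/* like returning nr_cpu_ids */
}

/* Mimics cpumask_first(). */
static int first(unsigned int m)
{
	return first_and(m, ~0u);
}

int main(void)
{
	/*
	 * Booted with maxcpus=4: CPUs 0-3 are online, while the ITS's
	 * NUMA node contains only CPUs 4-7, which were never brought up
	 * and whose collections therefore keep a zero target address.
	 */
	unsigned int online_mask = 0x0f;	/* CPUs 0-3 */
	unsigned int node_mask   = 0xf0;	/* CPUs 4-7 */

	/* Old code: picks offline CPU 4 -> SYNC with TA 0 -> timeout. */
	printf("old: CPU %d (offline)\n", first(node_mask));

	/* Patched code: restrict to online CPUs, fall back if empty. */
	int cpu = first_and(node_mask, online_mask);
	if (cpu >= NR_CPUS)
		cpu = first(online_mask);
	printf("new: CPU %d (online, but off-node)\n", cpu);
	return 0;
}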

Comments

Marc Zyngier June 6, 2018, 9:13 a.m. UTC | #1
On Wed, 06 Jun 2018 03:40:24 +0100,
Yang Yingliang wrote:

[I'm travelling, so please do not expect any quick answer...]

> 
> When the kernel is booted with maxcpus=x, where 'x' is smaller
> than the actual number of CPUs, the TAs of the offline CPUs won't

TA? Target Address? Target Affinity? Timing Advance? Terrible Acronym?

> be set in its->collection.
> 
> If an LPI is bound to an offline CPU, the sync command will use a
> zero TA, which leads to an ITS queue timeout.  Fix this by choosing
> an online CPU if there is no online CPU in cpu_mask.

So instead of fixing the emission of a sync command on a non-mapped
collection, you hack set_affinity? It doesn't feel like the right
thing to do.

It is also worth noticing that mapping an LPI to a collection that is
not mapped yet is perfectly legal.
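
For what it's worth, the direction hinted at above, guarding the SYNC
emission itself, might look roughly like the model below. This is a
sketch, not the actual driver change: the struct is reduced to a single
field, and its_col_is_mapped()/its_send_sync() are hypothetical names.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Reduced stand-in for the driver's collection: the target address
 * stays zero until the collection has actually been mapped. */
struct its_collection {
	uint64_t target_address;
};

/* Hypothetical helper: is this collection usable as a SYNC target? */
static bool its_col_is_mapped(const struct its_collection *col)
{
	return col && col->target_address;
}

/* Model of the command path: skip the SYNC (instead of emitting one
 * with TA 0, which the ITS never completes) for unmapped collections. */
static void its_send_sync(const struct its_collection *col)
{
	if (!its_col_is_mapped(col)) {
		printf("collection not mapped, SYNC skipped\n");
		return;
	}
	printf("SYNC -> TA 0x%llx\n",
	       (unsigned long long)col->target_address);
}

int main(void)
{
	struct its_collection mapped   = { .target_address = 0x80000 };
	struct its_collection unmapped = { .target_address = 0 };

	its_send_sync(&mapped);		/* emitted normally */
	its_send_sync(&unmapped);	/* skipped instead of timing out */
	return 0;
}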

> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
> ---
>  drivers/irqchip/irq-gic-v3-its.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> index 5416f2b..d8b9539 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c
> @@ -2309,7 +2309,9 @@ static int its_irq_domain_activate(struct irq_domain *domain,
>  		cpu_mask = cpumask_of_node(its_dev->its->numa_node);
>  
>  	/* Bind the LPI to the first possible CPU */
> -	cpu = cpumask_first(cpu_mask);
> +	cpu = cpumask_first_and(cpu_mask, cpu_online_mask);
> +	if (cpu >= nr_cpu_ids)
> +		cpu = cpumask_first(cpu_online_mask);

Now you're completely ignoring cpu_mask, which constrains the NUMA
affinity. On some systems, this ends up in a deadlock (Cavium TX1,
if I remember correctly).

Wouldn't it be better to just return that the affinity setting request
is impossible to satisfy? And more to the point, how come we end up
in such a case?

Thanks,

	M.
Hanjun Guo June 7, 2018, 12:25 p.m. UTC | #2
Hi Marc,

On 2018/6/6 17:13, Marc Zyngier wrote:
[...]
> 
> Wouldn't it be better to just return that the affinity setting request
> is impossible to satisfy? And more to the point, how come we end up
> in such a case?

The system is booted with a NUMA node that has no memory attached to it
(a memory-less NUMA node), and with NR_CPUS less than the number of CPUs
presented in the MADT, so the CPUs on this memory-less node are not
brought up and the node itself never comes online. But the ITS attached
to this NUMA node is still valid and is reported to the ITS driver via
the SRAT.

This is really a corner case, triggered by boot testing when bringing
up our D06 boards, but it's a bug :)

Thanks
Hanjun
Marc Zyngier June 10, 2018, 10:40 a.m. UTC | #3
Hi Hanjun,

On Thu, 07 Jun 2018 13:25:26 +0100,
Hanjun Guo wrote:
> 
> Hi Marc,
> 
> On 2018/6/6 17:13, Marc Zyngier wrote:
> [...]
> > 
> > Wouldn't it be better to just return that the affinity setting request
> > is impossible to satisfy? And more to the point, how come we end up
> > in such a case?
> 
> The system is booted with a NUMA node that has no memory attached to it
> (a memory-less NUMA node), and with NR_CPUS less than the number of CPUs
> presented in the MADT, so the CPUs on this memory-less node are not
> brought up and the node itself never comes online. But the ITS attached
> to this NUMA node is still valid and is reported to the ITS driver via
> the SRAT.
> 
> This is really a corner case, triggered by boot testing when bringing
> up our D06 boards, but it's a bug :)

I'm not debating the bringing up (or lack thereof) of the secondary
CPUs. I'm questioning the affinity setting to unavailable CPUs, and I
really wonder what the semantics of such a thing are (and how we end
up there).

Anyway, I'll plug the "SYNC to unmapped collection" issue (which
definitely needs fixing), but I'd like to understand the above.

Thanks,

	M.

Patch

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 5416f2b..d8b9539 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -2309,7 +2309,9 @@ static int its_irq_domain_activate(struct irq_domain *domain,
 		cpu_mask = cpumask_of_node(its_dev->its->numa_node);
 
 	/* Bind the LPI to the first possible CPU */
-	cpu = cpumask_first(cpu_mask);
+	cpu = cpumask_first_and(cpu_mask, cpu_online_mask);
+	if (cpu >= nr_cpu_ids)
+		cpu = cpumask_first(cpu_online_mask);
 	its_dev->event_map.col_map[event] = cpu;
 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
 
@@ -2466,7 +2468,10 @@ static int its_vpe_set_affinity(struct irq_data *d,
 				bool force)
 {
 	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
-	int cpu = cpumask_first(mask_val);
+	int cpu = cpumask_first_and(mask_val, cpu_online_mask);
+
+	if (cpu >= nr_cpu_ids)
+		cpu = cpumask_first(cpu_online_mask);
 
 	/*
 	 * Changing affinity is mega expensive, so let's be as lazy as