
[2/6] irq/affinity: Assign offline CPUs a vector

Message ID 1483569671-1462-3-git-send-email-keith.busch@intel.com (mailing list archive)
State New, archived
Headers show

Commit Message

Keith Busch Jan. 4, 2017, 10:41 p.m. UTC
The offline CPUs need to be assigned to something in case they come online
later; otherwise anyone using the mapping for things other than affinity
will have blank entries for those CPUs.

Signed-off-by: Keith Busch <keith.busch@intel.com>
---
 kernel/irq/affinity.c | 8 ++++++++
 1 file changed, 8 insertions(+)

Comments

Christoph Hellwig Jan. 8, 2017, 10:01 a.m. UTC | #1
On Wed, Jan 04, 2017 at 05:41:07PM -0500, Keith Busch wrote:
> The offline CPUs need to be assigned to something in case they come online
> later; otherwise anyone using the mapping for things other than affinity
> will have blank entries for those CPUs.

I don't really like the idea behind it.  Back when we came up with
this code I had some discussion with Thomas if we should do the
assignment only for online CPUs, or maybe for all possible CPUs.

Except for some big iron with physical node hotplug the difference
usually is just that some nodes have been temporarily offlined.  So
maybe we need to bite the bullet and move the irq and block code
to consider all possible cpus.
--
To unsubscribe from this list: send the line "unsubscribe linux-block" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Sagi Grimberg Jan. 13, 2017, 8:26 p.m. UTC | #2
>> The offline CPUs need to be assigned to something in case they come online
>> later; otherwise anyone using the mapping for things other than affinity
>> will have blank entries for those CPUs.
>
> I don't really like the idea behind it.  Back when we came up with
> this code I had some discussion with Thomas if we should do the
> assignment only for online CPUs, or maybe for all possible CPUs.
>
> Except for some big iron with physical node hotplug the difference
> usually is just that some nodes have been temporarily offlined.  So
> maybe we need to bite the bullet and move the irq and block code
> to consider all possible cpus.

I tend to agree.

Would be great if we could have a cpuhp_register() for external users
that would get ops + state mask structure to act on (this case would
need CPUHP_OFFLINE, CPUHP_ONLINE). We do have something similar
for mmu (mmu_notifier).

Patch

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index b25dce0..2367531 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -129,6 +129,14 @@  irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	}
 
 done:
+	/*
+	 * Assign offline CPUs to the first mask in case they come online
+	 * later. A driver can rerun this from a cpu notifier if it wants a
+	 * more optimal spread.
+	 */
+	cpumask_andnot(nmsk, cpu_possible_mask, cpu_online_mask);
+	irq_spread_init_one(masks, nmsk, cpumask_weight(nmsk));
+
 	put_online_cpus();
 
 	/* Fill out vectors at the end that don't need affinity */