
Schedule affinity_notify work while migrating IRQs during hot plug

Message ID 559ce4c1fef10c45eab12f65c4e0f0d9@codeaurora.org (mailing list archive)
State New, archived

Commit Message

Prasad Sodagudi March 17, 2017, 10:51 a.m. UTC
On 2017-03-13 13:19, Thomas Gleixner wrote:
> On Mon, 13 Mar 2017, Sodagudi Prasad wrote:
>> On 2017-02-27 09:21, Thomas Gleixner wrote:
>> > On Mon, 27 Feb 2017, Sodagudi Prasad wrote:
>> > > So I am thinking that adding the following schedule_work() call would notify clients.
>> >
>> > And break the world and some more.
>> >
>> > > diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
>> > > index 6b66959..5e4766b 100644
>> > > --- a/kernel/irq/manage.c
>> > > +++ b/kernel/irq/manage.c
>> > > @@ -207,6 +207,7 @@ int irq_do_set_affinity(struct irq_data *data, const
>> > > struct cpumask *mask,
>> > >         case IRQ_SET_MASK_OK_DONE:
>> > >                 cpumask_copy(desc->irq_common_data.affinity, mask);
>> > >         case IRQ_SET_MASK_OK_NOCOPY:
>> > > +               schedule_work(&desc->affinity_notify->work);
>> > >                 irq_set_thread_affinity(desc);
>> > >                 ret = 0;
>> >
>> > You cannot do that unconditionally and just slap that schedule_work() call
>> > into the code. Aside of that schedule_work() would be invoked twice for all
>> > calls which come via irq_set_affinity_locked() ....
>> Hi Tglx,
>> 
>> Yes. I agree with you, schedule_work() gets invoked twice with 
>> previous
>> change.
>> 
>> How about calling irq_set_affinity_locked() instead of
>> irq_do_set_affinity()?
> 
> Is this a quiz?
> 
> Can you actually see the difference between these functions? There is a
> damned good reason WHY this calls irq_do_set_affinity().
The other option is to add an argument to irq_do_set_affinity() and queue
the notify work when that new parameter is set. I have attached a patch
for the same.

I tested this change on an arm64 platform and observed that client
drivers are getting notified during CPU hotplug.

> Thanks,
> 
> 	tglx
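
For context on the double-queuing concern raised above: irq_set_affinity_locked()
already schedules the affinity_notify work itself, so an unconditional
schedule_work() inside irq_do_set_affinity() would fire a second time for callers
coming through that path. The following is a paraphrased sketch of the shape of
irq_set_affinity_locked() in kernel/irq/manage.c, not the verbatim source:

/*
 * Paraphrased sketch of kernel/irq/manage.c, not verbatim source.
 * The locked helper applies the mask (or marks the move pending) and
 * then schedules the affinity_notify work itself.
 */
int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
			    bool force)
{
	struct irq_desc *desc = irq_data_to_desc(data);
	int ret = 0;

	if (irq_can_move_pcntxt(data)) {
		/* An unconditional schedule_work() here would be queue #1. */
		ret = irq_do_set_affinity(data, mask, force);
	} else {
		irqd_set_move_pending(data);
		irq_copy_pending(desc, mask);
	}

	if (desc->affinity_notify) {
		kref_get(&desc->affinity_notify->kref);
		schedule_work(&desc->affinity_notify->work);	/* queue #2 */
	}
	return ret;
}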

Comments

Thomas Gleixner March 17, 2017, 1:18 p.m. UTC | #1
On Fri, 17 Mar 2017, Sodagudi Prasad wrote:
> On 2017-03-13 13:19, Thomas Gleixner wrote:
> > Can you actually see the difference between these functions? There is a
> > damned good reason WHY this calls irq_do_set_affinity().
> 
> The other option is to add an argument to irq_do_set_affinity() and queue
> the notify work when that new parameter is set. I have attached a patch
> for the same.

Documentation/process/submitting-patches.rst: Section #6
Prasad Sodagudi March 20, 2017, 4:36 p.m. UTC | #2
The last argument of irq_do_set_affinity() indicates whether the notify work
needs to be queued for this IRQ, so that we avoid queuing the notification
twice in the irq_set_affinity_locked() path.
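
For reference, the clients being notified here are drivers that registered an
irq_affinity_notify callback via irq_set_affinity_notifier(). A minimal sketch
of such a client follows, assuming a hypothetical driver that already owns
my_irq (my_irq, my_notify and the function names are illustrative only):

#include <linux/interrupt.h>
#include <linux/kref.h>
#include <linux/printk.h>

/* Called from the notify workqueue whenever the IRQ's affinity changes. */
static void my_affinity_notify(struct irq_affinity_notify *notify,
			       const cpumask_t *mask)
{
	pr_info("IRQ %u affinity is now %*pbl\n",
		notify->irq, cpumask_pr_args(mask));
}

/* Drop the notifier reference; nothing dynamic to free in this sketch. */
static void my_affinity_release(struct kref *ref)
{
}

static struct irq_affinity_notify my_notify = {
	.notify  = my_affinity_notify,
	.release = my_affinity_release,
};

/* Register for affinity-change notifications on an IRQ the driver owns. */
static int my_driver_register_notifier(unsigned int my_irq)
{
	return irq_set_affinity_notifier(my_irq, &my_notify);
}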

Patch

From 54b8d5164126fbdf14d1a9586342b972a6eb5537 Mon Sep 17 00:00:00 2001
From: Prasad Sodagudi <psodagud@codeaurora.org>
Date: Thu, 16 Mar 2017 23:44:44 -0700
Subject: [PATCH] genirq: Notify clients whenever there is change in affinity

During CPU hotplug, IRQs are migrated off the outgoing core, but client
drivers are not notified of the affinity change. Add a parameter to
irq_do_set_affinity() so that client drivers can be notified when IRQs
are migrated during CPU hotplug.

Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
---
 kernel/irq/cpuhotplug.c | 2 +-
 kernel/irq/internals.h  | 2 +-
 kernel/irq/manage.c     | 9 ++++++---
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 011f8c4..e293d9b 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -38,7 +38,7 @@  static bool migrate_one_irq(struct irq_desc *desc)
 	if (!c->irq_set_affinity) {
 		pr_debug("IRQ%u: unable to set affinity\n", d->irq);
 	} else {
-		int r = irq_do_set_affinity(d, affinity, false);
+		int r = irq_do_set_affinity(d, affinity, false, true);
 		if (r)
 			pr_warn_ratelimited("IRQ%u: set affinity failed(%d).\n",
 					    d->irq, r);
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index bc226e7..6abde48 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -114,7 +114,7 @@  static inline void unregister_handler_proc(unsigned int irq,
 extern void irq_set_thread_affinity(struct irq_desc *desc);
 
 extern int irq_do_set_affinity(struct irq_data *data,
-			       const struct cpumask *dest, bool force);
+		const struct cpumask *dest, bool force, bool notify);
 
 /* Inline functions for support of irq chips on slow busses */
 static inline void chip_bus_lock(struct irq_desc *desc)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index a4afe5c..aef8a96 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -197,7 +197,7 @@  static inline bool irq_move_pending(struct irq_data *data)
 #endif
 
 int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
-			bool force)
+			bool force, bool notify)
 {
 	struct irq_desc *desc = irq_data_to_desc(data);
 	struct irq_chip *chip = irq_data_get_irq_chip(data);
@@ -209,6 +209,9 @@  int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
 	case IRQ_SET_MASK_OK_DONE:
 		cpumask_copy(desc->irq_common_data.affinity, mask);
 	case IRQ_SET_MASK_OK_NOCOPY:
+		if (notify)
+			schedule_work(&desc->affinity_notify->work);
+
 		irq_set_thread_affinity(desc);
 		ret = 0;
 	}
@@ -227,7 +230,7 @@  int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
 		return -EINVAL;
 
 	if (irq_can_move_pcntxt(data)) {
-		ret = irq_do_set_affinity(data, mask, force);
+		ret = irq_do_set_affinity(data, mask, force, false);
 	} else {
 		irqd_set_move_pending(data);
 		irq_copy_pending(desc, mask);
@@ -375,7 +378,7 @@  static int setup_affinity(struct irq_desc *desc, struct cpumask *mask)
 		if (cpumask_intersects(mask, nodemask))
 			cpumask_and(mask, mask, nodemask);
 	}
-	irq_do_set_affinity(&desc->irq_data, mask, false);
+	irq_do_set_affinity(&desc->irq_data, mask, false, true);
 	return 0;
 }
 #else
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project