[1/4] wlcore: Use spin_trylock in wlcore_irq_locked() for running the queue
diff mbox series

Message ID 20200617212505.62519-2-tony@atomide.com
State New
Series
  • Improvements for wlcore irq and resume for v5.9

Commit Message

Tony Lindgren June 17, 2020, 9:25 p.m. UTC
We need the spinlock to check if we need to run the queue. Let's use
spin_trylock instead and always run the queue unless we know there's
nothing to do.

Signed-off-by: Tony Lindgren <tony@atomide.com>
---
 drivers/net/wireless/ti/wlcore/main.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

Comments

Kalle Valo June 22, 2020, 2:14 p.m. UTC | #1
Tony Lindgren <tony@atomide.com> writes:

> We need the spinlock to check if we need to run the queue. Let's use
> spin_trylock instead and always run the queue unless we know there's
> nothing to do.

Why? What's the problem you are solving here?
Tony Lindgren June 22, 2020, 4:06 p.m. UTC | #2
* Kalle Valo <kvalo@codeaurora.org> [200622 14:15]:
> Tony Lindgren <tony@atomide.com> writes:
> 
> > We need the spinlock to check if we need to run the queue. Let's use
> > spin_trylock instead and always run the queue unless we know there's
> > nothing to do.
> 
> Why? What's the problem you are solving here?

To simplify the flags and locking use between the threaded irq
and tx work.

While chasing an occasional hang with an idle wlan doing just
periodic network scans, I noticed we can start simplifying the
locking between the threaded irq and tx work for the driver.

No luck so far figuring out what the occasional idle wlan hang is,
but I suspect we end up somewhere in a deadlock between tx work
and the threaded irq.

We currently have a collection of flags and locking between the
threaded irq and tx work:

- wl->flags bitops
- wl->mutex
- wl->wl_lock spinlock

The bitops flags do not need a spinlock around them, and
wlcore_irq() already holds the mutex calling wlcore_irq_locked().
And we only need the spinlock to see if we need to run the queue
or not.

So I think eventually we can remove most of the spinlock use in
favor of the mutex. I guess I could leave out the trylock changes
here if this is too many changes at once.

Or do you see some problem in general with this approach?

Regards,

Tony
Kalle Valo June 23, 2020, 6:41 a.m. UTC | #3
Tony Lindgren <tony@atomide.com> writes:

> * Kalle Valo <kvalo@codeaurora.org> [200622 14:15]:
>> Tony Lindgren <tony@atomide.com> writes:
>> 
>> > We need the spinlock to check if we need to run the queue. Let's use
>> > spin_trylock instead and always run the queue unless we know there's
>> > nothing to do.
>> 
>> Why? What's the problem you are solving here?
>
> To simplify the flags and locking use between the threaded irq
> and tx work.
>
> While chasing an occasional hang with an idle wlan doing just
> periodic network scans, I noticed we can start simplifying the
> locking between the threaded irq and tx work for the driver.
>
> No luck so far figuring out what the occasional idle wlan hang is,
> but I suspect we end up somewhere in a deadlock between tx work
> and the threaded irq.
>
> We currently have a collection of flags and locking between the
> threaded irq and tx work:
>
> - wl->flags bitops
> - wl->mutex
> - wl->wl_lock spinlock
>
> The bitops flags do not need a spinlock around them, and
> wlcore_irq() already holds the mutex calling wlcore_irq_locked().
> And we only need the spinlock to see if we need to run the queue
> or not.
>
> So I think eventually we can remove most of the spinlock use in
> favor of the mutex. I guess I could leave out the trylock changes
> here if this is too many changes at once.
>
> Or do you see some problem in general with this approach?

My only problem was lack of background information in the commit logs.
Conditional locking is tricky and I didn't figure out why you are doing
that and why it's safe to do. So if you could send v2 with the
information above in the commit log I would be happy.
Tony Lindgren June 23, 2020, 6:48 p.m. UTC | #4
* Kalle Valo <kvalo@codeaurora.org> [200623 06:46]:
> Tony Lindgren <tony@atomide.com> writes:
> 
> > * Kalle Valo <kvalo@codeaurora.org> [200622 14:15]:
> >> Tony Lindgren <tony@atomide.com> writes:
> >> 
> >> > We need the spinlock to check if we need to run the queue. Let's use
> >> > spin_trylock instead and always run the queue unless we know there's
> >> > nothing to do.
> >> 
> >> Why? What's the problem you are solving here?
> >
> > To simplify the flags and locking use between the threaded irq
> > and tx work.
> >
> > While chasing an occasional hang with an idle wlan doing just
> > periodic network scans, I noticed we can start simplifying the
> > locking between the threaded irq and tx work for the driver.
> >
> > No luck so far figuring out what the occasional idle wlan hang is,
> > but I suspect we end up somewhere in a deadlock between tx work
> > and the threaded irq.
> >
> > We currently have a collection of flags and locking between the
> > threaded irq and tx work:
> >
> > - wl->flags bitops
> > - wl->mutex
> > - wl->wl_lock spinlock
> >
> > The bitops flags do not need a spinlock around them, and
> > wlcore_irq() already holds the mutex calling wlcore_irq_locked().
> > And we only need the spinlock to see if we need to run the queue
> > or not.
> >
> > So I think eventually we can remove most of the spinlock use in
> > favor of the mutex. I guess I could leave out the trylock changes
> > here if this is too many changes at once.
> >
> > Or do you see some problem in general with this approach?
> 
> My only problem was lack of background information in the commit logs.
> Conditional locking is tricky and I didn't figure out why you are doing
> that and why it's safe to do. So if you could send v2 with the
> information above in the commit log I would be happy.

OK. I'll update the description for the patches and resend.

Thanks,

Tony

Patch

diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
--- a/drivers/net/wireless/ti/wlcore/main.c
+++ b/drivers/net/wireless/ti/wlcore/main.c
@@ -521,6 +521,7 @@  static int wlcore_irq_locked(struct wl1271 *wl)
 	int ret = 0;
 	u32 intr;
 	int loopcount = WL1271_IRQ_MAX_LOOPS;
+	bool run_tx_queue = true;
 	bool done = false;
 	unsigned int defer_count;
 	unsigned long flags;
@@ -586,19 +587,22 @@  static int wlcore_irq_locked(struct wl1271 *wl)
 				goto err_ret;
 
 			/* Check if any tx blocks were freed */
-			spin_lock_irqsave(&wl->wl_lock, flags);
-			if (!test_bit(WL1271_FLAG_FW_TX_BUSY, &wl->flags) &&
-			    wl1271_tx_total_queue_count(wl) > 0) {
-				spin_unlock_irqrestore(&wl->wl_lock, flags);
+			if (!test_bit(WL1271_FLAG_FW_TX_BUSY, &wl->flags)) {
+				if (spin_trylock_irqsave(&wl->wl_lock, flags)) {
+					if (!wl1271_tx_total_queue_count(wl))
+						run_tx_queue = false;
+					spin_unlock_irqrestore(&wl->wl_lock, flags);
+				}
+
 				/*
 				 * In order to avoid starvation of the TX path,
 				 * call the work function directly.
 				 */
-				ret = wlcore_tx_work_locked(wl);
-				if (ret < 0)
-					goto err_ret;
-			} else {
-				spin_unlock_irqrestore(&wl->wl_lock, flags);
+				if (run_tx_queue) {
+					ret = wlcore_tx_work_locked(wl);
+					if (ret < 0)
+						goto err_ret;
+				}
 			}
 
 			/* check for tx results */