diff mbox series

[net-next] net: ethernet: mtk_wed: get rid of queue lock for rx queue

Message ID bff65ff7f9a269b8a066cae0095b798ad5b37065.1673102426.git.lorenzo@kernel.org (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Headers show
Series [net-next] net: ethernet: mtk_wed: get rid of queue lock for rx queue | expand

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers warning 3 maintainers not CCed: matthias.bgg@gmail.com linux-arm-kernel@lists.infradead.org linux-mediatek@lists.infradead.org
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 28 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Lorenzo Bianconi Jan. 7, 2023, 2:41 p.m. UTC
mtk_wed_wo_queue_rx_clean and mtk_wed_wo_queue_refill routines can't run
concurrently so get rid of spinlock for rx queues.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ----
 1 file changed, 4 deletions(-)

Comments

Alexander Duyck Jan. 10, 2023, 12:50 a.m. UTC | #1
On Sat, 2023-01-07 at 15:41 +0100, Lorenzo Bianconi wrote:
> mtk_wed_wo_queue_rx_clean and mtk_wed_wo_queue_refill routines can't run
> concurrently so get rid of spinlock for rx queues.
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>

My understanding is that mtk_wed_wo_queue_refill will only be called
during init and by the tasklet. The mtk_wed_wo_queue_rx_clean function
is only called during deinit and only after the tasklet has been
disabled. That is the reason they cannot run at the same time correct?

It would be nice if you explained why they couldn't run concurrently
rather than just stating it is so in the patch description. It makes it
easier to verify assumptions that way. Otherwise the patch itself looks
good to me.

Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Jakub Kicinski Jan. 10, 2023, 3:37 a.m. UTC | #2
On Mon, 09 Jan 2023 16:50:55 -0800 Alexander H Duyck wrote:
> On Sat, 2023-01-07 at 15:41 +0100, Lorenzo Bianconi wrote:
> > mtk_wed_wo_queue_rx_clean and mtk_wed_wo_queue_refill routines can't run
> > concurrently so get rid of spinlock for rx queues.

You say "for rx queues" but mtk_wed_wo_queue_refill() is also called
for tx queues.

> My understanding is that mtk_wed_wo_queue_refill will only be called
> during init and by the tasklet. The mtk_wed_wo_queue_rx_clean function
> is only called during deinit and only after the tasklet has been
> disabled. That is the reason they cannot run at the same time correct?
> 
> It would be nice if you explained why they couldn't run concurrently
> rather than just stating it is so in the patch description. It makes it
> easier to verify assumptions that way. Otherwise the patch itself looks
> good to me.

Agreed, please respin with a better commit message.
Lorenzo Bianconi Jan. 10, 2023, 9:09 a.m. UTC | #3
> On Sat, 2023-01-07 at 15:41 +0100, Lorenzo Bianconi wrote:
> > mtk_wed_wo_queue_rx_clean and mtk_wed_wo_queue_refill routines can't run
> > concurrently so get rid of spinlock for rx queues.
> > 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> 
> My understanding is that mtk_wed_wo_queue_refill will only be called
> during init and by the tasklet. The mtk_wed_wo_queue_rx_clean function
> is only called during deinit and only after the tasklet has been
> disabled. That is the reason they cannot run at the same time correct?

correct

> 
> It would be nice if you explained why they couldn't run concurrently
> rather than just stating it is so in the patch description. It makes it
> easier to verify assumptions that way. Otherwise the patch itself looks
> good to me.

ack, right. I will update the commit message in v2.

Regards,
Lorenzo

> 
> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
>
Lorenzo Bianconi Jan. 10, 2023, 9:10 a.m. UTC | #4
> On Mon, 09 Jan 2023 16:50:55 -0800 Alexander H Duyck wrote:
> > On Sat, 2023-01-07 at 15:41 +0100, Lorenzo Bianconi wrote:
> > > mtk_wed_wo_queue_rx_clean and mtk_wed_wo_queue_refill routines can't run
> > > concurrently so get rid of spinlock for rx queues.
> 
> You say "for rx queues" but mtk_wed_wo_queue_refill() is also called
> for tx queues.

ack, but for tx queues it only runs during initialization.

> 
> > My understanding is that mtk_wed_wo_queue_refill will only be called
> > during init and by the tasklet. The mtk_wed_wo_queue_rx_clean function
> > is only called during deinit and only after the tasklet has been
> > disabled. That is the reason they cannot run at the same time correct?
> > 
> > It would be nice if you explained why they couldn't run concurrently
> > rather than just stating it is so in the patch description. It makes it
> > easier to verify assumptions that way. Otherwise the patch itself looks
> > good to me.
> 
> Agreed, please respin with a better commit message.
> 

ack, I will post v2 with a better commit message.

Regards,
Lorenzo

Patch

diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index a0a39643caf7..d32b86499896 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -138,7 +138,6 @@  mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
 	enum dma_data_direction dir = rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
 	int n_buf = 0;
 
-	spin_lock_bh(&q->lock);
 	while (q->queued < q->n_desc) {
 		struct mtk_wed_wo_queue_entry *entry;
 		dma_addr_t addr;
@@ -172,7 +171,6 @@  mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
 		q->queued++;
 		n_buf++;
 	}
-	spin_unlock_bh(&q->lock);
 
 	return n_buf;
 }
@@ -316,7 +314,6 @@  mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
 	struct page *page;
 
-	spin_lock_bh(&q->lock);
 	for (;;) {
 		void *buf = mtk_wed_wo_dequeue(wo, q, NULL, true);
 
@@ -325,7 +322,6 @@  mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 
 		skb_free_frag(buf);
 	}
-	spin_unlock_bh(&q->lock);
 
 	if (!q->cache.va)
 		return;