diff mbox series

[net-next,v4,01/10] tsnep: Use spin_lock_bh for TX

Message ID 20230109191523.12070-2-gerhard@engleder-embedded.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series: tsnep: XDP support

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers success CCed 6 of 6 maintainers
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 81 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Gerhard Engleder Jan. 9, 2023, 7:15 p.m. UTC
TX processing is done only within BH context. Therefore, the _irqsafe
variant is not necessary.

Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
---
 drivers/net/ethernet/engleder/tsnep_main.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

Comments

Alexander Duyck Jan. 10, 2023, 3:49 p.m. UTC | #1
On Mon, 2023-01-09 at 20:15 +0100, Gerhard Engleder wrote:
> TX processing is done only within BH context. Therefore, the _irqsafe
> variant is not necessary.
> 
> Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>

Rather than reducing the scope of this lock, why not drop it completely? It
doesn't look like you are running with the NETIF_F_LLTX flag set, so
from what I can tell you are taking the Tx lock in the
xmit path. The Tx queues are therefore already protected with the Tx
queue lock at the netdev level via the HARD_TX_LOCK macro.

Since the netdev Tx queue lock is already being used in the Tx path to
protect against concurrent access, you could probably just look at getting
rid of the driver's lock entirely.

The only caveat you would need to watch out for is a race between
cleaning and transmitting, which can be addressed via a few barriers
like what was done in the Intel drivers: something like the
__ixgbe_maybe_stop_tx function and the logic to wake the queue in the
clean function.

Alternatively, if you really feel you need this in the non-xmit path
functions, you could just drop the lock and instead use __netif_tx_lock
for those spots that access the queue outside the normal
transmit path.
Gerhard Engleder Jan. 10, 2023, 7:44 p.m. UTC | #2
On 10.01.23 16:49, Alexander H Duyck wrote:
> On Mon, 2023-01-09 at 20:15 +0100, Gerhard Engleder wrote:
>> TX processing is done only within BH context. Therefore, the _irqsafe
>> variant is not necessary.
>>
>> Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
> 
> Rather than reducing the scope of this lock, why not drop it completely? It
> doesn't look like you are running with the NETIF_F_LLTX flag set, so
> from what I can tell you are taking the Tx lock in the
> xmit path. The Tx queues are therefore already protected with the Tx
> queue lock at the netdev level via the HARD_TX_LOCK macro.
> 
> Since the netdev Tx queue lock is already being used in the Tx path to
> protect against concurrent access, you could probably just look at getting
> rid of the driver's lock entirely.
> 
> The only caveat you would need to watch out for is a race between
> cleaning and transmitting, which can be addressed via a few barriers
> like what was done in the Intel drivers: something like the
> __ixgbe_maybe_stop_tx function and the logic to wake the queue in the
> clean function.

I know these barriers from the Intel drivers. But I chose to use a lock,
like the microchip driver does, to be on the safe side rather than to have
a fully optimized driver.

> Alternatively, if you really feel you need this in the non-xmit path
> functions, you could just drop the lock and instead use __netif_tx_lock
> for those spots that access the queue outside the normal
> transmit path.

Ok, I will work on that and implement one of your suggestions.

Gerhard
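
For reference, the second suggestion would look roughly like the
following in tsnep_tx_poll(). This is a non-compilable kernel-context
sketch, not the patch that was eventually merged; the queue index and
the assumption of one netdev queue per tsnep_tx ring are illustrative.

```c
/* Sketch only: take the netdev per-queue xmit lock instead of tx->lock
 * when touching the ring outside the xmit path.  The xmit path itself
 * already runs under this lock via HARD_TX_LOCK, so no driver lock is
 * needed there. */
static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
{
	struct netdev_queue *nq;

	nq = netdev_get_tx_queue(tx->adapter->netdev, 0 /* queue index */);

	__netif_tx_lock(nq, smp_processor_id());

	/* ... clean completed descriptors as before ... */

	__netif_tx_unlock(nq);

	return true;
}
```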

Patch

diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
index bf0190e1d2ea..7cc5e2407809 100644
--- a/drivers/net/ethernet/engleder/tsnep_main.c
+++ b/drivers/net/ethernet/engleder/tsnep_main.c
@@ -434,7 +434,6 @@  static int tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
 static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
 					 struct tsnep_tx *tx)
 {
-	unsigned long flags;
 	int count = 1;
 	struct tsnep_tx_entry *entry;
 	int length;
@@ -444,7 +443,7 @@  static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
 	if (skb_shinfo(skb)->nr_frags > 0)
 		count += skb_shinfo(skb)->nr_frags;
 
-	spin_lock_irqsave(&tx->lock, flags);
+	spin_lock_bh(&tx->lock);
 
 	if (tsnep_tx_desc_available(tx) < count) {
 		/* ring full, shall not happen because queue is stopped if full
@@ -452,7 +451,7 @@  static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
 		 */
 		netif_stop_queue(tx->adapter->netdev);
 
-		spin_unlock_irqrestore(&tx->lock, flags);
+		spin_unlock_bh(&tx->lock);
 
 		return NETDEV_TX_BUSY;
 	}
@@ -468,7 +467,7 @@  static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
 
 		tx->dropped++;
 
-		spin_unlock_irqrestore(&tx->lock, flags);
+		spin_unlock_bh(&tx->lock);
 
 		netdev_err(tx->adapter->netdev, "TX DMA map failed\n");
 
@@ -496,20 +495,19 @@  static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
 		netif_stop_queue(tx->adapter->netdev);
 	}
 
-	spin_unlock_irqrestore(&tx->lock, flags);
+	spin_unlock_bh(&tx->lock);
 
 	return NETDEV_TX_OK;
 }
 
 static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
 {
-	unsigned long flags;
 	int budget = 128;
 	struct tsnep_tx_entry *entry;
 	int count;
 	int length;
 
-	spin_lock_irqsave(&tx->lock, flags);
+	spin_lock_bh(&tx->lock);
 
 	do {
 		if (tx->read == tx->write)
@@ -568,18 +566,17 @@  static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
 		netif_wake_queue(tx->adapter->netdev);
 	}
 
-	spin_unlock_irqrestore(&tx->lock, flags);
+	spin_unlock_bh(&tx->lock);
 
 	return (budget != 0);
 }
 
 static bool tsnep_tx_pending(struct tsnep_tx *tx)
 {
-	unsigned long flags;
 	struct tsnep_tx_entry *entry;
 	bool pending = false;
 
-	spin_lock_irqsave(&tx->lock, flags);
+	spin_lock_bh(&tx->lock);
 
 	if (tx->read != tx->write) {
 		entry = &tx->entry[tx->read];
@@ -589,7 +586,7 @@  static bool tsnep_tx_pending(struct tsnep_tx *tx)
 			pending = true;
 	}
 
-	spin_unlock_irqrestore(&tx->lock, flags);
+	spin_unlock_bh(&tx->lock);
 
 	return pending;
 }