| Message ID | 1473750006-21199-2-git-send-email-mw@semihalf.com (mailing list archive) |
|---|---|
| State | New, archived |
On Tue, 2016-09-13 at 09:00 +0200, Marcin Wojtas wrote:
> From: Simon Guinot <simon.guinot@sequanux.org>
>
> Basing on xmit_more flag of the skb, TX descriptors can be concatenated
> before flushing. This commit delay Tx descriptor flush if the queue is
> running and if there is more skb's to send.
>
> Signed-off-by: Simon Guinot <simon.guinot@sequanux.org>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index d41c28d..b9dccea 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -512,6 +512,7 @@ struct mvneta_tx_queue {
>  	 * descriptor ring
>  	 */
>  	int count;
> +	int pending;
>  	int tx_stop_threshold;
>  	int tx_wake_threshold;
>
> @@ -802,8 +803,9 @@ static void mvneta_txq_pend_desc_add(struct mvneta_port *pp,
>  	/* Only 255 descriptors can be added at once ; Assume caller
>  	 * process TX desriptors in quanta less than 256
>  	 */

Hi Marcin

Well, given the above comment, and fact that MVNETA_MAX_TXD == 532, it
looks like you might add a bug if more than 256 skb are given to your
ndo_start_xmit() with skb->xmit_more = 1

I therefore suggest you make sure it does not happen.

	txq->pending += frags;
	if (!skb->xmit_more ||
	    txq->pending > 256 - MVNETA_MAX_SKB_DESCS ||
	    netif_xmit_stopped(nq))
		mvneta_txq_pend_desc_add(pp, txq)
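[Editor's note: below is a minimal, self-contained userspace model of the cap Eric suggests, for readers following the arithmetic; it is an illustration, not mvneta driver code and not the patch as merged. `MVNETA_TXQ_UPDATE_MAX` and `add_and_maybe_flush()` are invented names; the model uses the 255-descriptor per-write limit from the register comment, whereas Eric's snippet writes `256 - MVNETA_MAX_SKB_DESCS`.]

```c
/* Userspace model of the flush cap suggested above -- illustration only,
 * not mvneta driver code.
 */
#include <stdbool.h>
#include <stdio.h>

#define MVNETA_MAX_SKB_DESCS	217	/* worst-case descriptors for one skb */
#define MVNETA_TXQ_UPDATE_MAX	255	/* max descriptors per register write */

static int pending;			/* descriptors queued, not yet flushed */

/* Decide whether the accumulated descriptors must be handed to the HW now. */
static bool add_and_maybe_flush(int frags, bool xmit_more, bool queue_stopped)
{
	pending += frags;

	if (!xmit_more || queue_stopped ||
	    pending > MVNETA_TXQ_UPDATE_MAX - MVNETA_MAX_SKB_DESCS) {
		printf("flush %d descriptors\n", pending);
		pending = 0;
		return true;
	}
	return false;
}

int main(void)
{
	/* A long xmit_more batch of small (2-descriptor) skbs: the cap forces
	 * a flush once more than 38 descriptors are pending, so the next skb,
	 * even a worst-case 217-descriptor one, still fits in one write.
	 */
	for (int i = 0; i < 100; i++)
		add_and_maybe_flush(2, true, false);

	/* Last skb of the batch (xmit_more == 0) always flushes. */
	add_and_maybe_flush(2, false, false);
	return 0;
}
```

Keeping at most `MVNETA_TXQ_UPDATE_MAX - MVNETA_MAX_SKB_DESCS` descriptors pending guarantees that even a worst-case skb queued next still fits in a single register update.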
On Tue, 2016-09-13 at 07:33 -0700, Eric Dumazet wrote:
> Hi Marcin
>
> Well, given the above comment, and fact that MVNETA_MAX_TXD == 532, it
> looks like you might add a bug if more than 256 skb are given to your
> ndo_start_xmit() with skb->xmit_more = 1
>
> I therefore suggest you make sure it does not happen.
>
> 	txq->pending += frags;
> 	if (!skb->xmit_more ||
> 	    txq->pending > 256 - MVNETA_MAX_SKB_DESCS ||
> 	    netif_xmit_stopped(nq))
> 		mvneta_txq_pend_desc_add(pp, txq)

Another solution would be to test the potential overflow in mvneta_tx()
and force a mvneta_txq_pend_desc_add(pp, txq) _before_ adding the
descriptors of the "about to be cooked" TSO packet.

(This is because MVNETA_MAX_SKB_DESCS is 217, so 255 - 217 leaves little
room for xmit_more to show its power.)
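[Editor's note: a sketch of that alternative, again as a userspace model rather than driver code: flush whatever is already pending before queuing a descriptor-hungry (e.g. TSO) skb, so small xmit_more batches are not cut short by the worst-case reservation. `needed_descs`, `flush()` and `queue_skb()` are illustrative names, not mvneta functions.]

```c
/* Userspace model of the "pre-flush before a big skb" alternative -- an
 * illustration, not mvneta code.  needed_descs stands for the descriptor
 * count of the skb about to be queued.
 */
#include <stdio.h>

#define MVNETA_TXQ_UPDATE_MAX	255	/* max descriptors per register write */

static int pending;

static void flush(void)
{
	if (pending) {
		printf("flush %d descriptors\n", pending);
		pending = 0;
	}
}

static void queue_skb(int needed_descs, int xmit_more)
{
	/* If this skb would push the next register write past the limit,
	 * hand the current batch to the HW first.
	 */
	if (pending + needed_descs > MVNETA_TXQ_UPDATE_MAX)
		flush();

	pending += needed_descs;

	if (!xmit_more)
		flush();
}

int main(void)
{
	/* Thirty small skbs batch up untouched (60 descriptors pending)... */
	for (int i = 0; i < 30; i++)
		queue_skb(2, 1);

	/* ...and a worst-case 217-descriptor TSO skb only triggers a pre-flush
	 * of those 60, so each register write stays within the 255 limit.
	 */
	queue_skb(217, 0);
	return 0;
}
```

Compared with the fixed cap, this keeps the batching benefit for streams of small packets and only pays the extra register write when a large packet actually arrives.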
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index d41c28d..b9dccea 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -512,6 +512,7 @@ struct mvneta_tx_queue {
 	 * descriptor ring
 	 */
 	int count;
+	int pending;
 	int tx_stop_threshold;
 	int tx_wake_threshold;
 
@@ -802,8 +803,9 @@ static void mvneta_txq_pend_desc_add(struct mvneta_port *pp,
 	/* Only 255 descriptors can be added at once ; Assume caller
 	 * process TX desriptors in quanta less than 256
 	 */
-	val = pend_desc;
+	val = pend_desc + txq->pending;
 	mvreg_write(pp, MVNETA_TXQ_UPDATE_REG(txq->id), val);
+	txq->pending = 0;
 }
 
 /* Get pointer to next TX descriptor to be processed (send) by HW */
@@ -2357,11 +2359,14 @@ out:
 		struct netdev_queue *nq = netdev_get_tx_queue(dev, txq_id);
 
 		txq->count += frags;
-		mvneta_txq_pend_desc_add(pp, txq, frags);
-
 		if (txq->count >= txq->tx_stop_threshold)
 			netif_tx_stop_queue(nq);
 
+		if (!skb->xmit_more || netif_xmit_stopped(nq))
+			mvneta_txq_pend_desc_add(pp, txq, frags);
+		else
+			txq->pending += frags;
+
 		u64_stats_update_begin(&stats->syncp);
 		stats->tx_packets++;
 		stats->tx_bytes += len;