
[net-next,v5,9/9] mlx4: Add support for persistent NAPI config to RX CQs

Message ID: 20241009005525.13651-10-jdamato@fastly.com
State: Not Applicable
Series: Add support for per-NAPI config via netlink

Commit Message

Joe Damato Oct. 9, 2024, 12:55 a.m. UTC
Use netif_napi_add_config to assign persistent per-NAPI config when
initializing RX CQ NAPIs.

Presently, struct napi_config only has two fields, both of which are
used for RX, so there is no need to use it for TX CQs yet.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 drivers/net/ethernet/mellanox/mlx4/en_cq.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
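
For context on what the NAPIs are being attached to here: the persistent
per-NAPI config is a small structure allocated alongside the netdev and
selected by the index passed to netif_napi_add_config(). A rough sketch of
its shape, inferred from the series description (the field names and types
below are assumptions based on the two RX parameters mentioned above, not
copied from the posted code):

    #include <linux/types.h>

    /* Sketch only: approximate shape of the persistent per-NAPI config
     * this series introduces; field names/types are assumptions. */
    struct napi_config {
            u64 gro_flush_timeout;  /* previously a per-netdev sysfs knob */
            u32 defer_hard_irqs;    /* previously a per-netdev sysfs knob */
    };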

Comments

Eric Dumazet Oct. 10, 2024, 4:28 a.m. UTC | #1
On Wed, Oct 9, 2024 at 2:56 AM Joe Damato <jdamato@fastly.com> wrote:
>
> Use netif_napi_add_config to assign persistent per-NAPI config when
> initializing RX CQ NAPIs.
>
> Presently, struct napi_config only has two fields, both of which are
> used for RX, so there is no need to use it for TX CQs yet.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> ---

nit: technically, the napi_defer_hard_irqs could benefit TX completions as well.

Reviewed-by: Eric Dumazet <edumazet@google.com>
Joe Damato Oct. 10, 2024, 4:07 p.m. UTC | #2
On Thu, Oct 10, 2024 at 06:28:59AM +0200, Eric Dumazet wrote:
> On Wed, Oct 9, 2024 at 2:56 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > Use netif_napi_add_config to assign persistent per-NAPI config when
> > initializing RX CQ NAPIs.
> >
> > Presently, struct napi_config only has two fields, both of which are
> > used for RX, so there is no need to use it for TX CQs yet.
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > ---
> 
> nit: technically, the napi_defer_hard_irqs could benefit TX completions as well.

That's true - I think I missed updating this commit message when I
realized it. I can correct the commit message while retaining your
Reviewed-by for the v6.

Note: This adds to the confusion I have around allocating only
max(rxqs, txqs) config structs; it would seem we'd be missing config
structs for some queues if the system is configured to use the maximum
number of each? Perhaps that configuration is uncommon enough that it
doesn't matter?
 
> Reviewed-by: Eric Dumazet <edumazet@google.com>
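
To make the allocation question in the note above concrete, here is a toy
userspace illustration; the max(rxqs, txqs) sizing and the per-queue-index
mapping are assumptions taken from this discussion, and the queue counts are
invented:

    #include <stdio.h>

    int main(void)
    {
            unsigned int rxqs = 16, txqs = 16;

            /* One NAPI per RX CQ plus one per TX CQ in a driver like mlx4. */
            unsigned int napis = rxqs + txqs;

            /* Config slots sized as max(rxqs, txqs), per the discussion. */
            unsigned int slots = rxqs > txqs ? rxqs : txqs;

            printf("NAPIs: %u, persistent config slots: %u\n", napis, slots);

            /* With this patch only RX CQs claim a slot (indexed by cq_idx),
             * so 16 NAPIs map onto 16 slots and nothing is missing. If TX
             * CQs later wanted their own slots too, 32 NAPIs would have to
             * share 16 slots -- the ambiguity raised in the note above. */
            return 0;
    }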

Patch

diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
index 461cc2c79c71..0e92956e84cf 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
@@ -156,7 +156,8 @@  int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq,
 		break;
 	case RX:
 		cq->mcq.comp = mlx4_en_rx_irq;
-		netif_napi_add(cq->dev, &cq->napi, mlx4_en_poll_rx_cq);
+		netif_napi_add_config(cq->dev, &cq->napi, mlx4_en_poll_rx_cq,
+				      cq_idx);
 		netif_napi_set_irq(&cq->napi, irq);
 		napi_enable(&cq->napi);
 		netif_queue_set_napi(cq->dev, cq_idx, NETDEV_QUEUE_TYPE_RX, &cq->napi);
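
For readers without the tree handy, the RX branch of mlx4_en_activate_cq()
after this patch reads roughly as follows; this is reconstructed from the
hunk above (the trailing break and the line wrapping are approximations),
with comments added to spell out what the config index buys:

    	case RX:
    		cq->mcq.comp = mlx4_en_rx_irq;
    		/* Register the RX NAPI against persistent config slot
    		 * cq_idx rather than via plain netif_napi_add(), so
    		 * per-NAPI settings such as defer_hard_irqs and
    		 * gro_flush_timeout stick across CQ teardown and
    		 * re-activation. */
    		netif_napi_add_config(cq->dev, &cq->napi, mlx4_en_poll_rx_cq,
    				      cq_idx);
    		netif_napi_set_irq(&cq->napi, irq);
    		napi_enable(&cq->napi);
    		netif_queue_set_napi(cq->dev, cq_idx, NETDEV_QUEUE_TYPE_RX,
    				     &cq->napi);
    		break;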