
[net] bnxt_en: update xdp_rxq_info in queue restart logic

Message ID 20240719041911.533320-1-ap420073@gmail.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Headers show
Series [net] bnxt_en: update xdp_rxq_info in queue restart logic | expand

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 661 this patch: 661
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers warning 5 maintainers not CCed: ast@kernel.org hawk@kernel.org daniel@iogearbox.net bpf@vger.kernel.org john.fastabend@gmail.com
netdev/build_clang success Errors and warnings before: 662 this patch: 662
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 664 this patch: 664
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 47 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 2 this patch: 2
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-07-19--12-00 (tests: 699)

Commit Message

Taehee Yoo July 19, 2024, 4:19 a.m. UTC
When netdev_rx_queue_restart() restarts a queue, the bnxt_en driver
updates (creates and deletes) a page_pool, but it doesn't update
xdp_rxq_info. As a result, bnxt_rx_ring_info->page_pool points to the
new page_pool while bnxt_rx_ring_info->xdp_rxq is still connected to
the old one.

The old page_pool is no longer used, so it should be released by
page_pool_destroy(), but it isn't: the xdp_rxq_info still holds a
reference on it, and since the xdp_rxq_info is never updated, the old
page_pool is never freed by the queue restart logic.
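
For reference, the reference count in question comes from the
memory-model registration. A minimal, illustrative sketch of the
pairing using the kernel's xdp_rxq_info API (placeholder names such as
netdev, queue_idx and err; not the driver's exact code):

	/* Registering the page_pool as the rxq's memory model makes the
	 * xdp_rxq_info hold a user reference on the pool ... */
	err = xdp_rxq_info_reg(&rxr->xdp_rxq, netdev, queue_idx, 0);
	if (!err)
		err = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
						 MEM_TYPE_PAGE_POOL,
						 rxr->page_pool);

	/* ... so page_pool_destroy() alone does not free the pool; that
	 * reference is only dropped once xdp_rxq_info_unreg() runs. */
	xdp_rxq_info_unreg(&rxr->xdp_rxq);
	page_pool_destroy(rxr->page_pool);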

Before restarting 1 queue:
./tools/net/ynl/samples/page-pool
enp10s0f1np1[6] page pools: 4 (zombies: 0)
	refs: 8192 bytes: 33554432 (refs: 0 bytes: 0)
	recycling: 0.0% (alloc: 128:8048 recycle: 0:0)

After restarting 1 queue:
./tools/net/ynl/samples/page-pool
enp10s0f1np1[6] page pools: 5 (zombies: 0)
	refs: 10240 bytes: 41943040 (refs: 0 bytes: 0)
	recycling: 20.0% (alloc: 160:10080 recycle: 1920:128)

Before restarting any queue, the interface has 4 page_pools.
After restarting one queue, it has 5 page_pools, but it should still
be 4. The queue restart logic creates a new page_pool, and the old
one is not destroyed because the xdp_rxq_info is never updated.

Fixes: 2d694c27d32e ("bnxt_en: implement netdev_queue_mgmt_ops")
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

Comments

David Wei July 20, 2024, 5:12 a.m. UTC | #1
On 2024-07-18 21:19, Taehee Yoo wrote:
> When the netdev_rx_queue_restart() restarts queues, the bnxt_en driver
> updates(creates and deletes) a page_pool.
> But it doesn't update xdp_rxq_info, so the xdp_rxq_info is still
> connected to an old page_pool.
> So, bnxt_rx_ring_info->page_pool indicates a new page_pool, but
> bnxt_rx_ring_info->xdp_rxq is still connected to an old page_pool.
> 
> An old page_pool is no longer used so it is supposed to be
> deleted by page_pool_destroy() but it isn't.
> Because the xdp_rxq_info is holding the reference count for it and the
> xdp_rxq_info is not updated, an old page_pool will not be deleted in
> the queue restart logic.
> 
> Before restarting 1 queue:
> ./tools/net/ynl/samples/page-pool
> enp10s0f1np1[6] page pools: 4 (zombies: 0)
> 	refs: 8192 bytes: 33554432 (refs: 0 bytes: 0)
> 	recycling: 0.0% (alloc: 128:8048 recycle: 0:0)
> 
> After restarting 1 queue:
> ./tools/net/ynl/samples/page-pool
> enp10s0f1np1[6] page pools: 5 (zombies: 0)
> 	refs: 10240 bytes: 41943040 (refs: 0 bytes: 0)
> 	recycling: 20.0% (alloc: 160:10080 recycle: 1920:128)

Thanks, didn't know this existed! As a follow-up, once Mina lands his
devmem TCP series with netdev_rx_queue_restart(), a netdev netlink
selftest using it would be great.

> 
> Before restarting queues, an interface has 4 page_pools.
> After restarting one queue, an interface has 5 page_pools, but it
> should be 4, not 5.
> The reason is that queue restarting logic creates a new page_pool and
> an old page_pool is not deleted due to the absence of an update of
> xdp_rxq_info logic.
> 
> Fixes: 2d694c27d32e ("bnxt_en: implement netdev_queue_mgmt_ops")
> Signed-off-by: Taehee Yoo <ap420073@gmail.com>
> ---
>  drivers/net/ethernet/broadcom/bnxt/bnxt.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index bb3be33c1bbd..11d8459376a9 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
[...]
> @@ -15065,6 +15079,8 @@ static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
>  	page_pool_destroy(rxr->page_pool);
>  	rxr->page_pool = NULL;
>  
> +	xdp_rxq_info_unreg(&rxr->xdp_rxq);
> +

IMO this should go before page_pool_destroy() for symmetry with
bnxt_free_rx_rings(). I know page_pool_destroy() already gets called
deep inside xdp_rxq_info_unreg().
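
A minimal sketch of the ordering being suggested for
bnxt_queue_mem_free() (illustrative only, mirroring the hunk above):

	/* Unregister the rxq info first, mirroring bnxt_free_rx_rings();
	 * this drops the mem-model reference before the driver drops its
	 * own reference below. */
	xdp_rxq_info_unreg(&rxr->xdp_rxq);

	page_pool_destroy(rxr->page_pool);
	rxr->page_pool = NULL;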

>  	ring = &rxr->rx_ring_struct;
>  	bnxt_free_ring(bp, &ring->ring_mem);
>  
> @@ -15145,6 +15161,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
>  	rxr->rx_sw_agg_prod = clone->rx_sw_agg_prod;
>  	rxr->rx_next_cons = clone->rx_next_cons;
>  	rxr->page_pool = clone->page_pool;
> +	memcpy(&rxr->xdp_rxq, &clone->xdp_rxq, sizeof(struct xdp_rxq_info));

Assignment is fine here.
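
For illustration, the suggested assignment (a sketch, not the posted
v2):

	/* struct xdp_rxq_info can be copied by plain assignment here,
	 * which is equivalent to the memcpy() and easier to read. */
	rxr->xdp_rxq = clone->xdp_rxq;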
Taehee Yoo July 20, 2024, 3:06 p.m. UTC | #2
On Sat, Jul 20, 2024 at 2:13 PM David Wei <dw@davidwei.uk> wrote:
>

Hi David,
Thanks a lot for the review!

> On 2024-07-18 21:19, Taehee Yoo wrote:
> > When the netdev_rx_queue_restart() restarts queues, the bnxt_en driver
> > updates(creates and deletes) a page_pool.
> > But it doesn't update xdp_rxq_info, so the xdp_rxq_info is still
> > connected to an old page_pool.
> > So, bnxt_rx_ring_info->page_pool indicates a new page_pool, but
> > bnxt_rx_ring_info->xdp_rxq is still connected to an old page_pool.
> >
> > An old page_pool is no longer used so it is supposed to be
> > deleted by page_pool_destroy() but it isn't.
> > Because the xdp_rxq_info is holding the reference count for it and the
> > xdp_rxq_info is not updated, an old page_pool will not be deleted in
> > the queue restart logic.
> >
> > Before restarting 1 queue:
> > ./tools/net/ynl/samples/page-pool
> > enp10s0f1np1[6] page pools: 4 (zombies: 0)
> >       refs: 8192 bytes: 33554432 (refs: 0 bytes: 0)
> >       recycling: 0.0% (alloc: 128:8048 recycle: 0:0)
> >
> > After restarting 1 queue:
> > ./tools/net/ynl/samples/page-pool
> > enp10s0f1np1[6] page pools: 5 (zombies: 0)
> >       refs: 10240 bytes: 41943040 (refs: 0 bytes: 0)
> >       recycling: 20.0% (alloc: 160:10080 recycle: 1920:128)
>
> Thanks, didn't know this existed! As a follow up once Mina lands his
> devmem TCP series with netdev_rx_queue_restart(), a netdev netlink
> selftest using would be great.
>

Yes, that would be great!

> >
> > Before restarting queues, an interface has 4 page_pools.
> > After restarting one queue, an interface has 5 page_pools, but it
> > should be 4, not 5.
> > The reason is that queue restarting logic creates a new page_pool and
> > an old page_pool is not deleted due to the absence of an update of
> > xdp_rxq_info logic.
> >
> > Fixes: 2d694c27d32e ("bnxt_en: implement netdev_queue_mgmt_ops")
> > Signed-off-by: Taehee Yoo <ap420073@gmail.com>
> > ---
> >  drivers/net/ethernet/broadcom/bnxt/bnxt.c | 17 +++++++++++++++++
> >  1 file changed, 17 insertions(+)
> >
> > diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > index bb3be33c1bbd..11d8459376a9 100644
> > --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> > +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> [...]
> > @@ -15065,6 +15079,8 @@ static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
> >       page_pool_destroy(rxr->page_pool);
> >       rxr->page_pool = NULL;
> >
> > +     xdp_rxq_info_unreg(&rxr->xdp_rxq);
> > +
>
> IMO this should go before page_pool_destroy() for symmetry with
> bnxt_free_rx_rings(). I know there's already a call deep inside of
> xdp_rxq_info_unreg().
>

I agree about symmetry. So I will change it.

> >       ring = &rxr->rx_ring_struct;
> >       bnxt_free_ring(bp, &ring->ring_mem);
> >
> > @@ -15145,6 +15161,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
> >       rxr->rx_sw_agg_prod = clone->rx_sw_agg_prod;
> >       rxr->rx_next_cons = clone->rx_next_cons;
> >       rxr->page_pool = clone->page_pool;
> > +     memcpy(&rxr->xdp_rxq, &clone->xdp_rxq, sizeof(struct xdp_rxq_info));
>
> Assignment is fine here.

Okay, I will use the assignment instead of the memcpy.

I will send a v2 patch after some tests.

Thanks a lot!
Taehee Yoo

Patch

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index bb3be33c1bbd..11d8459376a9 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -4052,6 +4052,7 @@  static void bnxt_reset_rx_ring_struct(struct bnxt *bp,
 
 	rxr->page_pool->p.napi = NULL;
 	rxr->page_pool = NULL;
+	memset(&rxr->xdp_rxq, 0, sizeof(struct xdp_rxq_info));
 
 	ring = &rxr->rx_ring_struct;
 	rmem = &ring->ring_mem;
@@ -15018,6 +15019,16 @@  static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
 	if (rc)
 		return rc;
 
+	rc = xdp_rxq_info_reg(&clone->xdp_rxq, bp->dev, idx, 0);
+	if (rc < 0)
+		goto err_page_pool_destroy;
+
+	rc = xdp_rxq_info_reg_mem_model(&clone->xdp_rxq,
+					MEM_TYPE_PAGE_POOL,
+					clone->page_pool);
+	if (rc)
+		goto err_rxq_info_unreg;
+
 	ring = &clone->rx_ring_struct;
 	rc = bnxt_alloc_ring(bp, &ring->ring_mem);
 	if (rc)
@@ -15047,6 +15058,9 @@  static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
 	bnxt_free_ring(bp, &clone->rx_agg_ring_struct.ring_mem);
 err_free_rx_ring:
 	bnxt_free_ring(bp, &clone->rx_ring_struct.ring_mem);
+err_rxq_info_unreg:
+	xdp_rxq_info_unreg(&clone->xdp_rxq);
+err_page_pool_destroy:
 	clone->page_pool->p.napi = NULL;
 	page_pool_destroy(clone->page_pool);
 	clone->page_pool = NULL;
@@ -15065,6 +15079,8 @@  static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
 	page_pool_destroy(rxr->page_pool);
 	rxr->page_pool = NULL;
 
+	xdp_rxq_info_unreg(&rxr->xdp_rxq);
+
 	ring = &rxr->rx_ring_struct;
 	bnxt_free_ring(bp, &ring->ring_mem);
 
@@ -15145,6 +15161,7 @@  static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
 	rxr->rx_sw_agg_prod = clone->rx_sw_agg_prod;
 	rxr->rx_next_cons = clone->rx_next_cons;
 	rxr->page_pool = clone->page_pool;
+	memcpy(&rxr->xdp_rxq, &clone->xdp_rxq, sizeof(struct xdp_rxq_info));
 
 	bnxt_copy_rx_ring(bp, rxr, clone);