diff mbox series

[RFC,net-next,1/2] net: veth: Page pool creation error handling for existing pools only

Message ID 20230719072907.100948-1-liangchen.linux@gmail.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [RFC,net-next,1/2] net: veth: Page pool creation error handling for existing pools only

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1342 this patch: 1342
netdev/cc_maintainers success CCed 5 of 5 maintainers
netdev/build_clang success Errors and warnings before: 1365 this patch: 1365
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1365 this patch: 1365
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 10 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Liang Chen July 19, 2023, 7:29 a.m. UTC
The failure handling procedure destroys page pools for all queues,
including those that haven't had their page pool created yet. This patch
introduces the necessary adjustments to prevent potential risks and
inconsistency with the error handling behavior.

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
---
 drivers/net/veth.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Yunsheng Lin July 19, 2023, 12:43 p.m. UTC | #1
On 2023/7/19 15:29, Liang Chen wrote:
> The failure handling procedure destroys page pools for all queues,
> including those that haven't had their page pool created yet. This patch
> introduces the necessary adjustments to prevent potential risks and
> inconsistency with the error handling behavior.
> 
> Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
> ---
>  drivers/net/veth.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 614f3e3efab0..509e901da41d 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -1081,8 +1081,9 @@ static int __veth_napi_enable_range(struct net_device *dev, int start, int end)
>  err_xdp_ring:
>  	for (i--; i >= start; i--)
>  		ptr_ring_cleanup(&priv->rq[i].xdp_ring, veth_ptr_free);
> +	i = end;
>  err_page_pool:
> -	for (i = start; i < end; i++) {
> +	for (i--; i >= start; i--) {
>  		page_pool_destroy(priv->rq[i].page_pool);
>  		priv->rq[i].page_pool = NULL;

There is a NULL check in page_pool_destroy(),
priv->rq[i].page_pool is set to NULL here, and the kcalloc()
in veth_alloc_queues() ensures it is NULL initially, so maybe
it is fine as it is?

>  	}
>
Liang Chen July 21, 2023, 11:17 a.m. UTC | #2
On Wed, Jul 19, 2023 at 8:44 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2023/7/19 15:29, Liang Chen wrote:
> > The failure handling procedure destroys page pools for all queues,
> > including those that haven't had their page pool created yet. This patch
> > introduces the necessary adjustments to prevent potential risks and
> > inconsistency with the error handling behavior.
> >
> > Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
> > ---
> >  drivers/net/veth.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> > index 614f3e3efab0..509e901da41d 100644
> > --- a/drivers/net/veth.c
> > +++ b/drivers/net/veth.c
> > @@ -1081,8 +1081,9 @@ static int __veth_napi_enable_range(struct net_device *dev, int start, int end)
> >  err_xdp_ring:
> >       for (i--; i >= start; i--)
> >               ptr_ring_cleanup(&priv->rq[i].xdp_ring, veth_ptr_free);
> > +     i = end;
> >  err_page_pool:
> > -     for (i = start; i < end; i++) {
> > +     for (i--; i >= start; i--) {
> >               page_pool_destroy(priv->rq[i].page_pool);
> >               priv->rq[i].page_pool = NULL;
>
> There is a NULL check in page_pool_destroy(),
> priv->rq[i].page_pool is set to NULL here, and the kcalloc()
> in veth_alloc_queues() ensures it is NULL initially, so maybe
> it is fine as it is?
>

Sure, it doesn't cause any real problem.

This was meant to align the err_page_pool handling with the
err_xdp_ring case above (though, unlike page_pool_destroy(),
ptr_ring_cleanup() cannot take an uninitialized ring), so the cleanup
doesn't always need to loop from start to end.

Thanks,
Liang

> >       }
> >
Yunsheng Lin July 21, 2023, 12:35 p.m. UTC | #3
On 2023/7/21 19:17, Liang Chen wrote:

>> There is NULL checking in page_pool_destroy(),
>> priv->rq[i].page_pool is set to NULL here, and the kcalloc()
>> in veth_alloc_queues() ensure it is NULL initially, maybe it
>> is fine as it is?
>>
> 
> Sure, it doesn't cause any real problem.
> 
> This was meant to align err_page_pool handling with the case above
> (though ptr_ring_cleanup cannot take an uninitialized ring), and it
> doesn't always need to loop from start to end.
> 

I suppose it is in preparation for the next patch, right?
If so, maybe make that clear in the commit log.

Patch

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 614f3e3efab0..509e901da41d 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -1081,8 +1081,9 @@  static int __veth_napi_enable_range(struct net_device *dev, int start, int end)
 err_xdp_ring:
 	for (i--; i >= start; i--)
 		ptr_ring_cleanup(&priv->rq[i].xdp_ring, veth_ptr_free);
+	i = end;
 err_page_pool:
-	for (i = start; i < end; i++) {
+	for (i--; i >= start; i--) {
 		page_pool_destroy(priv->rq[i].page_pool);
 		priv->rq[i].page_pool = NULL;
 	}