
net: core: page_pool: add user refcnt and reintroduce page_pool_destroy

Message ID 156207778364.29180.5111562317930943530.stgit@firesoul (mailing list archive)
State New, archived
Series net: core: page_pool: add user refcnt and reintroduce page_pool_destroy

Commit Message

Jesper Dangaard Brouer July 2, 2019, 2:31 p.m. UTC
From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>

Jesper recently removed page_pool_destroy() (from driver invocation) and
moved shutdown and free of page_pool into xdp_rxq_info_unreg(), in order to
handle in-flight packets/pages. This created an asymmetry in the drivers'
create/destroy pairs.

This patch adds a page_pool user refcnt and reintroduces page_pool_destroy().
This serves two purposes: (1) it simplifies the drivers' error handling, as a
driver now always calls page_pool_destroy() and doesn't need to track whether
xdp_rxq_info_reg_mem_model() was unsuccessful; (2) it allows special cases
where a single RX-queue (with a single page_pool) provides packets for two
net_devices, and thus needs to register the same page_pool twice, with two
xdp_rxq_info structures.

This patch is a modified version of Ivan Khoronzhuk's original patch.
Thus, Jesper gives authorship to Ivan.

Link: https://lore.kernel.org/netdev/20190625175948.24771-2-ivan.khoronzhuk@linaro.org/
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---

To Ivan,
 If you agree with this patch, please add your Signed-off-by.

You can also say if you prefer to take this patch and make it
part of your driver patchset, whatever you prefer.
--Jesper

 drivers/net/ethernet/mellanox/mlx5/core/en_main.c |    6 ++---
 drivers/net/ethernet/socionext/netsec.c           |    8 ++----
 include/net/page_pool.h                           |   27 +++++++++++++++++++++
 net/core/page_pool.c                              |    8 ++++++
 net/core/xdp.c                                    |    3 ++
 5 files changed, 43 insertions(+), 9 deletions(-)
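
For a concrete (non-authoritative) picture of the symmetric create/destroy
pairing this restores, here is a minimal driver-side sketch modelled on the
netsec hunk below; the foo_* names, the struct layout and the pool size are
hypothetical, not part of the patch:

#include <net/page_pool.h>
#include <net/xdp.h>

struct foo_rx_ring {                            /* hypothetical driver state */
        struct xdp_rxq_info xdp_rxq;
        struct page_pool *page_pool;
        struct net_device *ndev;
        u16 index;
};

static void foo_teardown_rx(struct foo_rx_ring *ring)
{
        if (xdp_rxq_info_is_reg(&ring->xdp_rxq))
                xdp_rxq_info_unreg(&ring->xdp_rxq);
        /* Always paired with page_pool_create(); with this patch it is safe
         * to call even when the mem model registration failed or never
         * happened, and it only releases resources once the last user
         * reference is gone.
         */
        page_pool_destroy(ring->page_pool);
}

static int foo_setup_rx(struct foo_rx_ring *ring, struct device *dev)
{
        struct page_pool_params pp = {
                .order          = 0,
                .pool_size      = 128,          /* hypothetical ring size */
                .nid            = NUMA_NO_NODE,
                .dev            = dev,
        };
        int err;

        ring->page_pool = page_pool_create(&pp);
        if (IS_ERR(ring->page_pool)) {
                err = PTR_ERR(ring->page_pool);
                ring->page_pool = NULL;
                return err;
        }

        err = xdp_rxq_info_reg(&ring->xdp_rxq, ring->ndev, ring->index);
        if (err)
                goto err_out;

        err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, MEM_TYPE_PAGE_POOL,
                                         ring->page_pool);
        if (err)
                goto err_out;   /* no separate page_pool_free() path needed */

        return 0;

err_out:
        foo_teardown_rx(ring);
        return err;
}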

Comments

Ivan Khoronzhuk July 2, 2019, 2:44 p.m. UTC | #1
On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:
>From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
>
>Jesper recently removed page_pool_destroy() (from driver invocation) and
>moved shutdown and free of page_pool into xdp_rxq_info_unreg(), in-order to
>handle in-flight packets/pages. This created an asymmetry in drivers
>create/destroy pairs.
>
>This patch add page_pool user refcnt and reintroduce page_pool_destroy.
>This serves two purposes, (1) simplify drivers error handling as driver now
>drivers always calls page_pool_destroy() and don't need to track if
>xdp_rxq_info_reg_mem_model() was unsuccessful. (2) allow special cases
>where a single RX-queue (with a single page_pool) provides packets for two
>net_device'es, and thus needs to register the same page_pool twice with two
>xdp_rxq_info structures.

As I tend to use the XDP-level patch, there is no more reason to mention case (2)
here. The XDP patch serves it better and can prevent not only object deletion but
also a pool flush, so I would rather keep this patch only for case (1).
Jesper Dangaard Brouer July 2, 2019, 2:52 p.m. UTC | #2
On Tue, 2 Jul 2019 17:44:27 +0300
Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:

> On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:
> >[...]
> 
> As I tend to use xdp level patch there is no more reason to mention (2) case
> here. XDP patch serves it better and can prevent not only obj deletion but also
> pool flush, so, this one patch I could better leave only for (1) case.
 
I don't understand what you are saying.

Do you approve this patch, or do you reject this patch?
Ivan Khoronzhuk July 2, 2019, 2:56 p.m. UTC | #3
On Tue, Jul 02, 2019 at 04:52:30PM +0200, Jesper Dangaard Brouer wrote:
>On Tue, 2 Jul 2019 17:44:27 +0300
>Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>
>> On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:
>> >[...]
>>
>> As I tend to use xdp level patch there is no more reason to mention (2) case
>> here. XDP patch serves it better and can prevent not only obj deletion but also
>> pool flush, so, this one patch I could better leave only for (1) case.
>
>I don't understand what you are saying.
>
>Do you approve this patch, or do you reject this patch?
>
It's not a reject; it's a proposal to use both the XDP and the page_pool patches,
each having its own goal.
Jesper Dangaard Brouer July 2, 2019, 3:10 p.m. UTC | #4
On Tue, 2 Jul 2019 17:56:13 +0300
Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:

> On Tue, Jul 02, 2019 at 04:52:30PM +0200, Jesper Dangaard Brouer wrote:
> >On Tue, 2 Jul 2019 17:44:27 +0300
> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >  
> >> On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:  
> >> >[...]
> >>
> >> As I tend to use xdp level patch there is no more reason to mention (2) case
> >> here. XDP patch serves it better and can prevent not only obj deletion but also
> >> pool flush, so, this one patch I could better leave only for (1) case.  
> >
> >I don't understand what you are saying.
> >
> >Do you approve this patch, or do you reject this patch?
> >  
> It's not reject, it's proposition to use both, XDP and page pool patches,
> each having its goal.

Just to be clear, if you want this patch to get accepted, you have to
reply with your Signed-off-by (as I wrote).

Maybe we should discuss in another thread why you want two
solutions to the same problem.
Ivan Khoronzhuk July 2, 2019, 3:21 p.m. UTC | #5
On Tue, Jul 02, 2019 at 05:10:29PM +0200, Jesper Dangaard Brouer wrote:
>On Tue, 2 Jul 2019 17:56:13 +0300
>Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>
>> On Tue, Jul 02, 2019 at 04:52:30PM +0200, Jesper Dangaard Brouer wrote:
>> >On Tue, 2 Jul 2019 17:44:27 +0300
>> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>> >
>> >> On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:
>> >> >[...]
>> >>
>> >> As I tend to use xdp level patch there is no more reason to mention (2) case
>> >> here. XDP patch serves it better and can prevent not only obj deletion but also
>> >> pool flush, so, this one patch I could better leave only for (1) case.
>> >
>> >I don't understand what you are saying.
>> >
>> >Do you approve this patch, or do you reject this patch?
>> >
>> It's not reject, it's proposition to use both, XDP and page pool patches,
>> each having its goal.
>
>Just to be clear, if you want this patch to get accepted you have to
>reply with your Signed-off-by (as I wrote).
>
>Maybe we should discuss it in another thread, about why you want two
>solutions to the same problem.

If it solves the same problem, I propose to reject this one and use this:
https://lkml.org/lkml/2019/7/2/651
Jesper Dangaard Brouer July 2, 2019, 6:29 p.m. UTC | #6
On Tue, 2 Jul 2019 18:21:13 +0300
Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:

> On Tue, Jul 02, 2019 at 05:10:29PM +0200, Jesper Dangaard Brouer wrote:
> >On Tue, 2 Jul 2019 17:56:13 +0300
> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >  
> >> On Tue, Jul 02, 2019 at 04:52:30PM +0200, Jesper Dangaard Brouer wrote:  
> >> >On Tue, 2 Jul 2019 17:44:27 +0300
> >> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >> >  
> >> >> On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:  
> >> >> >[...]
> >> >>
> >> >> As I tend to use xdp level patch there is no more reason to mention (2) case
> >> >> here. XDP patch serves it better and can prevent not only obj deletion but also
> >> >> pool flush, so, this one patch I could better leave only for (1) case.  
> >> >
> >> >I don't understand what you are saying.
> >> >
> >> >Do you approve this patch, or do you reject this patch?
> >> >  
> >> It's not reject, it's proposition to use both, XDP and page pool patches,
> >> each having its goal.  
> >
> >Just to be clear, if you want this patch to get accepted you have to
> >reply with your Signed-off-by (as I wrote).
> >
> >Maybe we should discuss it in another thread, about why you want two
> >solutions to the same problem.  
> 
> If it solves same problem I propose to reject this one and use this:
> https://lkml.org/lkml/2019/7/2/651

No, I propose using this one, and rejecting the other one.
Ivan Khoronzhuk July 2, 2019, 6:58 p.m. UTC | #7
On Tue, Jul 02, 2019 at 08:29:07PM +0200, Jesper Dangaard Brouer wrote:
>On Tue, 2 Jul 2019 18:21:13 +0300
>Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>
>> On Tue, Jul 02, 2019 at 05:10:29PM +0200, Jesper Dangaard Brouer wrote:
>> >On Tue, 2 Jul 2019 17:56:13 +0300
>> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>> >
>> >> On Tue, Jul 02, 2019 at 04:52:30PM +0200, Jesper Dangaard Brouer wrote:
>> >> >On Tue, 2 Jul 2019 17:44:27 +0300
>> >> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>> >> >
>> >> >> On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:
>> >> >> >[...]
>> >> >>
>> >> >> As I tend to use xdp level patch there is no more reason to mention (2) case
>> >> >> here. XDP patch serves it better and can prevent not only obj deletion but also
>> >> >> pool flush, so, this one patch I could better leave only for (1) case.
>> >> >
>> >> >I don't understand what you are saying.
>> >> >
>> >> >Do you approve this patch, or do you reject this patch?
>> >> >
>> >> It's not reject, it's proposition to use both, XDP and page pool patches,
>> >> each having its goal.
>> >
>> >Just to be clear, if you want this patch to get accepted you have to
>> >reply with your Signed-off-by (as I wrote).
>> >
>> >Maybe we should discuss it in another thread, about why you want two
>> >solutions to the same problem.
>>
>> If it solves same problem I propose to reject this one and use this:
>> https://lkml.org/lkml/2019/7/2/651
>
>No, I propose using this one, and rejecting the other one.

There are at least several arguments against this one (related to purpose (2)).

It allows:
- avoiding changes to page_pool/mlx5/netsec
- saving not only the allocator obj but also the allocator "page/buffer flush"
- handling a buffer flush that can be present not only in page_pool but also in
  other allocators, which can behave differently and have a less simple solution
- not limiting cpsw (and potentially others) to using the "page_pool" allocator only
....

This patch should better stay as well, as it simplifies the error path for
page_pool and is less error-prone to use compared with the existing one.

Please don't limit cpsw and potentially other drivers to using only
page_pool; it can be zca or etc... I don't want to modify each allocator.
I propose to add both, as in fact they solve different problems with a common
solution.
Ivan Khoronzhuk July 2, 2019, 8:28 p.m. UTC | #8
On Tue, Jul 02, 2019 at 09:58:40PM +0300, Ivan Khoronzhuk wrote:
>On Tue, Jul 02, 2019 at 08:29:07PM +0200, Jesper Dangaard Brouer wrote:
>>On Tue, 2 Jul 2019 18:21:13 +0300
>>Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>>
>>>On Tue, Jul 02, 2019 at 05:10:29PM +0200, Jesper Dangaard Brouer wrote:
>>>>On Tue, 2 Jul 2019 17:56:13 +0300
>>>>Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>>>>
>>>>> On Tue, Jul 02, 2019 at 04:52:30PM +0200, Jesper Dangaard Brouer wrote:
>>>>> >On Tue, 2 Jul 2019 17:44:27 +0300
>>>>> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>>>>> >
>>>>> >> On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:
>>>>> >> >[...]
>>>>> >>
>>>>> >> As I tend to use xdp level patch there is no more reason to mention (2) case
>>>>> >> here. XDP patch serves it better and can prevent not only obj deletion but also
>>>>> >> pool flush, so, this one patch I could better leave only for (1) case.
>>>>> >
>>>>> >I don't understand what you are saying.
>>>>> >
>>>>> >Do you approve this patch, or do you reject this patch?
>>>>> >
>>>>> It's not reject, it's proposition to use both, XDP and page pool patches,
>>>>> each having its goal.
>>>>
>>>>Just to be clear, if you want this patch to get accepted you have to
>>>>reply with your Signed-off-by (as I wrote).
>>>>
>>>>Maybe we should discuss it in another thread, about why you want two
>>>>solutions to the same problem.
>>>
>>>If it solves same problem I propose to reject this one and use this:
>>>https://lkml.org/lkml/2019/7/2/651
>>
>>No, I propose using this one, and rejecting the other one.
>
>There is at least several arguments against this one (related (2) purpose)
>
>It allows:
>- avoid changes to page_pool/mlx5/netsec
>- save not only allocator obj but allocator "page/buffer flush"
>- buffer flush can be present not only in page_pool but for other allocators
> that can behave differently and not so simple solution.
>- to not limit cpsw/(potentially others) to use "page_pool" allocator only
>....
>
>This patch better leave also, as it simplifies error path for page_pool and
>have more error prone usage comparing with existent one.
>
>Please, don't limit cpsw and potentially other drivers to use only
>page_pool it can be zca or etc... I don't won't to modify each allocator.
>I propose to add both as by fact they solve different problems with common
>solution.

I can pick up this one, but remove the description related to (2) and add the
appropriate modifications to cpsw.
Jesper Dangaard Brouer July 2, 2019, 9:02 p.m. UTC | #9
On Tue, 2 Jul 2019 21:58:40 +0300
Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:

> On Tue, Jul 02, 2019 at 08:29:07PM +0200, Jesper Dangaard Brouer wrote:
> >On Tue, 2 Jul 2019 18:21:13 +0300
> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >  
> >> On Tue, Jul 02, 2019 at 05:10:29PM +0200, Jesper Dangaard Brouer wrote:  
> >> >On Tue, 2 Jul 2019 17:56:13 +0300
> >> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >> >  
> >> >> On Tue, Jul 02, 2019 at 04:52:30PM +0200, Jesper Dangaard Brouer wrote:  
> >> >> >On Tue, 2 Jul 2019 17:44:27 +0300
> >> >> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >> >> >  
> >> >> >> On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:  
> >> >> >> >[...]
> >> >> >>
> >> >> >> As I tend to use xdp level patch there is no more reason to mention (2) case
> >> >> >> here. XDP patch serves it better and can prevent not only obj deletion but also
> >> >> >> pool flush, so, this one patch I could better leave only for (1) case.  
> >> >> >
> >> >> >I don't understand what you are saying.
> >> >> >
> >> >> >Do you approve this patch, or do you reject this patch?
> >> >> >  
> >> >> It's not reject, it's proposition to use both, XDP and page pool patches,
> >> >> each having its goal.  
> >> >
> >> >Just to be clear, if you want this patch to get accepted you have to
> >> >reply with your Signed-off-by (as I wrote).
> >> >
> >> >Maybe we should discuss it in another thread, about why you want two
> >> >solutions to the same problem.  
> >>
> >> If it solves same problem I propose to reject this one and use this:
> >> https://lkml.org/lkml/2019/7/2/651  
> >
> >No, I propose using this one, and rejecting the other one.  
> 
> There is at least several arguments against this one (related (2) purpose)
> 
> It allows:
> - avoid changes to page_pool/mlx5/netsec
> - save not only allocator obj but allocator "page/buffer flush"
> - buffer flush can be present not only in page_pool but for other allocators
>   that can behave differently and not so simple solution.
> - to not limit cpsw/(potentially others) to use "page_pool" allocator only
> ....
> 
> This patch better leave also, as it simplifies error path for page_pool and
> have more error prone usage comparing with existent one.
> 
> Please, don't limit cpsw and potentially other drivers to use only
> page_pool it can be zca or etc... I don't won't to modify each allocator.
> I propose to add both as by fact they solve different problems with common
> solution.


I'm trying to limit the scope of your changes, for your special case,
because I'm afraid this more common solution is going to limit our
options, painting ourselves into a corner.

E.g. for correct lifetime handling, I think we actually need to do a
dev_hold() on the net_device. (Changes in f71fec47c2 might not be
enough, but I first need to dig into the details and ask Hellwig about
some specifics.)  Adding that after your patch is more complicated (if
even doable).

E.g. doing dev_hold() on the net_device, can also turn into a
performance advantage, when/if page_pool is extended to also "travel"
into SKBs. (Allowing to elide such dev_hold() calls in netstack).

I also worry about the possible performance impact these changes will
have down the road.  (For the RX/alloc side it should be clear by now
that we gain a lot of performance with the single RX-queue binding and
NAPI protection.)  On the return/free side performance *needs* to be
improved (it doesn't scale).  I'm basically looking at different ways
to bulk-return pages into the ptr_ring, which requires changes in
page_pool and likely in the xdp_allocator structure; your changes
complicate that.

This special use-case seems confined to your driver. And Ilias told me
that XDP is not really a performance benefit for this driver, as the HW
PPS-limit is hit before the XDP and netstack limits.  I ask: does it
make sense to add XDP to this driver if it complicates the code for
everybody else?
Ilias Apalodimas July 2, 2019, 9:15 p.m. UTC | #10
Hi Jesper,
Getting late here; I'll respond in detail tomorrow. One point though:

[...]
> 
> This special use-case, seems confined to your driver. And Ilias told me
> that XDP is not really a performance benefit for this driver as the HW
> PPS-limit is hit before the XDP and netstack limit.  I ask, does it
> make sense to add XDP to this driver, if it complicates the code for
> everybody else?
I think yes. This is a widely used driver on TI embedded devices, so having XDP
play along is a nice feature. It's also the first and only armv7 we have
supporting this. Ivan already found a couple of issues, due to the 32-bit
architecture, that he is trying to fix; I think there's real benefit in having
that, performance aside.
I fully agree we should not impact the performance of the API to support
special hardware, though. I'll have a look at the 2 solutions tomorrow, but the
general approach on this one should be 'the simpler the better'.

Cheers
/Ilias

> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
Ivan Khoronzhuk July 2, 2019, 9:41 p.m. UTC | #11
On Wed, Jul 03, 2019 at 12:15:36AM +0300, Ilias Apalodimas wrote:
>Hi Jesper,
>Getting late here, i'll respond in detail tomorrow. One point though
>
>[...]
>>
>> This special use-case, seems confined to your driver. And Ilias told me
>> that XDP is not really a performance benefit for this driver as the HW
>> PPS-limit is hit before the XDP and netstack limit.  I ask, does it
>> make sense to add XDP to this driver, if it complicates the code for
>> everybody else?
>I think yes. This is a widely used driver on TI embedded devices so having XDP
>to play along is a nice feature. It's also the first and only armv7 we have
>supporting this. Ivan already found a couple of issues due to the 32-bit
>architecture he is trying to fix, i think there's real benefit in having that,
>performance aside.
>I fully agree we should not impact the performance of the API to support a
>special hardware though. I'll have a look on the 2 solutions tomorrow, but the
>general approach on this one should be 'the simpler the better'
>
>Cheers
>/Ilias

BTW, even w/o optimization it gets close to 300 kpps on drop (but with an
increased number of descs), which is very close to the netsec measurements
Ilias sent. But from what I know there is no h/w limit on cpsw at all that this
CPU can serve, so my assumption is that it's rather a s/w limit. But that's not
the main point here, and XDP usage has not been evaluated enough yet in
embedded, where high speed is not the only benefit that can be taken from XDP.

I need clearer circumstances to send v6 ...

Patch

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 1085040675ae..ce1c7a449eae 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -545,10 +545,8 @@  static int mlx5e_alloc_rq(struct mlx5e_channel *c,
 	}
 	err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
 					 MEM_TYPE_PAGE_POOL, rq->page_pool);
-	if (err) {
-		page_pool_free(rq->page_pool);
+	if (err)
 		goto err_free;
-	}
 
 	for (i = 0; i < wq_sz; i++) {
 		if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
@@ -613,6 +611,7 @@  static int mlx5e_alloc_rq(struct mlx5e_channel *c,
 	if (rq->xdp_prog)
 		bpf_prog_put(rq->xdp_prog);
 	xdp_rxq_info_unreg(&rq->xdp_rxq);
+	page_pool_destroy(rq->page_pool);
 	mlx5_wq_destroy(&rq->wq_ctrl);
 
 	return err;
@@ -643,6 +642,7 @@  static void mlx5e_free_rq(struct mlx5e_rq *rq)
 	}
 
 	xdp_rxq_info_unreg(&rq->xdp_rxq);
+	page_pool_destroy(rq->page_pool);
 	mlx5_wq_destroy(&rq->wq_ctrl);
 }
 
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 5544a722543f..43ab0ce90704 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -1210,15 +1210,11 @@  static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id)
 		}
 	}
 
-	/* Rx is currently using page_pool
-	 * since the pool is created during netsec_setup_rx_dring(), we need to
-	 * free the pool manually if the registration failed
-	 */
+	/* Rx is currently using page_pool */
 	if (id == NETSEC_RING_RX) {
 		if (xdp_rxq_info_is_reg(&dring->xdp_rxq))
 			xdp_rxq_info_unreg(&dring->xdp_rxq);
-		else
-			page_pool_free(dring->page_pool);
+		page_pool_destroy(dring->page_pool);
 	}
 
 	memset(dring->desc, 0, sizeof(struct netsec_desc) * DESC_NUM);
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index ee9c871d2043..ea974856d0f7 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -101,6 +101,14 @@  struct page_pool {
 	struct ptr_ring ring;
 
 	atomic_t pages_state_release_cnt;
+
+	/* A page_pool is strictly tied to a single RX-queue being
+	 * protected by NAPI, due to above pp_alloc_cache.  This
+	 * refcnt serves two purposes. (1) simplify drivers error
+	 * handling, and (2) allow special cases where a single
+	 * RX-queue provides packet for two net_device'es.
+	 */
+	refcount_t user_cnt;
 };
 
 struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp);
@@ -134,6 +142,15 @@  static inline void page_pool_free(struct page_pool *pool)
 #endif
 }
 
+/* Drivers use this instead of page_pool_free */
+static inline void page_pool_destroy(struct page_pool *pool)
+{
+	if (!pool)
+		return;
+
+	page_pool_free(pool);
+}
+
 /* Never call this directly, use helpers below */
 void __page_pool_put_page(struct page_pool *pool,
 			  struct page *page, bool allow_direct);
@@ -201,4 +218,14 @@  static inline bool is_page_pool_compiled_in(void)
 #endif
 }
 
+static inline void page_pool_get(struct page_pool *pool)
+{
+	refcount_inc(&pool->user_cnt);
+}
+
+static inline bool page_pool_put(struct page_pool *pool)
+{
+	return refcount_dec_and_test(&pool->user_cnt);
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index b366f59885c1..3272dc7a8c81 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -49,6 +49,9 @@  static int page_pool_init(struct page_pool *pool,
 
 	atomic_set(&pool->pages_state_release_cnt, 0);
 
+	/* Driver calling page_pool_create() also call page_pool_destroy() */
+	refcount_set(&pool->user_cnt, 1);
+
 	if (pool->p.flags & PP_FLAG_DMA_MAP)
 		get_device(pool->p.dev);
 
@@ -70,6 +73,7 @@  struct page_pool *page_pool_create(const struct page_pool_params *params)
 		kfree(pool);
 		return ERR_PTR(err);
 	}
+
 	return pool;
 }
 EXPORT_SYMBOL(page_pool_create);
@@ -356,6 +360,10 @@  static void __warn_in_flight(struct page_pool *pool)
 
 void __page_pool_free(struct page_pool *pool)
 {
+	/* Only last user actually free/release resources */
+	if (!page_pool_put(pool))
+		return;
+
 	WARN(pool->alloc.count, "API usage violation");
 	WARN(!ptr_ring_empty(&pool->ring), "ptr_ring is not empty");
 
diff --git a/net/core/xdp.c b/net/core/xdp.c
index b29d7b513a18..e57a0eb1feb7 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -372,6 +372,9 @@  int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 
 	mutex_unlock(&mem_id_lock);
 
+	if (type == MEM_TYPE_PAGE_POOL)
+		page_pool_get(xdp_alloc->page_pool);
+
 	trace_mem_connect(xdp_alloc, xdp_rxq);
 	return 0;
 err:
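
For the shared RX-queue use-case (2), my reading of the refcounting above gives
roughly the following lifetime. This is an illustrative sketch only: the
foo_* name is made up, rxq_a/rxq_b are assumed to be already registered with
xdp_rxq_info_reg(), and the reference held by each mem-model registration is
dropped asynchronously via the disconnect path (ending in __page_pool_free())
once in-flight pages have drained.

#include <net/page_pool.h>
#include <net/xdp.h>

static int foo_shared_pool_example(struct xdp_rxq_info *rxq_a,
                                   struct xdp_rxq_info *rxq_b,
                                   const struct page_pool_params *params)
{
        struct page_pool *pool;
        int err;

        pool = page_pool_create(params);        /* user_cnt == 1 (creator)  */
        if (IS_ERR(pool))
                return PTR_ERR(pool);

        /* Each successful mem-model registration takes one reference. */
        err = xdp_rxq_info_reg_mem_model(rxq_a, MEM_TYPE_PAGE_POOL, pool);
        if (err)
                goto out_destroy;               /* user_cnt still == 1      */

        err = xdp_rxq_info_reg_mem_model(rxq_b, MEM_TYPE_PAGE_POOL, pool);
        if (err)
                goto out_unreg_a;               /* user_cnt == 2            */

        /* ... both net_devices receive packets from the same pool ...      */

        xdp_rxq_info_unreg(rxq_b);      /* eventually drops one reference   */
out_unreg_a:
        xdp_rxq_info_unreg(rxq_a);      /* eventually drops one reference   */
out_destroy:
        page_pool_destroy(pool);        /* drops the creator reference; the
                                         * last reference to go actually
                                         * frees the pool                   */
        return err;
}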