[v3,net-next,0/7] net: ethernet: ti: cpsw: Add XDP support

Message ID 20190605132009.10734-1-ivan.khoronzhuk@linaro.org

Ivan Khoronzhuk June 5, 2019, 1:20 p.m. UTC
This patchset adds XDP support to the TI cpsw driver and bases it on the
page_pool allocator. It was verified with af_xdp socket drop,
af_xdp l2f, and eBPF XDP_DROP, XDP_REDIRECT, XDP_PASS, XDP_TX.

It was verified with the following configs enabled:
CONFIG_JIT=y
CONFIG_BPFILTER=y
CONFIG_BPF_SYSCALL=y
CONFIG_XDP_SOCKETS=y
CONFIG_BPF_EVENTS=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_BPF_JIT=y
CONFIG_CGROUP_BPF=y

Link to previous v2:
https://lkml.org/lkml/2019/5/30/1315

Regular tests with iperf2 were also done in order to verify the impact on
regular netstack performance, compared with the base commit:
https://pastebin.com/JSMT0iZ4

v2..v3:
- each rxq and ndev has its own page pool

v1..v2:
- combined xdp_xmit functions
- used page allocation w/o refcnt juggle
- unmapped page for skb netstack
- moved rxq/page pool allocation to open/close pair
- added several preliminary patches:
  net: page_pool: add helper function to retrieve dma addresses
  net: page_pool: add helper function to unmap dma addresses
  net: ethernet: ti: cpsw: use cpsw as drv data
  net: ethernet: ti: cpsw_ethtool: simplify slave loops


Based on net-next/master

Ilias Apalodimas (2):
  net: page_pool: add helper function to retrieve dma addresses
  net: page_pool: add helper function to unmap dma addresses

Ivan Khoronzhuk (5):
  net: ethernet: ti: cpsw: use cpsw as drv data
  net: ethernet: ti: cpsw_ethtool: simplify slave loops
  net: ethernet: ti: davinci_cpdma: add dma mapped submit
  net: ethernet: ti: davinci_cpdma: return handler status
  net: ethernet: ti: cpsw: add XDP support

 drivers/net/ethernet/ti/Kconfig         |   1 +
 drivers/net/ethernet/ti/cpsw.c          | 555 ++++++++++++++++++++----
 drivers/net/ethernet/ti/cpsw_ethtool.c  | 100 ++++-
 drivers/net/ethernet/ti/cpsw_priv.h     |   9 +-
 drivers/net/ethernet/ti/davinci_cpdma.c | 122 ++++--
 drivers/net/ethernet/ti/davinci_cpdma.h |   6 +-
 drivers/net/ethernet/ti/davinci_emac.c  |  18 +-
 include/net/page_pool.h                 |   6 +
 net/core/page_pool.c                    |   7 +
 9 files changed, 685 insertions(+), 139 deletions(-)

Comments

David Miller June 5, 2019, 7:14 p.m. UTC | #1
From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Date: Wed,  5 Jun 2019 16:20:02 +0300

> This patchset adds XDP support for TI cpsw driver and base it on
> page_pool allocator. It was verified on af_xdp socket drop,
> af_xdp l2f, ebpf XDP_DROP, XDP_REDIRECT, XDP_PASS, XDP_TX.

Jesper et al., please give this a good once over.

Thank you.
Jesper Dangaard Brouer June 6, 2019, 8:08 a.m. UTC | #2
On Wed, 05 Jun 2019 12:14:50 -0700 (PDT)
David Miller <davem@davemloft.net> wrote:

> From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
> Date: Wed,  5 Jun 2019 16:20:02 +0300
> 
> > This patchset adds XDP support for TI cpsw driver and base it on
> > page_pool allocator. It was verified on af_xdp socket drop,
> > af_xdp l2f, ebpf XDP_DROP, XDP_REDIRECT, XDP_PASS, XDP_TX.  
> 
> Jesper et al., please give this a good once over.

The issue with merging this is that I recently discovered two bugs with the
page_pool API, when using DMA-mappings, which result in missing
DMA-unmaps.  These bugs are not "exposed" yet, but will get exposed
now with this driver.

The two bugs are:

#1: in-flight packet-pages can still be on a remote driver's TX queue
while the XDP RX driver manages to unregister the page_pool (waiting 1 RCU
period is not enough).

#2: this patchset also introduces page_pool_unmap_page(), which is
called before an XDP frame travels into the network stack (as no callback
exists, yet).  But the CPUMAP redirect *also* needs to call this, else we
"leak"/miss the DMA-unmap.

I do have a working prototype that fixes these two bugs.  I guess I'm
under pressure to send it to the list soon...
Ivan Khoronzhuk June 6, 2019, 1:24 p.m. UTC | #3
On Thu, Jun 06, 2019 at 10:08:50AM +0200, Jesper Dangaard Brouer wrote:
>The issue with merging this, is that I recently discovered two bug with
>page_pool API, when using DMA-mappings, which result in missing
>DMA-unmap's.  These bugs are not "exposed" yet, but will get exposed
>now with this drivers.
>
>The two bugs are:
>
>#1: in-flight packet-pages can still be on remote drivers TX queue,
>while XDP RX driver manage to unregister the page_pool (waiting 1 RCU
>period is not enough).
>
>#2: this patchset also introduce page_pool_unmap_page(), which is
>called before an XDP frame travel into networks stack (as no callback
>exist, yet).  But the CPUMAP redirect *also* needs to call this, else we
>"leak"/miss DMA-unmap.
>
>I do have a working prototype, that fixes these two bugs.  I guess, I'm
>under pressure to send this to the list soon...

In the particular cpsw case there is no DMA-unmap issue, and if there are no
changes in the page_pool API then no changes to the driver are required.
page_pool_unmap_page() is used here for consistency reasons, with the intent
that it can be inherited/reused by other SoCs where it is relevant.

One potential change, as you mentioned, is dropping page_pool_destroy(),
which can now look like:

@@ -571,7 +571,6 @@ static void cpsw_destroy_rx_pool(struct cpsw_priv *priv, int ch)
                return;
 
        xdp_rxq_info_unreg(&priv->xdp_rxq[ch]);
-       page_pool_destroy(priv->page_pool[ch]);
        priv->page_pool[ch] = NULL;
 }

From what I know there is an ongoing change adding switchdev support to cpsw
that can change a lot and require more work to rebase/test this patchset, so I
hope this can be merged before it.
David Miller June 6, 2019, 8:56 p.m. UTC | #4
From: Jesper Dangaard Brouer <brouer@redhat.com>
Date: Thu, 6 Jun 2019 10:08:50 +0200

> I do have a working prototype, that fixes these two bugs.  I guess, I'm
> under pressure to send this to the list soon...

So I'm going to mark this CPSW patchset as "deferred" while these bugs are
worked out.