Message ID | 20230530150035.1943669-10-aleksander.lobakin@intel.com
---|---
State | Changes Requested
Delegated to | Netdev Maintainers
Series | net: intel: start The Great Code Dedup + Page Pool for iavf
On Tue, 2023-05-30 at 17:00 +0200, Alexander Lobakin wrote:
> Now that the IAVF driver simply uses dev_alloc_page() + free_page() with
> no custom recycling logics and one whole page per frame, it can easily
> be switched to using Page Pool API instead.
> Introduce libie_rx_page_pool_create(), a wrapper for creating a PP with
> the default libie settings applicable to all Intel hardware, and replace
> the alloc/free calls with the corresponding PP functions, including the
> newly added sync-for-CPU helpers. Use skb_mark_for_recycle() to bring
> back the recycling and restore the initial performance.
>
> From the important object code changes, worth mentioning that
> __iavf_alloc_rx_pages() is now inlined due to the greatly reduced size.
> The resulting driver is on par with the pre-series code and 1-2% slower
> than the "optimized" version right before the recycling removal.
> But the number of locs and object code bytes slaughtered is much more
> important here after all, not speaking of that there's still a vast
> space for optimization and improvements.
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
>  drivers/net/ethernet/intel/Kconfig          |   1 +
>  drivers/net/ethernet/intel/iavf/iavf_txrx.c | 126 +++++---------------
>  drivers/net/ethernet/intel/iavf/iavf_txrx.h |   8 +-
>  drivers/net/ethernet/intel/libie/rx.c       |  28 +++++
>  include/linux/net/intel/libie/rx.h          |   5 +-
>  5 files changed, 69 insertions(+), 99 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
> index cec4a938fbd0..a368afc42b8d 100644
> --- a/drivers/net/ethernet/intel/Kconfig
> +++ b/drivers/net/ethernet/intel/Kconfig
> @@ -86,6 +86,7 @@ config E1000E_HWTS
>
>  config LIBIE
>  	tristate
> +	select PAGE_POOL
>  	help
>  	  libie (Intel Ethernet library) is a common library containing
>  	  routines shared by several Intel Ethernet drivers.
> diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
> index c33a3d681c83..1de67a70f045 100644
> --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
> +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
> @@ -3,7 +3,6 @@
>
>  #include <linux/net/intel/libie/rx.h>
>  #include <linux/prefetch.h>
> -#include <net/page_pool.h>
>
>  #include "iavf.h"
>  #include "iavf_trace.h"
> @@ -691,8 +690,6 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring)
>   **/
>  void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
>  {
> -	u16 i;
> -
>  	/* ring already cleared, nothing to do */
>  	if (!rx_ring->rx_pages)
>  		return;
> @@ -703,28 +700,17 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
>  	}
>
>  	/* Free all the Rx ring sk_buffs */
> -	for (i = 0; i < rx_ring->count; i++) {
> +	for (u32 i = 0; i < rx_ring->count; i++) {

Did we make a change to our coding style to allow declaration of
variables inside of for statements? Just wondering if this is a change
since the recent updates to the ISO C standard, or if this doesn't
match up with what we would expect per the coding standard.

>  		struct page *page = rx_ring->rx_pages[i];
> -		dma_addr_t dma;
>
>  		if (!page)
>  			continue;
>
> -		dma = page_pool_get_dma_addr(page);
> -
>  		/* Invalidate cache lines that may have been written to by
>  		 * device so that we avoid corrupting memory.
>  		 */
> -		dma_sync_single_range_for_cpu(rx_ring->dev, dma,
> -					      LIBIE_SKB_HEADROOM,
> -					      LIBIE_RX_BUF_LEN,
> -					      DMA_FROM_DEVICE);
> -
> -		/* free resources associated with mapping */
> -		dma_unmap_page_attrs(rx_ring->dev, dma, LIBIE_RX_TRUESIZE,
> -				     DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
> -
> -		__free_page(page);
> +		page_pool_dma_sync_full_for_cpu(rx_ring->pool, page);
> +		page_pool_put_full_page(rx_ring->pool, page, false);
>  	}
>
>  	rx_ring->next_to_clean = 0;
> @@ -739,10 +725,15 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
>   **/
>  void iavf_free_rx_resources(struct iavf_ring *rx_ring)
>  {
> +	struct device *dev = rx_ring->pool->p.dev;
> +
>  	iavf_clean_rx_ring(rx_ring);
>  	kfree(rx_ring->rx_pages);
>  	rx_ring->rx_pages = NULL;
>
> +	page_pool_destroy(rx_ring->pool);
> +	rx_ring->dev = dev;
> +
>  	if (rx_ring->desc) {
>  		dma_free_coherent(rx_ring->dev, rx_ring->size,
>  				  rx_ring->desc, rx_ring->dma);

Not a fan of this switching back and forth between being a page pool
pointer and a dev pointer. Seems problematic as it is easily
misinterpreted. I would say that at a minimum stick to either it is
page_pool(Rx) or dev(Tx) on a ring type basis.

> @@ -759,13 +750,15 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring)
>  int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
>  {
>  	struct device *dev = rx_ring->dev;
> +	struct page_pool *pool;
> +	int ret = -ENOMEM;
>
>  	/* warn if we are about to overwrite the pointer */
>  	WARN_ON(rx_ring->rx_pages);
>  	rx_ring->rx_pages = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_pages),
>  				    GFP_KERNEL);
>  	if (!rx_ring->rx_pages)
> -		return -ENOMEM;
> +		return ret;
>
>  	u64_stats_init(&rx_ring->syncp);
>
> @@ -781,15 +774,27 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
>  		goto err;
>  	}
>
> +	pool = libie_rx_page_pool_create(&rx_ring->q_vector->napi,
> +					 rx_ring->count);
> +	if (IS_ERR(pool)) {
> +		ret = PTR_ERR(pool);
> +		goto err_free_dma;
> +	}
> +
> +	rx_ring->pool = pool;
> +
>  	rx_ring->next_to_clean = 0;
>  	rx_ring->next_to_use = 0;
>
>  	return 0;
> +
> +err_free_dma:
> +	dma_free_coherent(dev, rx_ring->size, rx_ring->desc, rx_ring->dma);
>  err:
>  	kfree(rx_ring->rx_pages);
>  	rx_ring->rx_pages = NULL;
>
> -	return -ENOMEM;
> +	return ret;
>  }
>
>  /**

This setup works for iavf, however for i40e/ice you may run into issues
since the setup_rx_descriptors call is also used to setup the ethtool
loopback test w/o a napi struct as I recall so there may not be a
q_vector.

> @@ -810,40 +815,6 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
>  	writel(val, rx_ring->tail);
>  }
>
> -/**
> - * iavf_alloc_mapped_page - allocate and map a new page
> - * @dev: device used for DMA mapping
> - * @gfp: GFP mask to allocate page
> - *
> - * Returns a new &page if the it was successfully allocated, %NULL otherwise.
> - **/
> -static struct page *iavf_alloc_mapped_page(struct device *dev, gfp_t gfp)
> -{
> -	struct page *page;
> -	dma_addr_t dma;
> -
> -	/* alloc new page for storage */
> -	page = __dev_alloc_page(gfp);
> -	if (unlikely(!page))
> -		return NULL;
> -
> -	/* map page for use */
> -	dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE,
> -				 IAVF_RX_DMA_ATTR);
> -
> -	/* if mapping failed free memory back to system since
> -	 * there isn't much point in holding memory we can't use
> -	 */
> -	if (dma_mapping_error(dev, dma)) {
> -		__free_page(page);
> -		return NULL;
> -	}
> -
> -	page_pool_set_dma_addr(page, dma);
> -
> -	return page;
> -}
> -
>  /**
>   * iavf_receive_skb - Send a completed packet up the stack
>   * @rx_ring: rx ring in play
> @@ -877,7 +848,7 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring,
>  static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
>  				 gfp_t gfp)
>  {
> -	struct device *dev = rx_ring->dev;
> +	struct page_pool *pool = rx_ring->pool;
>  	u32 ntu = rx_ring->next_to_use;
>  	union iavf_rx_desc *rx_desc;
>
> @@ -891,7 +862,7 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
>  		struct page *page;
>  		dma_addr_t dma;
>
> -		page = iavf_alloc_mapped_page(dev, gfp);
> +		page = page_pool_alloc_pages(pool, gfp);
>  		if (!page) {
>  			rx_ring->rx_stats.alloc_page_failed++;
>  			break;
>  		}
>
> @@ -900,11 +871,6 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
>  		rx_ring->rx_pages[ntu] = page;
>  		dma = page_pool_get_dma_addr(page);
>
> -		/* sync the buffer for use by the device */
> -		dma_sync_single_range_for_device(dev, dma, LIBIE_SKB_HEADROOM,
> -						 LIBIE_RX_BUF_LEN,
> -						 DMA_FROM_DEVICE);
> -
>  		/* Refresh the desc even if buffer_addrs didn't change
>  		 * because each write-back erases this info.
>  		 */
> @@ -1091,21 +1057,6 @@ static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size)
>  			LIBIE_SKB_HEADROOM, size, LIBIE_RX_TRUESIZE);
>  }
>
> -/**
> - * iavf_sync_rx_page - Synchronize received data for use
> - * @dev: device used for DMA mapping
> - * @page: Rx page containing the data
> - * @size: size of the received data
> - *
> - * This function will synchronize the Rx buffer for use by the CPU.
> - */
> -static void iavf_sync_rx_page(struct device *dev, struct page *page, u32 size)
> -{
> -	dma_sync_single_range_for_cpu(dev, page_pool_get_dma_addr(page),
> -				      LIBIE_SKB_HEADROOM, size,
> -				      DMA_FROM_DEVICE);
> -}
> -
>  /**
>   * iavf_build_skb - Build skb around an existing buffer
>   * @page: Rx page to with the data
> @@ -1128,6 +1079,8 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size)
>  	if (unlikely(!skb))
>  		return NULL;
>
> +	skb_mark_for_recycle(skb);
> +
>  	/* update pointers within the skb to store the data */
>  	skb_reserve(skb, LIBIE_SKB_HEADROOM);
>  	__skb_put(skb, size);
> @@ -1135,19 +1088,6 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size)
>  	return skb;
>  }
>
> -/**
> - * iavf_unmap_rx_page - Unmap used page
> - * @dev: device used for DMA mapping
> - * @page: page to release
> - */
> -static void iavf_unmap_rx_page(struct device *dev, struct page *page)
> -{
> -	dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page),
> -			     LIBIE_RX_TRUESIZE, DMA_FROM_DEVICE,
> -			     IAVF_RX_DMA_ATTR);
> -	page_pool_set_dma_addr(page, 0);
> -}
> -
>  /**
>   * iavf_is_non_eop - process handling of non-EOP buffers
>   * @rx_ring: Rx ring being processed
> @@ -1190,8 +1130,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
>  	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
>  	const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
>  	u32 to_refill = IAVF_DESC_UNUSED(rx_ring);
> +	struct page_pool *pool = rx_ring->pool;
>  	struct sk_buff *skb = rx_ring->skb;
> -	struct device *dev = rx_ring->dev;
>  	u32 ntc = rx_ring->next_to_clean;
>  	u32 ring_size = rx_ring->count;
>  	u32 cleaned_count = 0;
> @@ -1240,13 +1180,11 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
>  		 * stripped by the HW.
>  		 */
>  		if (unlikely(!size)) {
> -			iavf_unmap_rx_page(dev, page);
> -			__free_page(page);
> +			page_pool_recycle_direct(pool, page);
>  			goto skip_data;
>  		}
>
> -		iavf_sync_rx_page(dev, page, size);
> -		iavf_unmap_rx_page(dev, page);
> +		page_pool_dma_sync_for_cpu(pool, page, size);
>
>  		/* retrieve a buffer from the ring */
>  		if (skb)
> @@ -1256,7 +1194,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
>
>  		/* exit if we failed to retrieve a buffer */
>  		if (!skb) {
> -			__free_page(page);
> +			page_pool_put_page(pool, page, size, true);
>  			rx_ring->rx_stats.alloc_buff_failed++;
>  			break;
>  		}
> diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
> index 1421e90c7c4e..8fbe549ce6a5 100644
> --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
> +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
> @@ -83,9 +83,6 @@ enum iavf_dyn_idx_t {
>
>  #define iavf_rx_desc iavf_32byte_rx_desc
>
> -#define IAVF_RX_DMA_ATTR \
> -	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
> -
>  /**
>   * iavf_test_staterr - tests bits in Rx descriptor status and error fields
>   * @rx_desc: pointer to receive descriptor (in le64 format)
> @@ -240,7 +237,10 @@ struct iavf_rx_queue_stats {
>  struct iavf_ring {
>  	struct iavf_ring *next;		/* pointer to next ring in q_vector */
>  	void *desc;			/* Descriptor ring memory */
> -	struct device *dev;		/* Used for DMA mapping */
> +	union {
> +		struct page_pool *pool;	/* Used for Rx page management */
> +		struct device *dev;	/* Used for DMA mapping on Tx */
> +	};
>  	struct net_device *netdev;	/* netdev ring maps to */
>  	union {
>  		struct iavf_tx_buffer *tx_bi;

Would it make more sense to have the page pool in the q_vector rather
than the ring? Essentially the page pool is associated per napi
instance so it seems like it would make more sense to store it with the
napi struct rather than potentially have multiple instances per napi.

> diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
> index f503476d8eef..d68eab76593c 100644
> --- a/drivers/net/ethernet/intel/libie/rx.c
> +++ b/drivers/net/ethernet/intel/libie/rx.c
> @@ -105,6 +105,34 @@ const struct libie_rx_ptype_parsed libie_rx_ptype_lut[LIBIE_RX_PTYPE_NUM] = {
>  };
>  EXPORT_SYMBOL_NS_GPL(libie_rx_ptype_lut, LIBIE);
>
> +/* Page Pool */
> +
> +/**
> + * libie_rx_page_pool_create - create a PP with the default libie settings
> + * @napi: &napi_struct covering this PP (no usage outside its poll loops)
> + * @size: size of the PP, usually simply Rx queue len
> + *
> + * Returns &page_pool on success, casted -errno on failure.
> + */
> +struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
> +					    u32 size)
> +{
> +	const struct page_pool_params pp = {
> +		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> +		.order		= LIBIE_RX_PAGE_ORDER,
> +		.pool_size	= size,
> +		.nid		= NUMA_NO_NODE,
> +		.dev		= napi->dev->dev.parent,
> +		.napi		= napi,
> +		.dma_dir	= DMA_FROM_DEVICE,
> +		.max_len	= LIBIE_RX_BUF_LEN,
> +		.offset		= LIBIE_SKB_HEADROOM,
> +	};
> +
> +	return page_pool_create(&pp);
> +}
> +EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
> +
>  MODULE_AUTHOR("Intel Corporation");
>  MODULE_DESCRIPTION("Intel(R) Ethernet common library");
>  MODULE_LICENSE("GPL");
> diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
> index 3e8d0d5206e1..b86cadd281f1 100644
> --- a/include/linux/net/intel/libie/rx.h
> +++ b/include/linux/net/intel/libie/rx.h
> @@ -5,7 +5,7 @@
>  #define __LIBIE_RX_H
>
>  #include <linux/if_vlan.h>
> -#include <linux/netdevice.h>
> +#include <net/page_pool.h>
>
>  /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
>   * bitfield struct.
> @@ -160,4 +160,7 @@ static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
>  /* Maximum frame size minus LL overhead */
>  #define LIBIE_MAX_MTU (LIBIE_MAX_RX_FRM_LEN - LIBIE_RX_LL_LEN)
>
> +struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
> +					    u32 size);
> +
>  #endif /* __LIBIE_RX_H */
From: Alexander H Duyck <alexander.duyck@gmail.com>
Date: Wed, 31 May 2023 09:19:06 -0700

> On Tue, 2023-05-30 at 17:00 +0200, Alexander Lobakin wrote:
>> Now that the IAVF driver simply uses dev_alloc_page() + free_page() with
>> no custom recycling logics and one whole page per frame, it can easily
>> be switched to using Page Pool API instead.

[...]

>> @@ -691,8 +690,6 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring)
>>   **/
>>  void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
>>  {
>> -	u16 i;
>> -
>>  	/* ring already cleared, nothing to do */
>>  	if (!rx_ring->rx_pages)
>>  		return;
>> @@ -703,28 +700,17 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
>>  	}
>>
>>  	/* Free all the Rx ring sk_buffs */
>> -	for (i = 0; i < rx_ring->count; i++) {
>> +	for (u32 i = 0; i < rx_ring->count; i++) {
>
> Did we make a change to our coding style to allow declaration of
> variables inside of for statements? Just wondering if this is a change
> since the recent updates to the ISO C standard, or if this doesn't
> match up with what we would expect per the coding standard.

It's optional right now, nobody would object declaring it either way.
Doing it inside is allowed since we switched to C11, right.
Here I did that because my heart was breaking to see this little u16
alone (and yeah, u16 on the stack).

>
>>  		struct page *page = rx_ring->rx_pages[i];
>> -		dma_addr_t dma;
>>
>>  		if (!page)
>>  			continue;
>>
>> -		dma = page_pool_get_dma_addr(page);
>> -
>>  		/* Invalidate cache lines that may have been written to by
>>  		 * device so that we avoid corrupting memory.
>>  		 */
>> -		dma_sync_single_range_for_cpu(rx_ring->dev, dma,
>> -					      LIBIE_SKB_HEADROOM,
>> -					      LIBIE_RX_BUF_LEN,
>> -					      DMA_FROM_DEVICE);
>> -
>> -		/* free resources associated with mapping */
>> -		dma_unmap_page_attrs(rx_ring->dev, dma, LIBIE_RX_TRUESIZE,
>> -				     DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
>> -
>> -		__free_page(page);
>> +		page_pool_dma_sync_full_for_cpu(rx_ring->pool, page);
>> +		page_pool_put_full_page(rx_ring->pool, page, false);
>>  	}
>>
>>  	rx_ring->next_to_clean = 0;
>> @@ -739,10 +725,15 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
>>   **/
>>  void iavf_free_rx_resources(struct iavf_ring *rx_ring)
>>  {
>> +	struct device *dev = rx_ring->pool->p.dev;
>> +
>>  	iavf_clean_rx_ring(rx_ring);
>>  	kfree(rx_ring->rx_pages);
>>  	rx_ring->rx_pages = NULL;
>>
>> +	page_pool_destroy(rx_ring->pool);
>> +	rx_ring->dev = dev;
>> +
>>  	if (rx_ring->desc) {
>>  		dma_free_coherent(rx_ring->dev, rx_ring->size,
>>  				  rx_ring->desc, rx_ring->dma);
>
> Not a fan of this switching back and forth between being a page pool
> pointer and a dev pointer. Seems problematic as it is easily
> misinterpreted. I would say that at a minimum stick to either it is
> page_pool(Rx) or dev(Tx) on a ring type basis.

The problem is that page_pool has lifetime from ifup to ifdown, while
its ring lives longer. So I had to do something with this, but also I
didn't want to have 2 pointers at the same time since it's redundant
and +8 bytes to the ring for nothing.

[...]

> This setup works for iavf, however for i40e/ice you may run into issues
> since the setup_rx_descriptors call is also used to setup the ethtool
> loopback test w/o a napi struct as I recall so there may not be a
> q_vector.

I'll handle that. Somehow :D Thanks for noticing, I'll take a look
whether I should do something right now or it can be done later when
switching the actual mentioned drivers.

[...]

>> @@ -240,7 +237,10 @@ struct iavf_rx_queue_stats {
>>  struct iavf_ring {
>>  	struct iavf_ring *next;		/* pointer to next ring in q_vector */
>>  	void *desc;			/* Descriptor ring memory */
>> -	struct device *dev;		/* Used for DMA mapping */
>> +	union {
>> +		struct page_pool *pool;	/* Used for Rx page management */
>> +		struct device *dev;	/* Used for DMA mapping on Tx */
>> +	};
>>  	struct net_device *netdev;	/* netdev ring maps to */
>>  	union {
>>  		struct iavf_tx_buffer *tx_bi;
>
> Would it make more sense to have the page pool in the q_vector rather
> than the ring? Essentially the page pool is associated per napi
> instance so it seems like it would make more sense to store it with the
> napi struct rather than potentially have multiple instances per napi.

As per Page Pool design, you should have it per ring. Plus you have
rxq_info (XDP-related structure), which is also per-ring and
participates in recycling in some cases. So I wouldn't complicate.
I went down the chain and haven't found any place where having more than
1 PP per NAPI would break anything. If I got it correctly, Jakub's
optimization discourages having 1 PP per several NAPIs (or scheduling
one NAPI on different CPUs), but not the other way around. The goal was
to exclude concurrent access to one PP from different threads, and here
it's impossible.

Lemme know. I can always disable NAPI optimization for cases when one
vector is shared by several queues -- and it's not a usual case for
these NICs anyway -- but I haven't found a reason for that.

[...]

Thanks,
Olek
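For reference, the two declaration styles under discussion; the
loop-scoped form became legal kernel C when the tree moved to
-std=gnu11 (v5.18+). Illustrative fragment only -- do_work() and the
demo functions are placeholders, not iavf code:

	static void style_demo_old(u32 count)
	{
		u16 i;		/* pre-gnu11: counter at function scope */

		for (i = 0; i < count; i++)
			do_work(i);
	}

	static void style_demo_new(u32 count)
	{
		/* gnu11: counter scoped to the loop, i does not exist
		 * (and cannot be reused stale) after it
		 */
		for (u32 i = 0; i < count; i++)
			do_work(i);
	}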
On Fri, Jun 2, 2023 at 9:31 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Alexander H Duyck <alexander.duyck@gmail.com>
> Date: Wed, 31 May 2023 09:19:06 -0700
>
> > On Tue, 2023-05-30 at 17:00 +0200, Alexander Lobakin wrote:
> >> Now that the IAVF driver simply uses dev_alloc_page() + free_page() with
> >> no custom recycling logics and one whole page per frame, it can easily
> >> be switched to using Page Pool API instead.
>
> [...]
>
> >> @@ -691,8 +690,6 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring)
> >>   **/
> >>  void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
> >>  {
> >> -	u16 i;
> >> -
> >>  	/* ring already cleared, nothing to do */
> >>  	if (!rx_ring->rx_pages)
> >>  		return;
> >> @@ -703,28 +700,17 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
> >>  	}
> >>
> >>  	/* Free all the Rx ring sk_buffs */
> >> -	for (i = 0; i < rx_ring->count; i++) {
> >> +	for (u32 i = 0; i < rx_ring->count; i++) {
> >
> > Did we make a change to our coding style to allow declaration of
> > variables inside of for statements? Just wondering if this is a change
> > since the recent updates to the ISO C standard, or if this doesn't
> > match up with what we would expect per the coding standard.
>
> It's optional right now, nobody would object declaring it either way.
> Doing it inside is allowed since we switched to C11, right.
> Here I did that because my heart was breaking to see this little u16
> alone (and yeah, u16 on the stack).

Yeah, that was back when I was declaring stack variables the exact same
size as the ring parameters. So u16 should match the size of
rx_ring->count, not that it matters. It was just a quirk I had at the
time.

> >
> >>  		struct page *page = rx_ring->rx_pages[i];
> >> -		dma_addr_t dma;
> >>
> >>  		if (!page)
> >>  			continue;
> >>
> >> -		dma = page_pool_get_dma_addr(page);
> >> -
> >>  		/* Invalidate cache lines that may have been written to by
> >>  		 * device so that we avoid corrupting memory.
> >>  		 */
> >> -		dma_sync_single_range_for_cpu(rx_ring->dev, dma,
> >> -					      LIBIE_SKB_HEADROOM,
> >> -					      LIBIE_RX_BUF_LEN,
> >> -					      DMA_FROM_DEVICE);
> >> -
> >> -		/* free resources associated with mapping */
> >> -		dma_unmap_page_attrs(rx_ring->dev, dma, LIBIE_RX_TRUESIZE,
> >> -				     DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
> >> -
> >> -		__free_page(page);
> >> +		page_pool_dma_sync_full_for_cpu(rx_ring->pool, page);
> >> +		page_pool_put_full_page(rx_ring->pool, page, false);
> >>  	}
> >>
> >>  	rx_ring->next_to_clean = 0;
> >> @@ -739,10 +725,15 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
> >>   **/
> >>  void iavf_free_rx_resources(struct iavf_ring *rx_ring)
> >>  {
> >> +	struct device *dev = rx_ring->pool->p.dev;
> >> +
> >>  	iavf_clean_rx_ring(rx_ring);
> >>  	kfree(rx_ring->rx_pages);
> >>  	rx_ring->rx_pages = NULL;
> >>
> >> +	page_pool_destroy(rx_ring->pool);
> >> +	rx_ring->dev = dev;
> >> +
> >>  	if (rx_ring->desc) {
> >>  		dma_free_coherent(rx_ring->dev, rx_ring->size,
> >>  				  rx_ring->desc, rx_ring->dma);
> >
> > Not a fan of this switching back and forth between being a page pool
> > pointer and a dev pointer. Seems problematic as it is easily
> > misinterpreted. I would say that at a minimum stick to either it is
> > page_pool(Rx) or dev(Tx) on a ring type basis.
>
> The problem is that page_pool has lifetime from ifup to ifdown, while
> its ring lives longer. So I had to do something with this, but also I
> didn't want to have 2 pointers at the same time since it's redundant
> and +8 bytes to the ring for nothing.

It might be better to just go with NULL rather than populating it w/
two different possible values. Then at least you know if it is an
rx_ring it is a page_pool and if it is a tx_ring it is dev. You can
reset to the page pool when you repopulate the rest of the ring.

> > This setup works for iavf, however for i40e/ice you may run into issues
> > since the setup_rx_descriptors call is also used to setup the ethtool
> > loopback test w/o a napi struct as I recall so there may not be a
> > q_vector.
>
> I'll handle that. Somehow :D Thanks for noticing, I'll take a look
> whether I should do something right now or it can be done later when
> switching the actual mentioned drivers.
>
> [...]
>
> >> @@ -240,7 +237,10 @@ struct iavf_rx_queue_stats {
> >>  struct iavf_ring {
> >>  	struct iavf_ring *next;		/* pointer to next ring in q_vector */
> >>  	void *desc;			/* Descriptor ring memory */
> >> -	struct device *dev;		/* Used for DMA mapping */
> >> +	union {
> >> +		struct page_pool *pool;	/* Used for Rx page management */
> >> +		struct device *dev;	/* Used for DMA mapping on Tx */
> >> +	};
> >>  	struct net_device *netdev;	/* netdev ring maps to */
> >>  	union {
> >>  		struct iavf_tx_buffer *tx_bi;
> >
> > Would it make more sense to have the page pool in the q_vector rather
> > than the ring? Essentially the page pool is associated per napi
> > instance so it seems like it would make more sense to store it with the
> > napi struct rather than potentially have multiple instances per napi.
>
> As per Page Pool design, you should have it per ring. Plus you have
> rxq_info (XDP-related structure), which is also per-ring and
> participates in recycling in some cases. So I wouldn't complicate.
> I went down the chain and haven't found any place where having more than
> 1 PP per NAPI would break anything. If I got it correctly, Jakub's
> optimization discourages having 1 PP per several NAPIs (or scheduling
> one NAPI on different CPUs), but not the other way around. The goal was
> to exclude concurrent access to one PP from different threads, and here
> it's impossible.

The xdp_rxq can be mapped many:1 to the page pool if I am not mistaken.

The only reason why I am a fan of trying to keep the page_pool tightly
associated with the napi instance is because the napi instance is what
essentially is guaranteeing the page_pool is consistent as it is only
accessed by that one napi instance.

> Lemme know. I can always disable NAPI optimization for cases when one
> vector is shared by several queues -- and it's not a usual case for
> these NICs anyway -- but I haven't found a reason for that.

I suppose we should be fine if we have a many to one mapping though. As
you said the issue would be if multiple NAPI were accessing the same
page pool.
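A minimal sketch of the NULL-based variant suggested above --
hypothetical, not part of the posted patch, and it assumes the setup
path can re-derive the struct device without reading rx_ring->dev,
which is exactly the catch Olek raises below:

	/* Hypothetical reshuffle of iavf_free_rx_resources(): the union
	 * member of an Rx ring is then always a valid page_pool or NULL,
	 * never a struct device; iavf_setup_rx_descriptors() repopulates
	 * it on the next ifup.
	 */
	void iavf_free_rx_resources(struct iavf_ring *rx_ring)
	{
		struct device *dev = rx_ring->pool->p.dev;

		iavf_clean_rx_ring(rx_ring);
		kfree(rx_ring->rx_pages);
		rx_ring->rx_pages = NULL;

		page_pool_destroy(rx_ring->pool);
		rx_ring->pool = NULL;	/* instead of rx_ring->dev = dev */

		if (rx_ring->desc) {
			dma_free_coherent(dev, rx_ring->size, rx_ring->desc,
					  rx_ring->dma);
			rx_ring->desc = NULL;
		}
	}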
From: Alexander Duyck <alexander.duyck@gmail.com>
Date: Fri, 2 Jun 2023 11:00:07 -0700

> On Fri, Jun 2, 2023 at 9:31 AM Alexander Lobakin
> <aleksander.lobakin@intel.com> wrote:

[...]

>>> Not a fan of this switching back and forth between being a page pool
>>> pointer and a dev pointer. Seems problematic as it is easily
>>> misinterpreted. I would say that at a minimum stick to either it is
>>> page_pool(Rx) or dev(Tx) on a ring type basis.
>>
>> The problem is that page_pool has lifetime from ifup to ifdown, while
>> its ring lives longer. So I had to do something with this, but also I
>> didn't want to have 2 pointers at the same time since it's redundant
>> and +8 bytes to the ring for nothing.
>
> It might be better to just go with NULL rather than populating it w/
> two different possible values. Then at least you know if it is an
> rx_ring it is a page_pool and if it is a tx_ring it is dev. You can
> reset to the page pool when you repopulate the rest of the ring.

IIRC I did that to have struct device pointer at the moment of creating
page_pools. But sounds reasonable, I'll take a look.

>
>>> This setup works for iavf, however for i40e/ice you may run into issues
>>> since the setup_rx_descriptors call is also used to setup the ethtool
>>> loopback test w/o a napi struct as I recall so there may not be a
>>> q_vector.
>>
>> I'll handle that. Somehow :D Thanks for noticing, I'll take a look
>> whether I should do something right now or it can be done later when
>> switching the actual mentioned drivers.
>>
>> [...]
>>
>>>> @@ -240,7 +237,10 @@ struct iavf_rx_queue_stats {
>>>>  struct iavf_ring {
>>>>  	struct iavf_ring *next;		/* pointer to next ring in q_vector */
>>>>  	void *desc;			/* Descriptor ring memory */
>>>> -	struct device *dev;		/* Used for DMA mapping */
>>>> +	union {
>>>> +		struct page_pool *pool;	/* Used for Rx page management */
>>>> +		struct device *dev;	/* Used for DMA mapping on Tx */
>>>> +	};
>>>>  	struct net_device *netdev;	/* netdev ring maps to */
>>>>  	union {
>>>>  		struct iavf_tx_buffer *tx_bi;
>>>
>>> Would it make more sense to have the page pool in the q_vector rather
>>> than the ring? Essentially the page pool is associated per napi
>>> instance so it seems like it would make more sense to store it with the
>>> napi struct rather than potentially have multiple instances per napi.
>>
>> As per Page Pool design, you should have it per ring. Plus you have
>> rxq_info (XDP-related structure), which is also per-ring and
>> participates in recycling in some cases. So I wouldn't complicate.
>> I went down the chain and haven't found any place where having more than
>> 1 PP per NAPI would break anything. If I got it correctly, Jakub's
>> optimization discourages having 1 PP per several NAPIs (or scheduling
>> one NAPI on different CPUs), but not the other way around. The goal was
>> to exclude concurrent access to one PP from different threads, and here
>> it's impossible.
>
> The xdp_rxq can be mapped many:1 to the page pool if I am not mistaken.
>
> The only reason why I am a fan of trying to keep the page_pool tightly
> associated with the napi instance is because the napi instance is what
> essentially is guaranteeing the page_pool is consistent as it is only
> accessed by that one napi instance.

Here we can't have more than one NAPI instance accessing one page_pool,
so I did that unconditionally. I'm a fan of what you've said, too :p

>
>> Lemme know. I can always disable NAPI optimization for cases when one
>> vector is shared by several queues -- and it's not a usual case for
>> these NICs anyway -- but I haven't found a reason for that.
>
> I suppose we should be fine if we have a many to one mapping though. As
> you said the issue would be if multiple NAPI were accessing the same
> page pool.

Thanks,
Olek
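The arrangement the thread converges on -- several pools, one NAPI --
sketched with the series' helper; example_setup_pools() and the
two-ring layout are hypothetical:

	/* Two Rx rings sharing one vector/NAPI instance: each ring keeps
	 * a private page_pool, only the NAPI binding is shared, so no
	 * pool is ever touched from two NAPI contexts -- the property
	 * Jakub's optimization relies on.
	 */
	static int example_setup_pools(struct iavf_ring *ring_a,
				       struct iavf_ring *ring_b,
				       struct napi_struct *napi)
	{
		struct page_pool *pool;

		pool = libie_rx_page_pool_create(napi, ring_a->count);
		if (IS_ERR(pool))
			return PTR_ERR(pool);
		ring_a->pool = pool;

		pool = libie_rx_page_pool_create(napi, ring_b->count);
		if (IS_ERR(pool)) {
			page_pool_destroy(ring_a->pool);
			ring_a->pool = NULL;
			return PTR_ERR(pool);
		}
		ring_b->pool = pool;

		return 0;
	}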
Now that the IAVF driver simply uses dev_alloc_page() + free_page() with
no custom recycling logics and one whole page per frame, it can easily
be switched to using Page Pool API instead.

Introduce libie_rx_page_pool_create(), a wrapper for creating a PP with
the default libie settings applicable to all Intel hardware, and replace
the alloc/free calls with the corresponding PP functions, including the
newly added sync-for-CPU helpers. Use skb_mark_for_recycle() to bring
back the recycling and restore the initial performance.

From the important object code changes, worth mentioning that
__iavf_alloc_rx_pages() is now inlined due to the greatly reduced size.
The resulting driver is on par with the pre-series code and 1-2% slower
than the "optimized" version right before the recycling removal.
But the number of locs and object code bytes slaughtered is much more
important here after all, not speaking of that there's still a vast
space for optimization and improvements.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 drivers/net/ethernet/intel/Kconfig          |   1 +
 drivers/net/ethernet/intel/iavf/iavf_txrx.c | 126 +++++---------------
 drivers/net/ethernet/intel/iavf/iavf_txrx.h |   8 +-
 drivers/net/ethernet/intel/libie/rx.c       |  28 +++++
 include/linux/net/intel/libie/rx.h          |   5 +-
 5 files changed, 69 insertions(+), 99 deletions(-)

diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index cec4a938fbd0..a368afc42b8d 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -86,6 +86,7 @@ config E1000E_HWTS

 config LIBIE
 	tristate
+	select PAGE_POOL
 	help
 	  libie (Intel Ethernet library) is a common library containing
 	  routines shared by several Intel Ethernet drivers.
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index c33a3d681c83..1de67a70f045 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -3,7 +3,6 @@

 #include <linux/net/intel/libie/rx.h>
 #include <linux/prefetch.h>
-#include <net/page_pool.h>

 #include "iavf.h"
 #include "iavf_trace.h"
@@ -691,8 +690,6 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring)
  **/
 void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
 {
-	u16 i;
-
 	/* ring already cleared, nothing to do */
 	if (!rx_ring->rx_pages)
 		return;
@@ -703,28 +700,17 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
 	}

 	/* Free all the Rx ring sk_buffs */
-	for (i = 0; i < rx_ring->count; i++) {
+	for (u32 i = 0; i < rx_ring->count; i++) {
 		struct page *page = rx_ring->rx_pages[i];
-		dma_addr_t dma;

 		if (!page)
 			continue;

-		dma = page_pool_get_dma_addr(page);
-
 		/* Invalidate cache lines that may have been written to by
 		 * device so that we avoid corrupting memory.
 		 */
-		dma_sync_single_range_for_cpu(rx_ring->dev, dma,
-					      LIBIE_SKB_HEADROOM,
-					      LIBIE_RX_BUF_LEN,
-					      DMA_FROM_DEVICE);
-
-		/* free resources associated with mapping */
-		dma_unmap_page_attrs(rx_ring->dev, dma, LIBIE_RX_TRUESIZE,
-				     DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
-
-		__free_page(page);
+		page_pool_dma_sync_full_for_cpu(rx_ring->pool, page);
+		page_pool_put_full_page(rx_ring->pool, page, false);
 	}

 	rx_ring->next_to_clean = 0;
@@ -739,10 +725,15 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
  **/
 void iavf_free_rx_resources(struct iavf_ring *rx_ring)
 {
+	struct device *dev = rx_ring->pool->p.dev;
+
 	iavf_clean_rx_ring(rx_ring);
 	kfree(rx_ring->rx_pages);
 	rx_ring->rx_pages = NULL;

+	page_pool_destroy(rx_ring->pool);
+	rx_ring->dev = dev;
+
 	if (rx_ring->desc) {
 		dma_free_coherent(rx_ring->dev, rx_ring->size,
 				  rx_ring->desc, rx_ring->dma);
@@ -759,13 +750,15 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring)
 int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
 {
 	struct device *dev = rx_ring->dev;
+	struct page_pool *pool;
+	int ret = -ENOMEM;

 	/* warn if we are about to overwrite the pointer */
 	WARN_ON(rx_ring->rx_pages);
 	rx_ring->rx_pages = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_pages),
 				    GFP_KERNEL);
 	if (!rx_ring->rx_pages)
-		return -ENOMEM;
+		return ret;

 	u64_stats_init(&rx_ring->syncp);

@@ -781,15 +774,27 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
 		goto err;
 	}

+	pool = libie_rx_page_pool_create(&rx_ring->q_vector->napi,
+					 rx_ring->count);
+	if (IS_ERR(pool)) {
+		ret = PTR_ERR(pool);
+		goto err_free_dma;
+	}
+
+	rx_ring->pool = pool;
+
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;

 	return 0;
+
+err_free_dma:
+	dma_free_coherent(dev, rx_ring->size, rx_ring->desc, rx_ring->dma);
 err:
 	kfree(rx_ring->rx_pages);
 	rx_ring->rx_pages = NULL;

-	return -ENOMEM;
+	return ret;
 }

 /**
@@ -810,40 +815,6 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
 	writel(val, rx_ring->tail);
 }

-/**
- * iavf_alloc_mapped_page - allocate and map a new page
- * @dev: device used for DMA mapping
- * @gfp: GFP mask to allocate page
- *
- * Returns a new &page if the it was successfully allocated, %NULL otherwise.
- **/
-static struct page *iavf_alloc_mapped_page(struct device *dev, gfp_t gfp)
-{
-	struct page *page;
-	dma_addr_t dma;
-
-	/* alloc new page for storage */
-	page = __dev_alloc_page(gfp);
-	if (unlikely(!page))
-		return NULL;
-
-	/* map page for use */
-	dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE,
-				 IAVF_RX_DMA_ATTR);
-
-	/* if mapping failed free memory back to system since
-	 * there isn't much point in holding memory we can't use
-	 */
-	if (dma_mapping_error(dev, dma)) {
-		__free_page(page);
-		return NULL;
-	}
-
-	page_pool_set_dma_addr(page, dma);
-
-	return page;
-}
-
 /**
  * iavf_receive_skb - Send a completed packet up the stack
  * @rx_ring: rx ring in play
@@ -877,7 +848,7 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring,
 static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
 				 gfp_t gfp)
 {
-	struct device *dev = rx_ring->dev;
+	struct page_pool *pool = rx_ring->pool;
 	u32 ntu = rx_ring->next_to_use;
 	union iavf_rx_desc *rx_desc;

@@ -891,7 +862,7 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
 		struct page *page;
 		dma_addr_t dma;

-		page = iavf_alloc_mapped_page(dev, gfp);
+		page = page_pool_alloc_pages(pool, gfp);
 		if (!page) {
 			rx_ring->rx_stats.alloc_page_failed++;
 			break;
 		}

@@ -900,11 +871,6 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
 		rx_ring->rx_pages[ntu] = page;
 		dma = page_pool_get_dma_addr(page);

-		/* sync the buffer for use by the device */
-		dma_sync_single_range_for_device(dev, dma, LIBIE_SKB_HEADROOM,
-						 LIBIE_RX_BUF_LEN,
-						 DMA_FROM_DEVICE);
-
 		/* Refresh the desc even if buffer_addrs didn't change
 		 * because each write-back erases this info.
 		 */
@@ -1091,21 +1057,6 @@ static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size)
 			LIBIE_SKB_HEADROOM, size, LIBIE_RX_TRUESIZE);
 }

-/**
- * iavf_sync_rx_page - Synchronize received data for use
- * @dev: device used for DMA mapping
- * @page: Rx page containing the data
- * @size: size of the received data
- *
- * This function will synchronize the Rx buffer for use by the CPU.
- */
-static void iavf_sync_rx_page(struct device *dev, struct page *page, u32 size)
-{
-	dma_sync_single_range_for_cpu(dev, page_pool_get_dma_addr(page),
-				      LIBIE_SKB_HEADROOM, size,
-				      DMA_FROM_DEVICE);
-}
-
 /**
  * iavf_build_skb - Build skb around an existing buffer
  * @page: Rx page to with the data
@@ -1128,6 +1079,8 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size)
 	if (unlikely(!skb))
 		return NULL;

+	skb_mark_for_recycle(skb);
+
 	/* update pointers within the skb to store the data */
 	skb_reserve(skb, LIBIE_SKB_HEADROOM);
 	__skb_put(skb, size);
@@ -1135,19 +1088,6 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size)
 	return skb;
 }

-/**
- * iavf_unmap_rx_page - Unmap used page
- * @dev: device used for DMA mapping
- * @page: page to release
- */
-static void iavf_unmap_rx_page(struct device *dev, struct page *page)
-{
-	dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page),
-			     LIBIE_RX_TRUESIZE, DMA_FROM_DEVICE,
-			     IAVF_RX_DMA_ATTR);
-	page_pool_set_dma_addr(page, 0);
-}
-
 /**
  * iavf_is_non_eop - process handling of non-EOP buffers
  * @rx_ring: Rx ring being processed
@@ -1190,8 +1130,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
 	u32 to_refill = IAVF_DESC_UNUSED(rx_ring);
+	struct page_pool *pool = rx_ring->pool;
 	struct sk_buff *skb = rx_ring->skb;
-	struct device *dev = rx_ring->dev;
 	u32 ntc = rx_ring->next_to_clean;
 	u32 ring_size = rx_ring->count;
 	u32 cleaned_count = 0;
@@ -1240,13 +1180,11 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		 * stripped by the HW.
 		 */
 		if (unlikely(!size)) {
-			iavf_unmap_rx_page(dev, page);
-			__free_page(page);
+			page_pool_recycle_direct(pool, page);
 			goto skip_data;
 		}

-		iavf_sync_rx_page(dev, page, size);
-		iavf_unmap_rx_page(dev, page);
+		page_pool_dma_sync_for_cpu(pool, page, size);

 		/* retrieve a buffer from the ring */
 		if (skb)
@@ -1256,7 +1194,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)

 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {
-			__free_page(page);
+			page_pool_put_page(pool, page, size, true);
 			rx_ring->rx_stats.alloc_buff_failed++;
 			break;
 		}
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 1421e90c7c4e..8fbe549ce6a5 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -83,9 +83,6 @@ enum iavf_dyn_idx_t {

 #define iavf_rx_desc iavf_32byte_rx_desc

-#define IAVF_RX_DMA_ATTR \
-	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
-
 /**
  * iavf_test_staterr - tests bits in Rx descriptor status and error fields
  * @rx_desc: pointer to receive descriptor (in le64 format)
@@ -240,7 +237,10 @@ struct iavf_rx_queue_stats {
 struct iavf_ring {
 	struct iavf_ring *next;		/* pointer to next ring in q_vector */
 	void *desc;			/* Descriptor ring memory */
-	struct device *dev;		/* Used for DMA mapping */
+	union {
+		struct page_pool *pool;	/* Used for Rx page management */
+		struct device *dev;	/* Used for DMA mapping on Tx */
+	};
 	struct net_device *netdev;	/* netdev ring maps to */
 	union {
 		struct iavf_tx_buffer *tx_bi;
diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
index f503476d8eef..d68eab76593c 100644
--- a/drivers/net/ethernet/intel/libie/rx.c
+++ b/drivers/net/ethernet/intel/libie/rx.c
@@ -105,6 +105,34 @@ const struct libie_rx_ptype_parsed libie_rx_ptype_lut[LIBIE_RX_PTYPE_NUM] = {
 };
 EXPORT_SYMBOL_NS_GPL(libie_rx_ptype_lut, LIBIE);

+/* Page Pool */
+
+/**
+ * libie_rx_page_pool_create - create a PP with the default libie settings
+ * @napi: &napi_struct covering this PP (no usage outside its poll loops)
+ * @size: size of the PP, usually simply Rx queue len
+ *
+ * Returns &page_pool on success, casted -errno on failure.
+ */
+struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
+					    u32 size)
+{
+	const struct page_pool_params pp = {
+		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+		.order		= LIBIE_RX_PAGE_ORDER,
+		.pool_size	= size,
+		.nid		= NUMA_NO_NODE,
+		.dev		= napi->dev->dev.parent,
+		.napi		= napi,
+		.dma_dir	= DMA_FROM_DEVICE,
+		.max_len	= LIBIE_RX_BUF_LEN,
+		.offset		= LIBIE_SKB_HEADROOM,
+	};
+
+	return page_pool_create(&pp);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
+
 MODULE_AUTHOR("Intel Corporation");
 MODULE_DESCRIPTION("Intel(R) Ethernet common library");
 MODULE_LICENSE("GPL");
diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
index 3e8d0d5206e1..b86cadd281f1 100644
--- a/include/linux/net/intel/libie/rx.h
+++ b/include/linux/net/intel/libie/rx.h
@@ -5,7 +5,7 @@
 #define __LIBIE_RX_H

 #include <linux/if_vlan.h>
-#include <linux/netdevice.h>
+#include <net/page_pool.h>

 /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
  * bitfield struct.
@@ -160,4 +160,7 @@ static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
 /* Maximum frame size minus LL overhead */
 #define LIBIE_MAX_MTU (LIBIE_MAX_RX_FRM_LEN - LIBIE_RX_LL_LEN)

+struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
+					    u32 size);
+
 #endif /* __LIBIE_RX_H */
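Condensed, the per-buffer flow this commit describes, pieced together
from the hunks above (illustrative fragment; pool, gfp, size and skb
come from the surrounding driver code shown in the diff):

	/* Refill: the pool returns an already-mapped page and, thanks to
	 * PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV, syncs it for the device
	 * itself; the driver only writes the address into the descriptor.
	 */
	struct page *page = page_pool_alloc_pages(pool, gfp);
	dma_addr_t dma = page_pool_get_dma_addr(page);

	/* Completion: sync just the bytes the HW wrote back for the CPU */
	page_pool_dma_sync_for_cpu(pool, page, size);

	/* Zero-sized or otherwise unused buffers go straight back to the
	 * pool's per-CPU cache...
	 */
	page_pool_recycle_direct(pool, page);

	/* ...while pages attached to an skb are recycled into the pool
	 * on skb free once the skb is marked:
	 */
	skb_mark_for_recycle(skb);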