
[net-next,v5,3/5] page_pool: Allow drivers to hint on SKB recycling

Message ID 20210513165846.23722-4-mcroce@linux.microsoft.com (mailing list archive)
State New, archived
Series: page_pool: recycle buffers

Commit Message

Matteo Croce May 13, 2021, 4:58 p.m. UTC
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>

Up to now, several high-speed NICs have had custom mechanisms for recycling
the allocated memory they use for their payloads.
Our page_pool API already has recycling capabilities that are always
used when we are running in 'XDP mode'. So let's tweak the API and the
kernel network stack slightly and allow the recycling to happen even
during standard operation.
The API doesn't currently take into account the 'split page' policies
used by those drivers, but it can be extended once we have users for that.

The idea is to be able to intercept the packet on skb_release_data().
If it's a buffer coming from our page_pool API recycle it back to the
pool for further usage or just release the packet entirely.

To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
a field in struct page (page->pp) to store the page_pool pointer.
Storing the information in page->pp allows us to recycle both SKBs and
their fragments.
The SKB bit is needed for a couple of reasons. First of all, in an effort
to affect the free path as little as possible, reading a single bit
is better than trying to derive the same information from the data
stored in the page. We do have a special mark in the page that prevents
false positives, but deciding without having to read the entire struct
page is preferable.

The driver has to take care of the sync operations on its own during
the buffer recycling, since the buffer is never unmapped after opting
in to recycling.

Since the gain seen by drivers depends on the architecture, we are not
enabling recycling by default whenever the page_pool API is used by a
driver. In order to enable recycling, the driver must call
skb_mark_for_recycle() to store the information we need for recycling in
page->pp and set the recycling bit, or page_pool_store_mem_info() for a
fragment.
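To make the opt-in flow concrete, here is a minimal userspace sketch of the decision the free path makes. This is a toy model, not kernel code: only the function names and the pp_recycle/pp_magic fields come from this patch; the stand-in types, the PP_SIGNATURE value, and the counters are invented for illustration.

```c
#include <stdbool.h>
#include <stddef.h>

#define PP_SIGNATURE 0x40UL /* stand-in; the real kernel value differs */

struct page_pool { int recycled; };

/* Toy stand-ins for struct page and struct sk_buff */
struct page {
	unsigned long pp_magic;     /* special mark set by the pool */
	struct page_pool *pp;       /* set by page_pool_store_mem_info() */
	int put;                    /* counts put_page() calls */
};

struct sk_buff {
	unsigned pp_recycle : 1;
	struct page *head_page;
};

/* Models skb_mark_for_recycle(): set the bit and store the pool pointer */
static void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
				 struct page_pool *pp)
{
	skb->pp_recycle = 1;
	page->pp = pp;
}

/* Models page_pool_return_skb_page(): recycle only pool-marked pages */
static bool page_pool_return_skb_page(struct page *page)
{
	struct page_pool *pp;

	if (page->pp_magic != PP_SIGNATURE)
		return false;       /* not from a pool: caller frees normally */
	pp = page->pp;
	page->pp = NULL;            /* reset recycling info, as the patch does */
	pp->recycled++;             /* models page_pool_put_full_page() */
	return true;
}

/* Models skb_free_head(): check the recycle bit first, then the signature */
static void skb_free_head(struct sk_buff *skb)
{
	if (skb->pp_recycle && page_pool_return_skb_page(skb->head_page))
		return;
	skb->head_page->put++;      /* models put_page()/skb_free_frag() */
}
```

A marked skb hands its pool-backed head back to the pool; an unmarked skb (or one whose page lacks the signature) takes the normal free path.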

Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Co-developed-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
 include/linux/skbuff.h  | 28 +++++++++++++++++++++++++---
 include/net/page_pool.h |  9 +++++++++
 net/core/page_pool.c    | 23 +++++++++++++++++++++++
 net/core/skbuff.c       | 25 +++++++++++++++++++++----
 4 files changed, 78 insertions(+), 7 deletions(-)

Comments

Yunsheng Lin May 14, 2021, 3:39 a.m. UTC | #1
On 2021/5/14 0:58, Matteo Croce wrote:
> From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> 
> Up to now several high speed NICs have custom mechanisms of recycling
> the allocated memory they use for their payloads.
> Our page_pool API already has recycling capabilities that are always
> used when we are running in 'XDP mode'. So let's tweak the API and the
> kernel network stack slightly and allow the recycling to happen even
> during the standard operation.
> The API doesn't take into account 'split page' policies used by those
> drivers currently, but can be extended once we have users for that.
> 
> The idea is to be able to intercept the packet on skb_release_data().
> If it's a buffer coming from our page_pool API recycle it back to the
> pool for further usage or just release the packet entirely.
> 
> To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
> a field in struct page (page->pp) to store the page_pool pointer.
> Storing the information in page->pp allows us to recycle both SKBs and
> their fragments.
> The SKB bit is needed for a couple of reasons. First of all in an effort
> to affect the free path as less as possible, reading a single bit,
> is better that trying to derive identical information for the page stored
> data. We do have a special mark in the page, that won't allow this to
> happen, but again deciding without having to read the entire page is
> preferable.
> 
> The driver has to take care of the sync operations on it's own
> during the buffer recycling since the buffer is, after opting-in to the
> recycling, never unmapped.
> 
> Since the gain on the drivers depends on the architecture, we are not
> enabling recycling by default if the page_pool API is used on a driver.
> In order to enable recycling the driver must call skb_mark_for_recycle()
> to store the information we need for recycling in page->pp and
> enabling the recycling bit, or page_pool_store_mem_info() for a fragment.
> 
> Co-developed-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Co-developed-by: Matteo Croce <mcroce@microsoft.com>
> Signed-off-by: Matteo Croce <mcroce@microsoft.com>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> ---
>  include/linux/skbuff.h  | 28 +++++++++++++++++++++++++---
>  include/net/page_pool.h |  9 +++++++++
>  net/core/page_pool.c    | 23 +++++++++++++++++++++++
>  net/core/skbuff.c       | 25 +++++++++++++++++++++----
>  4 files changed, 78 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 7fcfea7e7b21..057b40ad29bd 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -40,6 +40,9 @@
>  #if IS_ENABLED(CONFIG_NF_CONNTRACK)
>  #include <linux/netfilter/nf_conntrack_common.h>
>  #endif
> +#ifdef CONFIG_PAGE_POOL
> +#include <net/page_pool.h>
> +#endif
>  
>  /* The interface for checksum offload between the stack and networking drivers
>   * is as follows...
> @@ -667,6 +670,8 @@ typedef unsigned char *sk_buff_data_t;
>   *	@head_frag: skb was allocated from page fragments,
>   *		not allocated by kmalloc() or vmalloc().
>   *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
> + *	@pp_recycle: mark the packet for recycling instead of freeing (implies
> + *		page_pool support on driver)
>   *	@active_extensions: active extensions (skb_ext_id types)
>   *	@ndisc_nodetype: router type (from link layer)
>   *	@ooo_okay: allow the mapping of a socket to a queue to be changed
> @@ -791,10 +796,12 @@ struct sk_buff {
>  				fclone:2,
>  				peeked:1,
>  				head_frag:1,
> -				pfmemalloc:1;
> +				pfmemalloc:1,
> +				pp_recycle:1; /* page_pool recycle indicator */
>  #ifdef CONFIG_SKB_EXTENSIONS
>  	__u8			active_extensions;
>  #endif
> +
>  	/* fields enclosed in headers_start/headers_end are copied
>  	 * using a single memcpy() in __copy_skb_header()
>  	 */
> @@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
>   */
>  static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)

Does it make sense to define a new function like recyclable_skb_frag_unref()
instead of adding the recycle parameter? This way we might avoid checking
skb->pp_recycle for the head data and for every frag.

>  {
> -	put_page(skb_frag_page(frag));
> +	struct page *page = skb_frag_page(frag);
> +
> +#ifdef CONFIG_PAGE_POOL
> +	if (recycle && page_pool_return_skb_page(page_address(page)))
> +		return;
> +#endif
> +	put_page(page);
>  }
>  
>  /**
> @@ -3100,7 +3113,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
>   */
>  static inline void skb_frag_unref(struct sk_buff *skb, int f)
>  {
> -	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
> +	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
>  }
>  
>  /**
> @@ -4699,5 +4712,14 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
>  #endif
>  }
>  
> +#ifdef CONFIG_PAGE_POOL
> +static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
> +					struct page_pool *pp)
> +{
> +	skb->pp_recycle = 1;
> +	page_pool_store_mem_info(page, pp);
> +}
> +#endif
> +
>  #endif	/* __KERNEL__ */
>  #endif	/* _LINUX_SKBUFF_H */
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 24b3d42c62c0..ce75abeddb29 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -148,6 +148,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
>  	return pool->p.dma_dir;
>  }
>  
> +bool page_pool_return_skb_page(void *data);
> +
>  struct page_pool *page_pool_create(const struct page_pool_params *params);
>  
>  #ifdef CONFIG_PAGE_POOL
> @@ -253,4 +255,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
>  		spin_unlock_bh(&pool->ring.producer_lock);
>  }
>  
> +/* Store mem_info on struct page and use it while recycling skb frags */
> +static inline
> +void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
> +{
> +	page->pp = pp;
> +}
> +
>  #endif /* _NET_PAGE_POOL_H */
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 9de5d8c08c17..fa9f17db7c48 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -626,3 +626,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
>  	}
>  }
>  EXPORT_SYMBOL(page_pool_update_nid);
> +
> +bool page_pool_return_skb_page(void *data)
> +{
> +	struct page_pool *pp;
> +	struct page *page;
> +
> +	page = virt_to_head_page(data);
> +	if (unlikely(page->pp_magic != PP_SIGNATURE))

We have checked skb->pp_recycle before checking page->pp_magic, so
shouldn't the above be likely() instead of unlikely()?

> +		return false;
> +
> +	pp = (struct page_pool *)page->pp;
> +
> +	/* Driver set this to memory recycling info. Reset it on recycle.
> +	 * This will *not* work for NIC using a split-page memory model.
> +	 * The page will be returned to the pool here regardless of the
> +	 * 'flipped' fragment being in use or not.
> +	 */
> +	page->pp = NULL;

Why not clear page->pp only when the page cannot be recycled by the
page pool? That way we would not need to set and clear it every time
the page is recycled.

> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> +
> +	return true;
> +}
> +EXPORT_SYMBOL(page_pool_return_skb_page);
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 12b7e90dd2b5..9581af44d587 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -70,6 +70,9 @@
>  #include <net/xfrm.h>
>  #include <net/mpls.h>
>  #include <net/mptcp.h>
> +#ifdef CONFIG_PAGE_POOL
> +#include <net/page_pool.h>
> +#endif
>  
>  #include <linux/uaccess.h>
>  #include <trace/events/skb.h>
> @@ -645,10 +648,15 @@ static void skb_free_head(struct sk_buff *skb)
>  {
>  	unsigned char *head = skb->head;
>  
> -	if (skb->head_frag)
> +	if (skb->head_frag) {
> +#ifdef CONFIG_PAGE_POOL
> +		if (skb->pp_recycle && page_pool_return_skb_page(head))
> +			return;
> +#endif
>  		skb_free_frag(head);
> -	else
> +	} else {
>  		kfree(head);
> +	}
>  }
>  
>  static void skb_release_data(struct sk_buff *skb)
> @@ -664,7 +672,7 @@ static void skb_release_data(struct sk_buff *skb)
>  	skb_zcopy_clear(skb, true);
>  
>  	for (i = 0; i < shinfo->nr_frags; i++)
> -		__skb_frag_unref(&shinfo->frags[i], false);
> +		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
>  
>  	if (shinfo->frag_list)
>  		kfree_skb_list(shinfo->frag_list);
> @@ -1046,6 +1054,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
>  	n->nohdr = 0;
>  	n->peeked = 0;
>  	C(pfmemalloc);
> +	C(pp_recycle);
>  	n->destructor = NULL;
>  	C(tail);
>  	C(end);
> @@ -1725,6 +1734,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
>  	skb->cloned   = 0;
>  	skb->hdr_len  = 0;
>  	skb->nohdr    = 0;
> +	skb->pp_recycle = 0;

I am not sure why we clear the skb->pp_recycle here.
As my understanding, the pskb_expand_head() only allocate new head
data, the old frag page in skb_shinfo()->frags still could be from
page pool, right?

>  	atomic_set(&skb_shinfo(skb)->dataref, 1);
>  
>  	skb_metadata_clear(skb);
> @@ -3495,7 +3505,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
>  		fragto = &skb_shinfo(tgt)->frags[merge];
>  
>  		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
> -		__skb_frag_unref(fragfrom, false);
> +		__skb_frag_unref(fragfrom, skb->pp_recycle);
>  	}
>  
>  	/* Reposition in the original skb */
> @@ -5285,6 +5295,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
>  	if (skb_cloned(to))
>  		return false;
>  
> +	/* We can't coalesce skb that are allocated from slab and page_pool
> +	 * The recycle mark is on the skb, so that might end up trying to
> +	 * recycle slab allocated skb->head
> +	 */
> +	if (to->pp_recycle != from->pp_recycle)
> +		return false;

Since we also depend on page->pp_magic to decide whether to recycle a
page, couldn't we just set to->pp_recycle according to from->pp_recycle
and do the coalesce?

> +
>  	if (len <= skb_tailroom(to)) {
>  		if (len)
>  			BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
>
Ilias Apalodimas May 14, 2021, 7:36 a.m. UTC | #2
[...]
> >  	 * using a single memcpy() in __copy_skb_header()
> >  	 */
> > @@ -3088,7 +3095,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
> >   */
> >  static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
> 
> Does it make sure to define a new function like recyclable_skb_frag_unref()
> instead of adding the recycle parameter? This way we may avoid checking
> skb->pp_recycle for head data and every frag?
> 

We'd still have to decide when to run __skb_frag_unref() or
recyclable_skb_frag_unref(), so I am not sure we can avoid that check.
In any case, I'll have a look.

> >  {
> > -	put_page(skb_frag_page(frag));
> > +	struct page *page = skb_frag_page(frag);
> > +
> > +#ifdef CONFIG_PAGE_POOL
> > +	if (recycle && page_pool_return_skb_page(page_address(page)))
> > +		return;
> > +#endif
> > +	put_page(page);
> >  }
> >  
> >  /**
> > @@ -3100,7 +3113,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
> >   */
> >  static inline void skb_frag_unref(struct sk_buff *skb, int f)
> >  {
> > -	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
> > +	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
> >  }
> >  
> >  /**
> > @@ -4699,5 +4712,14 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
> >  #endif
> >  }
> >  
> > +#ifdef CONFIG_PAGE_POOL
> > +static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
> > +					struct page_pool *pp)
> > +{
> > +	skb->pp_recycle = 1;
> > +	page_pool_store_mem_info(page, pp);
> > +}
> > +#endif
> > +
> >  #endif	/* __KERNEL__ */
> >  #endif	/* _LINUX_SKBUFF_H */
> > diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> > index 24b3d42c62c0..ce75abeddb29 100644
> > --- a/include/net/page_pool.h
> > +++ b/include/net/page_pool.h
> > @@ -148,6 +148,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
> >  	return pool->p.dma_dir;
> >  }
> >  
> > +bool page_pool_return_skb_page(void *data);
> > +
> >  struct page_pool *page_pool_create(const struct page_pool_params *params);
> >  
> >  #ifdef CONFIG_PAGE_POOL
> > @@ -253,4 +255,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
> >  		spin_unlock_bh(&pool->ring.producer_lock);
> >  }
> >  
> > +/* Store mem_info on struct page and use it while recycling skb frags */
> > +static inline
> > +void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
> > +{
> > +	page->pp = pp;
> > +}
> > +
> >  #endif /* _NET_PAGE_POOL_H */
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 9de5d8c08c17..fa9f17db7c48 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -626,3 +626,26 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
> >  	}
> >  }
> >  EXPORT_SYMBOL(page_pool_update_nid);
> > +
> > +bool page_pool_return_skb_page(void *data)
> > +{
> > +	struct page_pool *pp;
> > +	struct page *page;
> > +
> > +	page = virt_to_head_page(data);
> > +	if (unlikely(page->pp_magic != PP_SIGNATURE))
> 
> we have checked the skb->pp_recycle before checking the page->pp_magic,
> so the above seems like a likely() instead of unlikely()?
> 

The check here is != PP_SIGNATURE. So since we already checked for
pp_recycle, it's unlikely that the signature won't match.

> > +		return false;
> > +
> > +	pp = (struct page_pool *)page->pp;
> > +
> > +	/* Driver set this to memory recycling info. Reset it on recycle.
> > +	 * This will *not* work for NIC using a split-page memory model.
> > +	 * The page will be returned to the pool here regardless of the
> > +	 * 'flipped' fragment being in use or not.
> > +	 */
> > +	page->pp = NULL;
> 
> Why not only clear the page->pp when the page can not be recycled
> by the page pool? so that we do not need to set and clear it every
> time the page is recycled。
> 

If the page cannot be recycled, page->pp will probably not be set to begin
with. Since we don't embed the feature in page_pool and we require the
driver to explicitly enable it as part of the 'skb flow', I'd rather keep
it as is. When we set/clear page->pp, the page is probably already in
cache, so I doubt this will have any measurable impact.

> > +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> > +
> >  	C(end);

[...]

> > @@ -1725,6 +1734,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
> >  	skb->cloned   = 0;
> >  	skb->hdr_len  = 0;
> >  	skb->nohdr    = 0;
> > +	skb->pp_recycle = 0;
> 
> I am not sure why we clear the skb->pp_recycle here.
> As my understanding, the pskb_expand_head() only allocate new head
> data, the old frag page in skb_shinfo()->frags still could be from
> page pool, right?
> 

Ah, correct! In that case we must not clear skb->pp_recycle. The new head
will fail the signature check and end up being freed, while the
remaining frags will be recycled. The *original* head will be
unmapped/recycled (based on the page refcnt) in pskb_expand_head()
itself.
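This point can be sanity-checked with a toy userspace model (all types and values below are invented stand-ins, not kernel code): after a simulated pskb_expand_head(), the skb keeps pp_recycle set, yet the new kmalloc'ed head fails the signature check and is freed normally, while the pool-backed frag is still recycled.

```c
#include <stdbool.h>
#include <stddef.h>

#define PP_SIGNATURE 0x40UL /* stand-in value */

struct page_pool { int recycled; };
struct page { unsigned long pp_magic; struct page_pool *pp; int put; };

struct sk_buff {
	unsigned pp_recycle : 1;
	struct page *head_page;     /* after pskb_expand_head(): kmalloc'ed */
	struct page *frag;          /* one frag, for brevity */
};

static bool return_skb_page(struct page *page)
{
	if (page->pp_magic != PP_SIGNATURE)
		return false;       /* a kmalloc'ed head fails here */
	page->pp->recycled++;       /* models page_pool_put_full_page() */
	page->pp = NULL;
	return true;
}

static void unref(struct page *page, bool recycle)
{
	if (recycle && return_skb_page(page))
		return;
	page->put++;                /* normal free path */
}

/* Models the free path after pskb_expand_head(): pp_recycle stays set,
 * and only the pool-marked frag actually goes back to the pool. */
static void release_data(struct sk_buff *skb)
{
	unref(skb->frag, skb->pp_recycle);
	unref(skb->head_page, skb->pp_recycle);
}
```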

> >  	atomic_set(&skb_shinfo(skb)->dataref, 1);
> >  
> >  	skb_metadata_clear(skb);
> > @@ -3495,7 +3505,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
> >  		fragto = &skb_shinfo(tgt)->frags[merge];
> >  
> >  		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
> > -		__skb_frag_unref(fragfrom, false);
> > +		__skb_frag_unref(fragfrom, skb->pp_recycle);
> >  	}
> >  
> >  	/* Reposition in the original skb */
> > @@ -5285,6 +5295,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
> >  	if (skb_cloned(to))
> >  		return false;
> >  
> > +	/* We can't coalesce skb that are allocated from slab and page_pool
> > +	 * The recycle mark is on the skb, so that might end up trying to
> > +	 * recycle slab allocated skb->head
> > +	 */
> > +	if (to->pp_recycle != from->pp_recycle)
> > +		return false;
> 
> Since we are also depending on page->pp_magic to decide whether to
> recycle a page, we could just set the to->pp_recycle according to
> from->pp_recycle and do the coalesce?

So I was thinking about this myself. This check is a 'leftover' from my
initial version, where I only had the pp_recycle bit + struct page
meta-data (without the signature). Since that version didn't have the
signature, you could not coalesce two skbs coming from page_pool/slab.
We could now do what you suggest, but honestly I can't think of many use
cases where this can happen to begin with. I think I'd prefer leaving it
as is and adjusting the comment. If we can somehow prove this happens
often and has a performance impact, we can go ahead and remove it.

[...]

Thanks
/Ilias
Yunsheng Lin May 14, 2021, 8:31 a.m. UTC | #3
On 2021/5/14 15:36, Ilias Apalodimas wrote:
> [...]
>>> +		return false;
>>> +
>>> +	pp = (struct page_pool *)page->pp;
>>> +
>>> +	/* Driver set this to memory recycling info. Reset it on recycle.
>>> +	 * This will *not* work for NIC using a split-page memory model.
>>> +	 * The page will be returned to the pool here regardless of the
>>> +	 * 'flipped' fragment being in use or not.
>>> +	 */
>>> +	page->pp = NULL;
>>
>> Why not only clear the page->pp when the page can not be recycled
>> by the page pool? so that we do not need to set and clear it every
>> time the page is recycled。
>>
> 
> If the page cannot be recycled, page->pp will not probably be set to begin
> with. Since we don't embed the feature in page_pool and we require the
> driver to explicitly enable it, as part of the 'skb flow', I'd rather keep 
> it as is.  When we set/clear the page->pp, the page is probably already in 
> cache, so I doubt this will have any measurable impact.

The point is that we already have skb->pp_recycle to let the driver
explicitly enable recycling as part of the 'skb flow'. If the page pool
kept page->pp set while it owns the page, then the driver would only need
to call skb_mark_for_recycle() once per skb, instead of calling it for
each page frag of the skb.

Maybe we can add a parameter in "struct page_pool_params" to let the
driver decide whether the page pool pointer is stored in page->pp while
the page pool owns the page?

Another thing that occurred to me is that if the driver uses pages from
the page pool to form an skb and does not call skb_mark_for_recycle(),
then there will be a resource leak, right? If yes, it seems the
skb_mark_for_recycle() call does not add any value?


> 
>>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
>>> +
>>>  	C(end);
> 
> [...]
Ilias Apalodimas May 14, 2021, 9:17 a.m. UTC | #4
On Fri, May 14, 2021 at 04:31:50PM +0800, Yunsheng Lin wrote:
> On 2021/5/14 15:36, Ilias Apalodimas wrote:
> > [...]
> >>> +		return false;
> >>> +
> >>> +	pp = (struct page_pool *)page->pp;
> >>> +
> >>> +	/* Driver set this to memory recycling info. Reset it on recycle.
> >>> +	 * This will *not* work for NIC using a split-page memory model.
> >>> +	 * The page will be returned to the pool here regardless of the
> >>> +	 * 'flipped' fragment being in use or not.
> >>> +	 */
> >>> +	page->pp = NULL;
> >>
> >> Why not only clear the page->pp when the page can not be recycled
> >> by the page pool? so that we do not need to set and clear it every
> >> time the page is recycled。
> >>
> > 
> > If the page cannot be recycled, page->pp will not probably be set to begin
> > with. Since we don't embed the feature in page_pool and we require the
> > driver to explicitly enable it, as part of the 'skb flow', I'd rather keep 
> > it as is.  When we set/clear the page->pp, the page is probably already in 
> > cache, so I doubt this will have any measurable impact.
> 
> The point is that we already have the skb->pp_recycle to let driver to
> explicitly enable recycling, as part of the 'skb flow, if the page pool keep
> the page->pp while it owns the page, then the driver may only need to call
> one skb_mark_for_recycle() for a skb, instead of call skb_mark_for_recycle()
> for each page frag of a skb.
> 

The driver is meant to call skb_mark_for_recycle() for the skb and
page_pool_store_mem_info() for the fragments (in order to store page->pp).
Nothing bad will happen if you call skb_mark_for_recycle() on a frag,
but in any case you need to store the page_pool pointer of each frag in
struct page.

> Maybe we can add a parameter in "struct page_pool_params" to let driver
> to decide if the page pool ptr is stored in page->pp while the page pool
> owns the page?

Then you'd have to check the page pool config before saving the meta-data,
and you would have to make the skb path aware of that as well (I assume you
mean replacing pp_recycle with this?).
If not, and you just want to add an extra flag on page_pool_params to
enable recycling depending on that flag, we can just add a patch afterwards.
I am not sure we need an extra 'if' for each packet, though.

> 
> Another thing accured to me is that if the driver use page from the
> page pool to form a skb, and it does not call skb_mark_for_recycle(),
> then there will be resource leaking, right? if yes, it seems the
> skb_mark_for_recycle() call does not seems to add any value?
> 

Not really; the driver has two choices:
- call page_pool_release_page() once it receives the payload. That will
  clean up the DMA mappings (if the page pool is responsible for them) and
  free the buffer.
- call skb_mark_for_recycle(), which will end up recycling the buffer.

If you call none of those, you'd leak a page, but that's a driver bug.
Patches [4/5, 5/5] do that for two Marvell drivers.
I really want to make drivers opt in to the feature instead of always
enabling it.
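The two choices, and the leak that results from taking neither, can be sketched with a toy model (invented counters and stand-in types; only the function names echo the real API):

```c
#include <stdbool.h>
#include <stddef.h>

#define PP_SIGNATURE 0x40UL /* stand-in value */

struct page_pool { int outstanding; };  /* pages the pool still tracks */
struct page { unsigned long pp_magic; struct page_pool *pp; };

/* Choice 1: models page_pool_release_page() — unmap and disconnect */
static void release_page(struct page_pool *pp, struct page *page)
{
	page->pp_magic = 0;         /* page leaves the pool's control */
	page->pp = NULL;
	pp->outstanding--;          /* DMA mapping torn down here */
}

/* Choice 2: mark for recycling; the skb free path returns the page */
static void mark_for_recycle(struct page_pool *pp, struct page *page)
{
	page->pp = pp;
}

static bool free_skb_page(struct page *page, bool pp_recycle)
{
	if (pp_recycle && page->pp_magic == PP_SIGNATURE && page->pp) {
		page->pp->outstanding--;    /* recycled back into the pool */
		page->pp = NULL;
		return true;
	}
	return false;  /* neither choice taken: outstanding never drops */
}
```

In this model a page for which the driver took neither path keeps the pool's `outstanding` count elevated forever, which is the leak described above.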

Thanks
/Ilias
> 
> > 
> >>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> >>> +
> >>>  	C(end);
> > 
> > [...]
> 
>
Yunsheng Lin May 15, 2021, 2:07 a.m. UTC | #5
On 2021/5/14 17:17, Ilias Apalodimas wrote:
> On Fri, May 14, 2021 at 04:31:50PM +0800, Yunsheng Lin wrote:
>> On 2021/5/14 15:36, Ilias Apalodimas wrote:
>>> [...]
>>>>> +		return false;
>>>>> +
>>>>> +	pp = (struct page_pool *)page->pp;
>>>>> +
>>>>> +	/* Driver set this to memory recycling info. Reset it on recycle.
>>>>> +	 * This will *not* work for NIC using a split-page memory model.
>>>>> +	 * The page will be returned to the pool here regardless of the
>>>>> +	 * 'flipped' fragment being in use or not.
>>>>> +	 */
>>>>> +	page->pp = NULL;
>>>>
>>>> Why not only clear the page->pp when the page can not be recycled
>>>> by the page pool? so that we do not need to set and clear it every
>>>> time the page is recycled。
>>>>
>>>
>>> If the page cannot be recycled, page->pp will not probably be set to begin
>>> with. Since we don't embed the feature in page_pool and we require the
>>> driver to explicitly enable it, as part of the 'skb flow', I'd rather keep 
>>> it as is.  When we set/clear the page->pp, the page is probably already in 
>>> cache, so I doubt this will have any measurable impact.
>>
>> The point is that we already have the skb->pp_recycle to let driver to
>> explicitly enable recycling, as part of the 'skb flow, if the page pool keep
>> the page->pp while it owns the page, then the driver may only need to call
>> one skb_mark_for_recycle() for a skb, instead of call skb_mark_for_recycle()
>> for each page frag of a skb.
>>
> 
> The driver is meant to call skb_mark_for_recycle for the skb and
> page_pool_store_mem_info() for the fragments (in order to store page->pp).
> Nothing bad will happen if you call skb_mark_for_recycle on a frag though,
> but in any case you need to store the page_pool pointer of each frag to
> struct page.

Right. Nothing bad will happen when we keep the page_pool pointer in
page->pp while the page pool owns the page, even if skb->pp_recycle
is not set, right?

> 
>> Maybe we can add a parameter in "struct page_pool_params" to let driver
>> to decide if the page pool ptr is stored in page->pp while the page pool
>> owns the page?
> 
> Then you'd have to check the page pool config before saving the meta-data,

I am not sure what "saving the meta-data" refers to?

> and you would have to make the skb path aware of that as well (I assume you
> mean replace pp_recycle with this?).

I meant we could set page->pp when the page is allocated from
alloc_pages() in __page_pool_alloc_pages_slow(), either unconditionally
or according to a newly added field in pool->p, and only clear it in
page_pool_release_page(). Between those two points the page is owned by
the page pool, right?

> If not and you just want to add an extra flag on page_pool_params and be able 
> to enable recycling depending on that flag, we just add a patch afterwards.
> I am not sure we need an extra if for each packet though.

In that case, skb_mark_for_recycle() would only need to set
skb->pp_recycle, but not page->pp.

> 
>>
>> Another thing accured to me is that if the driver use page from the
>> page pool to form a skb, and it does not call skb_mark_for_recycle(),
>> then there will be resource leaking, right? if yes, it seems the
>> skb_mark_for_recycle() call does not seems to add any value?
>>
> 
> Not really, the driver has 2 choices:
> - call page_pool_release_page() once it receives the payload. That will
>   clean up dma mappings (if page pool is responsible for them) and free the
>   buffer

This is only needed before SKB recycling is supported, or if the driver
explicitly does not want SKB recycling support, right?

> - call skb_mark_for_recycle(). Which will end up recycling the buffer.

If the driver needs to add an extra flag to enable recycling based on
the skb instead of the page pool, then adding skb_mark_for_recycle()
makes sense to me too; otherwise it seems that adding a field in pool->p
to enable recycling makes more sense?

> 
> If you call none of those, you'd leak a page, but that's a driver bug.
> patches [4/5, 5/5] do that for two marvell drivers.
> I really want to make drivers opt-in in the feature instead of always
> enabling it.
> 
> Thanks
> /Ilias
>>
>>>
>>>>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
>>>>> +
>>>>>  	C(end);
>>>
>>> [...]
>>
>>
> 
> .
>
Ilias Apalodimas May 17, 2021, 6:38 a.m. UTC | #6
[...]
> >>>> by the page pool? so that we do not need to set and clear it every
> >>>> time the page is recycled。
> >>>>
> >>>
> >>> If the page cannot be recycled, page->pp will not probably be set to begin
> >>> with. Since we don't embed the feature in page_pool and we require the
> >>> driver to explicitly enable it, as part of the 'skb flow', I'd rather keep 
> >>> it as is.  When we set/clear the page->pp, the page is probably already in 
> >>> cache, so I doubt this will have any measurable impact.
> >>
> >> The point is that we already have the skb->pp_recycle to let driver to
> >> explicitly enable recycling, as part of the 'skb flow, if the page pool keep
> >> the page->pp while it owns the page, then the driver may only need to call
> >> one skb_mark_for_recycle() for a skb, instead of call skb_mark_for_recycle()
> >> for each page frag of a skb.
> >>
> > 
> > The driver is meant to call skb_mark_for_recycle for the skb and
> > page_pool_store_mem_info() for the fragments (in order to store page->pp).
> > Nothing bad will happen if you call skb_mark_for_recycle on a frag though,
> > but in any case you need to store the page_pool pointer of each frag to
> > struct page.
> 
> Right. Nothing bad will happen when we keep the page_pool pointer in
> page->pp while page pool owns the page too, even if the skb->pp_recycle
> is not set, right?

Yep, nothing bad will happen. Both functions using this (__skb_frag_unref and
skb_free_head) always check the skb bit as well.
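That safety property — a stale page->pp is harmless when skb->pp_recycle is not set — can be illustrated with the same kind of toy model (stand-in types and values, not kernel code):

```c
#include <stdbool.h>
#include <stddef.h>

#define PP_SIGNATURE 0x40UL /* stand-in value */

struct page_pool { int recycled; };
struct page { unsigned long pp_magic; struct page_pool *pp; int put; };

static bool return_skb_page(struct page *page)
{
	if (page->pp_magic != PP_SIGNATURE)
		return false;
	page->pp->recycled++;
	page->pp = NULL;
	return true;
}

/* Both skb_free_head() and __skb_frag_unref() gate on the skb bit first,
 * so a page that still carries pool info is freed normally whenever the
 * skb was never marked for recycling. */
static void unref(struct page *page, bool skb_pp_recycle)
{
	if (skb_pp_recycle && return_skb_page(page))
		return;
	page->put++;                /* plain put_page() */
}
```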

> 
> > 
> >> Maybe we can add a parameter in "struct page_pool_params" to let driver
> >> to decide if the page pool ptr is stored in page->pp while the page pool
> >> owns the page?
> > 
> > Then you'd have to check the page pool config before saving the meta-data,
> 
> I am not sure what the "saving the meta-data" meant?

I was referring to struct page_pool* and the signature we store in struct
page.

> 
> > and you would have to make the skb path aware of that as well (I assume you
> > mean replace pp_recycle with this?).
> 
> I meant we could set the in page->pp when the page is allocated from
> alloc_pages() in __page_pool_alloc_pages_slow() unconditionally or
> according to a newly add filed in pool->p, and only clear it in
> page_pool_release_page(), between which the page is owned by page pool,
> right?
> 
> > If not and you just want to add an extra flag on page_pool_params and be able 
> > to enable recycling depending on that flag, we just add a patch afterwards.
> > I am not sure we need an extra if for each packet though.
> 
> In that case, the skb_mark_for_recycle() could only set the skb->pp_recycle,
> but not the pool->p.
> 
> > 
> >>
> >> Another thing accured to me is that if the driver use page from the
> >> page pool to form a skb, and it does not call skb_mark_for_recycle(),
> >> then there will be resource leaking, right? if yes, it seems the
> >> skb_mark_for_recycle() call does not seems to add any value?
> >>
> > 
> > Not really, the driver has 2 choices:
> > - call page_pool_release_page() once it receives the payload. That will
> >   clean up dma mappings (if page pool is responsible for them) and free the
> >   buffer
> 
> This is only needed before SKB recycling is supported, or when the driver
> explicitly does not want SKB recycling support, right?
> 

This is needed in general even before recycling.  It's used to unmap the
buffer, so once you free the SKB you don't leave any stale DMA mappings.  So
that's what all the drivers that use page_pool call today.
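
To make that ownership hand-off concrete, here is a small standalone C model (not kernel code: struct page, the signature value, and the DMA flag are simplified stand-ins) of what page_pool_release_page() does — tear down the mapping and disconnect the page from the pool, so the normal put_page() in the SKB release path frees it safely:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PP_SIGNATURE 0x40UL       /* illustrative value, not the kernel's */

struct page_pool { int unmap_count; };

struct page {
    unsigned long pp_magic;       /* PP_SIGNATURE while pool-owned */
    struct page_pool *pp;         /* owning pool, NULL otherwise */
    bool dma_mapped;
};

/* Model of page_pool_release_page(): tear down the DMA mapping and
 * return the page to plain page-allocator ownership. */
static void page_pool_release_page(struct page_pool *pool, struct page *page)
{
    if (page->dma_mapped) {
        page->dma_mapped = false; /* no stale mapping survives the skb */
        pool->unmap_count++;
    }
    page->pp_magic = 0;
    page->pp = NULL;              /* put_page() is now the only owner op */
}
```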

> > - call skb_mark_for_recycle(). Which will end up recycling the buffer.
> 
> If the driver needs to add an extra flag to enable recycling based on the
> skb instead of the page pool, then adding skb_mark_for_recycle() makes
> sense to me too; otherwise it seems adding a field in pool->p to enable
> recycling based on the skb makes more sense?
> 

The recycling is essentially an SKB feature though, isn't it?  You achieve
SKB recycling with the help of the page_pool API, not the other way around.
So I think this should remain on the SKB, and maybe in the future we can find
ways to turn it on/off?

Thanks
/Ilias

> > 
> > If you call none of those, you'd leak a page, but that's a driver bug.
> > patches [4/5, 5/5] do that for two marvell drivers.
> > I really want to make drivers opt-in in the feature instead of always
> > enabling it.
> > 
> > Thanks
> > /Ilias
> >>
> >>>
> >>>>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
> >>>>> +
> >>>>>  	C(end);
> >>>
> >>> [...]
> >>
> >>
Yunsheng Lin May 17, 2021, 8:25 a.m. UTC | #7
On 2021/5/17 14:38, Ilias Apalodimas wrote:
> [...]
>>
>>>
>>>> Maybe we can add a parameter in "struct page_pool_params" to let the
>>>> driver decide if the page pool ptr is stored in page->pp while the page
>>>> pool owns the page?
>>>
>>> Then you'd have to check the page pool config before saving the meta-data,
>>
>> I am not sure what "saving the meta-data" means?
> 
> I was referring to struct page_pool* and the signature we store in struct
> page.
> 
>>
>>> and you would have to make the skb path aware of that as well (I assume you
>>> mean replace pp_recycle with this?).
>>
>> I meant we could set the page pool ptr in page->pp when the page is
>> allocated from alloc_pages() in __page_pool_alloc_pages_slow(), either
>> unconditionally or according to a newly added field in pool->p, and only
>> clear it in page_pool_release_page(); between those two points the page is
>> owned by the page pool, right?
>>
>>> If not and you just want to add an extra flag on page_pool_params and be able 
>>> to enable recycling depending on that flag, we just add a patch afterwards.
>>> I am not sure we need an extra if for each packet though.
>>
>> In that case, the skb_mark_for_recycle() could only set the skb->pp_recycle,
>> but not the pool->p.
>>
>>>
>>>>
>>>> Another thing occurred to me: if the driver uses a page from the
>>>> page pool to form a skb, and it does not call skb_mark_for_recycle(),
>>>> then there will be a resource leak, right? If yes, the
>>>> skb_mark_for_recycle() call does not seem to add any value?
>>>>
>>>
>>> Not really, the driver has 2 choices:
>>> - call page_pool_release_page() once it receives the payload. That will
>>>   clean up dma mappings (if page pool is responsible for them) and free the
>>>   buffer
>>
>> This is only needed before SKB recycling is supported, or when the driver
>> explicitly does not want SKB recycling support, right?
>>
> 
> This is needed in general even before recycling.  It's used to unmap the
> buffer, so once you free the SKB you don't leave any stale DMA mappings.  So
> that's what all the drivers that use page_pool call today.

As I understand it:
1. If the driver is using a page allocated directly from the page allocator
   to form a skb, let's say the page is owned by the skb (or not owned by
   anyone:)); when the skb is freed, put_page() should be called.

2. If the driver is using a page allocated from a page pool to form a skb,
   let's say the page is owned by the page pool; when the skb is freed,
   page_pool_put_page() should be called.

What page_pool_release_page() mainly does is make a page in case 2 go back
to case 1.

And page_pool_release_page() is replaced with skb_mark_for_recycle() in patch
4/5 to avoid the above "case 2" -> "case 1" transition, so that the page is
still owned by the page pool, right?

So the point is that skb_mark_for_recycle() does not really do anything about
the owner of the page; it is still owned by the page pool, so it makes more
sense to keep the page pool ptr in struct page instead of setting it every
time skb_mark_for_recycle() is called?

> 
>>> - call skb_mark_for_recycle(). Which will end up recycling the buffer.
>>
>> If the driver needs to add an extra flag to enable recycling based on the
>> skb instead of the page pool, then adding skb_mark_for_recycle() makes
>> sense to me too; otherwise it seems adding a field in pool->p to enable
>> recycling based on the skb makes more sense?
>>
> 
> The recycling is essentially an SKB feature though, isn't it?  You achieve
> SKB recycling with the help of the page_pool API, not the other way around.
> So I think this should remain on the SKB, and maybe in the future we can find
> ways to turn it on/off?

As above, does it not make more sense to call page_pool_release_page() if the
driver does not need the SKB recycling?

Even when skb->pp_recycle is 1, pages allocated directly from the page
allocator and pages from a page pool are both supported, so it seems
page->signature needs to be reliable in indicating that a page is indeed
owned by a page pool. That means skb->pp_recycle is mainly used to shortcut
the code path for the skb->pp_recycle == 0 case, so that page->signature does
not need checking?

> 
> Thanks
> /Ilias
Ilias Apalodimas May 17, 2021, 9:36 a.m. UTC | #8
> >>

[...]

> >> In that case, the skb_mark_for_recycle() could only set the skb->pp_recycle,
> >> but not the pool->p.
> >>
> >>>
> >>>>
> >>>> Another thing occurred to me: if the driver uses a page from the
> >>>> page pool to form a skb, and it does not call skb_mark_for_recycle(),
> >>>> then there will be a resource leak, right? If yes, the
> >>>> skb_mark_for_recycle() call does not seem to add any value?
> >>>>
> >>>
> >>> Not really, the driver has 2 choices:
> >>> - call page_pool_release_page() once it receives the payload. That will
> >>>   clean up dma mappings (if page pool is responsible for them) and free the
> >>>   buffer
> >>
> >> This is only needed before SKB recycling is supported, or when the driver
> >> explicitly does not want SKB recycling support, right?
> >>
> > 
> > This is needed in general even before recycling.  It's used to unmap the
> > buffer, so once you free the SKB you don't leave any stale DMA mappings.  So
> > that's what all the drivers that use page_pool call today.
> 
> As I understand it:
> 1. If the driver is using a page allocated directly from the page allocator
>    to form a skb, let's say the page is owned by the skb (or not owned by
>    anyone:)); when the skb is freed, put_page() should be called.
> 
> 2. If the driver is using a page allocated from a page pool to form a skb,
>    let's say the page is owned by the page pool; when the skb is freed,
>    page_pool_put_page() should be called.
> 
> What page_pool_release_page() mainly does is make a page in case 2 go back
> to case 1.

Yeah, but this is done deliberately.  Let me try to explain the reasoning a
bit.  I don't think mixing the SKB path with page_pool is the right idea.
page_pool allocates the memory you want to build an SKB with, and imho it must
be kept completely disjoint from the generic SKB code.  So once you free an
SKB, I don't like having page_pool_put_page() in the release code explicitly.
What we do instead is call page_pool_release_page() from the driver.  So the
page is disconnected from the page pool and the skb release path works as it
used to.

> 
> And page_pool_release_page() is replaced with skb_mark_for_recycle() in patch
> 4/5 to avoid the above "case 2" -> "case 1" transition, so that the page is
> still owned by the page pool, right?
> 
> So the point is that skb_mark_for_recycle() does not really do anything about
> the owner of the page; it is still owned by the page pool, so it makes more
> sense to keep the page pool ptr in struct page instead of setting it every
> time skb_mark_for_recycle() is called?

Yes, it doesn't do anything wrt ownership.  The page must always come
from page pool if you want to recycle it.  But as I tried to explain above,
it felt more intuitive to keep the driver flow as-is, as well as the
release path.  In a driver right now, when you are done with the skb creation,
you unmap the skb->head + fragments.  So if you want to recycle them instead,
you mark the skb and fragments.
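
A compilable sketch of that opt-in (types reduced to just the fields this series touches; the real skb_mark_for_recycle() from this patch sets the skb bit and stores the pool pointer via page_pool_store_mem_info()):

```c
#include <assert.h>
#include <stddef.h>

struct page_pool { int id; };

struct page {
    struct page_pool *pp;         /* stored instead of unmapping the page */
};

struct sk_buff {
    unsigned int pp_recycle : 1;  /* the bit this series adds */
};

/* Model of skb_mark_for_recycle(): instead of page_pool_release_page(),
 * flag the skb and remember which pool each page came from. */
static void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
                                 struct page_pool *pp)
{
    skb->pp_recycle = 1;
    page->pp = pp;
}
```

Patches 4/5 and 5/5 of the series do this substitution in the two marvell drivers.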

> 
> > 
> >>> - call skb_mark_for_recycle(). Which will end up recycling the buffer.
> >>
> >> If the driver needs to add an extra flag to enable recycling based on the
> >> skb instead of the page pool, then adding skb_mark_for_recycle() makes
> >> sense to me too; otherwise it seems adding a field in pool->p to enable
> >> recycling based on the skb makes more sense?
> >>
> > 
> > The recycling is essentially an SKB feature though, isn't it?  You achieve
> > SKB recycling with the help of the page_pool API, not the other way around.
> > So I think this should remain on the SKB, and maybe in the future we can
> > find ways to turn it on/off?
> 
> As above, does it not make more sense to call page_pool_release_page() if the
> driver does not need the SKB recycling?

Call it where?  As I tried to explain, it makes no sense to me to have it in
the generic SKB code (unless recycling is enabled).

That's what's happening right now when recycling is enabled.
Basically the call path is:
if (skb bit is set) {
	if (page signature matches)
		page_pool_put_full_page() 
}
page_pool_put_full_page() will either:
1. recycle the page in the 'fast cache' of page pool
2. recycle the page in the ptr ring of page pool
3. Release it calling page_pool_release_page()

If you don't want to enable it you just call page_pool_release_page() on
your driver and the generic path will free the allocated page.
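
The same decision tree, as a standalone C model (the signature value and the counters are illustrative; in the kernel the checks live in skb_free_head()/__skb_frag_unref() and page_pool_return_skb_page()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PP_SIGNATURE 0x40UL          /* illustrative, not the kernel value */

struct page {
    unsigned long pp_magic;          /* PP_SIGNATURE only for pool pages */
    void *pp;                        /* struct page_pool * in the kernel */
};

static int pages_recycled, pages_freed;

/* Model of page_pool_return_skb_page(): recycle only pool-owned pages. */
static bool page_pool_return_skb_page(struct page *page)
{
    if (page->pp_magic != PP_SIGNATURE)
        return false;                /* e.g. a slab-allocated head */
    page->pp = NULL;                 /* reset, re-stored on next use */
    pages_recycled++;
    return true;
}

/* Model of the release path: the cheap skb bit gates everything; the
 * page signature is the authoritative ownership check. */
static void free_head(bool skb_pp_recycle, struct page *head)
{
    if (skb_pp_recycle && page_pool_return_skb_page(head))
        return;
    pages_freed++;                   /* normal skb_free_frag()/put_page() */
}
```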

> 
> Even when skb->pp_recycle is 1, pages allocated directly from the page
> allocator and pages from a page pool are both supported, so it seems
> page->signature needs to be reliable in indicating that a page is indeed
> owned by a page pool. That means skb->pp_recycle is mainly used to shortcut
> the code path for the skb->pp_recycle == 0 case, so that page->signature does
> not need checking?

Yes, the idea behind the recycling bit is that you don't have to fetch the
page into cache to do more processing (since freeing is asynchronous and we
can't have any guarantees on what the cache will have at that point).  So we
are trying to affect the existing release path as little as possible.
However, it's that new skb bit that triggers the whole path.

What you propose could still be doable though.  As you said we can add the
page pointer to struct page when we allocate a page_pool page and never
reset it when we recycle the buffer. But I don't think there will be any
performance impact whatsoever. So I prefer the 'visible' approach, at least for
the first iteration.

Thanks
/Ilias
Yunsheng Lin May 17, 2021, 11:10 a.m. UTC | #9
On 2021/5/17 17:36, Ilias Apalodimas wrote:
 >>
>> Even when skb->pp_recycle is 1, pages allocated directly from the page
>> allocator and pages from a page pool are both supported, so it seems
>> page->signature needs to be reliable in indicating that a page is indeed
>> owned by a page pool. That means skb->pp_recycle is mainly used to shortcut
>> the code path for the skb->pp_recycle == 0 case, so that page->signature
>> does not need checking?
> 
> Yes, the idea behind the recycling bit is that you don't have to fetch the
> page into cache to do more processing (since freeing is asynchronous and we
> can't have any guarantees on what the cache will have at that point).  So we
> are trying to affect the existing release path as little as possible.
> However, it's that new skb bit that triggers the whole path.
> 
> What you propose could still be doable though.  As you said we can add the
> page pointer to struct page when we allocate a page_pool page and never
> reset it when we recycle the buffer. But I don't think there will be any
> performance impact whatsoever. So I prefer the 'visible' approach, at least for

Setting and unsetting the page_pool ptr every time the page is recycled may
cause a cache bouncing problem when rx cleaning and skb releasing are not
happening on the same CPU.

> the first iteration.
> 
> Thanks
> /Ilias
Ilias Apalodimas May 17, 2021, 11:35 a.m. UTC | #10
On Mon, May 17, 2021 at 07:10:09PM +0800, Yunsheng Lin wrote:
> On 2021/5/17 17:36, Ilias Apalodimas wrote:
>  >>
> >> Even when skb->pp_recycle is 1, pages allocated directly from the page
> >> allocator and pages from a page pool are both supported, so it seems
> >> page->signature needs to be reliable in indicating that a page is indeed
> >> owned by a page pool. That means skb->pp_recycle is mainly used to shortcut
> >> the code path for the skb->pp_recycle == 0 case, so that page->signature
> >> does not need checking?
> > 
> > Yes, the idea behind the recycling bit is that you don't have to fetch the
> > page into cache to do more processing (since freeing is asynchronous and we
> > can't have any guarantees on what the cache will have at that point).  So we
> > are trying to affect the existing release path as little as possible.
> > However, it's that new skb bit that triggers the whole path.
> > 
> > What you propose could still be doable though.  As you said we can add the
> > page pointer to struct page when we allocate a page_pool page and never
> > reset it when we recycle the buffer. But I don't think there will be any
> > performance impact whatsoever. So I prefer the 'visible' approach, at least for
> 
> Setting and unsetting the page_pool ptr every time the page is recycled may
> cause a cache bouncing problem when rx cleaning and skb releasing are not
> happening on the same CPU.

In our case, since the skb freeing is asynchronous and not protected by a NAPI
context, the buffer won't end up in the 'fast' page pool cache.  So we'll
recycle by calling page_pool_recycle_in_ring(), not
page_pool_recycle_in_cache().  Which means that the page you recycled will be
used to re-fill the cache later, in batches, when
page_pool_refill_alloc_cache() is called to refill the fast cache.  I am not
saying it might not happen, but I don't really know if it's going to make a
difference or not.  So I just really prefer taking this as-is and perhaps
later, when 40/100Gbit drivers start using it, we can justify the optimization
(along with supporting the split page model).
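
A toy model of those two recycle destinations (all sizes, names, and the full-ring fallback are simplifications of the real page_pool internals):

```c
#include <assert.h>
#include <stddef.h>

#define RING_SIZE 8                  /* illustrative capacities */
#define CACHE_SIZE 4

struct page_pool {
    void *ring[RING_SIZE];           /* ptr ring: cross-CPU recycling */
    int ring_cnt;
    void *cache[CACHE_SIZE];         /* fast cache: NAPI context only */
    int cache_cnt;
};

/* Outside a NAPI context, recycled pages land in the ptr ring. */
static int page_pool_recycle_in_ring(struct page_pool *pool, void *page)
{
    if (pool->ring_cnt == RING_SIZE)
        return 0;                    /* ring full: caller releases the page */
    pool->ring[pool->ring_cnt++] = page;
    return 1;
}

/* The fast cache is refilled from the ring in batches on allocation. */
static int page_pool_refill_alloc_cache(struct page_pool *pool)
{
    int refilled = 0;

    while (pool->cache_cnt < CACHE_SIZE && pool->ring_cnt > 0) {
        pool->cache[pool->cache_cnt++] = pool->ring[--pool->ring_cnt];
        refilled++;
    }
    return refilled;
}
```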

Thanks
/Ilias

> 
> > the first iteration.
> > 
> > Thanks
> > /Ilias
diff mbox series

Patch

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7fcfea7e7b21..057b40ad29bd 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -40,6 +40,9 @@ 
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include <linux/netfilter/nf_conntrack_common.h>
 #endif
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 /* The interface for checksum offload between the stack and networking drivers
  * is as follows...
@@ -667,6 +670,8 @@  typedef unsigned char *sk_buff_data_t;
  *	@head_frag: skb was allocated from page fragments,
  *		not allocated by kmalloc() or vmalloc().
  *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
+ *	@pp_recycle: mark the packet for recycling instead of freeing (implies
+ *		page_pool support on driver)
  *	@active_extensions: active extensions (skb_ext_id types)
  *	@ndisc_nodetype: router type (from link layer)
  *	@ooo_okay: allow the mapping of a socket to a queue to be changed
@@ -791,10 +796,12 @@  struct sk_buff {
 				fclone:2,
 				peeked:1,
 				head_frag:1,
-				pfmemalloc:1;
+				pfmemalloc:1,
+				pp_recycle:1; /* page_pool recycle indicator */
 #ifdef CONFIG_SKB_EXTENSIONS
 	__u8			active_extensions;
 #endif
+
 	/* fields enclosed in headers_start/headers_end are copied
 	 * using a single memcpy() in __copy_skb_header()
 	 */
@@ -3088,7 +3095,13 @@  static inline void skb_frag_ref(struct sk_buff *skb, int f)
  */
 static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
-	put_page(skb_frag_page(frag));
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && page_pool_return_skb_page(page_address(page)))
+		return;
+#endif
+	put_page(page);
 }
 
 /**
@@ -3100,7 +3113,7 @@  static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }
 
 /**
@@ -4699,5 +4712,14 @@  static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
 #endif
 }
 
+#ifdef CONFIG_PAGE_POOL
+static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
+					struct page_pool *pp)
+{
+	skb->pp_recycle = 1;
+	page_pool_store_mem_info(page, pp);
+}
+#endif
+
 #endif	/* __KERNEL__ */
 #endif	/* _LINUX_SKBUFF_H */
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 24b3d42c62c0..ce75abeddb29 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -148,6 +148,8 @@  inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
+bool page_pool_return_skb_page(void *data);
+
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 #ifdef CONFIG_PAGE_POOL
@@ -253,4 +255,11 @@  static inline void page_pool_ring_unlock(struct page_pool *pool)
 		spin_unlock_bh(&pool->ring.producer_lock);
 }
 
+/* Store mem_info on struct page and use it while recycling skb frags */
+static inline
+void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
+{
+	page->pp = pp;
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 9de5d8c08c17..fa9f17db7c48 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -626,3 +626,26 @@  void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+bool page_pool_return_skb_page(void *data)
+{
+	struct page_pool *pp;
+	struct page *page;
+
+	page = virt_to_head_page(data);
+	if (unlikely(page->pp_magic != PP_SIGNATURE))
+		return false;
+
+	pp = (struct page_pool *)page->pp;
+
+	/* Driver set this to memory recycling info. Reset it on recycle.
+	 * This will *not* work for NIC using a split-page memory model.
+	 * The page will be returned to the pool here regardless of the
+	 * 'flipped' fragment being in use or not.
+	 */
+	page->pp = NULL;
+	page_pool_put_full_page(pp, page, false);
+
+	return true;
+}
+EXPORT_SYMBOL(page_pool_return_skb_page);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 12b7e90dd2b5..9581af44d587 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -70,6 +70,9 @@ 
 #include <net/xfrm.h>
 #include <net/mpls.h>
 #include <net/mptcp.h>
+#ifdef CONFIG_PAGE_POOL
+#include <net/page_pool.h>
+#endif
 
 #include <linux/uaccess.h>
 #include <trace/events/skb.h>
@@ -645,10 +648,15 @@  static void skb_free_head(struct sk_buff *skb)
 {
 	unsigned char *head = skb->head;
 
-	if (skb->head_frag)
+	if (skb->head_frag) {
+#ifdef CONFIG_PAGE_POOL
+		if (skb->pp_recycle && page_pool_return_skb_page(head))
+			return;
+#endif
 		skb_free_frag(head);
-	else
+	} else {
 		kfree(head);
+	}
 }
 
 static void skb_release_data(struct sk_buff *skb)
@@ -664,7 +672,7 @@  static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i], false);
+		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -1046,6 +1054,7 @@  static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
 	n->nohdr = 0;
 	n->peeked = 0;
 	C(pfmemalloc);
+	C(pp_recycle);
 	n->destructor = NULL;
 	C(tail);
 	C(end);
@@ -1725,6 +1734,7 @@  int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 	skb->cloned   = 0;
 	skb->hdr_len  = 0;
 	skb->nohdr    = 0;
+	skb->pp_recycle = 0;
 	atomic_set(&skb_shinfo(skb)->dataref, 1);
 
 	skb_metadata_clear(skb);
@@ -3495,7 +3505,7 @@  int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom, false);
+		__skb_frag_unref(fragfrom, skb->pp_recycle);
 	}
 
 	/* Reposition in the original skb */
@@ -5285,6 +5295,13 @@  bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	if (skb_cloned(to))
 		return false;
 
+	/* We can't coalesce skb that are allocated from slab and page_pool
+	 * The recycle mark is on the skb, so that might end up trying to
+	 * recycle slab allocated skb->head
+	 */
+	if (to->pp_recycle != from->pp_recycle)
+		return false;
+
 	if (len <= skb_tailroom(to)) {
 		if (len)
 			BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));