
[v3,17/26] page_pool: Convert page_pool_return_skb_page() to use netmem

Message ID 20230111042214.907030-18-willy@infradead.org
State New
Series: Split netmem from struct page

Commit Message

Matthew Wilcox (Oracle) Jan. 11, 2023, 4:22 a.m. UTC
This function accesses the pagepool members of struct page directly,
so it needs to become netmem.  Add page_pool_put_full_netmem().
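
For context only (not part of this patch): a minimal sketch of a driver
recycle path once it uses the new helper, assuming the struct netmem and
page_netmem() helpers introduced earlier in this series; the function
name is purely illustrative.

	static void recycle_rx_buffer(struct page_pool *pool,
				      struct page *page)
	{
		struct netmem *nmem = page_netmem(page);

		/* the "full" variant syncs the whole pool->max_len area */
		page_pool_put_full_netmem(pool, nmem, false);
	}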

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
---
 include/net/page_pool.h |  9 ++++++++-
 net/core/page_pool.c    | 13 ++++++-------
 2 files changed, 14 insertions(+), 8 deletions(-)

Comments

Jesper Dangaard Brouer Jan. 12, 2023, 8:46 a.m. UTC | #1
On 11/01/2023 05.22, Matthew Wilcox (Oracle) wrote:
> This function accesses the pagepool members of struct page directly,
> so it needs to become netmem.  Add page_pool_put_full_netmem().
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> ---
>   include/net/page_pool.h |  9 ++++++++-
>   net/core/page_pool.c    | 13 ++++++-------
>   2 files changed, 14 insertions(+), 8 deletions(-)

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>

Patch

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index a568d94043af..e205eaed21a5 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -468,10 +468,17 @@  static inline void page_pool_put_page(struct page_pool *pool,
 }
 
 /* Same as above but will try to sync the entire area pool->max_len */
+static inline void page_pool_put_full_netmem(struct page_pool *pool,
+		struct netmem *nmem, bool allow_direct)
+{
+	page_pool_put_netmem(pool, nmem, -1, allow_direct);
+}
+
+/* Compat, remove when all users gone */
 static inline void page_pool_put_full_page(struct page_pool *pool,
 					   struct page *page, bool allow_direct)
 {
-	page_pool_put_page(pool, page, -1, allow_direct);
+	page_pool_put_full_netmem(pool, page_netmem(page), allow_direct);
 }
 
 /* Same as above but the caller must guarantee safe context. e.g NAPI */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index cd469a9970e7..ddf9f2bb85f7 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -886,28 +886,27 @@  EXPORT_SYMBOL(page_pool_update_nid);
 
 bool page_pool_return_skb_page(struct page *page)
 {
+	struct netmem *nmem = page_netmem(compound_head(page));
 	struct page_pool *pp;
 
-	page = compound_head(page);
-
-	/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
+	/* nmem->pp_magic is OR'ed with PP_SIGNATURE after the allocation
 	 * in order to preserve any existing bits, such as bit 0 for the
 	 * head page of compound page and bit 1 for pfmemalloc page, so
 	 * mask those bits for freeing side when doing below checking,
-	 * and page_is_pfmemalloc() is checked in __page_pool_put_page()
+	 * and netmem_is_pfmemalloc() is checked in __page_pool_put_netmem()
 	 * to avoid recycling the pfmemalloc page.
 	 */
-	if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
+	if (unlikely((nmem->pp_magic & ~0x3UL) != PP_SIGNATURE))
 		return false;
 
-	pp = page->pp;
+	pp = nmem->pp;
 
 	/* Driver set this to memory recycling info. Reset it on recycle.
 	 * This will *not* work for NIC using a split-page memory model.
 	 * The page will be returned to the pool here regardless of the
 	 * 'flipped' fragment being in use or not.
 	 */
-	page_pool_put_full_page(pp, page, false);
+	page_pool_put_full_netmem(pp, nmem, false);
 
 	return true;
 }
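
A standalone, userspace-only sketch (illustrative signature value, not
kernel code) of the pp_magic check described in the comment above: bits
0 and 1 are masked off before comparing against the pool signature, so a
compound-head or pfmemalloc marking does not defeat the test.

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative value only; the kernel defines PP_SIGNATURE itself. */
	#define DEMO_PP_SIGNATURE	0x40UL

	static bool looks_like_pp_memory(unsigned long pp_magic)
	{
		/* Clear bit 0 (compound head) and bit 1 (pfmemalloc). */
		return (pp_magic & ~0x3UL) == DEMO_PP_SIGNATURE;
	}

	int main(void)
	{
		/* Signature with the pfmemalloc bit set still matches ... */
		printf("%d\n", looks_like_pp_memory(DEMO_PP_SIGNATURE | 0x2UL));
		/* ... while an unrelated value does not. */
		printf("%d\n", looks_like_pp_memory(0x12345678UL));
		return 0;
	}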