
[RFC,v4,03/16] net: page_pool: add ->scrub mem provider callback

Message ID: 20240312214430.2923019-4-dw@davidwei.uk (mailing list archive)
State: New
Series: Zero copy Rx using io_uring

Commit Message

David Wei March 12, 2024, 9:44 p.m. UTC
From: Pavel Begunkov <asml.silence@gmail.com>

The page pool now waits for all ppiovs to return before destroying
itself, and for that to happen the memory provider might need to push
out some buffers, flush caches and so on.

TODO: we'll try to get by without this callback before the final release.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: David Wei <dw@davidwei.uk>
---
 include/net/page_pool/types.h | 1 +
 net/core/page_pool.c          | 3 +++
 2 files changed, 4 insertions(+)
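
For illustration only (not part of the patch), a minimal sketch of what a
provider-side ->scrub() might look like: a hypothetical provider that keeps
spare buffers on a private list drains that list so the pool's inflight count
can drop to zero. Every my_* name is made up for this sketch, and pool->mp_priv
is assumed to carry the provider's private state; the io_uring provider added
later in this series does the equivalent for its own caches.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <net/page_pool/types.h>

struct my_cached_buf {
	struct list_head	node;
	netmem_ref		netmem;
};

struct my_mp_priv {
	spinlock_t		lock;
	/* buffers the provider has pulled back but not yet returned */
	struct list_head	cache;
};

/* Hypothetical helper: return one buffer through whatever release
 * path the provider normally uses, so page_pool accounting sees it. */
static void my_mp_return_buf(struct page_pool *pool,
			     struct my_cached_buf *buf);

static void my_mp_scrub(struct page_pool *pool)
{
	struct my_mp_priv *priv = pool->mp_priv;
	struct my_cached_buf *buf, *tmp;

	spin_lock(&priv->lock);
	list_for_each_entry_safe(buf, tmp, &priv->cache, node) {
		list_del(&buf->node);
		my_mp_return_buf(pool, buf);
	}
	spin_unlock(&priv->lock);
}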

Patch

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 096cd2455b2c..347837b83d36 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -134,6 +134,7 @@ struct memory_provider_ops {
 	void (*destroy)(struct page_pool *pool);
 	netmem_ref (*alloc_pages)(struct page_pool *pool, gfp_t gfp);
 	bool (*release_page)(struct page_pool *pool, netmem_ref netmem);
+	void (*scrub)(struct page_pool *pool);
 };
 
 extern const struct memory_provider_ops dmabuf_devmem_ops;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 5d5b78878473..fc92e551ed13 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -984,6 +984,9 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 
 static void page_pool_scrub(struct page_pool *pool)
 {
+	if (pool->mp_ops && pool->mp_ops->scrub)
+		pool->mp_ops->scrub(pool);
+
 	page_pool_empty_alloc_cache_once(pool);
 	pool->destroy_cnt++;
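
page_pool_scrub() runs from the pool's release path during destruction and,
because that path retries while buffers are still inflight, a provider's
->scrub() may be called more than once and should tolerate that. The hook is
also optional: it is only invoked when both pool->mp_ops and ->scrub are set,
so providers that never cache buffers need not implement it. Wiring it up is
then one more entry in the provider's ops table; a sketch using the
hypothetical my_mp_* callbacks from above (hooks not shown in this hunk are
omitted):

/* Illustration only: my_mp_* are hypothetical provider callbacks. */
static const struct memory_provider_ops my_mp_ops = {
	.destroy	= my_mp_destroy,
	.alloc_pages	= my_mp_alloc_pages,
	.release_page	= my_mp_release_page,
	.scrub		= my_mp_scrub,
};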