Message ID | 20241029230521.2385749-8-dw@davidwei.uk
---|---
State | New
Series | io_uring zero copy rx
On Tue, Oct 29, 2024 at 4:06 PM David Wei <dw@davidwei.uk> wrote:
>
> From: Pavel Begunkov <asml.silence@gmail.com>
>
> Add a helper that allows a page pool memory provider to efficiently
> return a netmem off the allocation callback.
>
> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
> Signed-off-by: David Wei <dw@davidwei.uk>
> ---
>  include/net/page_pool/memory_provider.h |  4 ++++
>  net/core/page_pool.c                     | 19 +++++++++++++++++++
>  2 files changed, 23 insertions(+)
>
> diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
> index 83d7eec0058d..352b3a35d31c 100644
> --- a/include/net/page_pool/memory_provider.h
> +++ b/include/net/page_pool/memory_provider.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +
>  #ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
>  #define _NET_PAGE_POOL_MEMORY_PROVIDER_H
>
> @@ -7,4 +9,6 @@ int page_pool_mp_init_paged_area(struct page_pool *pool,
>  void page_pool_mp_release_area(struct page_pool *pool,
>                                 struct net_iov_area *area);
>
> +void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem);
> +
>  #endif
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 8bd4a3c80726..9078107c906d 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -1213,3 +1213,22 @@ void page_pool_mp_release_area(struct page_pool *pool,
>  		page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
>  	}
>  }
> +
> +/*
> + * page_pool_mp_return_in_cache() - return a netmem to the allocation cache.
> + * @pool:	pool from which pages were allocated
> + * @netmem:	netmem to return
> + *
> + * Return already allocated and accounted netmem to the page pool's allocation
> + * cache. The function doesn't provide synchronisation and must only be called
> + * from the napi context.

Maybe add:

/* Caller must verify that there is room in the cache */

> + */
> +void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
> +{
> +	if (WARN_ON_ONCE(pool->alloc.count >= PP_ALLOC_CACHE_REFILL))
> +		return;

The caller must verify this anyway, right? so maybe this WARN_ON_ONCE is too
defensive.

> +
> +	page_pool_dma_sync_for_device(pool, netmem, -1);
> +	page_pool_fragment_netmem(netmem, 1);
> +	pool->alloc.cache[pool->alloc.count++] = netmem;
> +}
> --
> 2.43.5
>
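The contract under discussion is that the caller, not the helper, guarantees
there is room in the allocation cache. A minimal caller-side sketch under that
assumption follows; the provider refill hook io_zc_refill_cache() and the
free-list helper io_zc_get_free_netmem() are hypothetical names used only for
illustration, not part of this series:

static void io_zc_refill_cache(struct page_pool *pool)
{
	/* Bound how many entries we push so the helper's defensive
	 * WARN_ON_ONCE() can never fire; this is the caller-side check
	 * the review asks to document.
	 */
	while (pool->alloc.count < PP_ALLOC_CACHE_REFILL) {
		netmem_ref netmem = io_zc_get_free_netmem();	/* hypothetical */

		if (!netmem)
			break;

		/* netmem was already allocated and accounted by the provider */
		page_pool_mp_return_in_cache(pool, netmem);
	}
}

Both the refill loop above and the helper touch pool->alloc without locks, so
they rely on the same guarantee as the rest of the page pool fast path: all of
it runs in the pool's napi context.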
diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
index 83d7eec0058d..352b3a35d31c 100644
--- a/include/net/page_pool/memory_provider.h
+++ b/include/net/page_pool/memory_provider.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
 #ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
 #define _NET_PAGE_POOL_MEMORY_PROVIDER_H
 
@@ -7,4 +9,6 @@ int page_pool_mp_init_paged_area(struct page_pool *pool,
 void page_pool_mp_release_area(struct page_pool *pool,
 			       struct net_iov_area *area);
 
+void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem);
+
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 8bd4a3c80726..9078107c906d 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1213,3 +1213,22 @@ void page_pool_mp_release_area(struct page_pool *pool,
 		page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
 	}
 }
+
+/*
+ * page_pool_mp_return_in_cache() - return a netmem to the allocation cache.
+ * @pool:	pool from which pages were allocated
+ * @netmem:	netmem to return
+ *
+ * Return already allocated and accounted netmem to the page pool's allocation
+ * cache. The function doesn't provide synchronisation and must only be called
+ * from the napi context.
+ */
+void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
+{
+	if (WARN_ON_ONCE(pool->alloc.count >= PP_ALLOC_CACHE_REFILL))
+		return;
+
+	page_pool_dma_sync_for_device(pool, netmem, -1);
+	page_pool_fragment_netmem(netmem, 1);
+	pool->alloc.cache[pool->alloc.count++] = netmem;
+}
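For context on why returning into pool->alloc is the efficient path: the page
pool's allocation fast path pops from the same array with no locking. A
simplified sketch of that consumer side, loosely modelled on
__page_pool_get_cached() in net/core/page_pool.c (statistics and other details
omitted; treat it as illustrative rather than the exact upstream code):

/* Entries pushed by page_pool_mp_return_in_cache() are popped here without
 * any locking, which is why both sides must run in the same napi context.
 */
static netmem_ref __page_pool_get_cached(struct page_pool *pool)
{
	if (likely(pool->alloc.count))
		return pool->alloc.cache[--pool->alloc.count];

	/* Cache empty: fall back to refilling from the pool's ptr_ring. */
	return page_pool_refill_alloc_cache(pool);
}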