Message ID | 1643499540-8351-10-git-send-email-jdamato@fastly.com |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | page_pool: Add page_pool stat counters |
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 4991109..e411ef6 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -150,6 +150,7 @@ struct page_pool_stats {
 					  * slow path allocation
 					  */
 		u64 refill; /* allocations via successful refill */
+		u64 waive;  /* failed refills due to numa zone mismatch */
 	} alloc;
 };

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ffb68b8..c6f31c5 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -161,6 +161,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 			 * This limit stress on page buddy alloactor.
 			 */
 			page_pool_return_page(pool, page);
+			page_pool_stat_alloc_inc(waive);
 			page = NULL;
 			break;
 		}
Track how often pages obtained from the ring cannot be added to the cache because of a NUMA mismatch.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 1 +
 net/core/page_pool.c    | 1 +
 2 files changed, 2 insertions(+)