diff mbox series

[6/6] net: page_pool: Add a stat tracking waived pages.

Message ID 1643237300-44904-7-git-send-email-jdamato@fastly.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series net: page_pool: Add page_pool stat counters

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 5991 this patch: 5991
netdev/cc_maintainers success CCed 5 of 5 maintainers
netdev/build_clang success Errors and warnings before: 882 this patch: 882
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 6142 this patch: 6142
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 36 lines checked
netdev/kdoc success Errors and warnings before: 2 this patch: 2
netdev/source_inline success Was 0 now: 0

Commit Message

Joe Damato Jan. 26, 2022, 10:48 p.m. UTC
Track how often pages obtained from the ring cannot be added to the cache
because of a NUMA mismatch. A static inline wrapper is added for accessing
this stat.

Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 include/net/page_pool.h | 11 +++++++++++
 net/core/page_pool.c    |  1 +
 2 files changed, 12 insertions(+)

Patch

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index a68d05f..cf65d78 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -88,6 +88,7 @@  struct page_pool_stats {
 			    * slow path allocation
 			    */
 		u64 refill; /* allocations via successful refill */
+		u64 waive;  /* failed refills due to numa zone mismatch */
 	} alloc;
 };
 
@@ -226,6 +227,11 @@  static inline u64 page_pool_stats_get_refill(struct page_pool *pool)
 {
 	return pool->ps.alloc.refill;
 }
+
+static inline u64 page_pool_stats_get_waive(struct page_pool *pool)
+{
+	return pool->ps.alloc.waive;
+}
 #else
 static inline void page_pool_destroy(struct page_pool *pool)
 {
@@ -270,6 +276,11 @@  static inline u64 page_pool_stats_get_refill(struct page_pool *pool)
 {
 	return 0;
 }
+
+static inline u64 page_pool_stats_get_waive(struct page_pool *pool)
+{
+	return 0;
+}
 #endif
 
 void page_pool_put_page(struct page_pool *pool, struct page *page,
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 15f4e73..7c4ae2e 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -147,6 +147,7 @@  static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 			 * This limit stress on page buddy alloactor.
 			 */
 			page_pool_return_page(pool, page);
+			pool->ps.alloc.waive++;
 			page = NULL;
 			break;
 		}