
[net-next] net: page_pool: add pages and released_pages counters

Message ID a20f97acccce65d174f704eadbf685d0ce1201af.1681422222.git.lorenzo@kernel.org (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series [net-next] net: page_pool: add pages and released_pages counters

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 5213 this patch: 5213
netdev/cc_maintainers warning 2 maintainers not CCed: corbet@lwn.net linux-doc@vger.kernel.org
netdev/build_clang success Errors and warnings before: 965 this patch: 965
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 5419 this patch: 5419
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 82 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Lorenzo Bianconi April 13, 2023, 9:46 p.m. UTC
Introduce pages and released_pages counters to the page_pool ethtool
stats in order to track the number of pages allocated to and released
from the pool.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 Documentation/networking/page_pool.rst | 2 ++
 include/net/page_pool.h                | 2 ++
 net/core/page_pool.c                   | 8 ++++++++
 3 files changed, 12 insertions(+)
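These counters plug into the existing page_pool ethtool stats plumbing, so
a driver that already reports page_pool stats picks them up automatically.
A minimal sketch of that driver-side glue (the drv_* names and the
priv->page_pool field are illustrative only, not part of this patch):

#include <linux/netdevice.h>
#include <linux/ethtool.h>
#include <net/page_pool.h>

/* Hypothetical driver private data; one pool per device for simplicity. */
struct drv_priv {
	struct page_pool *page_pool;
};

static void drv_get_ethtool_stats(struct net_device *dev,
				  struct ethtool_stats *stats, u64 *data)
{
	struct drv_priv *priv = netdev_priv(dev);
	struct page_pool_stats pp_stats = {};

	/* Fold the per-pool counters, including the new "pages" and
	 * "released_pages" fields, into the ethtool data array.
	 */
	if (page_pool_get_stats(priv->page_pool, &pp_stats))
		data = page_pool_ethtool_stats_get(data, &pp_stats);
}

The matching strings come from page_pool_ethtool_stats_get_strings(), so the
new "rx_pp_alloc_pages" and "rx_pp_released_pages" entries show up in
ethtool -S output without further driver changes.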

Comments

Jakub Kicinski April 15, 2023, 1:46 a.m. UTC | #1
On Thu, 13 Apr 2023 23:46:03 +0200 Lorenzo Bianconi wrote:
> @@ -411,6 +417,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  		pool->pages_state_hold_cnt++;
>  		trace_page_pool_state_hold(pool, page,
>  					   pool->pages_state_hold_cnt);
> +		alloc_stat_inc(pool, pages);
>  	}
>  
>  	/* Return last page */

What about high order? If we use bulk API for high order one day, 
will @slow_high_order not count calls like @slow does? So we should
bump the new counter for high order, too.

Which makes it very similar to pages_state_hold_cnt, just 64bit...
Lorenzo Bianconi April 15, 2023, 11:16 a.m. UTC | #2
> On Thu, 13 Apr 2023 23:46:03 +0200 Lorenzo Bianconi wrote:
> > @@ -411,6 +417,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> >  		pool->pages_state_hold_cnt++;
> >  		trace_page_pool_state_hold(pool, page,
> >  					   pool->pages_state_hold_cnt);
> > +		alloc_stat_inc(pool, pages);
> >  	}
> >  
> >  	/* Return last page */
> 
> What about high order? If we use bulk API for high order one day, 
> will @slow_high_order not count calls like @slow does? So we should
> bump the new counter for high order, too.

yes, right. AFAIU "slow_high_order" and "slow" just count the number of
pages returned to the pool consumer and not the number of pages
allocated to the pool (as you said, since we do not use bulking
for high_order allocation there is no difference at the moment).
What I would like to track is the number of allocated pages
(of any order), so I guess we can just increment the "pages" counter in
__page_pool_alloc_page_order() as well. Agree?
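Concretely, the suggestion amounts to one extra line next to the hold-count
accounting that __page_pool_alloc_page_order() already does for the
order > 0 path (a sketch, not a posted revision):

	/* tail of __page_pool_alloc_page_order(): order > 0, no bulking */
	pool->pages_state_hold_cnt++;
	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
	alloc_stat_inc(pool, pages);	/* proposed: count order > 0 pages too */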

> 
> Which makes it very similar to pages_state_hold_cnt, just 64bit...

do you prefer to use pages_state_hold_cnt instead of adding a new
pages counter?

Regards,
Lorenzo
Jakub Kicinski April 17, 2023, 6:12 p.m. UTC | #3
On Sat, 15 Apr 2023 13:16:40 +0200 Lorenzo Bianconi wrote:
> > What about high order? If we use bulk API for high order one day, 
> > will @slow_high_order not count calls like @slow does? So we should
> > bump the new counter for high order, too.  
> 
> yes, right. AFAIU "slow_high_order" and "slow" just count the number of
> pages returned to the pool consumer and not the number of pages
> allocated to the pool (as you said, since we do not use bulking
> for high_order allocation there is no difference at the moment).
> What I would like to track is the number of allocated pages
> (of any order), so I guess we can just increment the "pages" counter in
> __page_pool_alloc_page_order() as well. Agree?

Yup, that sounds better.

> > Which makes it very similar to pages_state_hold_cnt, just 64bit...  
> 
> do you prefer to use pages_state_hold_cnt instead of adding a new
> pages counter?

No strong preference either way. It's a tradeoff between saving 4B 
and making the code a little more complex. Perhaps we should stick 
to simplicity and add the counter like you did. Nothing stops us from
optimizing later.
Lorenzo Bianconi April 17, 2023, 9:39 p.m. UTC | #4
> On Sat, 15 Apr 2023 13:16:40 +0200 Lorenzo Bianconi wrote:
> > > What about high order? If we use bulk API for high order one day, 
> > > will @slow_high_order not count calls like @slow does? So we should
> > > bump the new counter for high order, too.  
> > 
> > yes, right. AFAIU "slow_high_order" and "slow" just count the number of
> > pages returned to the pool consumer and not the number of pages
> > allocated to the pool (as you said, since we do not use bulking
> > for high_order allocation there is no difference at the moment).
> > What I would like to track is the number of allocated pages
> > (of any order), so I guess we can just increment the "pages" counter in
> > __page_pool_alloc_page_order() as well. Agree?
> 
> Yup, that sounds better.

ack, fine. I am now wondering if these counters are useful only during
debugging or also in the normal use-case.
@Jesper, Ilias, Joe: what do you think?

Regards,
Lorenzo

> 
> > > Which makes it very similar to pages_state_hold_cnt, just 64bit...  
> > 
> > do you prefer to use pages_state_hold_cnt instead of adding a new
> > pages counter?
> 
> No strong preference either way. It's a tradeoff between saving 4B 
> and making the code a little more complex. Perhaps we should stick 
> to simplicity and add the counter like you did. Nothing stops us from
> optimizing later.
Jakub Kicinski April 17, 2023, 11:36 p.m. UTC | #5
On Mon, 17 Apr 2023 23:39:40 +0200 Lorenzo Bianconi wrote:
> > Yup, that sounds better.  
> 
> ack, fine. I am now wondering if these counters are useful only during
> debugging or also in the normal use-case.

I was wondering about it too. FWIW, I think the most useful info is
the number of outstanding references. But it may be a little too
inflexible as a statistic. BPF attached to the tracepoint may be
a better choice there, and tracepoints are already in place.
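As a reference point, the outstanding-page figure can already be derived from
the two counters the pool maintains: pages_state_hold_cnt (bumped on
allocation) and pages_state_release_cnt (bumped in page_pool_release_page(),
as the diff below shows). The tree has a helper along these lines
(page_pool_inflight()); a simplified sketch that skips the real helper's
sanity checks:

static s32 pp_outstanding_pages(struct page_pool *pool)
{
	u32 hold = READ_ONCE(pool->pages_state_hold_cnt);
	u32 release = atomic_read(&pool->pages_state_release_cnt);

	/* Pages handed out by the pool minus pages returned to the page
	 * allocator; u32 subtraction keeps the distance correct across
	 * counter wraparound.
	 */
	return (s32)(hold - release);
}

Anything more fine-grained than that is what the page_pool_state_hold /
page_pool_state_release tracepoints (visible in the hunks below) give a BPF
program.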
Lorenzo Bianconi April 18, 2023, 7:23 a.m. UTC | #6
> On Mon, 17 Apr 2023 23:39:40 +0200 Lorenzo Bianconi wrote:
> > > Yup, that sounds better.  
> > 
> > ack, fine. I am now wondering if these counters are useful only during
> > debugging or also in the normal use-case.
> 
> I was wondering about it too. FWIW, I think the most useful info is
> the number of outstanding references. But it may be a little too
> inflexible as a statistic. BPF attached to the tracepoint may be
> a better choice there, and tracepoints are already in place.

ack, I agree. Let's drop this patch in this case.

Regards,
Lorenzo

Patch

diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
index 30f1344e7cca..71b3eb4555f1 100644
--- a/Documentation/networking/page_pool.rst
+++ b/Documentation/networking/page_pool.rst
@@ -138,6 +138,7 @@  The ``struct page_pool_alloc_stats`` has the following fields:
   * ``refill``: an allocation which triggered a refill of the cache
   * ``waive``: pages obtained from the ptr ring that cannot be added to
     the cache due to a NUMA mismatch.
+  * ``pages``: number of pages allocated to the pool
 
 The ``struct page_pool_recycle_stats`` has the following fields:
   * ``cached``: recycling placed page in the page pool cache
@@ -145,6 +146,7 @@  The ``struct page_pool_recycle_stats`` has the following fields:
   * ``ring``: page placed into the ptr ring
   * ``ring_full``: page released from page pool because the ptr ring was full
   * ``released_refcnt``: page released (and not recycled) because refcnt > 1
+  * ``released_pages``: number of pages released from the pool
 
 Coding examples
 ===============
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index ddfa0b328677..b16d5320d963 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -94,6 +94,7 @@  struct page_pool_alloc_stats {
 		    */
 	u64 refill; /* allocations via successful refill */
 	u64 waive;  /* failed refills due to numa zone mismatch */
+	u64 pages; /* number of pages allocated to the pool */
 };
 
 struct page_pool_recycle_stats {
@@ -106,6 +107,7 @@  struct page_pool_recycle_stats {
 	u64 released_refcnt; /* page released because of elevated
 			      * refcnt
 			      */
+	u64 released_pages; /* number of pages released from the pool */
 };
 
 /* This struct wraps the above stats structs so users of the
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 193c18799865..418d443d7e7c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -50,11 +50,13 @@  static const char pp_stats[][ETH_GSTRING_LEN] = {
 	"rx_pp_alloc_empty",
 	"rx_pp_alloc_refill",
 	"rx_pp_alloc_waive",
+	"rx_pp_alloc_pages",
 	"rx_pp_recycle_cached",
 	"rx_pp_recycle_cache_full",
 	"rx_pp_recycle_ring",
 	"rx_pp_recycle_ring_full",
 	"rx_pp_recycle_released_ref",
+	"rx_pp_released_pages",
 };
 
 bool page_pool_get_stats(struct page_pool *pool,
@@ -72,6 +74,7 @@  bool page_pool_get_stats(struct page_pool *pool,
 	stats->alloc_stats.empty += pool->alloc_stats.empty;
 	stats->alloc_stats.refill += pool->alloc_stats.refill;
 	stats->alloc_stats.waive += pool->alloc_stats.waive;
+	stats->alloc_stats.pages += pool->alloc_stats.pages;
 
 	for_each_possible_cpu(cpu) {
 		const struct page_pool_recycle_stats *pcpu =
@@ -82,6 +85,7 @@  bool page_pool_get_stats(struct page_pool *pool,
 		stats->recycle_stats.ring += pcpu->ring;
 		stats->recycle_stats.ring_full += pcpu->ring_full;
 		stats->recycle_stats.released_refcnt += pcpu->released_refcnt;
+		stats->recycle_stats.released_pages += pcpu->released_pages;
 	}
 
 	return true;
@@ -117,11 +121,13 @@  u64 *page_pool_ethtool_stats_get(u64 *data, void *stats)
 	*data++ = pool_stats->alloc_stats.empty;
 	*data++ = pool_stats->alloc_stats.refill;
 	*data++ = pool_stats->alloc_stats.waive;
+	*data++ = pool_stats->alloc_stats.pages;
 	*data++ = pool_stats->recycle_stats.cached;
 	*data++ = pool_stats->recycle_stats.cache_full;
 	*data++ = pool_stats->recycle_stats.ring;
 	*data++ = pool_stats->recycle_stats.ring_full;
 	*data++ = pool_stats->recycle_stats.released_refcnt;
+	*data++ = pool_stats->recycle_stats.released_pages;
 
 	return data;
 }
@@ -411,6 +417,7 @@  static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 		pool->pages_state_hold_cnt++;
 		trace_page_pool_state_hold(pool, page,
 					   pool->pages_state_hold_cnt);
+		alloc_stat_inc(pool, pages);
 	}
 
 	/* Return last page */
@@ -493,6 +500,7 @@  void page_pool_release_page(struct page_pool *pool, struct page *page)
 	 */
 	count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
 	trace_page_pool_state_release(pool, page, count);
+	recycle_stat_inc(pool, released_pages);
 }
 EXPORT_SYMBOL(page_pool_release_page);