[net-next,v2,2/5] page_pool: Add per-queue statistics.

Message ID 20250307115722.705311-3-bigeasy@linutronix.de (mailing list archive)
State New
Series page_pool: Convert stats to u64_stats_t.

Commit Message

Sebastian Andrzej Siewior March 7, 2025, 11:57 a.m. UTC
The mlx5 driver supports per-channel statistics. To make this support
generic, a template is needed to fill in the strings for each
individual channel/queue.

Provide page_pool_ethtool_stats_get_strings_mq() to fill the strings
for multiple queues.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/net/page_pool/helpers.h |  5 +++++
 net/core/page_pool.c            | 23 +++++++++++++++++++++++
 2 files changed, 28 insertions(+)

Comments

Jakub Kicinski March 7, 2025, 4:11 p.m. UTC | #1
On Fri,  7 Mar 2025 12:57:19 +0100 Sebastian Andrzej Siewior wrote:
> The mlx5 driver supports per-channel statistics. To make support generic
> it is required to have a template to fill the individual channel/ queue.
> 
> Provide page_pool_ethtool_stats_get_strings_mq() to fill the strings for
> multiple queue.

Sorry to say, this is useless as a common helper; you should move it
to mlx5.

The page pool stats have a standard interface, they are exposed over
netlink. If my grep-foo isn't failing me no driver uses the exact
strings mlx5 uses. "New drivers" are not supposed to add these stats
to ethtool -S, and should just steer users towards the netlink stats.

IOW mlx5 is and will remain the only user of this helper forever.
Sebastian Andrzej Siewior March 7, 2025, 4:50 p.m. UTC | #2
On 2025-03-07 08:11:35 [-0800], Jakub Kicinski wrote:
> On Fri,  7 Mar 2025 12:57:19 +0100 Sebastian Andrzej Siewior wrote:
> > The mlx5 driver supports per-channel statistics. To make support generic
> > it is required to have a template to fill the individual channel/ queue.
> > 
> > Provide page_pool_ethtool_stats_get_strings_mq() to fill the strings for
> > multiple queue.
> 
> Sorry to say this is useless as a common helper, you should move it 
> to mlx5.
> 
> The page pool stats have a standard interface, they are exposed over
> netlink. If my grep-foo isn't failing me no driver uses the exact
> strings mlx5 uses. "New drivers" are not supposed to add these stats
> to ethtool -S, and should just steer users towards the netlink stats.
> 
> IOW mlx5 is and will remain the only user of this helper forever.

Okay, so per-queue stats are not something other/new drivers are
interested in?
The strings are the same except for the rx%d_ prefix, but yes, this
makes them unique.
The mlx5 folks seem to be the only ones interested in this. The veth
driver, for instance, iterates over real_num_rx_queues and adds all
per-queue stats into one counter. It could also expose them per queue,
as it does with xdp_packets for instance. But then it uses the
rx_queue_%d prefix…

I don't mind either way; I just intended to provide a generic facility
so we don't have every driver rolling its own thing. I have no problem
moving this to the mlx5 driver.

Sebastian
Jakub Kicinski March 7, 2025, 4:59 p.m. UTC | #3
On Fri, 7 Mar 2025 17:50:46 +0100 Sebastian Andrzej Siewior wrote:
> On 2025-03-07 08:11:35 [-0800], Jakub Kicinski wrote:
> > On Fri,  7 Mar 2025 12:57:19 +0100 Sebastian Andrzej Siewior wrote:  
> > > The mlx5 driver supports per-channel statistics. To make support generic
> > > it is required to have a template to fill the individual channel/ queue.
> > > 
> > > Provide page_pool_ethtool_stats_get_strings_mq() to fill the strings for
> > > multiple queue.  
> > 
> > Sorry to say this is useless as a common helper, you should move it 
> > to mlx5.
> > 
> > The page pool stats have a standard interface, they are exposed over
> > netlink. If my grep-foo isn't failing me no driver uses the exact
> > strings mlx5 uses. "New drivers" are not supposed to add these stats
> > to ethtool -S, and should just steer users towards the netlink stats.
> > 
> > IOW mlx5 is and will remain the only user of this helper forever.  
> 
> Okay, so per-runqueue stats is not something other/ new drivers are
> interested in?
> The strings are the same, except for the rx%d_ prefix, but yes this
> makes it unique.
> The mlx5 folks seem to be the only one interested in this. The veth
> driver for instance iterates over real_num_rx_queues and adds all
> per-queue stats into one counter. It could also expose this per-runqueue
> as it does with xdp_packets for instance. But then it uses the
> rx_queue_%d prefix…

What I'm saying is they are already available per queue, via netlink,
with no driver work necessary. See tools/net/ynl/samples/page-pool.c
The mlx5 stats predate the standard interface.

> I don't care, I just intended to provide some generic facility so we
> don't have every driver rolling its own thing. I have no problem to move
> this to the mlx5 driver.

Thanks, and sorry for not catching the conversation earlier.
Sebastian Andrzej Siewior March 7, 2025, 5:05 p.m. UTC | #4
On 2025-03-07 08:59:46 [-0800], Jakub Kicinski wrote:
> What I'm saying is they are already available per queue, via netlink,
> with no driver work necessary. See tools/net/ynl/samples/page-pool.c
> The mlx5 stats predate the standard interface.

Ah, got you.

> > I don't care, I just intended to provide some generic facility so we
> > don't have every driver rolling its own thing. I have no problem to move
> > this to the mlx5 driver.
> 
> Thanks, and sorry for not catching the conversation earlier.

No worries, thanks.

Sebastian

Patch

diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 4622db90f88f2..a815b0ff97448 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -62,6 +62,7 @@ 
 /* Deprecated driver-facing API, use netlink instead */
 int page_pool_ethtool_stats_get_count(void);
 u8 *page_pool_ethtool_stats_get_strings(u8 *data);
+void page_pool_ethtool_stats_get_strings_mq(u8 **data, unsigned int queue);
 u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats);
 
 bool page_pool_get_stats(const struct page_pool *pool,
@@ -77,6 +78,10 @@  static inline u8 *page_pool_ethtool_stats_get_strings(u8 *data)
 	return data;
 }
 
+static inline void page_pool_ethtool_stats_get_strings_mq(u8 **data, unsigned int queue)
+{
+}
+
 static inline u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
 {
 	return data;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f5e908c9e7ad8..2290d80443d1e 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -68,6 +68,20 @@  static const char pp_stats[][ETH_GSTRING_LEN] = {
 	"rx_pp_recycle_released_ref",
 };
 
+static const char pp_stats_mq[][ETH_GSTRING_LEN] = {
+	"rx%d_pp_alloc_fast",
+	"rx%d_pp_alloc_slow",
+	"rx%d_pp_alloc_slow_ho",
+	"rx%d_pp_alloc_empty",
+	"rx%d_pp_alloc_refill",
+	"rx%d_pp_alloc_waive",
+	"rx%d_pp_recycle_cached",
+	"rx%d_pp_recycle_cache_full",
+	"rx%d_pp_recycle_ring",
+	"rx%d_pp_recycle_ring_full",
+	"rx%d_pp_recycle_released_ref",
+};
+
 /**
  * page_pool_get_stats() - fetch page pool stats
  * @pool:	pool from which page was allocated
@@ -123,6 +137,15 @@  u8 *page_pool_ethtool_stats_get_strings(u8 *data)
 }
 EXPORT_SYMBOL(page_pool_ethtool_stats_get_strings);
 
+void page_pool_ethtool_stats_get_strings_mq(u8 **data, unsigned int queue)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(pp_stats_mq); i++)
+		ethtool_sprintf(data, pp_stats_mq[i], queue);
+}
+EXPORT_SYMBOL(page_pool_ethtool_stats_get_strings_mq);
+
 int page_pool_ethtool_stats_get_count(void)
 {
 	return ARRAY_SIZE(pp_stats);