
[v2,4/9] SUNRPC: Count ingress RPC messages per svc_pool

Message ID 168842927591.139194.16920372497489479670.stgit@manet.1015granger.net (mailing list archive)
State New, archived
Series SUNRPC service thread scheduler optimizations

Commit Message

Chuck Lever July 4, 2023, 12:07 a.m. UTC
From: Chuck Lever <chuck.lever@oracle.com>

To get a sense of the average number of transport enqueue operations
needed to process an incoming RPC message, re-use the "packets" pool
stat. Track the number of complete RPC messages processed by each
thread pool.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 include/linux/sunrpc/svc.h |    1 +
 net/sunrpc/svc.c           |    2 ++
 net/sunrpc/svc_xprt.c      |    3 ++-
 3 files changed, 5 insertions(+), 1 deletion(-)

Comments

NeilBrown July 4, 2023, 12:45 a.m. UTC | #1
On Tue, 04 Jul 2023, Chuck Lever wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
> 
> To get a sense of the average number of transport enqueue operations
> needed to process an incoming RPC message, re-use the "packets" pool
> stat. Track the number of complete RPC messages processed by each
> thread pool.

If I understand this correctly, then I would say it differently.

I think there are either zero or one transport enqueue operations for
each incoming RPC message.  Is that correct?  So the average would be in
(0,1].
Wouldn't it be more natural to talk about the average number of incoming
RPC messages processed per enqueue operation?  This would typically
be around 1 on a lightly loaded server and would climb up as things get
busy. 

Was there a reason you wrote it the way around that you did?

NeilBrown
Chuck Lever July 4, 2023, 1:47 a.m. UTC | #2
On Tue, Jul 04, 2023 at 10:45:20AM +1000, NeilBrown wrote:
> On Tue, 04 Jul 2023, Chuck Lever wrote:
> > From: Chuck Lever <chuck.lever@oracle.com>
> > 
> > To get a sense of the average number of transport enqueue operations
> > needed to process an incoming RPC message, re-use the "packets" pool
> > stat. Track the number of complete RPC messages processed by each
> > thread pool.
> 
> If I understand this correctly, then I would say it differently.
> 
> I think there are either zero or one transport enqueue operations for
> each incoming RPC message.  Is that correct?  So the average would be in
> (0,1].
> Wouldn't it be more natural to talk about the average number of incoming
> RPC messages processed per enqueue operation?  This would typically
> be around 1 on a lightly loaded server and would climb up as things get
> busy. 
> 
> Was there a reason you wrote it the way around that you did?

Yes: more than one enqueue is done per incoming RPC. For example,
svc_data_ready() enqueues, and so does svc_xprt_receive().

If the RPC requires more than one call to ->recvfrom() to complete
the receive operation, each one of those calls does an enqueue.
NeilBrown July 4, 2023, 2:30 a.m. UTC | #3
On Tue, 04 Jul 2023, Chuck Lever wrote:
> On Tue, Jul 04, 2023 at 10:45:20AM +1000, NeilBrown wrote:
> > On Tue, 04 Jul 2023, Chuck Lever wrote:
> > > From: Chuck Lever <chuck.lever@oracle.com>
> > > 
> > > To get a sense of the average number of transport enqueue operations
> > > needed to process an incoming RPC message, re-use the "packets" pool
> > > stat. Track the number of complete RPC messages processed by each
> > > thread pool.
> > 
> > If I understand this correctly, then I would say it differently.
> > 
> > I think there are either zero or one transport enqueue operations for
> > each incoming RPC message.  Is that correct?  So the average would be in
> > (0,1].
> > Wouldn't it be more natural to talk about the average number of incoming
> > RPC messages processed per enqueue operation?  This would typically
> > be around 1 on a lightly loaded server and would climb up as things get
> > busy. 
> > 
> > Was there a reason you wrote it the way around that you did?
> 
> Yes: more than one enqueue is done per incoming RPC. For example,
> svc_data_ready() enqueues, and so does svc_xprt_receive().
> 
> If the RPC requires more than one call to ->recvfrom() to complete
> the receive operation, each one of those calls does an enqueue.
> 

Ahhhh - that makes sense.  Thanks.
So it's really that a number of transport enqueue operations are needed
to *receive* the message.  Once it is received, it is then processed
with no more enqueuing.

I was partly thrown by the fact that the series is mostly about the
queue of threads, but this is about the queue of transports.
I guess the more times a transport is queued, the more times a thread
needs to be woken?

Thanks,
NeilBrown
Chuck Lever July 4, 2023, 3:04 p.m. UTC | #4
> On Jul 3, 2023, at 10:30 PM, NeilBrown <neilb@suse.de> wrote:
> 
> On Tue, 04 Jul 2023, Chuck Lever wrote:
>> On Tue, Jul 04, 2023 at 10:45:20AM +1000, NeilBrown wrote:
>>> On Tue, 04 Jul 2023, Chuck Lever wrote:
>>>> From: Chuck Lever <chuck.lever@oracle.com>
>>>> 
>>>> To get a sense of the average number of transport enqueue operations
>>>> needed to process an incoming RPC message, re-use the "packets" pool
>>>> stat. Track the number of complete RPC messages processed by each
>>>> thread pool.
>>> 
>>> If I understand this correctly, then I would say it differently.
>>> 
>>> I think there are either zero or one transport enqueue operations for
>>> each incoming RPC message.  Is that correct?  So the average would be in
>>> (0,1].
>>> Wouldn't it be more natural to talk about the average number of incoming
>>> RPC messages processed per enqueue operation?  This would typically
>>> be around 1 on a lightly loaded server and would climb up as things get
>>> busy. 
>>> 
>>> Was there a reason you wrote it the way around that you did?
>> 
>> Yes: more than one enqueue is done per incoming RPC. For example,
>> svc_data_ready() enqueues, and so does svc_xprt_receive().
>> 
>> If the RPC requires more than one call to ->recvfrom() to complete
>> the receive operation, each one of those calls does an enqueue.
>> 
> 
> Ahhhh - that makes sense.  Thanks.
> So it's really that a number of transport enqueue operations are needed
> to *receive* the message.  Once it is received, it is then processed
> with no more enqueuing.
> 
> I was partly thrown by the fact that the series is mostly about the
> queue of threads, but this is about the queue of transports.
> I guess the more times a transport is queued, the more times a thread
> needs to be woken?

Yes: this new metric is an indirect measure of the workload
on the new thread wake-up mechanism.


--
Chuck Lever

Patch

diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index fbfe6ea737c8..74ea13270679 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -38,6 +38,7 @@  struct svc_pool {
 	struct list_head	sp_all_threads;	/* all server threads */
 
 	/* statistics on pool operation */
+	struct percpu_counter	sp_messages_arrived;
 	struct percpu_counter	sp_sockets_queued;
 	struct percpu_counter	sp_threads_woken;
 	struct percpu_counter	sp_threads_timedout;
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 04151e22ec44..ccc7ff3142dd 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -513,6 +513,7 @@  __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
 		INIT_LIST_HEAD(&pool->sp_all_threads);
 		spin_lock_init(&pool->sp_lock);
 
+		percpu_counter_init(&pool->sp_messages_arrived, 0, GFP_KERNEL);
 		percpu_counter_init(&pool->sp_sockets_queued, 0, GFP_KERNEL);
 		percpu_counter_init(&pool->sp_threads_woken, 0, GFP_KERNEL);
 		percpu_counter_init(&pool->sp_threads_timedout, 0, GFP_KERNEL);
@@ -589,6 +590,7 @@  svc_destroy(struct kref *ref)
 	for (i = 0; i < serv->sv_nrpools; i++) {
 		struct svc_pool *pool = &serv->sv_pools[i];
 
+		percpu_counter_destroy(&pool->sp_messages_arrived);
 		percpu_counter_destroy(&pool->sp_sockets_queued);
 		percpu_counter_destroy(&pool->sp_threads_woken);
 		percpu_counter_destroy(&pool->sp_threads_timedout);
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 7ee095d03996..ecbccf0d89b9 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -897,6 +897,7 @@  int svc_recv(struct svc_rqst *rqstp, long timeout)
 
 	if (serv->sv_stats)
 		serv->sv_stats->netcnt++;
+	percpu_counter_inc(&rqstp->rq_pool->sp_messages_arrived);
 	rqstp->rq_stime = ktime_get();
 	return len;
 out_release:
@@ -1446,7 +1447,7 @@  static int svc_pool_stats_show(struct seq_file *m, void *p)
 
 	seq_printf(m, "%u %llu %llu %llu %llu %llu\n",
 		pool->sp_id,
-		percpu_counter_sum_positive(&pool->sp_sockets_queued),
+		percpu_counter_sum_positive(&pool->sp_messages_arrived),
 		percpu_counter_sum_positive(&pool->sp_sockets_queued),
 		percpu_counter_sum_positive(&pool->sp_threads_woken),
 		percpu_counter_sum_positive(&pool->sp_threads_timedout),