
[resend,ftrace] Asynchronous grace period for register_ftrace_direct()

Message ID ac05be77-2972-475b-9b57-56bef15aa00a@paulmck-laptop (mailing list archive)
State Queued
Commit 33f137143e651321f10eb67ae6404a13bfbf69f8
Delegated to: Steven Rostedt
Series [resend,ftrace] Asynchronous grace period for register_ftrace_direct()

Commit Message

Paul E. McKenney May 1, 2024, 11:12 p.m. UTC
Note that the immediate pressure for this patch should be relieved by the
NAPI patch series [1], but this sort of problem could easily arise again.

When running heavy test workloads with KASAN enabled, RCU Tasks grace
periods can extend for many tens of seconds, significantly slowing
trace registration.  Therefore, make the registration-side RCU Tasks
grace period be asynchronous via call_rcu_tasks().

[1] https://lore.kernel.org/all/cover.1710877680.git.yan@cloudflare.com/
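
For readers less familiar with the asynchronous form, the following is a
minimal sketch of the general conversion described above, using a made-up
struct foo rather than the actual ftrace_hash code (foo, foo_free_cb, and
the retire helpers are illustrative names only): the rcu_head is embedded
in the object being freed, and the callback recovers the enclosing object
with container_of() once the RCU Tasks grace period has elapsed.

#include <linux/container_of.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	/* ... payload protected by RCU Tasks ... */
	struct rcu_head rcu;
};

/* Callback invoked once an RCU Tasks grace period has elapsed. */
static void foo_free_cb(struct rcu_head *rhp)
{
	struct foo *fp = container_of(rhp, struct foo, rcu);

	kfree(fp);
}

/* Before: the updater blocks for a full RCU Tasks grace period. */
static void foo_retire_sync(struct foo *fp)
{
	synchronize_rcu_tasks();
	kfree(fp);
}

/* After: the updater returns immediately; the free happens later. */
static void foo_retire_async(struct foo *fp)
{
	call_rcu_tasks(&fp->rcu, foo_free_cb);
}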

Reported-by: Jakub Kicinski <kuba@kernel.org>
Reported-by: Alexei Starovoitov <ast@kernel.org>
Reported-by: Chris Mason <clm@fb.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: <linux-trace-kernel@vger.kernel.org>

Comments

Masami Hiramatsu (Google) May 2, 2024, 2:05 a.m. UTC | #1
On Wed, 1 May 2024 16:12:37 -0700
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> Note that the immediate pressure for this patch should be relieved by the
> NAPI patch series [1], but this sort of problem could easily arise again.
> 
> When running heavy test workloads with KASAN enabled, RCU Tasks grace
> periods can extend for many tens of seconds, significantly slowing
> trace registration.  Therefore, make the registration-side RCU Tasks
> grace period be asynchronous via call_rcu_tasks().
> 

Good catch! AFAICS, there is no reason to wait for synchronization
when adding a new direct trampoline.
This looks good to me.

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thank you,

> [1] https://lore.kernel.org/all/cover.1710877680.git.yan@cloudflare.com/
> 
> Reported-by: Jakub Kicinski <kuba@kernel.org>
> Reported-by: Alexei Starovoitov <ast@kernel.org>
> Reported-by: Chris Mason <clm@fb.com>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Cc: <linux-trace-kernel@vger.kernel.org>
> 
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 6c96b30f3d63b..32ea92934268c 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -5365,6 +5365,13 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
>  	}
>  }
>  
> +static void register_ftrace_direct_cb(struct rcu_head *rhp)
> +{
> +	struct ftrace_hash *fhp = container_of(rhp, struct ftrace_hash, rcu);
> +
> +	free_ftrace_hash(fhp);
> +}
> +
>  /**
>   * register_ftrace_direct - Call a custom trampoline directly
>   * for multiple functions registered in @ops
> @@ -5463,10 +5470,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
>   out_unlock:
>  	mutex_unlock(&direct_mutex);
>  
> -	if (free_hash && free_hash != EMPTY_HASH) {
> -		synchronize_rcu_tasks();
> -		free_ftrace_hash(free_hash);
> -	}
> +	if (free_hash && free_hash != EMPTY_HASH)
> +		call_rcu_tasks(&free_hash->rcu, register_ftrace_direct_cb);
>  
>  	if (new_hash)
>  		free_ftrace_hash(new_hash);
Paul E. McKenney May 2, 2024, 3:31 a.m. UTC | #2
On Thu, May 02, 2024 at 11:05:01AM +0900, Masami Hiramatsu wrote:
> On Wed, 1 May 2024 16:12:37 -0700
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
> 
> > Note that the immediate pressure for this patch should be relieved by the
> > NAPI patch series [1], but this sort of problem could easily arise again.
> > 
> > When running heavy test workloads with KASAN enabled, RCU Tasks grace
> > periods can extend for many tens of seconds, significantly slowing
> > trace registration.  Therefore, make the registration-side RCU Tasks
> > grace period be asynchronous via call_rcu_tasks().
> 
> Good catch! AFAICS, there is no reason to wait for synchronization
> when adding a new direct trampoline.
> This looks good to me.
> 
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thank you very much!  I will apply this on my next rebase.

							Thanx, Paul

> Thank you,
> 
> > [1] https://lore.kernel.org/all/cover.1710877680.git.yan@cloudflare.com/
> > 
> > Reported-by: Jakub Kicinski <kuba@kernel.org>
> > Reported-by: Alexei Starovoitov <ast@kernel.org>
> > Reported-by: Chris Mason <clm@fb.com>
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> > Cc: <linux-trace-kernel@vger.kernel.org>
> > 
> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index 6c96b30f3d63b..32ea92934268c 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -5365,6 +5365,13 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
> >  	}
> >  }
> >  
> > +static void register_ftrace_direct_cb(struct rcu_head *rhp)
> > +{
> > +	struct ftrace_hash *fhp = container_of(rhp, struct ftrace_hash, rcu);
> > +
> > +	free_ftrace_hash(fhp);
> > +}
> > +
> >  /**
> >   * register_ftrace_direct - Call a custom trampoline directly
> >   * for multiple functions registered in @ops
> > @@ -5463,10 +5470,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> >   out_unlock:
> >  	mutex_unlock(&direct_mutex);
> >  
> > -	if (free_hash && free_hash != EMPTY_HASH) {
> > -		synchronize_rcu_tasks();
> > -		free_ftrace_hash(free_hash);
> > -	}
> > +	if (free_hash && free_hash != EMPTY_HASH)
> > +		call_rcu_tasks(&free_hash->rcu, register_ftrace_direct_cb);
> >  
> >  	if (new_hash)
> >  		free_ftrace_hash(new_hash);
> 
> 
> -- 
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
Steven Rostedt May 2, 2024, 9:31 p.m. UTC | #3
On Wed, 1 May 2024 20:31:06 -0700
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> On Thu, May 02, 2024 at 11:05:01AM +0900, Masami Hiramatsu wrote:
> > On Wed, 1 May 2024 16:12:37 -0700
> > "Paul E. McKenney" <paulmck@kernel.org> wrote:
> >   
> > > Note that the immediate pressure for this patch should be relieved by the
> > > NAPI patch series [1], but this sort of problem could easily arise again.
> > > 
> > > When running heavy test workloads with KASAN enabled, RCU Tasks grace
> > > periods can extend for many tens of seconds, significantly slowing
> > > trace registration.  Therefore, make the registration-side RCU Tasks
> > > grace period be asynchronous via call_rcu_tasks().  
> > 
> > Good catch! AFAICS, there is no reason to wait for synchronization
> > when adding a new direct trampoline.
> > This looks good to me.
> > 
> > Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>  
> 
> Thank you very much!  I will apply this on my next rebase.

I can take it.

It's not a bug fix but just a performance improvement, so it can go into
the next merge window.

-- Steve



> 
> > Thank you,
> >   
> > > [1] https://lore.kernel.org/all/cover.1710877680.git.yan@cloudflare.com/
> > > 
> > > Reported-by: Jakub Kicinski <kuba@kernel.org>
> > > Reported-by: Alexei Starovoitov <ast@kernel.org>
> > > Reported-by: Chris Mason <clm@fb.com>
> > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > Cc: Steven Rostedt <rostedt@goodmis.org>
> > > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> > > Cc: <linux-trace-kernel@vger.kernel.org>
> > > 
> > > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > > index 6c96b30f3d63b..32ea92934268c 100644
> > > --- a/kernel/trace/ftrace.c
> > > +++ b/kernel/trace/ftrace.c
> > > @@ -5365,6 +5365,13 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
> > >  	}
> > >  }
> > >  
> > > +static void register_ftrace_direct_cb(struct rcu_head *rhp)
> > > +{
> > > +	struct ftrace_hash *fhp = container_of(rhp, struct ftrace_hash, rcu);
> > > +
> > > +	free_ftrace_hash(fhp);
> > > +}
> > > +
> > >  /**
> > >   * register_ftrace_direct - Call a custom trampoline directly
> > >   * for multiple functions registered in @ops
> > > @@ -5463,10 +5470,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> > >   out_unlock:
> > >  	mutex_unlock(&direct_mutex);
> > >  
> > > -	if (free_hash && free_hash != EMPTY_HASH) {
> > > -		synchronize_rcu_tasks();
> > > -		free_ftrace_hash(free_hash);
> > > -	}
> > > +	if (free_hash && free_hash != EMPTY_HASH)
> > > +		call_rcu_tasks(&free_hash->rcu, register_ftrace_direct_cb);
> > >  
> > >  	if (new_hash)
> > >  		free_ftrace_hash(new_hash);
> > 
> > 
> > -- 
> > Masami Hiramatsu (Google) <mhiramat@kernel.org>
Paul E. McKenney May 2, 2024, 11:13 p.m. UTC | #4
On Thu, May 02, 2024 at 05:31:00PM -0400, Steven Rostedt wrote:
> On Wed, 1 May 2024 20:31:06 -0700
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
> 
> > On Thu, May 02, 2024 at 11:05:01AM +0900, Masami Hiramatsu wrote:
> > > On Wed, 1 May 2024 16:12:37 -0700
> > > "Paul E. McKenney" <paulmck@kernel.org> wrote:
> > >   
> > > > Note that the immediate pressure for this patch should be relieved by the
> > > > NAPI patch series [1], but this sort of problem could easily arise again.
> > > > 
> > > > When running heavy test workloads with KASAN enabled, RCU Tasks grace
> > > > periods can extend for many tens of seconds, significantly slowing
> > > > trace registration.  Therefore, make the registration-side RCU Tasks
> > > > grace period be asynchronous via call_rcu_tasks().  
> > > 
> > > Good catch! AFAICS, there is no reason to wait for synchronization
> > > when adding a new direct trampoline.
> > > This looks good to me.
> > > 
> > > Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>  
> > 
> > Thank you very much!  I will apply this on my next rebase.
> 
> I can take it.
> 
> It's not a bug fix but just a performance improvement, so it can go into
> the next merge window.

Very good, and thank you!

I will drop it from RCU as soon as it shows up in either -next or in
mainline.

							Thanx, Paul

> -- Steve
> 
> 
> 
> > 
> > > Thank you,
> > >   
> > > > [1] https://lore.kernel.org/all/cover.1710877680.git.yan@cloudflare.com/
> > > > 
> > > > Reported-by: Jakub Kicinski <kuba@kernel.org>
> > > > Reported-by: Alexei Starovoitov <ast@kernel.org>
> > > > Reported-by: Chris Mason <clm@fb.com>
> > > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > > Cc: Steven Rostedt <rostedt@goodmis.org>
> > > > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > > Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> > > > Cc: <linux-trace-kernel@vger.kernel.org>
> > > > 
> > > > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > > > index 6c96b30f3d63b..32ea92934268c 100644
> > > > --- a/kernel/trace/ftrace.c
> > > > +++ b/kernel/trace/ftrace.c
> > > > @@ -5365,6 +5365,13 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
> > > >  	}
> > > >  }
> > > >  
> > > > +static void register_ftrace_direct_cb(struct rcu_head *rhp)
> > > > +{
> > > > +	struct ftrace_hash *fhp = container_of(rhp, struct ftrace_hash, rcu);
> > > > +
> > > > +	free_ftrace_hash(fhp);
> > > > +}
> > > > +
> > > >  /**
> > > >   * register_ftrace_direct - Call a custom trampoline directly
> > > >   * for multiple functions registered in @ops
> > > > @@ -5463,10 +5470,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> > > >   out_unlock:
> > > >  	mutex_unlock(&direct_mutex);
> > > >  
> > > > -	if (free_hash && free_hash != EMPTY_HASH) {
> > > > -		synchronize_rcu_tasks();
> > > > -		free_ftrace_hash(free_hash);
> > > > -	}
> > > > +	if (free_hash && free_hash != EMPTY_HASH)
> > > > +		call_rcu_tasks(&free_hash->rcu, register_ftrace_direct_cb);
> > > >  
> > > >  	if (new_hash)
> > > >  		free_ftrace_hash(new_hash);
> > > 
> > > 
> > > -- 
> > > Masami Hiramatsu (Google) <mhiramat@kernel.org>  
>
Steven Rostedt May 3, 2024, 12:04 a.m. UTC | #5
On Thu, 2 May 2024 16:13:59 -0700
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> Very good, and thank you!
> 
> I will drop it from RCU as soon as it shows up in either -next or in
> mainline.

Sounds good.

I'm currently working on updates to get into -rc7 and plan to add my next
work on top of that (I know, I know, this is probably the latest in a release
cycle that I've started my for-next work, but things are still being worked on).

-- Steve

Patch

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 6c96b30f3d63b..32ea92934268c 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5365,6 +5365,13 @@  static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
 	}
 }
 
+static void register_ftrace_direct_cb(struct rcu_head *rhp)
+{
+	struct ftrace_hash *fhp = container_of(rhp, struct ftrace_hash, rcu);
+
+	free_ftrace_hash(fhp);
+}
+
 /**
  * register_ftrace_direct - Call a custom trampoline directly
  * for multiple functions registered in @ops
@@ -5463,10 +5470,8 @@  int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
  out_unlock:
 	mutex_unlock(&direct_mutex);
 
-	if (free_hash && free_hash != EMPTY_HASH) {
-		synchronize_rcu_tasks();
-		free_ftrace_hash(free_hash);
-	}
+	if (free_hash && free_hash != EMPTY_HASH)
+		call_rcu_tasks(&free_hash->rcu, register_ftrace_direct_cb);
 
 	if (new_hash)
 		free_ftrace_hash(new_hash);
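
For context, a rough sketch of how a caller typically exercises this
interface, loosely based on the in-tree samples/ftrace direct-call
examples: the trampoline body is architecture-specific assembly and is
only declared here, and the module boilerplate and names are illustrative,
not part of this patch.

#include <linux/module.h>
#include <linux/ftrace.h>
#include <linux/sched.h>

extern void my_tramp(void);	/* arch-specific trampoline, not shown */

static struct ftrace_ops direct;

static int __init direct_example_init(void)
{
	int ret;

	/* Have the direct call fire when wake_up_process() is entered. */
	ret = ftrace_set_filter_ip(&direct, (unsigned long)wake_up_process, 0, 0);
	if (ret)
		return ret;

	/*
	 * With the patch above, replacing a previously installed
	 * direct-functions hash no longer blocks here for a full
	 * RCU Tasks grace period; the old hash is freed from the
	 * call_rcu_tasks() callback instead.
	 */
	return register_ftrace_direct(&direct, (unsigned long)my_tramp);
}

static void __exit direct_example_exit(void)
{
	unregister_ftrace_direct(&direct, (unsigned long)my_tramp, true);
}

module_init(direct_example_init);
module_exit(direct_example_exit);
MODULE_LICENSE("GPL");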