
[v2,1/2] uprobes: Remove redundant spinlock in uprobe_deny_signal()

Message ID 20240809061004.2112369-2-liaochang1@huawei.com (mailing list archive)
State Superseded
Series uprobes: Improve scalability by reducing the contention on siglock

Checks

Context Check Description
netdev/tree_selection success Not a local patch

Commit Message

Liao, Chang Aug. 9, 2024, 6:10 a.m. UTC
Since clearing a bit in thread_info is an atomic operation, the spinlock
is redundant and can be removed; reducing lock contention is good for
performance.

Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Liao Chang <liaochang1@huawei.com>
---
 kernel/events/uprobes.c | 2 --
 1 file changed, 2 deletions(-)
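
For reference, "clearing a bit in thread_info" boils down to an atomic
clear_bit() on the task's thread_info flags. A minimal sketch of the generic
helpers involved (simplified from include/linux/thread_info.h and
include/linux/sched.h; an approximation only, architectures and kernel
versions differ in detail):

/* Simplified sketch, not the verbatim kernel code. */
static inline void clear_ti_thread_flag(struct thread_info *ti, int flag)
{
	clear_bit(flag, (unsigned long *)&ti->flags);	/* atomic read-modify-write */
}

static inline void clear_tsk_thread_flag(struct task_struct *tsk, int flag)
{
	clear_ti_thread_flag(task_thread_info(tsk), flag);
}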

Comments

Oleg Nesterov Aug. 12, 2024, 12:07 p.m. UTC | #1
On 08/09, Liao Chang wrote:
>
> Since clearing a bit in thread_info is an atomic operation, the spinlock
> is redundant and can be removed; reducing lock contention is good for
> performance.

My ack still stays, but let me add some notes...

sighand->siglock doesn't protect clear_bit() per se. It was used to not
break the "the state of TIF_SIGPENDING of every thread is stable with
sighand->siglock held" rule.

But we already have the lockless users of clear_thread_flag(TIF_SIGPENDING)
(some if not most of them look buggy), and afaics in this (very special)
case it should be fine.
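
For context, the rule above comes from the way the core signal-delivery paths
update the flag: a rough sketch of the usual locked pattern (illustrative
only, loosely based on kernel/signal.c, not verbatim kernel code):

/*
 * Illustrative sketch: senders queue a signal and set TIF_SIGPENDING
 * with t->sighand->siglock held, so anyone holding the lock sees a
 * stable value of the flag.
 */
static void send_signal_sketch(struct task_struct *t, int sig)
{
	spin_lock_irq(&t->sighand->siglock);
	/* ... add sig to t->pending ... */
	set_tsk_thread_flag(t, TIF_SIGPENDING);	/* flag only changes under the lock */
	spin_unlock_irq(&t->sighand->siglock);
}

The clear_tsk_thread_flag() call left in uprobe_deny_signal() now runs without
that lock, which is exactly the deliberate exception being discussed here.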

Oleg.

> Acked-by: Oleg Nesterov <oleg@redhat.com>
> Signed-off-by: Liao Chang <liaochang1@huawei.com>
> ---
>  kernel/events/uprobes.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 73cc47708679..76a51a1f51e2 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -1979,9 +1979,7 @@ bool uprobe_deny_signal(void)
>  	WARN_ON_ONCE(utask->state != UTASK_SSTEP);
>  
>  	if (task_sigpending(t)) {
> -		spin_lock_irq(&t->sighand->siglock);
>  		clear_tsk_thread_flag(t, TIF_SIGPENDING);
> -		spin_unlock_irq(&t->sighand->siglock);
>  
>  		if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
>  			utask->state = UTASK_SSTEP_TRAPPED;
> -- 
> 2.34.1
>
Liao, Chang Aug. 13, 2024, 12:30 p.m. UTC | #2
On 2024/8/12 20:07, Oleg Nesterov wrote:
> On 08/09, Liao Chang wrote:
>>
>> Since clearing a bit in thread_info is an atomic operation, the spinlock
>> is redundant and can be removed; reducing lock contention is good for
>> performance.
> 
> My ack still stays, but let me add some notes...
> 
> sighand->siglock doesn't protect clear_bit() per se. It was used to not
> break the "the state of TIF_SIGPENDING of every thread is stable with
> sighand->siglock held" rule.
> 
> But we already have the lockless users of clear_thread_flag(TIF_SIGPENDING)
> (some if not most of them look buggy), and afaics in this (very special)
> case it should be fine.

Oleg, your explanation is more accurate, so I will reword the commit log and
quote some of your notes like this:

  Since we already have lockless users of clear_thread_flag(TIF_SIGPENDING),
  and for the uprobe single-step case it doesn't break the rule of "the state
  of TIF_SIGPENDING of every thread is stable with sighand->siglock held",
  remove sighand->siglock to reduce contention for better performance.

> 
> Oleg.
> 
>> Acked-by: Oleg Nesterov <oleg@redhat.com>
>> Signed-off-by: Liao Chang <liaochang1@huawei.com>
>> ---
>>  kernel/events/uprobes.c | 2 --
>>  1 file changed, 2 deletions(-)
>>
>> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
>> index 73cc47708679..76a51a1f51e2 100644
>> --- a/kernel/events/uprobes.c
>> +++ b/kernel/events/uprobes.c
>> @@ -1979,9 +1979,7 @@ bool uprobe_deny_signal(void)
>>  	WARN_ON_ONCE(utask->state != UTASK_SSTEP);
>>  
>>  	if (task_sigpending(t)) {
>> -		spin_lock_irq(&t->sighand->siglock);
>>  		clear_tsk_thread_flag(t, TIF_SIGPENDING);
>> -		spin_unlock_irq(&t->sighand->siglock);
>>  
>>  		if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
>>  			utask->state = UTASK_SSTEP_TRAPPED;
>> -- 
>> 2.34.1
>>
> 
>
Oleg Nesterov Aug. 13, 2024, 12:47 p.m. UTC | #3
On 08/13, Liao, Chang wrote:
>
>
> Oleg, your explanation is more accurate, so I will reword the commit log and
> quote some of your notes like this:

Oh, please don't. I just tried to explain the history of this spin_lock(siglock).

>   Since we already have lockless users of clear_thread_flag(TIF_SIGPENDING),
>   and for the uprobe single-step case it doesn't break the rule of "the state
>   of TIF_SIGPENDING of every thread is stable with sighand->siglock held".

It obviously does break the rule above. Please keep your changelog as is.

Oleg.
Andrii Nakryiko Sept. 5, 2024, 8:53 p.m. UTC | #4
On Tue, Aug 13, 2024 at 5:47 AM Oleg Nesterov <oleg@redhat.com> wrote:
>
> On 08/13, Liao, Chang wrote:
> >
> >
> > Oleg, your explanation is more accurate, so I will reword the commit log and
> > quote some of your notes like this:
>
> Oh, please don't. I just tried to explain the history of this spin_lock(siglock).
>
> >   Since we already have lockless users of clear_thread_flag(TIF_SIGPENDING),
> >   and for the uprobe single-step case it doesn't break the rule of "the state
> >   of TIF_SIGPENDING of every thread is stable with sighand->siglock held".
>
> It obviously does break the rule above. Please keep your changelog as is.
>
> Oleg.
>

Liao,

Can you please rebase and resend your patches now that the first part
of my uprobe patches landed in perf/core? Seems like there is some
tiny merge conflict or something.

Thanks!

Patch

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 73cc47708679..76a51a1f51e2 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1979,9 +1979,7 @@ bool uprobe_deny_signal(void)
 	WARN_ON_ONCE(utask->state != UTASK_SSTEP);
 
 	if (task_sigpending(t)) {
-		spin_lock_irq(&t->sighand->siglock);
 		clear_tsk_thread_flag(t, TIF_SIGPENDING);
-		spin_unlock_irq(&t->sighand->siglock);
 
 		if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
 			utask->state = UTASK_SSTEP_TRAPPED;
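
For readers who want the surrounding context, this is roughly what
uprobe_deny_signal() looks like with the hunk applied. It is reconstructed
from the context lines above plus the kernel sources around this series, so
treat it as an approximation rather than the exact tree state:

bool uprobe_deny_signal(void)
{
	struct task_struct *t = current;
	struct uprobe_task *utask = t->utask;

	if (likely(!utask || !utask->active_uprobe))
		return false;

	WARN_ON_ONCE(utask->state != UTASK_SSTEP);

	if (task_sigpending(t)) {
		/* lockless after this patch; relies on clear_bit() being atomic */
		clear_tsk_thread_flag(t, TIF_SIGPENDING);

		if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
			utask->state = UTASK_SSTEP_TRAPPED;
			set_tsk_thread_flag(t, TIF_UPROBE);
		}
	}

	return true;
}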