[v5,1/5] exit: perform add_device_randomness() without tasklist_lock

Message ID 20250205200929.406568-2-mjguzik@gmail.com (mailing list archive)
State New
Series reduce tasklist_lock hold time on exit and do some pid cleanup

Commit Message

Mateusz Guzik Feb. 5, 2025, 8:09 p.m. UTC
Parallel calls to add_device_randomness() already contend with one
another on the random subsystem's own lock.

The clone side already runs outside of tasklist_lock, which in turn
means any caller on the exit side extends the tasklist_lock hold time
for as long as it contends on the random-private lock.

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
---
 kernel/exit.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
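
For background on the contention claim: add_device_randomness()
serializes all callers on the random subsystem's pool spinlock. A
minimal sketch of its shape (abridged from drivers/char/random.c; the
entropy-timestamp mixing and other details are elided):

	void add_device_randomness(const void *buf, size_t len)
	{
		unsigned long flags;

		spin_lock_irqsave(&input_pool.lock, flags);
		_mix_pool_bytes(buf, len);
		spin_unlock_irqrestore(&input_pool.lock, flags);
	}

Calling this from __exit_signal() nests the spin on input_pool.lock
inside the tasklist_lock write section, which is what this patch avoids.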

Comments

Oleg Nesterov Feb. 5, 2025, 8:56 p.m. UTC | #1
On 02/05, Mateusz Guzik wrote:
>
> Parallel calls to add_device_randomness() contend on their own.
...
> +	add_device_randomness(&p->se.sum_exec_runtime,
> +			      sizeof(p->se.sum_exec_runtime));

OK, but

> +	free_pids(post.pids);

wait... free_pids() comes later in 4/5 ?

Oleg.

Patch

diff --git a/kernel/exit.c b/kernel/exit.c
index 3485e5fc499e..c79b41509cd3 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -174,9 +174,6 @@  static void __exit_signal(struct task_struct *tsk)
 			sig->curr_target = next_thread(tsk);
 	}
 
-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
-			      sizeof(unsigned long long));
-
 	/*
 	 * Accumulate here the counters for all threads as they die. We could
 	 * skip the group leader because it is the last user of signal_struct,
@@ -278,6 +275,9 @@  void release_task(struct task_struct *p)
 	write_unlock_irq(&tasklist_lock);
 	proc_flush_pid(thread_pid);
 	put_pid(thread_pid);
+	add_device_randomness(&p->se.sum_exec_runtime,
+			      sizeof(p->se.sum_exec_runtime));
+	free_pids(post.pids);
 	release_thread(p);
 	put_task_struct_rcu_user(p);
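
The net effect on lock nesting (a sketch, not the literal kernel code;
surrounding logic elided):

	/* before: mixing nested under the write lock */
	write_lock_irq(&tasklist_lock);
	__exit_signal(p);		/* called add_device_randomness() */
	write_unlock_irq(&tasklist_lock); /* hold time extended by pool-lock spin */

	/* after: mixing happens past the unlock */
	write_lock_irq(&tasklist_lock);
	__exit_signal(p);		/* no randomness mixing here */
	write_unlock_irq(&tasklist_lock);
	add_device_randomness(&p->se.sum_exec_runtime,
			      sizeof(p->se.sum_exec_runtime));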