| Message ID | 2870146.1734037095@warthog.procyon.org.uk (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | d920270a6dbf756384b125ce39c17666a7c0c9f4 |
| Delegated to | Netdev Maintainers |
| Series | [net-next] rxrpc: Disable IRQ, not BH, to take the lock for ->attend_link |
Hello:

This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Thu, 12 Dec 2024 20:58:15 +0000 you wrote:
> Use spin_lock_irq(), not spin_lock_bh() to take the lock when accessing the
> ->attend_link() to stop a delay in the I/O thread due to an interrupt being
> taken in the app thread whilst that holds the lock and vice versa.
>
> Fixes: a2ea9a907260 ("rxrpc: Use irq-disabling spinlocks between app and I/O thread")
> Signed-off-by: David Howells <dhowells@redhat.com>
> cc: Marc Dionne <marc.dionne@auristor.com>
> cc: Jakub Kicinski <kuba@kernel.org>
> cc: "David S. Miller" <davem@davemloft.net>
> cc: Eric Dumazet <edumazet@google.com>
> cc: Paolo Abeni <pabeni@redhat.com>
> cc: linux-afs@lists.infradead.org
> cc: netdev@vger.kernel.org
>
> [...]

Here is the summary with links:
  - [net-next] rxrpc: Disable IRQ, not BH, to take the lock for ->attend_link
    https://git.kernel.org/netdev/net-next/c/d920270a6dbf

You are awesome, thank you!
```diff
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index 2925c7fc82cf..64f8d77b8731 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -508,9 +508,9 @@ int rxrpc_io_thread(void *data)
 		while ((conn = list_first_entry_or_null(&conn_attend_q,
 							struct rxrpc_connection,
 							attend_link))) {
-			spin_lock_bh(&local->lock);
+			spin_lock_irq(&local->lock);
 			list_del_init(&conn->attend_link);
-			spin_unlock_bh(&local->lock);
+			spin_unlock_irq(&local->lock);
 			rxrpc_input_conn_event(conn, NULL);
 			rxrpc_put_connection(conn, rxrpc_conn_put_poke);
 		}
@@ -527,9 +527,9 @@ int rxrpc_io_thread(void *data)
 		while ((call = list_first_entry_or_null(&call_attend_q,
 							struct rxrpc_call,
 							attend_link))) {
-			spin_lock_bh(&local->lock);
+			spin_lock_irq(&local->lock);
 			list_del_init(&call->attend_link);
-			spin_unlock_bh(&local->lock);
+			spin_unlock_irq(&local->lock);
 			trace_rxrpc_call_poked(call);
 			rxrpc_input_call_event(call);
 			rxrpc_put_call(call, rxrpc_call_put_poke);
```
Use spin_lock_irq(), not spin_lock_bh() to take the lock when accessing the
->attend_link() to stop a delay in the I/O thread due to an interrupt being
taken in the app thread whilst that holds the lock and vice versa.

Fixes: a2ea9a907260 ("rxrpc: Use irq-disabling spinlocks between app and I/O thread")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Paolo Abeni <pabeni@redhat.com>
cc: linux-afs@lists.infradead.org
cc: netdev@vger.kernel.org
---
 net/rxrpc/io_thread.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
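For readers less familiar with the app-thread/I/O-thread split, here is a minimal sketch of the locking pattern the patch converges on. The `demo_*` names are hypothetical, not the actual net/rxrpc types: the poke (producer) side already disables IRQs around the ->attend_link queue (per commit a2ea9a907260), so the dequeuing side must match with spin_lock_irq() rather than spin_lock_bh(), keeping an interrupt from landing while the lock is held and stalling the other thread spinning on it.

```c
/*
 * Minimal sketch of the locking pattern the patch converges on.  The
 * demo_* names are illustrative, not the real net/rxrpc structures.
 *
 * The "poke" (producer) side can run in any context, so it disables
 * IRQs with spin_lock_irqsave().  The worker (consumer) side therefore
 * takes the same lock with spin_lock_irq() rather than spin_lock_bh(),
 * so an interrupt cannot arrive while the lock is held.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

struct demo_local {
	spinlock_t		lock;		/* protects attend_q links */
	struct list_head	attend_q;	/* items wanting attention */
};

struct demo_item {
	struct list_head	attend_link;	/* INIT_LIST_HEAD'd at item init */
};

static void demo_local_init(struct demo_local *local)
{
	spin_lock_init(&local->lock);
	INIT_LIST_HEAD(&local->attend_q);
}

/* Producer: may be called from process, softirq or hard-IRQ context. */
static void demo_poke(struct demo_local *local, struct demo_item *item)
{
	unsigned long flags;

	spin_lock_irqsave(&local->lock, flags);
	if (list_empty(&item->attend_link))
		list_add_tail(&item->attend_link, &local->attend_q);
	spin_unlock_irqrestore(&local->lock, flags);
}

/* Consumer: worker-thread side, mirroring the two hunks in the patch. */
static void demo_drain(struct demo_local *local)
{
	struct demo_item *item;
	LIST_HEAD(queue);

	/* Grab everything that is currently pending. */
	spin_lock_irq(&local->lock);
	list_splice_tail_init(&local->attend_q, &queue);
	spin_unlock_irq(&local->lock);

	while ((item = list_first_entry_or_null(&queue, struct demo_item,
						attend_link))) {
		spin_lock_irq(&local->lock);	/* not spin_lock_bh() */
		list_del_init(&item->attend_link);
		spin_unlock_irq(&local->lock);

		/* ... handle the item with the lock dropped ... */
	}
}
```

The consumer can use the cheaper spin_lock_irq()/spin_unlock_irq() pair, rather than the irqsave variant, because the I/O-thread-style caller is known to run with interrupts enabled; the poke side cannot assume that, hence spin_lock_irqsave().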