
sunrpc: hold a ref on netns for tcp sockets

Message ID 512efbd56ad3679068759586c6fa9b681aec14f0.1710877783.git.josef@toxicpanda.com (mailing list archive)
State: New
Series: sunrpc: hold a ref on netns for tcp sockets

Commit Message

Josef Bacik March 19, 2024, 7:49 p.m. UTC
We've been seeing variations of the following panic in production

  BUG: kernel NULL pointer dereference, address: 0000000000000000
  RIP: 0010:ip6_pol_route+0x59/0x7a0
  Call Trace:
   <IRQ>
   ? __die+0x78/0xc0
   ? page_fault_oops+0x286/0x380
   ? fib6_table_lookup+0x95/0xf40
   ? exc_page_fault+0x5d/0x110
   ? asm_exc_page_fault+0x22/0x30
   ? ip6_pol_route+0x59/0x7a0
   ? unlink_anon_vmas+0x370/0x370
   fib6_rule_lookup+0x56/0x1b0
   ? update_blocked_averages+0x2c6/0x6a0
   ip6_route_output_flags+0xd2/0x130
   ip6_dst_lookup_tail+0x3b/0x220
   ip6_dst_lookup_flow+0x2c/0x80
   inet6_sk_rebuild_header+0x14c/0x1e0
   ? tcp_release_cb+0x150/0x150
   __tcp_retransmit_skb+0x68/0x6b0
   ? tcp_current_mss+0xca/0x150
   ? tcp_release_cb+0x150/0x150
   tcp_send_loss_probe+0x8e/0x220
   tcp_write_timer+0xbe/0x2d0
   run_timer_softirq+0x272/0x840
   ? hrtimer_interrupt+0x2c9/0x5f0
   ? sched_clock_cpu+0xc/0x170
   irq_exit_rcu+0x171/0x330
   sysvec_apic_timer_interrupt+0x6d/0x80
   </IRQ>
   <TASK>
   asm_sysvec_apic_timer_interrupt+0x16/0x20
  RIP: 0010:cpuidle_enter_state+0xe7/0x243

Inspecting the vmcore with drgn, you can see why this is a NULL pointer deref

    >>> prog.crashed_thread().stack_trace()[0]
    #0 at 0xffffffff810bfa89 (ip6_pol_route+0x59/0x796) in ip6_pol_route at net/ipv6/route.c:2212:40

    2212        if (net->ipv6.devconf_all->forwarding == 0)
    2213              strict |= RT6_LOOKUP_F_REACHABLE;

    >>> prog.crashed_thread().stack_trace()[0]['net'].ipv6.devconf_all
    (struct ipv6_devconf *)0x0

Looking at the socket, you can see that it's been closed

    >>> decode_enum_type_flags(prog.crashed_thread().stack_trace()[11]['sk'].__sk_common.skc_flags, prog.type('enum sock_flags'))
    'SOCK_DEAD|SOCK_KEEPOPEN|SOCK_ZAPPED|SOCK_USE_WRITE_QUEUE'
    >>> decode_enum_type_flags(1 << prog.crashed_thread().stack_trace()[11]['sk'].__sk_common.skc_state.value_(), prog["TCPF_CLOSE"].type_, bit_numbers=False)
    'TCPF_FIN_WAIT1'
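
A hypothetical follow-up check (not part of the original report) would
confirm the root cause directly: sk_net_refcnt == 0 means the socket was
created as a kernel socket and therefore holds no reference on its netns.

    >>> prog.crashed_thread().stack_trace()[11]['sk'].sk_net_refcnt.value_()
    0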

This occurs in our container setup where we have an NFS mount that
belongs to the container's network namespace.  On container shutdown our
netns goes away, which sets net->ipv6.devconf_all = NULL, and then we
panic.  In the kernel we're responsible for destroying our sockets when
the network namespace exits, or for holding a reference on the network
namespace for our sockets so this doesn't happen.

Even once we shutdown the socket we can still have TCP timers that fire
in the background, hence this panic.  SUNRPC shuts down the socket and
throws away all knowledge of it, but it's still doing things in the
background.

Fix this by grabbing a reference on the network namespace for any TCP
sockets we open.  With this patch I'm able to cycle my 500 node stress
tier over and over again without panicking, whereas previously I was
losing 10-20 nodes every shutdown cycle.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 net/sunrpc/xprtsock.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

Comments

Chuck Lever March 19, 2024, 7:53 p.m. UTC | #1
> On Mar 19, 2024, at 3:49 PM, Josef Bacik <josef@toxicpanda.com> wrote:
> [...]

Hi Josef-

This is an RPC client-side fix. Can you resend to Trond and Anna?


--
Chuck Lever
Trond Myklebust March 19, 2024, 9:59 p.m. UTC | #2
On Tue, 2024-03-19 at 16:07 -0400, Josef Bacik wrote:
> [...]
> 
> Apologies, I just grepped for SUNRPC in MAINTAINERS and didn't realize
> there was a division of the client and server side of SUNRPC.
> 
> [...]

Hmm... Doesn't this end up being more or less equivalent to calling
__sock_create() with the kernel flag being set to 0?
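
For context on the question above: the kern flag to __sock_create()
propagates to sk_alloc(), which decides whether the socket pins its
netns.  Paraphrased from net/core/sock.c as of v6.x-era kernels (the
exact field and helper names differ in older trees):

	sk->sk_net_refcnt = kern ? 0 : 1;
	if (likely(sk->sk_net_refcnt)) {
		get_net_track(net, &sk->ns_tracker, priority);
		sock_inuse_add(net, 1);
	} else {
		__netns_tracker_alloc(net, &sk->ns_tracker, false, priority);
	}

A kernel socket (kern=1) only registers a non-refcounted tracker, which
is exactly the state the patch above upgrades after the fact.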
Josef Bacik March 20, 2024, 2:10 p.m. UTC | #3
On Tue, Mar 19, 2024 at 09:59:48PM +0000, Trond Myklebust wrote:
> [...]
> 
> Hmm... Doesn't this end up being more or less equivalent to calling
> __sock_create() with the kernel flag being set to 0?

AFAICT yes, but there are a lot of other things that happen with kern being set
to 1, so I think this is a safer bet, and is analogous to this other fix
3a58f13a881e ("net: rds: acquire refcount on TCP sockets").  Thanks,

Josef
Eric Dumazet March 20, 2024, 2:28 p.m. UTC | #4
On Wed, Mar 20, 2024 at 3:10 PM Josef Bacik <josef@toxicpanda.com> wrote:
> [...]
> >
> > Hmm... Doesn't this end up being more or less equivalent to calling
> > __sock_create() with the kernel flag being set to 0?
>
> AFAICT yes, but there are a lot of other things that happen with kern
> being set to 1, so I think this is a safer bet, and is analogous to
> this other fix 3a58f13a881e ("net: rds: acquire refcount on TCP
> sockets").  Thanks,

Hmm... this would prevent a netns with one or more TCP flows owned by
this layer from being dismantled, even if all other processes/sockets
disappeared?

Have you looked at my suggestion instead?

https://lore.kernel.org/bpf/CANn89i+484ffqb93aQm1N-tjxxvb3WDKX0EbD7318RwRgsatjw@mail.gmail.com/

I never formally submitted this patch because I got no feedback.
Josef Bacik March 20, 2024, 2:56 p.m. UTC | #5
On Wed, Mar 20, 2024 at 03:28:15PM +0100, Eric Dumazet wrote:
> [...]
>
> Hmm... this would prevent a netns with one or more TCP flows owned by
> this layer from being dismantled, even if all other processes/sockets
> disappeared?

Yeah, but if sockets are still in use then we want the netns to still be
up, right?  I personally am very confused about how the lifetime stuff
works for sockets; I don't understand how shutting down the socket means
it gets to stick around forever after the fact, but it feels like if
it's tied to a netns then it's completely valid to hold the netns open
until we're done with the socket.

> 
> Have you looked at my suggestion instead?
> 
> https://lore.kernel.org/bpf/CANn89i+484ffqb93aQm1N-tjxxvb3WDKX0EbD7318RwRgsatjw@mail.gmail.com/
> 
> I never formally submitted this patch because I got no feedback.

I did something similar, though not with _sync, so maybe that was the
problem; this is what I did originally in production before I emailed
you the first time:

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 40a2aa9fd00e..73ae59a5a488 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2827,6 +2827,8 @@ void tcp_shutdown(struct sock *sk, int how)
        if (!(how & SEND_SHUTDOWN))
                return;

+       tcp_clear_xmit_timers(sk);
+
        /* If we've already sent a FIN, or it's a closed state, skip this. */
        if ((1 << sk->sk_state) &
            (TCPF_ESTABLISHED | TCPF_SYN_SENT |

But I still hit the panic.  I followed this up with a patch where I set
a special flag on the socket _after_ I called kernel_sock_shutdown(),
and then added a WARN_ON() to the icsk_retransmit_timer re-arm path that
fired if the flag was set, and it fired all the time.  Stopping the
timer doesn't help, because after we close the socket we'll still get
some incoming ACK that'll re-arm the timer, and then we're in the same
position we were in originally.

I'm happy to be wrong here, because this is clearly not my area of expertise,
but I wandered down this road and it wasn't the right one.  Thanks,

Josef
Eric Dumazet March 20, 2024, 3 p.m. UTC | #6
On Wed, Mar 20, 2024 at 3:56 PM Josef Bacik <josef@toxicpanda.com> wrote:
> [...]
>
> I did something similar, though not with _sync, so maybe that was the
> problem; this is what I did originally in production before I emailed
> you the first time:

The _sync part really is mandatory for this context.

Note that it needs to be done while the socket is not locked, or you
risk a deadlock.

Note that modern trees have timer_shutdown_sync(), which might even be better.


> [...]
>
> But I still hit the panic.  Stopping the timer doesn't help, because
> after we close the socket we'll still get some incoming ACK that'll
> re-arm the timer, and then we're in the same position we were in
> originally.

timer_shutdown() to the rescue perhaps.
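
A minimal sketch of the idea being discussed (synchronously stopping a
kernel socket's TCP timers once it is closed).  This is an illustration
only, not the actual patch from the lore link above; the helper name is
made up, while sk_stop_timer_sync(), icsk_retransmit_timer,
icsk_delack_timer, and sk->sk_timer are real kernel symbols:

	/* Sketch: synchronously stop a kernel socket's TCP timers.
	 * sk_stop_timer_sync() waits for a running handler to finish,
	 * so it must be called without the socket lock held, or a
	 * handler contending for that lock would deadlock us.
	 */
	static void example_stop_tcp_timers_sync(struct sock *sk)
	{
		struct inet_connection_sock *icsk = inet_csk(sk);

		sk_stop_timer_sync(sk, &icsk->icsk_retransmit_timer);
		sk_stop_timer_sync(sk, &icsk->icsk_delack_timer);
		sk_stop_timer_sync(sk, &sk->sk_timer);
	}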


Josef Bacik March 20, 2024, 3:35 p.m. UTC | #7
On Wed, Mar 20, 2024 at 04:00:09PM +0100, Eric Dumazet wrote:
> [...]
>
> The _sync part really is mandatory for this context.
>
> Note that it needs to be done while the socket is not locked, or you
> risk a deadlock.
>
> Note that modern trees have timer_shutdown_sync(), which might even be
> better.

Sounds good, I've reverted my patches and applied this one; I should
have results by the end of the day on whether it fixed the problem.
Thanks,

Josef
Josef Bacik March 21, 2024, 6:49 p.m. UTC | #8
On Wed, Mar 20, 2024 at 04:00:09PM +0100, Eric Dumazet wrote:
> [...]
>
> The _sync part really is mandatory for this context.
>
> Note that modern trees have timer_shutdown_sync(), which might even be
> better.

Your patch fixes the problem as well; I've been starting and stopping
the task sporadically for a day and haven't tripped the panic.  You can
add my Tested-by when you send it.  Thanks!

Josef

Patch

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index bb81050c870e..f02387751a94 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -2333,6 +2333,7 @@  static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
 
 	if (!transport->inet) {
 		struct sock *sk = sock->sk;
+		struct net *net = sock_net(sk);
 
 		/* Avoid temporary address, they are bad for long-lived
 		 * connections such as NFS mounts.
@@ -2350,7 +2351,26 @@  static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
 		tcp_sock_set_nodelay(sk);
 
 		lock_sock(sk);
+		/*
+		 * Because timers can fire after the fact we need to hold a
+		 * reference on the netns for this socket.
+		 */
+		if (!sk->sk_net_refcnt) {
+			if (!maybe_get_net(net)) {
+				release_sock(sk);
+				return -ENOTCONN;
+			}
+			/*
+			 * For kernel sockets we have a tracker put in place
+			 * for the tracing; we need to free this to maintain
+			 * consistent tracking info.
+			 */
+			__netns_tracker_free(net, &sk->ns_tracker, false);
 
+			sk->sk_net_refcnt = 1;
+			netns_tracker_alloc(net, &sk->ns_tracker, GFP_KERNEL);
+			sock_inuse_add(net, 1);
+		}
 		xs_save_old_callbacks(transport, sk);
 
 		sk->sk_user_data = xprt;
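
A note on why the patch uses maybe_get_net() rather than get_net(): by
the time the transport reconnects, the namespace's refcount may already
have dropped to zero, so only a conditional increment is safe.  For
reference, from include/net/net_namespace.h in mainline around the time
of this thread (older kernels keep the count directly in struct net):

	static inline struct net *maybe_get_net(struct net *net)
	{
		/* Used when we know struct net exists but we
		 * aren't guaranteed a previous reference count
		 * exists.  If the reference count is zero this
		 * function fails and returns NULL.
		 */
		if (!refcount_inc_not_zero(&net->ns.count))
			net = NULL;
		return net;
	}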