From patchwork Thu Apr 18 13:32:44 2024
X-Patchwork-Submitter: Jason Xing <kerneljasonxing@gmail.com>
X-Patchwork-Id: 13634745
From: Jason Xing <kerneljasonxing@gmail.com>
To: edumazet@google.com, dsahern@kernel.org, matttbe@kernel.org,
	martineau@kernel.org, geliang@kernel.org, kuba@kernel.org,
	pabeni@redhat.com, davem@davemloft.net, rostedt@goodmis.org,
	mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
	atenart@kernel.org
Cc: mptcp@lists.linux.dev, netdev@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org, kerneljasonxing@gmail.com,
	Jason Xing <kernelxing@tencent.com>
Subject: [PATCH net-next v6 3/7] rstreason: prepare for active reset
Date: Thu, 18 Apr 2024 21:32:44 +0800
Message-Id: <20240418133248.56378-4-kerneljasonxing@gmail.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240418133248.56378-1-kerneljasonxing@gmail.com>
References: <20240418133248.56378-1-kerneljasonxing@gmail.com>

From: Jason Xing <kernelxing@tencent.com>

As was done for the passive-reset paths, this only passes a possible
reset reason through each active reset path. No functional changes.

Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
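Note for reviewers (kept below the "---" cut, so git-am drops it): the
sketch below is a minimal, compilable userspace model of the new calling
convention, not kernel code. enum sk_rst_reason comes from
include/net/rstreason.h, added earlier in this series; the stub struct
sock, the gfp_t typedef, and the printf body are illustrative
assumptions only.

	#include <stdio.h>

	/* Stand-in for include/net/rstreason.h; only the placeholder
	 * value used throughout this patch is spelled out. */
	enum sk_rst_reason {
		SK_RST_REASON_NOT_SPECIFIED,
		/* ... concrete reasons follow in the real enum ... */
		SK_RST_REASON_MAX,
	};

	/* Toy stand-ins for kernel types. */
	typedef unsigned int gfp_t;
	struct sock { int sk_state; };

	/* New signature: every active-reset call site now names a reason. */
	static void tcp_send_active_reset(struct sock *sk, gfp_t priority,
					  enum sk_rst_reason reason)
	{
		/* The kernel function builds and sends an RST segment; this
		 * sketch only shows the reason being threaded through. */
		printf("RST: sk_state=%d reason=%d\n", sk->sk_state, reason);
	}

	int main(void)
	{
		struct sock sk = { .sk_state = 1 /* TCP_ESTABLISHED */ };

		/* Every call site converted below passes NOT_SPECIFIED;
		 * later patches substitute concrete per-site reasons. */
		tcp_send_active_reset(&sk, 0 /* GFP flags elided */,
				      SK_RST_REASON_NOT_SPECIFIED);
		return 0;
	}

Passing SK_RST_REASON_NOT_SPECIFIED everywhere keeps this patch purely
mechanical: the prototype change lands in one piece, and meaningful
reasons can be filled in per call site without touching the signature
again.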
 include/net/tcp.h     |  3 ++-
 net/ipv4/tcp.c        | 15 ++++++++++-----
 net/ipv4/tcp_output.c |  3 ++-
 net/ipv4/tcp_timer.c  |  9 ++++++---
 net/mptcp/protocol.c  |  4 +++-
 net/mptcp/subflow.c   |  5 +++--
 6 files changed, 26 insertions(+), 13 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index b935e1ae4caf..adeacc9aa28a 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -670,7 +670,8 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
 void tcp_send_probe0(struct sock *);
 int tcp_write_wakeup(struct sock *, int mib);
 void tcp_send_fin(struct sock *sk);
-void tcp_send_active_reset(struct sock *sk, gfp_t priority);
+void tcp_send_active_reset(struct sock *sk, gfp_t priority,
+			   enum sk_rst_reason reason);
 int tcp_send_synack(struct sock *);
 void tcp_push_one(struct sock *, unsigned int mss_now);
 void __tcp_send_ack(struct sock *sk, u32 rcv_nxt);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index f23b97777ea5..4ec0f4feee00 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -275,6 +275,7 @@
 #include <net/xfrm.h>
 #include <net/ip.h>
 #include <net/sock.h>
+#include <net/rstreason.h>
 
 #include <linux/uaccess.h>
 #include <asm/ioctls.h>
@@ -2811,7 +2812,8 @@ void __tcp_close(struct sock *sk, long timeout)
 		/* Unread data was tossed, zap the connection. */
 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONCLOSE);
 		tcp_set_state(sk, TCP_CLOSE);
-		tcp_send_active_reset(sk, sk->sk_allocation);
+		tcp_send_active_reset(sk, sk->sk_allocation,
+				      SK_RST_REASON_NOT_SPECIFIED);
 	} else if (sock_flag(sk, SOCK_LINGER) && !sk->sk_lingertime) {
 		/* Check zero linger _after_ checking for unread data. */
 		sk->sk_prot->disconnect(sk, 0);
@@ -2885,7 +2887,8 @@ void __tcp_close(struct sock *sk, long timeout)
 		struct tcp_sock *tp = tcp_sk(sk);
 		if (READ_ONCE(tp->linger2) < 0) {
 			tcp_set_state(sk, TCP_CLOSE);
-			tcp_send_active_reset(sk, GFP_ATOMIC);
+			tcp_send_active_reset(sk, GFP_ATOMIC,
+					      SK_RST_REASON_NOT_SPECIFIED);
 			__NET_INC_STATS(sock_net(sk),
 					LINUX_MIB_TCPABORTONLINGER);
 		} else {
@@ -2903,7 +2906,8 @@ void __tcp_close(struct sock *sk, long timeout)
 	if (sk->sk_state != TCP_CLOSE) {
 		if (tcp_check_oom(sk, 0)) {
 			tcp_set_state(sk, TCP_CLOSE);
-			tcp_send_active_reset(sk, GFP_ATOMIC);
+			tcp_send_active_reset(sk, GFP_ATOMIC,
+					      SK_RST_REASON_NOT_SPECIFIED);
 			__NET_INC_STATS(sock_net(sk),
 					LINUX_MIB_TCPABORTONMEMORY);
 		} else if (!check_net(sock_net(sk))) {
@@ -3007,7 +3011,7 @@ int tcp_disconnect(struct sock *sk, int flags)
 		/* The last check adjusts for discrepancy of Linux wrt. RFC
 		 * states
 		 */
-		tcp_send_active_reset(sk, gfp_any());
+		tcp_send_active_reset(sk, gfp_any(), SK_RST_REASON_NOT_SPECIFIED);
 		WRITE_ONCE(sk->sk_err, ECONNRESET);
 	} else if (old_state == TCP_SYN_SENT)
 		WRITE_ONCE(sk->sk_err, ECONNRESET);
@@ -4564,7 +4568,8 @@ int tcp_abort(struct sock *sk, int err)
 		smp_wmb();
 		sk_error_report(sk);
 		if (tcp_need_reset(sk->sk_state))
-			tcp_send_active_reset(sk, GFP_ATOMIC);
+			tcp_send_active_reset(sk, GFP_ATOMIC,
+					      SK_RST_REASON_NOT_SPECIFIED);
 		tcp_done(sk);
 	}
 
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 61119d42b0fd..276d9d541b01 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3586,7 +3586,8 @@ void tcp_send_fin(struct sock *sk)
  * was unread data in the receive queue.  This behavior is recommended
  * by RFC 2525, section 2.17.  -DaveM
  */
-void tcp_send_active_reset(struct sock *sk, gfp_t priority)
+void tcp_send_active_reset(struct sock *sk, gfp_t priority,
+			   enum sk_rst_reason reason)
 {
 	struct sk_buff *skb;
 
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index 976db57b95d4..83fe7f62f7f1 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -22,6 +22,7 @@
 #include <linux/module.h>
 #include <linux/gfp.h>
 #include <net/tcp.h>
+#include <net/rstreason.h>
 
 static u32 tcp_clamp_rto_to_user_timeout(const struct sock *sk)
 {
@@ -127,7 +128,8 @@ static int tcp_out_of_resources(struct sock *sk, bool do_reset)
 		    (!tp->snd_wnd && !tp->packets_out))
 			do_reset = true;
 		if (do_reset)
-			tcp_send_active_reset(sk, GFP_ATOMIC);
+			tcp_send_active_reset(sk, GFP_ATOMIC,
+					      SK_RST_REASON_NOT_SPECIFIED);
 		tcp_done(sk);
 		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONMEMORY);
 		return 1;
@@ -768,7 +770,7 @@ static void tcp_keepalive_timer (struct timer_list *t)
 				goto out;
 			}
 		}
-		tcp_send_active_reset(sk, GFP_ATOMIC);
+		tcp_send_active_reset(sk, GFP_ATOMIC, SK_RST_REASON_NOT_SPECIFIED);
 		goto death;
 	}
 
@@ -795,7 +797,8 @@
 	    icsk->icsk_probes_out > 0) ||
 	    (user_timeout == 0 &&
 	    icsk->icsk_probes_out >= keepalive_probes(tp))) {
-		tcp_send_active_reset(sk, GFP_ATOMIC);
+		tcp_send_active_reset(sk, GFP_ATOMIC,
+				      SK_RST_REASON_NOT_SPECIFIED);
 		tcp_write_err(sk);
 		goto out;
 	}
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index f8bc34f0d973..065967086492 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -21,6 +21,7 @@
 #endif
 #include <net/mptcp.h>
 #include <net/xfrm.h>
+#include <net/rstreason.h>
 #include <asm/ioctls.h>
 #include "protocol.h"
 #include "mib.h"
@@ -2569,7 +2570,8 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
 
 		slow = lock_sock_fast(tcp_sk);
 		if (tcp_sk->sk_state != TCP_CLOSE) {
-			tcp_send_active_reset(tcp_sk, GFP_ATOMIC);
+			tcp_send_active_reset(tcp_sk, GFP_ATOMIC,
+					      SK_RST_REASON_NOT_SPECIFIED);
 			tcp_set_state(tcp_sk, TCP_CLOSE);
 		}
 		unlock_sock_fast(tcp_sk, slow);
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 32fe2ef36d56..ac867d277860 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -414,7 +414,7 @@ void mptcp_subflow_reset(struct sock *ssk)
 	/* must hold: tcp_done() could drop last reference on parent */
 	sock_hold(sk);
 
-	tcp_send_active_reset(ssk, GFP_ATOMIC);
+	tcp_send_active_reset(ssk, GFP_ATOMIC, SK_RST_REASON_NOT_SPECIFIED);
 	tcp_done(ssk);
 	if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags))
 		mptcp_schedule_work(sk);
@@ -1350,7 +1350,8 @@ static bool subflow_check_data_avail(struct sock *ssk)
 		tcp_set_state(ssk, TCP_CLOSE);
 		while ((skb = skb_peek(&ssk->sk_receive_queue)))
 			sk_eat_skb(ssk, skb);
-		tcp_send_active_reset(ssk, GFP_ATOMIC);
+		tcp_send_active_reset(ssk, GFP_ATOMIC,
+				      SK_RST_REASON_NOT_SPECIFIED);
 		WRITE_ONCE(subflow->data_avail, false);
 		return false;
 	}