From patchwork Tue Apr 16 11:39:58 2024
X-Patchwork-Submitter: Jason Xing
X-Patchwork-Id: 13631718
X-Patchwork-Delegate: kuba@kernel.org
From: Jason Xing
To: edumazet@google.com, dsahern@kernel.org, matttbe@kernel.org,
 martineau@kernel.org, geliang@kernel.org, kuba@kernel.org, pabeni@redhat.com,
 davem@davemloft.net, rostedt@goodmis.org, mhiramat@kernel.org,
 mathieu.desnoyers@efficios.com, atenart@kernel.org
Cc: mptcp@lists.linux.dev, netdev@vger.kernel.org, kerneljasonxing@gmail.com,
 Jason Xing
Subject: [PATCH net-next v5 2/7] rstreason: prepare for passive reset
Date: Tue, 16 Apr 2024 19:39:58 +0800
Message-Id: <20240416114003.62110-3-kerneljasonxing@gmail.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240416114003.62110-1-kerneljasonxing@gmail.com>
References: <20240416114003.62110-1-kerneljasonxing@gmail.com>

From: Jason Xing

Adjust the parameters and support passing the reason for the reset, which
is NOT_SPECIFIED for now. No functional changes.

Signed-off-by: Jason Xing
---
 include/net/request_sock.h |  4 +++-
 net/dccp/ipv4.c            | 10 ++++++----
 net/dccp/ipv6.c            | 10 ++++++----
 net/dccp/minisocks.c       |  3 ++-
 net/ipv4/tcp_ipv4.c        | 12 +++++++-----
 net/ipv4/tcp_minisocks.c   |  3 ++-
 net/ipv6/tcp_ipv6.c        | 15 +++++++++------
 net/mptcp/subflow.c        |  8 +++++---
 8 files changed, 40 insertions(+), 25 deletions(-)

diff --git a/include/net/request_sock.h b/include/net/request_sock.h
index 004e651e6067..bdc737832da6 100644
--- a/include/net/request_sock.h
+++ b/include/net/request_sock.h
@@ -18,6 +18,7 @@
 #include
 #include
+#include <net/rstreason.h>
 struct request_sock;
 struct sk_buff;
@@ -34,7 +35,8 @@ struct request_sock_ops {
 	void		(*send_ack)(const struct sock *sk, struct sk_buff *skb,
 				    struct request_sock *req);
 	void		(*send_reset)(const struct sock *sk,
-				      struct sk_buff *skb);
+				      struct sk_buff *skb,
+				      enum sk_rst_reason reason);
 	void		(*destructor)(struct request_sock *req);
 	void		(*syn_ack_timeout)(const struct request_sock *req);
 };
diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
index 9fc9cea4c251..ff41bd6f99c3 100644
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include <net/rstreason.h>
 #include "ackvec.h"
 #include "ccid.h"
@@ -521,7 +522,8 @@ static int dccp_v4_send_response(const struct sock *sk, struct request_sock *req
 	return err;
 }
-static void dccp_v4_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
+static void dccp_v4_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb,
+				   enum sk_rst_reason reason)
 {
 	int err;
 	const struct iphdr *rxiph;
@@ -706,7 +708,7 @@ int dccp_v4_do_rcv(struct sock *sk, struct sk_buff *skb)
 	return 0;
 reset:
-	dccp_v4_ctl_send_reset(sk, skb);
+	dccp_v4_ctl_send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 	kfree_skb(skb);
 	return 0;
 }
@@ -869,7 +871,7 @@ static int dccp_v4_rcv(struct sk_buff *skb)
 	if (nsk == sk) {
 		reqsk_put(req);
 	} else if (dccp_child_process(sk, nsk, skb)) {
-		dccp_v4_ctl_send_reset(sk, skb);
+		dccp_v4_ctl_send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 		goto discard_and_relse;
 	} else {
 		sock_put(sk);
@@ -909,7 +911,7 @@ static int dccp_v4_rcv(struct sk_buff *skb)
 	if (dh->dccph_type != DCCP_PKT_RESET) {
 		DCCP_SKB_CB(skb)->dccpd_reset_code =
					DCCP_RESET_CODE_NO_CONNECTION;
-		dccp_v4_ctl_send_reset(sk, skb);
+		dccp_v4_ctl_send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 	}
 discard_it:
diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
index c8ca703dc331..85f4b8fdbe5e 100644
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include <net/rstreason.h>
 #include "dccp.h"
 #include "ipv6.h"
@@ -256,7 +257,8 @@ static void dccp_v6_reqsk_destructor(struct request_sock *req)
 	kfree_skb(inet_rsk(req)->pktopts);
 }
-static void dccp_v6_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
+static void dccp_v6_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb,
+				   enum sk_rst_reason reason)
 {
 	const struct ipv6hdr *rxip6h;
 	struct sk_buff *skb;
@@ -656,7 +658,7 @@ static int dccp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
 	return 0;
 reset:
-	dccp_v6_ctl_send_reset(sk, skb);
+	dccp_v6_ctl_send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 discard:
 	if (opt_skb != NULL)
 		__kfree_skb(opt_skb);
@@ -762,7 +764,7 @@ static int dccp_v6_rcv(struct sk_buff *skb)
 	if (nsk == sk) {
 		reqsk_put(req);
 	} else if (dccp_child_process(sk, nsk, skb)) {
-		dccp_v6_ctl_send_reset(sk, skb);
+		dccp_v6_ctl_send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 		goto discard_and_relse;
 	} else {
 		sock_put(sk);
@@ -801,7 +803,7 @@ static int dccp_v6_rcv(struct sk_buff *skb)
 	if (dh->dccph_type != DCCP_PKT_RESET) {
 		DCCP_SKB_CB(skb)->dccpd_reset_code =
					DCCP_RESET_CODE_NO_CONNECTION;
-		dccp_v6_ctl_send_reset(sk, skb);
+		dccp_v6_ctl_send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 	}
 discard_it:
diff --git a/net/dccp/minisocks.c b/net/dccp/minisocks.c
index 64d805b27add..251a57cf5822 100644
--- a/net/dccp/minisocks.c
+++ b/net/dccp/minisocks.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include <net/rstreason.h>
 #include "ackvec.h"
 #include "ccid.h"
@@ -202,7 +203,7 @@ struct sock *dccp_check_req(struct sock *sk, struct sk_buff *skb,
 	DCCP_SKB_CB(skb)->dccpd_reset_code = DCCP_RESET_CODE_TOO_BUSY;
 drop:
 	if (dccp_hdr(skb)->dccph_type != DCCP_PKT_RESET)
-		req->rsk_ops->send_reset(sk, skb);
+		req->rsk_ops->send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 	inet_csk_reqsk_queue_drop(sk, req);
 out:
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 88c83ac42129..418d11902fa7 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -70,6 +70,7 @@
 #include
 #include
 #include
+#include <net/rstreason.h>
 #include
 #include
@@ -723,7 +724,8 @@ static bool tcp_v4_ao_sign_reset(const struct sock *sk, struct sk_buff *skb,
  *	Exception: precedence violation. We do not implement it in any case.
  */
-static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb,
+			      enum sk_rst_reason reason)
 {
 	const struct tcphdr *th = tcp_hdr(skb);
 	struct {
@@ -1934,7 +1936,7 @@ int tcp_v4_do_rcv(struct sock *sk, struct sk_buff *skb)
 	return 0;
 reset:
-	tcp_v4_send_reset(rsk, skb);
+	tcp_v4_send_reset(rsk, skb, SK_RST_REASON_NOT_SPECIFIED);
 discard:
 	kfree_skb_reason(skb, reason);
 	/* Be careful here. If this function gets more complicated and
@@ -2276,7 +2278,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
 		} else {
 			drop_reason = tcp_child_process(sk, nsk, skb);
 			if (drop_reason) {
-				tcp_v4_send_reset(nsk, skb);
+				tcp_v4_send_reset(nsk, skb, SK_RST_REASON_NOT_SPECIFIED);
 				goto discard_and_relse;
 			}
 			sock_put(sk);
@@ -2355,7 +2357,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
 bad_packet:
 		__TCP_INC_STATS(net, TCP_MIB_INERRS);
 	} else {
-		tcp_v4_send_reset(NULL, skb);
+		tcp_v4_send_reset(NULL, skb, SK_RST_REASON_NOT_SPECIFIED);
 	}
 discard_it:
@@ -2407,7 +2409,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
 		tcp_v4_timewait_ack(sk, skb);
 		break;
 	case TCP_TW_RST:
-		tcp_v4_send_reset(sk, skb);
+		tcp_v4_send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 		inet_twsk_deschedule_put(inet_twsk(sk));
 		goto discard_it;
 	case TCP_TW_SUCCESS:;
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index f53c7ada2ace..0bc19aca2759 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include <net/rstreason.h>
 static bool tcp_in_window(u32 seq, u32 end_seq, u32 s_win, u32 e_win)
 {
@@ -879,7 +880,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
		 * avoid becoming vulnerable to outside attack aiming at
		 * resetting legit local connections.
		 */
-		req->rsk_ops->send_reset(sk, skb);
+		req->rsk_ops->send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 	} else if (fastopen) { /* received a valid RST pkt */
 		reqsk_fastopen_remove(sk, req, true);
 		tcp_reset(sk, skb);
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index bb7c3caf4f85..017f6293b5f4 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -60,6 +60,7 @@
 #include
 #include
 #include
+#include <net/rstreason.h>
 #include
 #include
@@ -69,7 +70,8 @@
 #include
-static void	tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb);
+static void	tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb,
+				  enum sk_rst_reason reason);
 static void	tcp_v6_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb,
 				      struct request_sock *req);
@@ -1008,7 +1010,8 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
 	kfree_skb(buff);
 }
-static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
+static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb,
+			      enum sk_rst_reason reason)
 {
 	const struct tcphdr *th = tcp_hdr(skb);
 	struct ipv6hdr *ipv6h = ipv6_hdr(skb);
@@ -1677,7 +1680,7 @@ int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
 	return 0;
 reset:
-	tcp_v6_send_reset(sk, skb);
+	tcp_v6_send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 discard:
 	if (opt_skb)
 		__kfree_skb(opt_skb);
@@ -1862,7 +1865,7 @@ INDIRECT_CALLABLE_SCOPE int tcp_v6_rcv(struct sk_buff *skb)
 		} else {
 			drop_reason = tcp_child_process(sk, nsk, skb);
 			if (drop_reason) {
-				tcp_v6_send_reset(nsk, skb);
+				tcp_v6_send_reset(nsk, skb, SK_RST_REASON_NOT_SPECIFIED);
 				goto discard_and_relse;
 			}
 			sock_put(sk);
@@ -1939,7 +1942,7 @@ INDIRECT_CALLABLE_SCOPE int tcp_v6_rcv(struct sk_buff *skb)
 bad_packet:
 		__TCP_INC_STATS(net, TCP_MIB_INERRS);
 	} else {
-		tcp_v6_send_reset(NULL, skb);
+		tcp_v6_send_reset(NULL, skb, SK_RST_REASON_NOT_SPECIFIED);
 	}
 discard_it:
@@ -1995,7 +1998,7 @@ INDIRECT_CALLABLE_SCOPE int tcp_v6_rcv(struct sk_buff *skb)
 		tcp_v6_timewait_ack(sk, skb);
 		break;
 	case TCP_TW_RST:
-		tcp_v6_send_reset(sk, skb);
+		tcp_v6_send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 		inet_twsk_deschedule_put(inet_twsk(sk));
 		goto discard_it;
 	case TCP_TW_SUCCESS:
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index b94d1dca1094..32fe2ef36d56 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -20,6 +20,8 @@
 #include
 #endif
 #include
+#include <net/rstreason.h>
+
 #include "protocol.h"
 #include "mib.h"
@@ -308,7 +310,7 @@ static struct dst_entry *subflow_v4_route_req(const struct sock *sk,
 	dst_release(dst);
 	if (!req->syncookie)
-		tcp_request_sock_ops.send_reset(sk, skb);
+		tcp_request_sock_ops.send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 	return NULL;
 }
@@ -376,7 +378,7 @@ static struct dst_entry *subflow_v6_route_req(const struct sock *sk,
 	dst_release(dst);
 	if (!req->syncookie)
-		tcp6_request_sock_ops.send_reset(sk, skb);
+		tcp6_request_sock_ops.send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 	return NULL;
 }
 #endif
@@ -911,7 +913,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
 			tcp_rsk(req)->drop_req = true;
 			inet_csk_prepare_for_destroy_sock(child);
 			tcp_done(child);
-			req->rsk_ops->send_reset(sk, skb);
+			req->rsk_ops->send_reset(sk, skb, SK_RST_REASON_NOT_SPECIFIED);
 			/* The last child reference will be released by the caller */
 			return child;
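
For reviewers who want the shape of the change in isolation, below is a
minimal stand-alone sketch. Only the send_reset() signature and the
NOT_SPECIFIED reason mirror this patch; the stub types, demo_send_reset()
and main() are illustrative assumptions standing in for the real kernel
definitions, not kernel code.

	/*
	 * Illustrative sketch only: stub types let the reworked callback
	 * signature compile outside the kernel tree.
	 */
	#include <stdio.h>

	struct sock { int unused; };
	struct sk_buff { int unused; };
	struct request_sock;

	/* Stand-in for the reason codes introduced earlier in this series. */
	enum sk_rst_reason {
		SK_RST_REASON_NOT_SPECIFIED,
		SK_RST_REASON_MAX,
	};

	/* After this patch, send_reset() carries the reset reason. */
	struct request_sock_ops {
		void (*send_reset)(const struct sock *sk, struct sk_buff *skb,
				   enum sk_rst_reason reason);
	};

	static void demo_send_reset(const struct sock *sk, struct sk_buff *skb,
				    enum sk_rst_reason reason)
	{
		(void)sk;
		(void)skb;
		/* A real implementation builds and transmits the RST here. */
		printf("RST sent, reason=%d\n", reason);
	}

	int main(void)
	{
		struct request_sock_ops ops = { .send_reset = demo_send_reset };
		struct sock sk = { 0 };
		struct sk_buff skb = { 0 };

		/* Every call site converted by this patch passes NOT_SPECIFIED. */
		ops.send_reset(&sk, &skb, SK_RST_REASON_NOT_SPECIFIED);
		return 0;
	}

Carrying the reason as an explicit argument keeps the callback stateless
and lets later patches in the series pass a specific reason at each call
site without further signature changes.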