From patchwork Thu May 30 23:36:14 2024
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 13680953
From: Jakub Kicinski
To: edumazet@google.com, pabeni@redhat.com
Cc: davem@davemloft.net, netdev@vger.kernel.org, mptcp@lists.linux.dev,
    matttbe@kernel.org, martineau@kernel.org, borisp@nvidia.com,
    willemdebruijn.kernel@gmail.com, Jakub Kicinski
Subject: [PATCH net-next 1/3] tcp: wrap mptcp and decrypted checks into
 tcp_skb_can_collapse_rx()
Date: Thu, 30 May 2024 16:36:14 -0700
Message-ID: <20240530233616.85897-2-kuba@kernel.org>
In-Reply-To: <20240530233616.85897-1-kuba@kernel.org>
References: <20240530233616.85897-1-kuba@kernel.org>

tcp_skb_can_collapse() checks for conditions which don't make sense
on input. Because of this we ended up sprinkling a few pairs of
mptcp_skb_can_collapse() and skb_cmp_decrypted() calls on the input
path. Group them in a new helper. This should make it less likely
that someone will check mptcp and not decrypted or vice versa when
adding new code.

This implicitly adds a decrypted check early in tcp_collapse().
AFAIU this will very slightly increase our ability to collapse
packets under memory pressure, not a real bug.
Signed-off-by: Jakub Kicinski
Reviewed-by: Eric Dumazet
Reviewed-by: Matthieu Baerts (NGI0)
Reviewed-by: Willem de Bruijn
---
 include/net/tcp.h    |  7 +++++++
 net/ipv4/tcp_input.c | 11 +++--------
 net/ipv4/tcp_ipv4.c  |  3 +--
 3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 32815a40dea1..32741856da01 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1071,6 +1071,13 @@ static inline bool tcp_skb_can_collapse(const struct sk_buff *to,
 		      skb_pure_zcopy_same(to, from));
 }
 
+static inline bool tcp_skb_can_collapse_rx(const struct sk_buff *to,
+					   const struct sk_buff *from)
+{
+	return likely(mptcp_skb_can_collapse(to, from) &&
+		      !skb_cmp_decrypted(to, from));
+}
+
 /* Events passed to congestion control interface */
 enum tcp_ca_event {
 	CA_EVENT_TX_START,	/* first transmit when no packets in flight */
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 5aadf64e554d..212b6fd0caf7 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4813,10 +4813,7 @@ static bool tcp_try_coalesce(struct sock *sk,
 	if (TCP_SKB_CB(from)->seq != TCP_SKB_CB(to)->end_seq)
 		return false;
 
-	if (!mptcp_skb_can_collapse(to, from))
-		return false;
-
-	if (skb_cmp_decrypted(from, to))
+	if (!tcp_skb_can_collapse_rx(to, from))
 		return false;
 
 	if (!skb_try_coalesce(to, from, fragstolen, &delta))
@@ -5372,7 +5369,7 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
 			break;
 		}
 
-		if (n && n != tail && mptcp_skb_can_collapse(skb, n) &&
+		if (n && n != tail && tcp_skb_can_collapse_rx(skb, n) &&
 		    TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(n)->seq) {
 			end_of_skbs = false;
 			break;
@@ -5423,11 +5420,9 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
 			skb = tcp_collapse_one(sk, skb, list, root);
 			if (!skb ||
 			    skb == tail ||
-			    !mptcp_skb_can_collapse(nskb, skb) ||
+			    !tcp_skb_can_collapse_rx(nskb, skb) ||
 			    (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)))
 				goto end;
-			if (skb_cmp_decrypted(skb, nskb))
-				goto end;
 		}
 	}
 }
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 041c7eda9abe..228de0c95a9d 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -2049,8 +2049,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
 	      TCP_SKB_CB(skb)->tcp_flags) & TCPHDR_ACK) ||
 	    ((TCP_SKB_CB(tail)->tcp_flags ^
 	      TCP_SKB_CB(skb)->tcp_flags) & (TCPHDR_ECE | TCPHDR_CWR)) ||
-	    !mptcp_skb_can_collapse(tail, skb) ||
-	    skb_cmp_decrypted(tail, skb) ||
+	    !tcp_skb_can_collapse_rx(tail, skb) ||
 	    thtail->doff != th->doff ||
 	    memcmp(thtail + 1, th + 1, hdrlen - sizeof(*th)))
 		goto no_coalesce;
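
As an illustration (not part of the patch): a minimal sketch of how a new
receive-path call site could be written against the helper. The function
rx_can_coalesce() and its surrounding logic are hypothetical, but
TCP_SKB_CB() and tcp_skb_can_collapse_rx() are the real interfaces touched
by this patch:

	/* Hypothetical RX coalesce check built on the new helper. */
	static bool rx_can_coalesce(const struct sk_buff *to,
				    const struct sk_buff *from)
	{
		/* Segments must be contiguous in sequence space. */
		if (TCP_SKB_CB(from)->seq != TCP_SKB_CB(to)->end_seq)
			return false;

		/* One call now covers both the MPTCP mapping check and the
		 * decrypted-state comparison, so neither can be forgotten
		 * on its own.
		 */
		return tcp_skb_can_collapse_rx(to, from);
	}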
From patchwork Thu May 30 23:36:15 2024
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 13680954
From: Jakub Kicinski
To: edumazet@google.com, pabeni@redhat.com
Cc: davem@davemloft.net, netdev@vger.kernel.org, mptcp@lists.linux.dev,
    matttbe@kernel.org, martineau@kernel.org, borisp@nvidia.com,
    willemdebruijn.kernel@gmail.com, Jakub Kicinski
Subject: [PATCH net-next 2/3] tcp: add a helper for setting EOR on tail skb
Date: Thu, 30 May 2024 16:36:15 -0700
Message-ID: <20240530233616.85897-3-kuba@kernel.org>
In-Reply-To: <20240530233616.85897-1-kuba@kernel.org>
References: <20240530233616.85897-1-kuba@kernel.org>

TLS (and hopefully soon PSP) uses EOR to prevent skbs with different
decrypted state from getting merged, without adding new tests to the
skb handling. In both cases, once the connection switches to an
"encrypted" state, all subsequent skbs will be encrypted, so a single
"EOR fence" is sufficient to prevent mixing.

Add a helper for setting the EOR bit, to make this arrangement more
explicit.
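
As an aside, to illustrate why a single EOR bit works as a fence (this is
not part of the patch): the TX-side collapse check already refuses to
append data to an skb that has eor set -- roughly, in include/net/tcp.h:

	static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb)
	{
		return likely(!TCP_SKB_CB(skb)->eor);
	}

So a caller switching a socket into its "encrypted" state only needs to
mark the current write-queue tail once. A hypothetical sketch (the function
name is made up for illustration; only tcp_write_collapse_fence() comes
from this patch):

	static void switch_to_encrypted_state(struct sock *sk)
	{
		/* Fence off everything queued so far as plaintext; data
		 * queued after this point cannot be collapsed into the
		 * skbs queued before it.
		 */
		tcp_write_collapse_fence(sk);

		/* ... enable the encrypted / offloaded state here ... */
	}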
Signed-off-by: Jakub Kicinski
Reviewed-by: Eric Dumazet
Reviewed-by: Willem de Bruijn
---
 include/net/tcp.h    |  9 +++++++++
 net/tls/tls_device.c | 11 ++---------
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 32741856da01..08c3b99501cf 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1066,6 +1066,7 @@ static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb)
 static inline bool tcp_skb_can_collapse(const struct sk_buff *to,
 					const struct sk_buff *from)
 {
+	/* skb_cmp_decrypted() not needed, use tcp_write_collapse_fence() */
 	return likely(tcp_skb_can_collapse_to(to) &&
 		      mptcp_skb_can_collapse(to, from) &&
 		      skb_pure_zcopy_same(to, from));
@@ -2102,6 +2103,14 @@ static inline void tcp_rtx_queue_unlink_and_free(struct sk_buff *skb, struct soc
 	tcp_wmem_free_skb(sk, skb);
 }
 
+static inline void tcp_write_collapse_fence(struct sock *sk)
+{
+	struct sk_buff *skb = tcp_write_queue_tail(sk);
+
+	if (skb)
+		TCP_SKB_CB(skb)->eor = 1;
+}
+
 static inline void tcp_push_pending_frames(struct sock *sk)
 {
 	if (tcp_send_head(sk)) {
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index ab6e694f7bc2..dc063c2c7950 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -231,14 +231,10 @@ static void tls_device_resync_tx(struct sock *sk, struct tls_context *tls_ctx,
 				 u32 seq)
 {
 	struct net_device *netdev;
-	struct sk_buff *skb;
 	int err = 0;
 	u8 *rcd_sn;
 
-	skb = tcp_write_queue_tail(sk);
-	if (skb)
-		TCP_SKB_CB(skb)->eor = 1;
-
+	tcp_write_collapse_fence(sk);
 	rcd_sn = tls_ctx->tx.rec_seq;
 
 	trace_tls_device_tx_resync_send(sk, seq, rcd_sn);
@@ -1067,7 +1063,6 @@ int tls_set_device_offload(struct sock *sk)
 	struct tls_prot_info *prot;
 	struct net_device *netdev;
 	struct tls_context *ctx;
-	struct sk_buff *skb;
 	char *iv, *rec_seq;
 	int rc;
 
@@ -1138,9 +1133,7 @@ int tls_set_device_offload(struct sock *sk)
 	 * SKBs where only part of the payload needs to be encrypted.
 	 * So mark the last skb in the write queue as end of record.
 	 */
-	skb = tcp_write_queue_tail(sk);
-	if (skb)
-		TCP_SKB_CB(skb)->eor = 1;
+	tcp_write_collapse_fence(sk);
 
 	/* Avoid offloading if the device is down
 	 * We don't want to offload new flows after

From patchwork Thu May 30 23:36:16 2024
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 13680955
From: Jakub Kicinski
To: edumazet@google.com, pabeni@redhat.com
Cc: davem@davemloft.net, netdev@vger.kernel.org, mptcp@lists.linux.dev,
    matttbe@kernel.org, martineau@kernel.org, borisp@nvidia.com,
    willemdebruijn.kernel@gmail.com, Jakub Kicinski
Subject: [PATCH net-next 3/3] net: skb: add compatibility warnings to skb_shift()
Date: Thu, 30 May 2024 16:36:16 -0700
Message-ID: <20240530233616.85897-4-kuba@kernel.org>
In-Reply-To: <20240530233616.85897-1-kuba@kernel.org>
References: <20240530233616.85897-1-kuba@kernel.org>

According to current semantics we should never try to shift data
between skbs which differ on decrypted or pp_recycle status.
Signed-off-by: Jakub Kicinski
Reviewed-by: Eric Dumazet
Reviewed-by: Willem de Bruijn
---
 net/core/skbuff.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 466999a7515e..c8ac79851cd6 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4139,6 +4139,9 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 	if (skb_zcopy(tgt) || skb_zcopy(skb))
 		return 0;
 
+	DEBUG_NET_WARN_ON_ONCE(tgt->pp_recycle != skb->pp_recycle);
+	DEBUG_NET_WARN_ON_ONCE(skb_cmp_decrypted(tgt, skb));
+
 	todo = shiftlen;
 	from = 0;
 	to = skb_shinfo(tgt)->nr_frags;
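
A note on the check macro (not part of the patch; the exact definition is
recalled from memory, so treat it as an assumption): DEBUG_NET_WARN_ON_ONCE()
only emits a runtime warning on kernels built with CONFIG_DEBUG_NET, and on
other builds it reduces to a compile-time type check, roughly:

	#if defined(CONFIG_DEBUG_NET)
	#define DEBUG_NET_WARN_ON_ONCE(cond) ((void)WARN_ON_ONCE(cond))
	#else
	#define DEBUG_NET_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
	#endif

So the new assertions document the expected skb_shift() semantics without
adding cost to non-debug kernels.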