diff mbox series

[net,v2] gve: Fix an edge case for TSO skb validity check

Message ID 20240724143431.3343722-1-pkaligineedi@google.com (mailing list archive)
State Accepted
Commit 36e3b949e35964e22b9a57f960660fc599038dd4
Delegated to: Netdev Maintainers
Series [net,v2] gve: Fix an edge case for TSO skb validity check

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 273 this patch: 273
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers warning 1 maintainers not CCed: rushilg@google.com
netdev/build_clang success Errors and warnings before: 281 this patch: 281
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 281 this patch: 281
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 43 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-07-25--00-00 (tests: 703)

Commit Message

Praveen Kaligineedi July 24, 2024, 2:34 p.m. UTC
From: Bailey Forrest <bcf@google.com>

The NIC requires each TSO segment to not span more than 10
descriptors. The NIC further requires each descriptor to not
exceed 16KB - 1 bytes (GVE_TX_MAX_BUF_SIZE_DQO).

The descriptors for an skb are generated by
gve_tx_add_skb_no_copy_dqo() for the DQO RDA queue format.
gve_tx_add_skb_no_copy_dqo() loops through each skb frag and
generates a descriptor for the entire frag if the frag size is
not greater than GVE_TX_MAX_BUF_SIZE_DQO. If the frag size is
greater than GVE_TX_MAX_BUF_SIZE_DQO, it is split into descriptor(s)
of size GVE_TX_MAX_BUF_SIZE_DQO and a descriptor is generated for
the remainder (frag size % GVE_TX_MAX_BUF_SIZE_DQO).

gve_can_send_tso() checks if the descriptors thus generated for an
skb would meet the requirement that each TSO segment not span more
than 10 descriptors. However, the current code misses an edge case
when a TSO segment spans multiple descriptors within a large frag.
This change fixes the edge case.

gve_can_send_tso() relies on the assumption that max gso size (9728)
is less than GVE_TX_MAX_BUF_SIZE_DQO and therefore within an skb
fragment a TSO segment can never span more than 2 descriptors.

Fixes: a57e5de476be ("gve: DQO: Add TX path")
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Signed-off-by: Bailey Forrest <bcf@google.com>
Reviewed-by: Jeroen de Borst <jeroendb@google.com>
Cc: stable@vger.kernel.org
---
Changes from v1:
 - Added 'stable tag'
 - Added more explanation in the commit message
 - Modified comments to clarify the changes made
 - Changed variable names 'last_frag_size' to 'prev_frag_size' and
   'last_frag_remain' to 'prev_frag_remain'
 - Removed parentheses around single line statement

 drivers/net/ethernet/google/gve/gve_tx_dqo.c | 22 +++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

Comments

Willem de Bruijn July 24, 2024, 2:51 p.m. UTC | #1
On Wed, Jul 24, 2024 at 10:35 AM Praveen Kaligineedi
<pkaligineedi@google.com> wrote:
>
> From: Bailey Forrest <bcf@google.com>
>
> [...]
>
> Fixes: a57e5de476be ("gve: DQO: Add TX path")
> Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
> Signed-off-by: Bailey Forrest <bcf@google.com>
> Reviewed-by: Jeroen de Borst <jeroendb@google.com>
> Cc: stable@vger.kernel.org

Reviewed-by: Willem de Bruijn <willemb@google.com>

Thanks for the extra description. The way gve_tx_add_skb_no_copy_dqo
lays out descriptors, and the descriptor and segment max lengths are key.
Now I follow the calculation.

patchwork-bot+netdevbpf@kernel.org July 25, 2024, 3 p.m. UTC | #2
Hello:

This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Wed, 24 Jul 2024 07:34:31 -0700 you wrote:
> From: Bailey Forrest <bcf@google.com>
> 
> [...]

Here is the summary with links:
  - [net,v2] gve: Fix an edge case for TSO skb validity check
    https://git.kernel.org/netdev/net/c/36e3b949e359

You are awesome, thank you!

Patch

diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index 0b3cca3fc792..f879426cb552 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -866,22 +866,42 @@  static bool gve_can_send_tso(const struct sk_buff *skb)
 	const int header_len = skb_tcp_all_headers(skb);
 	const int gso_size = shinfo->gso_size;
 	int cur_seg_num_bufs;
+	int prev_frag_size;
 	int cur_seg_size;
 	int i;
 
 	cur_seg_size = skb_headlen(skb) - header_len;
+	prev_frag_size = skb_headlen(skb);
 	cur_seg_num_bufs = cur_seg_size > 0;
 
 	for (i = 0; i < shinfo->nr_frags; i++) {
 		if (cur_seg_size >= gso_size) {
 			cur_seg_size %= gso_size;
 			cur_seg_num_bufs = cur_seg_size > 0;
+
+			if (prev_frag_size > GVE_TX_MAX_BUF_SIZE_DQO) {
+				int prev_frag_remain = prev_frag_size %
+					GVE_TX_MAX_BUF_SIZE_DQO;
+
+				/* If the last descriptor of the previous frag
+				 * is less than cur_seg_size, the segment will
+				 * span two descriptors in the previous frag.
+				 * Since max gso size (9728) is less than
+				 * GVE_TX_MAX_BUF_SIZE_DQO, it is impossible
+				 * for the segment to span more than two
+				 * descriptors.
+				 */
+				if (prev_frag_remain &&
+				    cur_seg_size > prev_frag_remain)
+					cur_seg_num_bufs++;
+			}
 		}
 
 		if (unlikely(++cur_seg_num_bufs > max_bufs_per_seg))
 			return false;
 
-		cur_seg_size += skb_frag_size(&shinfo->frags[i]);
+		prev_frag_size = skb_frag_size(&shinfo->frags[i]);
+		cur_seg_size += prev_frag_size;
 	}
 
 	return true;