From patchwork Thu Jul 18 19:02:21 2024
X-Patchwork-Submitter: Praveen Kaligineedi
X-Patchwork-Id: 13736650
X-Patchwork-Delegate: kuba@kernel.org
Date: Thu, 18 Jul 2024 12:02:21 -0700
Message-ID: <20240718190221.2219835-1-pkaligineedi@google.com>
Subject: [PATCH net] gve: Fix an edge case for TSO skb validity check
From: Praveen Kaligineedi
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, willemb@google.com, shailend@google.com,
    hramamurthy@google.com, csully@google.com, jfraker@google.com,
    stable@vger.kernel.org, Bailey Forrest, Praveen Kaligineedi,
    Jeroen de Borst

From: Bailey Forrest

The NIC requires each TSO segment to span no more than 10 descriptors,
and gve_can_send_tso() performs this check. However, the current code
misses an edge case: when a TSO skb has a large frag that must be split
into multiple descriptors, the 10-descriptor limit per TSO segment can
be exceeded. This change fixes that edge case.

Fixes: a57e5de476be ("gve: DQO: Add TX path")
Signed-off-by: Praveen Kaligineedi
Signed-off-by: Bailey Forrest
Reviewed-by: Jeroen de Borst
---
 drivers/net/ethernet/google/gve/gve_tx_dqo.c | 22 +++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index 0b3cca3fc792..dc39dc481f21 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -866,22 +866,42 @@ static bool gve_can_send_tso(const struct sk_buff *skb)
 	const int header_len = skb_tcp_all_headers(skb);
 	const int gso_size = shinfo->gso_size;
 	int cur_seg_num_bufs;
+	int last_frag_size;
 	int cur_seg_size;
 	int i;
 
 	cur_seg_size = skb_headlen(skb) - header_len;
+	last_frag_size = skb_headlen(skb);
 	cur_seg_num_bufs = cur_seg_size > 0;
 
 	for (i = 0; i < shinfo->nr_frags; i++) {
 		if (cur_seg_size >= gso_size) {
 			cur_seg_size %= gso_size;
 			cur_seg_num_bufs = cur_seg_size > 0;
+
+			/* If the last buffer is split in the middle of a TSO
+			 * segment, then it will count as two descriptors.
+			 */
+			if (last_frag_size > GVE_TX_MAX_BUF_SIZE_DQO) {
+				int last_frag_remain = last_frag_size %
+					GVE_TX_MAX_BUF_SIZE_DQO;
+
+				/* If the last frag was evenly divisible by
+				 * GVE_TX_MAX_BUF_SIZE_DQO, then it will not be
+				 * split in the current segment.
+				 */
+				if (last_frag_remain &&
+				    cur_seg_size > last_frag_remain) {
+					cur_seg_num_bufs++;
+				}
+			}
 		}
 
 		if (unlikely(++cur_seg_num_bufs > max_bufs_per_seg))
 			return false;
 
-		cur_seg_size += skb_frag_size(&shinfo->frags[i]);
+		last_frag_size = skb_frag_size(&shinfo->frags[i]);
+		cur_seg_size += last_frag_size;
 	}
 
 	return true;
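
To illustrate the edge case outside the driver tree, below is a minimal
stand-alone user-space sketch of the descriptor-count check, not driver
code. The GVE_TX_MAX_BUF_SIZE_DQO value (16 KiB), the 10-buffer limit
(taken from the commit message), the frag layout, the omission of
header bytes, and the "fixed" toggle are all assumptions made for
illustration; the real driver derives its constants from its own
headers. Compiled with a C99 compiler, the unfixed counting accepts the
layout while the fixed counting correctly rejects it.

/* sketch.c - model of the per-TSO-segment buffer counting */
#include <stdbool.h>
#include <stdio.h>

#define GVE_TX_MAX_BUF_SIZE_DQO (16 * 1024) /* assumed per-descriptor cap */
#define MAX_BUFS_PER_SEG        10          /* limit per the commit message */

/* Mirrors the shape of gve_can_send_tso(): walk the frags and track how
 * many buffers feed the current TSO segment. 'fixed' toggles the
 * edge-case branch added by this patch. Header bytes are ignored for
 * simplicity; 'linear_payload' is payload carried in the linear area.
 */
static bool can_send_tso(int linear_payload, const int *frag_sizes,
                         int nr_frags, int gso_size, bool fixed)
{
        int cur_seg_size = linear_payload;
        int last_frag_size = linear_payload;
        int cur_seg_num_bufs = cur_seg_size > 0;

        for (int i = 0; i < nr_frags; i++) {
                if (cur_seg_size >= gso_size) {
                        /* A new segment starts inside the previous buffer. */
                        cur_seg_size %= gso_size;
                        cur_seg_num_bufs = cur_seg_size > 0;

                        /* Edge case: the previous frag was larger than one
                         * descriptor, so its leftover may straddle the split
                         * point and cost the new segment two descriptors.
                         */
                        if (fixed && last_frag_size > GVE_TX_MAX_BUF_SIZE_DQO) {
                                int rem = last_frag_size %
                                          GVE_TX_MAX_BUF_SIZE_DQO;

                                if (rem && cur_seg_size > rem)
                                        cur_seg_num_bufs++;
                        }
                }

                if (++cur_seg_num_bufs > MAX_BUFS_PER_SEG)
                        return false;

                last_frag_size = frag_sizes[i];
                cur_seg_size += last_frag_size;
        }
        return true;
}

int main(void)
{
        /* Hypothetical layout: a 17000-byte frag (split into 16384- and
         * 616-byte buffers) followed by nine tiny frags, with gso_size 9000.
         * The second segment covers the 8000-byte tail of the big frag (two
         * descriptors) plus all nine tiny frags, i.e. 11 descriptors.
         */
        const int frags[] = { 17000, 100, 100, 100, 100,
                              100, 100, 100, 100, 100 };
        const int nr = sizeof(frags) / sizeof(frags[0]);

        printf("unfixed check accepts: %d\n",
               can_send_tso(0, frags, nr, 9000, false));
        printf("fixed check accepts:   %d\n",
               can_send_tso(0, frags, nr, 9000, true));
        return 0;
}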