From patchwork Wed Jun 14 10:02:02 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13279794
X-Patchwork-Delegate: kuba@kernel.org
From: Sebastian Andrzej Siewior
Date: Wed, 14 Jun 2023 12:02:02 +0200
To: netdev@vger.kernel.org
Cc: "David S. Miller", David Ahern, Eric Dumazet, Herbert Xu,
    Jakub Kicinski, Paolo Abeni, Steffen Klassert, Thomas Gleixner
Subject: [PATCH net] xfrm: Linearize the skb after offloading if needed.
Message-ID: <20230614100202.1-YtK7H5@linutronix.de>

With offloading enabled, esp_xmit() gets invoked very late, from within
validate_xmit_xfrm(), which runs after validate_xmit_skb() has already
validated the skb and linearized it if the underlying device does not
support fragments.

esp_output_tail() may add a fragment to the skb while appending the auth
tag/IV. A device without the proper support will then transmit only the
buffer skb->data points to, with the full length, so the packet goes out
with garbage at the end. A pcap sniffer will claim that the proper data
has been sent, since it parses the skb correctly.

The problem does not occur with INET_ESP_OFFLOAD disabled.

Linearize the skb after offloading if the sending hardware requires it.

Tested on the IPv4 path; the IPv6 change is adapted from it.
Fixes: 7785bba299a8d ("esp: Add a software GRO codepath")
Signed-off-by: Sebastian Andrzej Siewior
---
 net/ipv4/esp4_offload.c | 3 +++
 net/ipv6/esp6_offload.c | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
index 3969fa805679c..ee848be59e65a 100644
--- a/net/ipv4/esp4_offload.c
+++ b/net/ipv4/esp4_offload.c
@@ -340,6 +340,9 @@ static int esp_xmit(struct xfrm_state *x, struct sk_buff *skb, netdev_features_

 	secpath_reset(skb);

+	if (skb_needs_linearize(skb, skb->dev->features) &&
+	    __skb_linearize(skb))
+		return -ENOMEM;
 	return 0;
 }

diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
index 75c02992c520f..7723402689973 100644
--- a/net/ipv6/esp6_offload.c
+++ b/net/ipv6/esp6_offload.c
@@ -374,6 +374,9 @@ static int esp6_xmit(struct xfrm_state *x, struct sk_buff *skb, netdev_features

 	secpath_reset(skb);

+	if (skb_needs_linearize(skb, skb->dev->features) &&
+	    __skb_linearize(skb))
+		return -ENOMEM;
 	return 0;
 }