From patchwork Mon Nov 28 19:19:54 2022
X-Patchwork-Submitter: Ronak Doshi <doshir@vmware.com>
X-Patchwork-Id: 13057899
X-Patchwork-Delegate: kuba@kernel.org
From: Ronak Doshi <doshir@vmware.com>
To: netdev@vger.kernel.org
CC: Ronak Doshi <doshir@vmware.com>, VMware PV-Drivers Reviewers,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, open list
Subject: [PATCH net 1/2] vmxnet3: correctly report encapsulated LRO packet
Date: Mon, 28 Nov 2022 11:19:54 -0800
Message-ID: <20221128191955.2556-2-doshir@vmware.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20221128191955.2556-1-doshir@vmware.com>
References: <20221128191955.2556-1-doshir@vmware.com>
X-Mailing-List: netdev@vger.kernel.org

Commit dacce2be3312 ("vmxnet3: add geneve and vxlan tunnel offload
support") added support for encapsulation offload. However, that patch
did not correctly report an encapsulated packet that is LRO'ed by the
hypervisor. This patch fixes the issue by using the correct callback
for LRO'ed encapsulated packets.
Fixes: dacce2be3312 ("vmxnet3: add geneve and vxlan tunnel offload support")
Signed-off-by: Ronak Doshi <doshir@vmware.com>
Acked-by: Guolin Yang
---
 drivers/net/vmxnet3/vmxnet3_drv.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index d3e7b27eb933..8da6e06f1f06 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -1396,6 +1396,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 	};
 	u32 num_pkts = 0;
 	bool skip_page_frags = false;
+	bool encap_lro = false;
 	struct Vmxnet3_RxCompDesc *rcd;
 	struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;
 	u16 segCnt = 0, mss = 0;
@@ -1563,6 +1564,8 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 				mss = rcdlro->mss;
 				if (unlikely(segCnt <= 1))
 					segCnt = 0;
+				encap_lro = (le32_to_cpu(gdesc->dword[0]) &
+					(1UL << VMXNET3_RCD_HDR_INNER_SHIFT));
 			} else {
 				segCnt = 0;
 			}
@@ -1630,7 +1633,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 				vmxnet3_rx_csum(adapter, skb,
 						(union Vmxnet3_GenericDesc *)rcd);
 			skb->protocol = eth_type_trans(skb, adapter->netdev);
-			if (!rcd->tcp ||
+			if ((!rcd->tcp && !encap_lro) ||
 			    !(adapter->netdev->features & NETIF_F_LRO))
 				goto not_lro;
 
@@ -1639,7 +1642,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 					SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
 				skb_shinfo(skb)->gso_size = mss;
 				skb_shinfo(skb)->gso_segs = segCnt;
-			} else if (segCnt != 0 || skb->len > mtu) {
+			} else if ((segCnt != 0 || skb->len > mtu) && !encap_lro) {
 				u32 hlen;
 
 				hlen = vmxnet3_get_hdr_len(adapter, skb,
@@ -1668,6 +1671,8 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 			napi_gro_receive(&rq->napi, skb);
 
 			ctx->skb = NULL;
+			if (encap_lro)
+				encap_lro = false;
 			num_pkts++;
 		}
 
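
[Editor's illustration, not part of the patch.] For readers without the driver
source at hand, the sketch below distills the decision logic this patch changes
into a small stand-alone C program. It is a simplified illustration under
assumptions, not driver code: struct rx_desc, RCD_HDR_INNER_SHIFT, and
classify() are hypothetical stand-ins for the real Vmxnet3_RxCompDesc fields
and the VMXNET3_RCD_HDR_INNER_SHIFT constant. The point it demonstrates: the
patch implies that an encapsulated (geneve/vxlan) LRO packet arrives with
rcd->tcp unset, so before this fix it fell through to the not_lro path;
checking the inner-header bit keeps it on the LRO path and out of the
software header-parsing branch meant for non-encapsulated packets.

/* Minimal user-space sketch of the patched Rx-completion checks.
 * All names here (rx_desc, RCD_HDR_INNER_SHIFT, classify) are
 * hypothetical stand-ins, not the driver's real definitions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RCD_HDR_INNER_SHIFT 13	/* stand-in for VMXNET3_RCD_HDR_INNER_SHIFT */

struct rx_desc {
	uint32_t dword0;	/* flags word; holds the inner-header bit  */
	bool tcp;		/* set by the device for plain TCP packets */
	uint16_t seg_cnt;	/* number of segments coalesced by LRO     */
	uint16_t mss;		/* MSS reported by the device for LRO      */
};

/* Mirrors the patched control flow: an encapsulated LRO packet must
 * stay on the LRO path even though tcp is unset, and must skip the
 * header-parsing branch for non-encapsulated oversized packets. */
static const char *classify(const struct rx_desc *d, bool lro_enabled,
			    uint32_t len, uint32_t mtu)
{
	bool encap_lro = d->dword0 & (1UL << RCD_HDR_INNER_SHIFT);

	if ((!d->tcp && !encap_lro) || !lro_enabled)
		return "not_lro";
	if (d->seg_cnt != 0 && d->mss != 0)
		return "lro: gso_size/gso_segs taken from the descriptor";
	if ((d->seg_cnt != 0 || len > mtu) && !encap_lro)
		return "lro: header length parsed in software";
	return "lro: handed to GRO as-is";
}

int main(void)
{
	struct rx_desc plain = { .dword0 = 0, .tcp = true,
				 .seg_cnt = 4, .mss = 1448 };
	struct rx_desc encap = { .dword0 = 1UL << RCD_HDR_INNER_SHIFT,
				 .tcp = false, .seg_cnt = 4, .mss = 1398 };

	/* Before the fix, the encapsulated packet would print "not_lro". */
	printf("plain tcp lro packet: %s\n", classify(&plain, true, 9000, 1500));
	printf("encap lro packet    : %s\n", classify(&encap, true, 9000, 1500));
	return 0;
}

With the inner-header bit honored, the second test case now reaches the LRO
path and takes its gso fields from the descriptor instead of dropping to
not_lro, matching the behavior described in the commit message.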