From patchwork Wed Nov 30 08:21:46 2022
X-Patchwork-Submitter: Ronak Doshi <doshir@vmware.com>
X-Patchwork-Id: 13059535
X-Patchwork-Delegate: kuba@kernel.org
From: Ronak Doshi <doshir@vmware.com>
Cc: Ronak Doshi, VMware PV-Drivers Reviewers, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, open list
Subject: [PATCH v2 net 1/2] vmxnet3: correctly report encapsulated LRO packet
Date: Wed, 30 Nov 2022 00:21:46 -0800
Message-ID: <20221130082148.9605-2-doshir@vmware.com>
In-Reply-To: <20221130082148.9605-1-doshir@vmware.com>
References: <20221130082148.9605-1-doshir@vmware.com>
X-Mailer: git-send-email 2.11.0
X-Mailing-List: netdev@vger.kernel.org

Commit dacce2be3312 ("vmxnet3: add geneve and vxlan tunnel offload
support") added support for encapsulation offload. However, that patch
did not correctly report encapsulated packets that are LRO'ed by the
hypervisor.

This patch fixes the issue by using the correct callback for LRO'ed
encapsulated packets.
Fixes: dacce2be3312 ("vmxnet3: add geneve and vxlan tunnel offload support")
Signed-off-by: Ronak Doshi <doshir@vmware.com>
Acked-by: Guolin Yang
---
 drivers/net/vmxnet3/vmxnet3_drv.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index d3e7b27eb933..3111a8a6b26a 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -1396,6 +1396,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 	};
 	u32 num_pkts = 0;
 	bool skip_page_frags = false;
+	bool encap_lro = false;
 	struct Vmxnet3_RxCompDesc *rcd;
 	struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;
 	u16 segCnt = 0, mss = 0;
@@ -1556,13 +1557,18 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 		if (VMXNET3_VERSION_GE_2(adapter) &&
 		    rcd->type == VMXNET3_CDTYPE_RXCOMP_LRO) {
 			struct Vmxnet3_RxCompDescExt *rcdlro;
+			union Vmxnet3_GenericDesc *gdesc;
+
 			rcdlro = (struct Vmxnet3_RxCompDescExt *)rcd;
+			gdesc = (union Vmxnet3_GenericDesc *)rcd;
 
 			segCnt = rcdlro->segCnt;
 			WARN_ON_ONCE(segCnt == 0);
 			mss = rcdlro->mss;
 			if (unlikely(segCnt <= 1))
 				segCnt = 0;
+			encap_lro = (le32_to_cpu(gdesc->dword[0]) &
+				     (1UL << VMXNET3_RCD_HDR_INNER_SHIFT));
 		} else {
 			segCnt = 0;
 		}
@@ -1630,7 +1636,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 				vmxnet3_rx_csum(adapter, skb,
 						(union Vmxnet3_GenericDesc *)rcd);
 			skb->protocol = eth_type_trans(skb, adapter->netdev);
-			if (!rcd->tcp ||
+			if ((!rcd->tcp && !encap_lro) ||
 			    !(adapter->netdev->features & NETIF_F_LRO))
 				goto not_lro;
 
@@ -1639,7 +1645,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 					SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
 				skb_shinfo(skb)->gso_size = mss;
 				skb_shinfo(skb)->gso_segs = segCnt;
-			} else if (segCnt != 0 || skb->len > mtu) {
+			} else if ((segCnt != 0 || skb->len > mtu) && !encap_lro) {
 				u32 hlen;
 
 				hlen = vmxnet3_get_hdr_len(adapter, skb,
@@ -1668,6 +1674,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 				napi_gro_receive(&rq->napi, skb);
 
 			ctx->skb = NULL;
+			encap_lro = false;
 			num_pkts++;
 		}
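
Note (not part of the patch): below is a minimal, self-contained sketch of the
inner-header bit test the patch adds, for readers without the driver headers
handy. The shift value used for VMXNET3_RCD_HDR_INNER_SHIFT, the sample
descriptor word, and the helpers le32_to_cpu_stub()/rcd_is_encap_lro() are
assumptions for illustration only; in the driver the real test sets encap_lro
from gdesc->dword[0] as shown in the diff above.

/*
 * Standalone sketch (userspace C), mirroring only the bit-mask logic:
 *   encap_lro = (le32_to_cpu(gdesc->dword[0]) &
 *                (1UL << VMXNET3_RCD_HDR_INNER_SHIFT));
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VMXNET3_RCD_HDR_INNER_SHIFT	13	/* assumed value for illustration */

/* Stand-in for the kernel's le32_to_cpu() on a little-endian host. */
static uint32_t le32_to_cpu_stub(uint32_t v)
{
	return v;
}

/* True when the completion descriptor flags inner (encapsulated) headers,
 * i.e. the packet was LRO'ed by the hypervisor inside a tunnel. */
static bool rcd_is_encap_lro(uint32_t dword0)
{
	return le32_to_cpu_stub(dword0) & (1UL << VMXNET3_RCD_HDR_INNER_SHIFT);
}

int main(void)
{
	uint32_t sample = 1UL << VMXNET3_RCD_HDR_INNER_SHIFT;	/* made-up word */

	printf("encap_lro = %d\n", rcd_is_encap_lro(sample));
	return 0;
}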