From patchwork Wed Nov 11 20:45:25 2020
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 11898539
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 11 Nov 2020 20:45:25 +0000
From: Alexander Lobakin
Reply-To: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Alexey Kuznetsov, Hideaki YOSHIFUJI, Paolo Abeni, Willem de Bruijn,
    Steffen Klassert, Alexander Lobakin, Eric Dumazet,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 net 1/2] net: udp: fix UDP header access on Fast/frag0 UDP GRO
X-Mailing-List: netdev@vger.kernel.org

UDP GRO uses udp_hdr(skb) in its .gro_receive() callback. While it's
probably OK for non-frag0 paths (when all the headers or even the entire
frame are already in the skb head), this inline points to junk when using
Fast GRO (napi_gro_frags() or napi_gro_receive() with only the Ethernet
header in the skb head and all the rest in the frags) and breaks GRO
packet assembly and the packet flow itself.

To support both modes, skb_gro_header_fast() + skb_gro_header_slow() are
typically used. UDP even has an inline helper that makes use of them,
udp_gro_udphdr(). Use that instead of the troublemaking udp_hdr() to get
rid of the out-of-order deliveries.

Present since the introduction of plain UDP GRO in 5.0-rc1.
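For reference, the frag0-aware helper mentioned above reads roughly as
follows (a paraphrase of udp_gro_udphdr() from include/net/udp.h of that
era, shown only for illustration; the in-tree definition is authoritative):

/* Returns a pointer to the UDP header that is valid in both GRO modes:
 * the fast path reads from the frag0 area, while the slow path pulls
 * the header into the skb head when frag0 is too short to cover it.
 */
static inline struct udphdr *udp_gro_udphdr(struct sk_buff *skb)
{
	struct udphdr *uh;
	unsigned int hlen, off;

	off  = skb_gro_offset(skb);
	hlen = off + sizeof(*uh);
	uh   = skb_gro_header_fast(skb, off);
	if (skb_gro_header_hard(skb, hlen))
		uh = skb_gro_header_slow(skb, hlen, off);

	return uh;
}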
Fixes: e20cf8d3f1f7 ("udp: implement GRO for plain UDP sockets.")
Cc: Eric Dumazet
Cc: Jakub Kicinski
Cc: Willem de Bruijn
Signed-off-by: Alexander Lobakin
Acked-by: Willem de Bruijn
---
 net/ipv4/udp_offload.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index e67a66fbf27b..13740e9fe6ec 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -366,7 +366,7 @@ static struct sk_buff *udp4_ufo_fragment(struct sk_buff *skb,
 static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
 					       struct sk_buff *skb)
 {
-	struct udphdr *uh = udp_hdr(skb);
+	struct udphdr *uh = udp_gro_udphdr(skb);
 	struct sk_buff *pp = NULL;
 	struct udphdr *uh2;
 	struct sk_buff *p;

From patchwork Wed Nov 11 20:45:38 2020
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 11898537
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 11 Nov 2020 20:45:38 +0000
From: Alexander Lobakin
Reply-To: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Alexey Kuznetsov, Hideaki YOSHIFUJI, Paolo Abeni, Willem de Bruijn,
    Steffen Klassert, Alexander Lobakin, Eric Dumazet,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 net 2/2] net: udp: fix IP header access and skb lookup on Fast/frag0 UDP GRO
X-Mailing-List: netdev@vger.kernel.org

udp{4,6}_lib_lookup_skb() use ip{,v6}_hdr() to get the IP header of the
packet. While that's probably OK for non-frag0 paths, these helpers will
also point to junk on Fast/frag0 GRO when all the headers are located in
the frags. As a result, sk/skb lookup may fail or give wrong results.
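For context, the lookup path in question looks roughly like the sketch
below (paraphrasing the udp4_lib_lookup_skb() flavour from net/ipv4/udp.c
of that period; details may differ). It makes the frag0 problem visible:
ip_hdr() dereferences the skb head, which holds no IP header on Fast/frag0
GRO.

/* Sketch of the pre-fix IPv4 lookup helper: the ip_hdr() access is
 * what returns junk when the headers still sit in the frags.
 */
struct sock *udp4_lib_lookup_skb(struct sk_buff *skb,
				 __be16 sport, __be16 dport)
{
	const struct iphdr *iph = ip_hdr(skb);

	return __udp4_lib_lookup(dev_net(skb->dev), iph->saddr, sport,
				 iph->daddr, dport, inet_iif(skb),
				 inet_sdif(skb), &udp_table, NULL);
}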
To support both GRO modes, skb_gro_network_header() might be used. To
avoid modifying the original functions, add private versions of
udp{4,6}_lib_lookup_skb() used only to perform correct sk lookups on GRO.

Present since the introduction of "application-level" UDP GRO in 4.7-rc1.

Misc: replace totally unneeded ternaries with plain ifs.

Fixes: a6024562ffd7 ("udp: Add GRO functions to UDP socket")
Suggested-by: Willem de Bruijn
Cc: Eric Dumazet
Cc: Jakub Kicinski
Signed-off-by: Alexander Lobakin
Acked-by: Willem de Bruijn
---
 net/ipv4/udp_offload.c | 17 +++++++++++++++--
 net/ipv6/udp_offload.c | 17 +++++++++++++++--
 2 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 13740e9fe6ec..c62805cd3131 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -500,12 +500,22 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
 }
 EXPORT_SYMBOL(udp_gro_receive);
 
+static struct sock *udp4_gro_lookup_skb(struct sk_buff *skb, __be16 sport,
+					__be16 dport)
+{
+	const struct iphdr *iph = skb_gro_network_header(skb);
+
+	return __udp4_lib_lookup(dev_net(skb->dev), iph->saddr, sport,
+				 iph->daddr, dport, inet_iif(skb),
+				 inet_sdif(skb), &udp_table, NULL);
+}
+
 INDIRECT_CALLABLE_SCOPE
 struct sk_buff *udp4_gro_receive(struct list_head *head, struct sk_buff *skb)
 {
 	struct udphdr *uh = udp_gro_udphdr(skb);
+	struct sock *sk = NULL;
 	struct sk_buff *pp;
-	struct sock *sk;
 
 	if (unlikely(!uh))
 		goto flush;
@@ -523,7 +533,10 @@ struct sk_buff *udp4_gro_receive(struct list_head *head, struct sk_buff *skb)
 skip:
 	NAPI_GRO_CB(skb)->is_ipv6 = 0;
 	rcu_read_lock();
-	sk = static_branch_unlikely(&udp_encap_needed_key) ? udp4_lib_lookup_skb(skb, uh->source, uh->dest) : NULL;
+
+	if (static_branch_unlikely(&udp_encap_needed_key))
+		sk = udp4_gro_lookup_skb(skb, uh->source, uh->dest);
+
 	pp = udp_gro_receive(head, skb, uh, sk);
 	rcu_read_unlock();
 	return pp;
diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
index 584157a07759..f9e888d1b9af 100644
--- a/net/ipv6/udp_offload.c
+++ b/net/ipv6/udp_offload.c
@@ -111,12 +111,22 @@ static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb,
 	return segs;
 }
 
+static struct sock *udp6_gro_lookup_skb(struct sk_buff *skb, __be16 sport,
+					__be16 dport)
+{
+	const struct ipv6hdr *iph = skb_gro_network_header(skb);
+
+	return __udp6_lib_lookup(dev_net(skb->dev), &iph->saddr, sport,
+				 &iph->daddr, dport, inet6_iif(skb),
+				 inet6_sdif(skb), &udp_table, NULL);
+}
+
 INDIRECT_CALLABLE_SCOPE
 struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb)
 {
 	struct udphdr *uh = udp_gro_udphdr(skb);
+	struct sock *sk = NULL;
 	struct sk_buff *pp;
-	struct sock *sk;
 
 	if (unlikely(!uh))
 		goto flush;
@@ -135,7 +145,10 @@ struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb)
 skip:
 	NAPI_GRO_CB(skb)->is_ipv6 = 1;
 	rcu_read_lock();
-	sk = static_branch_unlikely(&udpv6_encap_needed_key) ? udp6_lib_lookup_skb(skb, uh->source, uh->dest) : NULL;
+
+	if (static_branch_unlikely(&udpv6_encap_needed_key))
+		sk = udp6_gro_lookup_skb(skb, uh->source, uh->dest);
+
 	pp = udp_gro_receive(head, skb, uh, sk);
 	rcu_read_unlock();
 	return pp;
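For completeness, the reason the new udp{4,6}_gro_lookup_skb() helpers are
safe in both GRO modes is that skb_gro_network_header() prefers the frag0
pointer kept in the GRO control block. A rough paraphrase of its
definition from include/linux/netdevice.h of that era:

/* Picks the network header out of frag0 when the headers live in the
 * first fragment (Fast GRO) and falls back to skb->data otherwise.
 */
static inline void *skb_gro_network_header(struct sk_buff *skb)
{
	return (NAPI_GRO_CB(skb)->frag0 ?: skb->data) +
	       skb_network_offset(skb);
}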