From patchwork Wed Jun 28 09:48:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenz Bauer X-Patchwork-Id: 13295481 Received: from [192.168.1.193] (f.c.7.0.0.0.0.0.0.0.0.0.0.0.0.0.f.f.6.2.a.5.a.7.0.b.8.0.1.0.0.2.ip6.arpa. 
[2001:8b0:7a5a:26ff::7cf]) by smtp.gmail.com with ESMTPSA id k26-20020a7bc41a000000b003fbb1a9586esm1187613wmi.15.2023.06.28.02.48.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 28 Jun 2023 02:48:37 -0700 (PDT) From: Lorenz Bauer Date: Wed, 28 Jun 2023 10:48:16 +0100 Subject: [PATCH bpf-next v4 1/7] udp: re-score reuseport groups when connected sockets are present MIME-Version: 1.0 Message-Id: <20230613-so-reuseport-v4-1-4ece76708bba@isovalent.com> References: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> In-Reply-To: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , David Ahern , Willem de Bruijn , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Joe Stringer , Mykola Lysenko , Shuah Khan , Kuniyuki Iwashima Cc: Hemanth Malla , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, Lorenz Bauer X-Mailer: b4 0.12.2 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Contrary to TCP, UDP reuseport groups can contain TCP_ESTABLISHED sockets. To support these properly we remember whether a group has a connected socket and skip the fast reuseport early-return. In effect we continue scoring all reuseport sockets and then choose the one with the highest score. The current code fails to re-calculate the score for the result of lookup_reuseport. According to Kuniyuki Iwashima: 1) SO_INCOMING_CPU is set -> selected sk might have +1 score 2) BPF prog returns ESTABLISHED and/or SO_INCOMING_CPU sk -> selected sk will have more than 8 Using the old score could trigger more lookups depending on the order that sockets are created. sk -> sk (SO_INCOMING_CPU) -> sk (ESTABLISHED) | | `-> select the next SO_INCOMING_CPU sk | `-> select itself (We should save this lookup) Fixes: efc6b6f6c311 ("udp: Improve load balancing for SO_REUSEPORT.") Reviewed-by: Kuniyuki Iwashima Signed-off-by: Lorenz Bauer --- net/ipv4/udp.c | 20 +++++++++++++++----- net/ipv6/udp.c | 19 ++++++++++++++----- 2 files changed, 29 insertions(+), 10 deletions(-) diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index fd3dae081f3a..5ef478d2c408 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -450,14 +450,24 @@ static struct sock *udp4_lib_lookup2(struct net *net, score = compute_score(sk, net, saddr, sport, daddr, hnum, dif, sdif); if (score > badness) { - result = lookup_reuseport(net, sk, skb, - saddr, sport, daddr, hnum); + badness = score; + result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum); + if (!result) { + result = sk; + continue; + } + /* Fall back to scoring if group has connections */ - if (result && !reuseport_has_conns(sk)) + if (!reuseport_has_conns(sk)) return result; - result = result ? : sk; - badness = score; + /* Reuseport logic returned an error, keep original score. 
*/ + if (IS_ERR(result)) + continue; + + badness = compute_score(result, net, saddr, sport, + daddr, hnum, dif, sdif); + } } return result; diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index e5a337e6b970..8b3cb1d7da7c 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -193,14 +193,23 @@ static struct sock *udp6_lib_lookup2(struct net *net, score = compute_score(sk, net, saddr, sport, daddr, hnum, dif, sdif); if (score > badness) { - result = lookup_reuseport(net, sk, skb, - saddr, sport, daddr, hnum); + badness = score; + result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum); + if (!result) { + result = sk; + continue; + } + /* Fall back to scoring if group has connections */ - if (result && !reuseport_has_conns(sk)) + if (!reuseport_has_conns(sk)) return result; - result = result ? : sk; - badness = score; + /* Reuseport logic returned an error, keep original score. */ + if (IS_ERR(result)) + continue; + + badness = compute_score(sk, net, saddr, sport, + daddr, hnum, dif, sdif); } } return result; From patchwork Wed Jun 28 09:48:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenz Bauer X-Patchwork-Id: 13295478 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E118AEB64DD for ; Wed, 28 Jun 2023 09:53:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232824AbjF1Jwv (ORCPT ); Wed, 28 Jun 2023 05:52:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52496 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232376AbjF1Ju1 (ORCPT ); Wed, 28 Jun 2023 05:50:27 -0400 Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com [IPv6:2a00:1450:4864:20::332]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F34FD2948 for ; Wed, 28 Jun 2023 02:48:39 -0700 (PDT) Received: by mail-wm1-x332.google.com with SMTP id 5b1f17b1804b1-3fa8cd4a113so34847125e9.2 for ; Wed, 28 Jun 2023 02:48:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=isovalent.com; s=google; t=1687945718; x=1690537718; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=96/f/RvXpGow2TyWU+GM8deXUy7W/19EXjrkjTO+eEA=; b=fPBPFp/rKQx/d+HhD6y+c0OYsGdTxbVGlAyAA1ZBY+/Z3NzYK1MW6JNDIlq14ru8NF dznczzbIQXdc8Q+I6Z3coMOxMKdO6CfQ7pgrnKvUiSyseVMgQApGX+uCvs7dmzQLFK+K 19MsacVQvrRIguC8IEpOdSSbfN6aavYkb2lINwXlQi9wGS5iH7vM+F6svCnGfSzpASKI O708ILKTOs6OIw2rwi0J+fgjggPPhrkME/v8/Ad98R7Zc1pO8xX9H0vU/e6jioInqeo5 ScRm1PKf4/jVTcQhSl4+42b/mz35Bu4VJKWFr0BFHwYoR0tzL00cArsDFzUbA+ArW+6W gRLQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687945718; x=1690537718; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=96/f/RvXpGow2TyWU+GM8deXUy7W/19EXjrkjTO+eEA=; b=UO7JyB6XCz2P2OFzPTexL0JCVCvJ4sqMFHcC4M3FAPRgnuFnUqTqcfMms2TOncWkw7 5FWLCZw30ZA4XPblpd19VWy5w8CoOW9ZvgyqdGE0rWjGEdL6T6UtrdApL10F0Y/wO/wO 28XB9iNZ40eSys75e9hBxScgjTZPGCeYsCr3ws9xzSjLABo/PdhgNC/eBL9LwvRMuoyd Ndg16ZwvGYyCETDq6oHJzqGAXV38iLndRytZTNAn+VKX2hSQHDYNnHeBBLZxRApVbO99 4Grx1OtRBmZuLipbd6lWrRRUWs9/ZiHcbl7X0cSW1Xq7J25G+R+60rJZfMng6bt7nqL4 xi4w== X-Gm-Message-State: 
AC+VfDzwCklrDeGWGC+CeM3qmNL4YPz5XMAu4VYwBGGw3yHAJ62Sf3F0 wbN/FYsavQOjK4hmOJQoKW0T8g== X-Google-Smtp-Source: ACHHUZ7Hy2PQCfttWP5fxXbGdwJlBeaT0jUQ7qvk/Xug6qDjmUR8qVtwmPWQt57TWr7Mhu1AOdnKmA== X-Received: by 2002:a05:600c:2204:b0:3fa:8db4:91ec with SMTP id z4-20020a05600c220400b003fa8db491ecmr6771751wml.10.1687945718336; Wed, 28 Jun 2023 02:48:38 -0700 (PDT) Received: from [192.168.1.193] (f.c.7.0.0.0.0.0.0.0.0.0.0.0.0.0.f.f.6.2.a.5.a.7.0.b.8.0.1.0.0.2.ip6.arpa. [2001:8b0:7a5a:26ff::7cf]) by smtp.gmail.com with ESMTPSA id k26-20020a7bc41a000000b003fbb1a9586esm1187613wmi.15.2023.06.28.02.48.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 28 Jun 2023 02:48:38 -0700 (PDT) From: Lorenz Bauer Date: Wed, 28 Jun 2023 10:48:17 +0100 Subject: [PATCH bpf-next v4 2/7] net: export inet_lookup_reuseport and inet6_lookup_reuseport MIME-Version: 1.0 Message-Id: <20230613-so-reuseport-v4-2-4ece76708bba@isovalent.com> References: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> In-Reply-To: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , David Ahern , Willem de Bruijn , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Joe Stringer , Mykola Lysenko , Shuah Khan , Kuniyuki Iwashima Cc: Hemanth Malla , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, Lorenz Bauer X-Mailer: b4 0.12.2 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Rename the existing reuseport helpers for IPv4 and IPv6 so that they can be invoked in the follow up commit. Export them so that building DCCP and IPv6 as a module works. No change in functionality. 
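To illustrate what the export enables (this is not part of the patch), a modular caller could wrap the helper roughly as follows. The wrapper name and its parameters are hypothetical; only inet_lookup_reuseport() itself exists in the tree, with the signature declared below, and inet6_lookup_reuseport() is used the same way for IPv6.

/* Hypothetical modular caller of the newly exported helper, for
 * illustration only. inet_lookup_reuseport() runs the SO_REUSEPORT
 * group selection (socket hash or SK_REUSEPORT program) for the
 * candidate socket.
 */
#include <net/inet_hashtables.h>

static struct sock *example_pick_reuseport_sk(struct net *net,
					      struct sock *candidate,
					      struct sk_buff *skb, int doff,
					      __be32 saddr, __be16 sport,
					      __be32 daddr, unsigned short hnum)
{
	struct sock *sk;

	/* NULL means candidate is not in a reuseport group; otherwise the
	 * selected socket (or an error pointer from an SK_REUSEPORT
	 * program) is returned.
	 */
	sk = inet_lookup_reuseport(net, candidate, skb, doff,
				   saddr, sport, daddr, hnum);
	return sk ? : candidate;
}
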
Signed-off-by: Lorenz Bauer Reviewed-by: Kuniyuki Iwashima --- include/net/inet6_hashtables.h | 7 +++++++ include/net/inet_hashtables.h | 5 +++++ net/ipv4/inet_hashtables.c | 15 ++++++++------- net/ipv6/inet6_hashtables.c | 19 ++++++++++--------- 4 files changed, 30 insertions(+), 16 deletions(-) diff --git a/include/net/inet6_hashtables.h b/include/net/inet6_hashtables.h index 56f1286583d3..032ddab48f8f 100644 --- a/include/net/inet6_hashtables.h +++ b/include/net/inet6_hashtables.h @@ -48,6 +48,13 @@ struct sock *__inet6_lookup_established(struct net *net, const u16 hnum, const int dif, const int sdif); +struct sock *inet6_lookup_reuseport(struct net *net, struct sock *sk, + struct sk_buff *skb, int doff, + const struct in6_addr *saddr, + __be16 sport, + const struct in6_addr *daddr, + unsigned short hnum); + struct sock *inet6_lookup_listener(struct net *net, struct inet_hashinfo *hashinfo, struct sk_buff *skb, int doff, diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h index 99bd823e97f6..8734f3488f5d 100644 --- a/include/net/inet_hashtables.h +++ b/include/net/inet_hashtables.h @@ -379,6 +379,11 @@ struct sock *__inet_lookup_established(struct net *net, const __be32 daddr, const u16 hnum, const int dif, const int sdif); +struct sock *inet_lookup_reuseport(struct net *net, struct sock *sk, + struct sk_buff *skb, int doff, + __be32 saddr, __be16 sport, + __be32 daddr, unsigned short hnum); + static inline struct sock * inet_lookup_established(struct net *net, struct inet_hashinfo *hashinfo, const __be32 saddr, const __be16 sport, diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index e7391bf310a7..920131e4a65d 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -332,10 +332,10 @@ static inline int compute_score(struct sock *sk, struct net *net, return score; } -static inline struct sock *lookup_reuseport(struct net *net, struct sock *sk, - struct sk_buff *skb, int doff, - __be32 saddr, __be16 sport, - __be32 daddr, unsigned short hnum) +struct sock *inet_lookup_reuseport(struct net *net, struct sock *sk, + struct sk_buff *skb, int doff, + __be32 saddr, __be16 sport, + __be32 daddr, unsigned short hnum) { struct sock *reuse_sk = NULL; u32 phash; @@ -346,6 +346,7 @@ static inline struct sock *lookup_reuseport(struct net *net, struct sock *sk, } return reuse_sk; } +EXPORT_SYMBOL_GPL(inet_lookup_reuseport); /* * Here are some nice properties to exploit here. 
The BSD API @@ -369,8 +370,8 @@ static struct sock *inet_lhash2_lookup(struct net *net, sk_nulls_for_each_rcu(sk, node, &ilb2->nulls_head) { score = compute_score(sk, net, hnum, daddr, dif, sdif); if (score > hiscore) { - result = lookup_reuseport(net, sk, skb, doff, - saddr, sport, daddr, hnum); + result = inet_lookup_reuseport(net, sk, skb, doff, + saddr, sport, daddr, hnum); if (result) return result; @@ -399,7 +400,7 @@ static inline struct sock *inet_lookup_run_bpf(struct net *net, if (no_reuseport || IS_ERR_OR_NULL(sk)) return sk; - reuse_sk = lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum); + reuse_sk = inet_lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum); if (reuse_sk) sk = reuse_sk; return sk; diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c index b64b49012655..b7c56867314e 100644 --- a/net/ipv6/inet6_hashtables.c +++ b/net/ipv6/inet6_hashtables.c @@ -111,12 +111,12 @@ static inline int compute_score(struct sock *sk, struct net *net, return score; } -static inline struct sock *lookup_reuseport(struct net *net, struct sock *sk, - struct sk_buff *skb, int doff, - const struct in6_addr *saddr, - __be16 sport, - const struct in6_addr *daddr, - unsigned short hnum) +struct sock *inet6_lookup_reuseport(struct net *net, struct sock *sk, + struct sk_buff *skb, int doff, + const struct in6_addr *saddr, + __be16 sport, + const struct in6_addr *daddr, + unsigned short hnum) { struct sock *reuse_sk = NULL; u32 phash; @@ -127,6 +127,7 @@ static inline struct sock *lookup_reuseport(struct net *net, struct sock *sk, } return reuse_sk; } +EXPORT_SYMBOL_GPL(inet6_lookup_reuseport); /* called with rcu_read_lock() */ static struct sock *inet6_lhash2_lookup(struct net *net, @@ -143,8 +144,8 @@ static struct sock *inet6_lhash2_lookup(struct net *net, sk_nulls_for_each_rcu(sk, node, &ilb2->nulls_head) { score = compute_score(sk, net, hnum, daddr, dif, sdif); if (score > hiscore) { - result = lookup_reuseport(net, sk, skb, doff, - saddr, sport, daddr, hnum); + result = inet6_lookup_reuseport(net, sk, skb, doff, + saddr, sport, daddr, hnum); if (result) return result; @@ -175,7 +176,7 @@ static inline struct sock *inet6_lookup_run_bpf(struct net *net, if (no_reuseport || IS_ERR_OR_NULL(sk)) return sk; - reuse_sk = lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum); + reuse_sk = inet6_lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum); if (reuse_sk) sk = reuse_sk; return sk; From patchwork Wed Jun 28 09:48:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenz Bauer X-Patchwork-Id: 13295490 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F2FD2EB64DA for ; Wed, 28 Jun 2023 10:05:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230090AbjF1KFu (ORCPT ); Wed, 28 Jun 2023 06:05:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52480 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231640AbjF1JwX (ORCPT ); Wed, 28 Jun 2023 05:52:23 -0400 Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com [IPv6:2a00:1450:4864:20::32e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F106A2969 for ; Wed, 28 Jun 2023 02:48:40 -0700 (PDT) Received: by mail-wm1-x32e.google.com with 
SMTP id 5b1f17b1804b1-3fb4146e8fcso7077065e9.0 for ; Wed, 28 Jun 2023 02:48:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=isovalent.com; s=google; t=1687945719; x=1690537719; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=k/PWucZQ213a5PDLOS2NZqrmnh7Xejco/gIkhplqP/Y=; b=DCqEsdv14XBjYf/yMPZStSV7KwrUn87VZr3bMCCcqH8f2OcqC4Wf0WgdFAdEq6/YLU LvpDowPWSzImPSg8fcKoAqCrg5EIUuasbv+rTYA0DRpFM9iE/CVXSMSgV4JH5wq1Hq/A q8Z0pyfI5QpA9p+HSvW+4whfwL/uJY/DwszIKaGXuckwsFMkUqxe1FgO6dTv1/JcDkmt ttOzfZA3bouH4O72BrXi8kq16c/tzeTd1p58IPDlik/5BcEHOJVgk40bBAtJKwEialsB QWVx4Z36ENxFx24/JWTVq6CR3TT7LxNl/3wQdSBW6uYCM9VVAM41fNonAJPc1sphVcRB 5IxA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687945719; x=1690537719; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=k/PWucZQ213a5PDLOS2NZqrmnh7Xejco/gIkhplqP/Y=; b=K1WWdodOz8cbICyTWkJtfmJNIVV1f+aKEsiyoYr3Rpc+gzQbz1VCRGr4hdgs26QoUT P4e0LakfR+DaiSey4vy8StRmdNs5py0HDYbigBFaH5tULwY6WIMlW5a4lr2w8N3mcnH2 y7dTb3ZQMf/zvR65pYmK1ZwaVK2FdEwuvV0AuuYe3KCpQII4VVyUpmubUARwqOZCuNF4 V+HDPIfAKc0yjG7aufXPMQyC2t6RRj6S6kDNnONrLycFlJ4BFlY/zabiTcYRFpiRN6Qy RBkR9P8g6jzYhgJhMfKFM9LFEKyHDnpQXxQFPIaSyyAm72TunxVypPQ5qenIgOTAv+QD UdTA== X-Gm-Message-State: AC+VfDzru/ARhfPA7IHR35ivBIZ233uPvqW9XvjoPbkgo2FTJle5X76M 5PXNOQfFlMngshLUQKlF1gBxUg== X-Google-Smtp-Source: ACHHUZ47+NM/gCQZXDoGOr3lZR3vWhuByXJVOZrZ4LwZGlxuxK9sthd+S72QqSYJQO6K70N+2Ra46w== X-Received: by 2002:a05:600c:28c:b0:3fb:2d92:ecf1 with SMTP id 12-20020a05600c028c00b003fb2d92ecf1mr979693wmk.7.1687945719370; Wed, 28 Jun 2023 02:48:39 -0700 (PDT) Received: from [192.168.1.193] (f.c.7.0.0.0.0.0.0.0.0.0.0.0.0.0.f.f.6.2.a.5.a.7.0.b.8.0.1.0.0.2.ip6.arpa. [2001:8b0:7a5a:26ff::7cf]) by smtp.gmail.com with ESMTPSA id k26-20020a7bc41a000000b003fbb1a9586esm1187613wmi.15.2023.06.28.02.48.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 28 Jun 2023 02:48:39 -0700 (PDT) From: Lorenz Bauer Date: Wed, 28 Jun 2023 10:48:18 +0100 Subject: [PATCH bpf-next v4 3/7] net: remove duplicate reuseport_lookup functions MIME-Version: 1.0 Message-Id: <20230613-so-reuseport-v4-3-4ece76708bba@isovalent.com> References: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> In-Reply-To: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , David Ahern , Willem de Bruijn , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Joe Stringer , Mykola Lysenko , Shuah Khan , Kuniyuki Iwashima Cc: Hemanth Malla , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, Lorenz Bauer X-Mailer: b4 0.12.2 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org There are currently four copies of reuseport_lookup: one each for (TCP, UDP)x(IPv4, IPv6). This forces us to duplicate all callers of those functions as well. This is already the case for sk_lookup helpers (inet,inet6,udp4,udp6)_lookup_run_bpf. There are two differences between the reuseport_lookup helpers: 1. They call different hash functions depending on protocol 2. 
UDP reuseport_lookup checks that sk_state != TCP_ESTABLISHED Move the check for sk_state into the caller and use the INDIRECT_CALL infrastructure to cut down the helpers to one per IP version. Signed-off-by: Lorenz Bauer Reviewed-by: Kuniyuki Iwashima --- include/net/inet6_hashtables.h | 11 ++++++++++- include/net/inet_hashtables.h | 15 ++++++++++----- net/ipv4/inet_hashtables.c | 20 +++++++++++++------- net/ipv4/udp.c | 34 +++++++++++++--------------------- net/ipv6/inet6_hashtables.c | 14 ++++++++++---- net/ipv6/udp.c | 41 ++++++++++++++++------------------------- 6 files changed, 72 insertions(+), 63 deletions(-) diff --git a/include/net/inet6_hashtables.h b/include/net/inet6_hashtables.h index 032ddab48f8f..f89320b6fee3 100644 --- a/include/net/inet6_hashtables.h +++ b/include/net/inet6_hashtables.h @@ -48,12 +48,21 @@ struct sock *__inet6_lookup_established(struct net *net, const u16 hnum, const int dif, const int sdif); +typedef u32 (inet6_ehashfn_t)(const struct net *net, + const struct in6_addr *laddr, const u16 lport, + const struct in6_addr *faddr, const __be16 fport); + +inet6_ehashfn_t inet6_ehashfn; + +INDIRECT_CALLABLE_DECLARE(inet6_ehashfn_t udp6_ehashfn); + struct sock *inet6_lookup_reuseport(struct net *net, struct sock *sk, struct sk_buff *skb, int doff, const struct in6_addr *saddr, __be16 sport, const struct in6_addr *daddr, - unsigned short hnum); + unsigned short hnum, + inet6_ehashfn_t *ehashfn); struct sock *inet6_lookup_listener(struct net *net, struct inet_hashinfo *hashinfo, diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h index 8734f3488f5d..ddfa2e67fdb5 100644 --- a/include/net/inet_hashtables.h +++ b/include/net/inet_hashtables.h @@ -379,10 +379,19 @@ struct sock *__inet_lookup_established(struct net *net, const __be32 daddr, const u16 hnum, const int dif, const int sdif); +typedef u32 (inet_ehashfn_t)(const struct net *net, + const __be32 laddr, const __u16 lport, + const __be32 faddr, const __be16 fport); + +inet_ehashfn_t inet_ehashfn; + +INDIRECT_CALLABLE_DECLARE(inet_ehashfn_t udp_ehashfn); + struct sock *inet_lookup_reuseport(struct net *net, struct sock *sk, struct sk_buff *skb, int doff, __be32 saddr, __be16 sport, - __be32 daddr, unsigned short hnum); + __be32 daddr, unsigned short hnum, + inet_ehashfn_t *ehashfn); static inline struct sock * inet_lookup_established(struct net *net, struct inet_hashinfo *hashinfo, @@ -453,10 +462,6 @@ static inline struct sock *__inet_lookup_skb(struct inet_hashinfo *hashinfo, refcounted); } -u32 inet6_ehashfn(const struct net *net, - const struct in6_addr *laddr, const u16 lport, - const struct in6_addr *faddr, const __be16 fport); - static inline void sk_daddr_set(struct sock *sk, __be32 addr) { sk->sk_daddr = addr; /* alias of inet_daddr */ diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index 920131e4a65d..352eb371c93b 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -28,9 +28,9 @@ #include #include -static u32 inet_ehashfn(const struct net *net, const __be32 laddr, - const __u16 lport, const __be32 faddr, - const __be16 fport) +u32 inet_ehashfn(const struct net *net, const __be32 laddr, + const __u16 lport, const __be32 faddr, + const __be16 fport) { static u32 inet_ehash_secret __read_mostly; @@ -39,6 +39,7 @@ static u32 inet_ehashfn(const struct net *net, const __be32 laddr, return __inet_ehashfn(laddr, lport, faddr, fport, inet_ehash_secret + net_hash_mix(net)); } +EXPORT_SYMBOL_GPL(inet_ehashfn); /* This function handles inet_sock, but 
also timewait and request sockets * for IPv4/IPv6. @@ -332,16 +333,20 @@ static inline int compute_score(struct sock *sk, struct net *net, return score; } +INDIRECT_CALLABLE_DECLARE(inet_ehashfn_t udp_ehashfn); + struct sock *inet_lookup_reuseport(struct net *net, struct sock *sk, struct sk_buff *skb, int doff, __be32 saddr, __be16 sport, - __be32 daddr, unsigned short hnum) + __be32 daddr, unsigned short hnum, + inet_ehashfn_t *ehashfn) { struct sock *reuse_sk = NULL; u32 phash; if (sk->sk_reuseport) { - phash = inet_ehashfn(net, daddr, hnum, saddr, sport); + phash = INDIRECT_CALL_2(ehashfn, udp_ehashfn, inet_ehashfn, + net, daddr, hnum, saddr, sport); reuse_sk = reuseport_select_sock(sk, phash, skb, doff); } return reuse_sk; @@ -371,7 +376,7 @@ static struct sock *inet_lhash2_lookup(struct net *net, score = compute_score(sk, net, hnum, daddr, dif, sdif); if (score > hiscore) { result = inet_lookup_reuseport(net, sk, skb, doff, - saddr, sport, daddr, hnum); + saddr, sport, daddr, hnum, inet_ehashfn); if (result) return result; @@ -400,7 +405,8 @@ static inline struct sock *inet_lookup_run_bpf(struct net *net, if (no_reuseport || IS_ERR_OR_NULL(sk)) return sk; - reuse_sk = inet_lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum); + reuse_sk = inet_lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum, + inet_ehashfn); if (reuse_sk) sk = reuse_sk; return sk; diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index 5ef478d2c408..7258edece691 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -405,9 +405,9 @@ static int compute_score(struct sock *sk, struct net *net, return score; } -static u32 udp_ehashfn(const struct net *net, const __be32 laddr, - const __u16 lport, const __be32 faddr, - const __be16 fport) +INDIRECT_CALLABLE_SCOPE +u32 udp_ehashfn(const struct net *net, const __be32 laddr, const __u16 lport, + const __be32 faddr, const __be16 fport) { static u32 udp_ehash_secret __read_mostly; @@ -417,22 +417,6 @@ static u32 udp_ehashfn(const struct net *net, const __be32 laddr, udp_ehash_secret + net_hash_mix(net)); } -static struct sock *lookup_reuseport(struct net *net, struct sock *sk, - struct sk_buff *skb, - __be32 saddr, __be16 sport, - __be32 daddr, unsigned short hnum) -{ - struct sock *reuse_sk = NULL; - u32 hash; - - if (sk->sk_reuseport && sk->sk_state != TCP_ESTABLISHED) { - hash = udp_ehashfn(net, daddr, hnum, saddr, sport); - reuse_sk = reuseport_select_sock(sk, hash, skb, - sizeof(struct udphdr)); - } - return reuse_sk; -} - /* called with rcu_read_lock() */ static struct sock *udp4_lib_lookup2(struct net *net, __be32 saddr, __be16 sport, @@ -451,7 +435,14 @@ static struct sock *udp4_lib_lookup2(struct net *net, daddr, hnum, dif, sdif); if (score > badness) { badness = score; - result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum); + + if (sk->sk_state == TCP_ESTABLISHED) { + result = sk; + continue; + } + + result = inet_lookup_reuseport(net, sk, skb, sizeof(struct udphdr), + saddr, sport, daddr, hnum, udp_ehashfn); if (!result) { result = sk; continue; @@ -490,7 +481,8 @@ static struct sock *udp4_lookup_run_bpf(struct net *net, if (no_reuseport || IS_ERR_OR_NULL(sk)) return sk; - reuse_sk = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum); + reuse_sk = inet_lookup_reuseport(net, sk, skb, sizeof(struct udphdr), + saddr, sport, daddr, hnum, udp_ehashfn); if (reuse_sk) sk = reuse_sk; return sk; diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c index b7c56867314e..3616225c89ef 100644 --- a/net/ipv6/inet6_hashtables.c 
+++ b/net/ipv6/inet6_hashtables.c @@ -39,6 +39,7 @@ u32 inet6_ehashfn(const struct net *net, return __inet6_ehashfn(lhash, lport, fhash, fport, inet6_ehash_secret + net_hash_mix(net)); } +EXPORT_SYMBOL_GPL(inet6_ehashfn); /* * Sockets in TCP_CLOSE state are _always_ taken out of the hash, so @@ -111,18 +112,22 @@ static inline int compute_score(struct sock *sk, struct net *net, return score; } +INDIRECT_CALLABLE_DECLARE(inet6_ehashfn_t udp6_ehashfn); + struct sock *inet6_lookup_reuseport(struct net *net, struct sock *sk, struct sk_buff *skb, int doff, const struct in6_addr *saddr, __be16 sport, const struct in6_addr *daddr, - unsigned short hnum) + unsigned short hnum, + inet6_ehashfn_t *ehashfn) { struct sock *reuse_sk = NULL; u32 phash; if (sk->sk_reuseport) { - phash = inet6_ehashfn(net, daddr, hnum, saddr, sport); + phash = INDIRECT_CALL_INET(ehashfn, udp6_ehashfn, inet6_ehashfn, + net, daddr, hnum, saddr, sport); reuse_sk = reuseport_select_sock(sk, phash, skb, doff); } return reuse_sk; @@ -145,7 +150,7 @@ static struct sock *inet6_lhash2_lookup(struct net *net, score = compute_score(sk, net, hnum, daddr, dif, sdif); if (score > hiscore) { result = inet6_lookup_reuseport(net, sk, skb, doff, - saddr, sport, daddr, hnum); + saddr, sport, daddr, hnum, inet6_ehashfn); if (result) return result; @@ -176,7 +181,8 @@ static inline struct sock *inet6_lookup_run_bpf(struct net *net, if (no_reuseport || IS_ERR_OR_NULL(sk)) return sk; - reuse_sk = inet6_lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum); + reuse_sk = inet6_lookup_reuseport(net, sk, skb, doff, + saddr, sport, daddr, hnum, inet6_ehashfn); if (reuse_sk) sk = reuse_sk; return sk; diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index 8b3cb1d7da7c..ebac9200b15c 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -70,11 +70,12 @@ int udpv6_init_sock(struct sock *sk) return 0; } -static u32 udp6_ehashfn(const struct net *net, - const struct in6_addr *laddr, - const u16 lport, - const struct in6_addr *faddr, - const __be16 fport) +INDIRECT_CALLABLE_SCOPE +u32 udp6_ehashfn(const struct net *net, + const struct in6_addr *laddr, + const u16 lport, + const struct in6_addr *faddr, + const __be16 fport) { static u32 udp6_ehash_secret __read_mostly; static u32 udp_ipv6_hash_secret __read_mostly; @@ -159,24 +160,6 @@ static int compute_score(struct sock *sk, struct net *net, return score; } -static struct sock *lookup_reuseport(struct net *net, struct sock *sk, - struct sk_buff *skb, - const struct in6_addr *saddr, - __be16 sport, - const struct in6_addr *daddr, - unsigned int hnum) -{ - struct sock *reuse_sk = NULL; - u32 hash; - - if (sk->sk_reuseport && sk->sk_state != TCP_ESTABLISHED) { - hash = udp6_ehashfn(net, daddr, hnum, saddr, sport); - reuse_sk = reuseport_select_sock(sk, hash, skb, - sizeof(struct udphdr)); - } - return reuse_sk; -} - /* called with rcu_read_lock() */ static struct sock *udp6_lib_lookup2(struct net *net, const struct in6_addr *saddr, __be16 sport, @@ -194,7 +177,14 @@ static struct sock *udp6_lib_lookup2(struct net *net, daddr, hnum, dif, sdif); if (score > badness) { badness = score; - result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum); + + if (sk->sk_state == TCP_ESTABLISHED) { + result = sk; + continue; + } + + result = inet6_lookup_reuseport(net, sk, skb, sizeof(struct udphdr), + saddr, sport, daddr, hnum, udp6_ehashfn); if (!result) { result = sk; continue; @@ -234,7 +224,8 @@ static inline struct sock *udp6_lookup_run_bpf(struct net *net, if (no_reuseport || IS_ERR_OR_NULL(sk)) return 
sk; - reuse_sk = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum); + reuse_sk = inet6_lookup_reuseport(net, sk, skb, sizeof(struct udphdr), + saddr, sport, daddr, hnum, udp6_ehashfn); if (reuse_sk) sk = reuse_sk; return sk; From patchwork Wed Jun 28 09:48:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenz Bauer X-Patchwork-Id: 13295496 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8730FEB64D7 for ; Wed, 28 Jun 2023 10:06:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230418AbjF1KF6 (ORCPT ); Wed, 28 Jun 2023 06:05:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52992 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231643AbjF1JwY (ORCPT ); Wed, 28 Jun 2023 05:52:24 -0400 Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com [IPv6:2a00:1450:4864:20::334]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DBB93213F for ; Wed, 28 Jun 2023 02:48:41 -0700 (PDT) Received: by mail-wm1-x334.google.com with SMTP id 5b1f17b1804b1-3fa96fd7a01so38551705e9.1 for ; Wed, 28 Jun 2023 02:48:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=isovalent.com; s=google; t=1687945720; x=1690537720; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=EB6riD1WXxwAhPHQKptx8Hj/oVT2Y4njsIUAGTnV+oQ=; b=YHqeou5c6IyzYc35Am3dvA003R8/c/sUdngsOS4SuRzyMtyvrIFs3vUWsrjNA7ruj/ T82MvEwOoPDT9loqH9qK15lvk/TmrXnQWgqlvbb3fDEClVw7pjmRGlyiYje/T2Lnu1Kz S1wfk/F64MQYlBGhEuGr2DWde0/f7xvLB4Eqmk6BpwYDEwTJD1oLrOh9ocyP43Awg3Wz +0kMkOw6ydaLwfvGoLnhX+OetnVcNdROVXxCOt5nSY0VNUrLrp1va0AB6L4VAAXoZ1MB IOLZ/QbqPA2trW23k3HtyNLYCc6IF6EhH8ZHBLcFQAUEWmwu9adHEd7deaPRM7dcRd6p m6Qw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687945720; x=1690537720; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=EB6riD1WXxwAhPHQKptx8Hj/oVT2Y4njsIUAGTnV+oQ=; b=TKJ23SmD6X/KJFMqAdMK3CFE27lRkSyqLev89D1AsB9gtaY0G63cd1noZ6+RrEGPsi IieRJyianaL4R9amLMgMA7lOCgFkSVFar/jc5VGAOc/zdbA/3YkbuszaBbgGLkU6uOn1 xUn4Dkk7umqz1eik9kHR/yjkMJvKMhbdPRKrtT6fvHfpoWo0FKgLLOujmqDnCajvQX9P OhzYsLgbHOv/g93ftG5VTb8i8/v3HLh/7M/ngbfltOPY4urkdig7pzVIm+TuUBYOUvv/ sgENdDb7u5mW+Pu+0dNnEkra70g8wxMCIJsfZA+fW99JRFP/vExKoyObzGPHJQHWZP3J Fmnw== X-Gm-Message-State: AC+VfDyvjiTGxCkIV2U2YNKN7Cu5bEtOSpA+7F9o0e0SnEmi0QpafsQ/ KlVMpktvAhKQ2YiFv1F84Pq7bQ== X-Google-Smtp-Source: ACHHUZ4P5NKU1ykLHA5DLQTtECISZ00C+OSVji0anJvKktZD8iKpsd8voDo+TT3dppPu0iM51moOxA== X-Received: by 2002:a7b:cd16:0:b0:3fa:98f8:225f with SMTP id f22-20020a7bcd16000000b003fa98f8225fmr5768761wmj.26.1687945720442; Wed, 28 Jun 2023 02:48:40 -0700 (PDT) Received: from [192.168.1.193] (f.c.7.0.0.0.0.0.0.0.0.0.0.0.0.0.f.f.6.2.a.5.a.7.0.b.8.0.1.0.0.2.ip6.arpa. 
[2001:8b0:7a5a:26ff::7cf]) by smtp.gmail.com with ESMTPSA id k26-20020a7bc41a000000b003fbb1a9586esm1187613wmi.15.2023.06.28.02.48.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 28 Jun 2023 02:48:40 -0700 (PDT) From: Lorenz Bauer Date: Wed, 28 Jun 2023 10:48:19 +0100 Subject: [PATCH bpf-next v4 4/7] net: document inet[6]_lookup_reuseport sk_state requirements MIME-Version: 1.0 Message-Id: <20230613-so-reuseport-v4-4-4ece76708bba@isovalent.com> References: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> In-Reply-To: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , David Ahern , Willem de Bruijn , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Joe Stringer , Mykola Lysenko , Shuah Khan , Kuniyuki Iwashima Cc: Hemanth Malla , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, Lorenz Bauer X-Mailer: b4 0.12.2 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The current implementation was extracted from inet[6]_lhash2_lookup in commit 80b373f74f9e ("inet: Extract helper for selecting socket from reuseport group") and commit 5df6531292b5 ("inet6: Extract helper for selecting socket from reuseport group"). In the original context, sk is always in TCP_LISTEN state and so did not have a separate check. Add documentation that specifies which sk_state are valid to pass to the function. Signed-off-by: Lorenz Bauer Reviewed-by: Kuniyuki Iwashima --- net/ipv4/inet_hashtables.c | 14 ++++++++++++++ net/ipv6/inet6_hashtables.c | 14 ++++++++++++++ 2 files changed, 28 insertions(+) diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index 352eb371c93b..ac927a635a6f 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -335,6 +335,20 @@ static inline int compute_score(struct sock *sk, struct net *net, INDIRECT_CALLABLE_DECLARE(inet_ehashfn_t udp_ehashfn); +/** + * inet_lookup_reuseport() - execute reuseport logic on AF_INET socket if necessary. + * @net: network namespace. + * @sk: AF_INET socket, must be in TCP_LISTEN state for TCP or TCP_CLOSE for UDP. + * @skb: context for a potential SK_REUSEPORT program. + * @doff: header offset. + * @saddr: source address. + * @sport: source port. + * @daddr: destination address. + * @hnum: destination port in host byte order. + * + * Return: NULL if sk doesn't have SO_REUSEPORT set, otherwise a pointer to + * the selected sock or an error. + */ struct sock *inet_lookup_reuseport(struct net *net, struct sock *sk, struct sk_buff *skb, int doff, __be32 saddr, __be16 sport, diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c index 3616225c89ef..d37602fabc00 100644 --- a/net/ipv6/inet6_hashtables.c +++ b/net/ipv6/inet6_hashtables.c @@ -114,6 +114,20 @@ static inline int compute_score(struct sock *sk, struct net *net, INDIRECT_CALLABLE_DECLARE(inet6_ehashfn_t udp6_ehashfn); +/** + * inet6_lookup_reuseport() - execute reuseport logic on AF_INET6 socket if necessary. + * @net: network namespace. + * @sk: AF_INET6 socket, must be in TCP_LISTEN state for TCP or TCP_CLOSE for UDP. + * @skb: context for a potential SK_REUSEPORT program. + * @doff: header offset. + * @saddr: source address. + * @sport: source port. + * @daddr: destination address. + * @hnum: destination port in host byte order. 
+ * + * Return: NULL if sk doesn't have SO_REUSEPORT set, otherwise a pointer to + * the selected sock or an error. + */ struct sock *inet6_lookup_reuseport(struct net *net, struct sock *sk, struct sk_buff *skb, int doff, const struct in6_addr *saddr, From patchwork Wed Jun 28 09:48:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenz Bauer X-Patchwork-Id: 13295493 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2CEBEB64D7 for ; Wed, 28 Jun 2023 10:05:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230391AbjF1KFz (ORCPT ); Wed, 28 Jun 2023 06:05:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52484 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231659AbjF1JwY (ORCPT ); Wed, 28 Jun 2023 05:52:24 -0400 Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com [IPv6:2a00:1450:4864:20::32c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 18E4C2733 for ; Wed, 28 Jun 2023 02:48:43 -0700 (PDT) Received: by mail-wm1-x32c.google.com with SMTP id 5b1f17b1804b1-3f9bdb01ec0so65283865e9.2 for ; Wed, 28 Jun 2023 02:48:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=isovalent.com; s=google; t=1687945721; x=1690537721; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=QeZGdd4rUtgKqzwCJQVIgRg4hn1CZ7BnN059R+TDiIQ=; b=bM0q80Cc+Q8VGRiGWT7s8M9IVMbfczVrvr8FFm+rwTdTpoYNqKfe5ix0rHbmbcOO5Q gFj//15MTaeMQmboXsv9UOwlc/IsqP5cbRMK5MvpA7+KNTd648LFGTr/c/3LhWg+xH4L OK+uT7rB5XNhHmvckJ7tOb1zRvehrxIjT5HvPfTFXijBuANLzYXultB3OWPJAI/iqTnV gWUP2QNUI7LfbzKLPsGc/YD143cpbWxDMp/0Bx07KoI06JSSThsrC1DwXmWpWHvfWj+j L5LUll6jP4xJSvsIqMBlg5vPAtMvQhulXhCItS2GvNOIS7J2Am4TOht+N1OZ4AgD5APU Kwtw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687945721; x=1690537721; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=QeZGdd4rUtgKqzwCJQVIgRg4hn1CZ7BnN059R+TDiIQ=; b=R/NWEY9JLWLw5iaMBo0Mh/Vsnx9ycywSERx9mWptltY7jnJ0LfL5U/8g5gGksGGTRW 5gI0iFfoZnp0ybl/J7wbUATSgGII9KxpR+pGQSoZx3p0q6gxG4SeT3BISkF8LPY00QhL BexfJHsq2DbYnAd9so/J3TDmVgFuRGNstjS+OnfVoanf7iL40tAidyohUsH1ujc8lCVY z6fKQNMhhFuM0AvyxxrST/HlfJW4nGbhwaK/mQ3klGDa5/JpOEiT4xz4W4MgLdhvnfkB DiKDTRTNjQiiyA8mg32ne4NLze+CkIC0yVXDG9K4fYzJENmsUfEbLrKEcTJQyEefHnSx 8bVA== X-Gm-Message-State: AC+VfDx9pDsH1g30ctIVzEH7EiQgbVHnHDWYxW4wZMgw51FlITEHOEeA oQyuuMs4W2Xj59rzdnM4EHb+5g== X-Google-Smtp-Source: ACHHUZ5W9QvJ/GW3sCUSvq8r9xYv8m4L2ZLedBlE8qP8Kh3UPlV5HCEiRQbnVzamob2NwsITzxPe+Q== X-Received: by 2002:a05:600c:2212:b0:3fa:7b20:6193 with SMTP id z18-20020a05600c221200b003fa7b206193mr12747270wml.38.1687945721418; Wed, 28 Jun 2023 02:48:41 -0700 (PDT) Received: from [192.168.1.193] (f.c.7.0.0.0.0.0.0.0.0.0.0.0.0.0.f.f.6.2.a.5.a.7.0.b.8.0.1.0.0.2.ip6.arpa. 
[2001:8b0:7a5a:26ff::7cf]) by smtp.gmail.com with ESMTPSA id k26-20020a7bc41a000000b003fbb1a9586esm1187613wmi.15.2023.06.28.02.48.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 28 Jun 2023 02:48:41 -0700 (PDT) From: Lorenz Bauer Date: Wed, 28 Jun 2023 10:48:20 +0100 Subject: [PATCH bpf-next v4 5/7] net: remove duplicate sk_lookup helpers MIME-Version: 1.0 Message-Id: <20230613-so-reuseport-v4-5-4ece76708bba@isovalent.com> References: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> In-Reply-To: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , David Ahern , Willem de Bruijn , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Joe Stringer , Mykola Lysenko , Shuah Khan , Kuniyuki Iwashima Cc: Hemanth Malla , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, Lorenz Bauer X-Mailer: b4 0.12.2 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Now that inet[6]_lookup_reuseport are parameterised on the ehashfn we can remove two sk_lookup helpers. Reviewed-by: Kuniyuki Iwashima Signed-off-by: Lorenz Bauer --- include/net/inet6_hashtables.h | 9 +++++++++ include/net/inet_hashtables.h | 7 +++++++ net/ipv4/inet_hashtables.c | 26 +++++++++++++------------- net/ipv4/udp.c | 32 +++++--------------------------- net/ipv6/inet6_hashtables.c | 31 ++++++++++++++++--------------- net/ipv6/udp.c | 34 +++++----------------------------- 6 files changed, 55 insertions(+), 84 deletions(-) diff --git a/include/net/inet6_hashtables.h b/include/net/inet6_hashtables.h index f89320b6fee3..a6722d6ef80f 100644 --- a/include/net/inet6_hashtables.h +++ b/include/net/inet6_hashtables.h @@ -73,6 +73,15 @@ struct sock *inet6_lookup_listener(struct net *net, const unsigned short hnum, const int dif, const int sdif); +struct sock *inet6_lookup_run_sk_lookup(struct net *net, + int protocol, + struct sk_buff *skb, int doff, + const struct in6_addr *saddr, + const __be16 sport, + const struct in6_addr *daddr, + const u16 hnum, const int dif, + inet6_ehashfn_t *ehashfn); + static inline struct sock *__inet6_lookup(struct net *net, struct inet_hashinfo *hashinfo, struct sk_buff *skb, int doff, diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h index ddfa2e67fdb5..c0532cc7587f 100644 --- a/include/net/inet_hashtables.h +++ b/include/net/inet_hashtables.h @@ -393,6 +393,13 @@ struct sock *inet_lookup_reuseport(struct net *net, struct sock *sk, __be32 daddr, unsigned short hnum, inet_ehashfn_t *ehashfn); +struct sock *inet_lookup_run_sk_lookup(struct net *net, + int protocol, + struct sk_buff *skb, int doff, + __be32 saddr, __be16 sport, + __be32 daddr, u16 hnum, const int dif, + inet_ehashfn_t *ehashfn); + static inline struct sock * inet_lookup_established(struct net *net, struct inet_hashinfo *hashinfo, const __be32 saddr, const __be16 sport, diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index ac927a635a6f..a411ae560bdf 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -402,25 +402,23 @@ static struct sock *inet_lhash2_lookup(struct net *net, return result; } -static inline struct sock *inet_lookup_run_bpf(struct net *net, - struct inet_hashinfo *hashinfo, - struct sk_buff *skb, int doff, - __be32 saddr, __be16 sport, - __be32 daddr, u16 hnum, const int 
dif) +struct sock *inet_lookup_run_sk_lookup(struct net *net, + int protocol, + struct sk_buff *skb, int doff, + __be32 saddr, __be16 sport, + __be32 daddr, u16 hnum, const int dif, + inet_ehashfn_t *ehashfn) { struct sock *sk, *reuse_sk; bool no_reuseport; - if (hashinfo != net->ipv4.tcp_death_row.hashinfo) - return NULL; /* only TCP is supported */ - - no_reuseport = bpf_sk_lookup_run_v4(net, IPPROTO_TCP, saddr, sport, + no_reuseport = bpf_sk_lookup_run_v4(net, protocol, saddr, sport, daddr, hnum, dif, &sk); if (no_reuseport || IS_ERR_OR_NULL(sk)) return sk; reuse_sk = inet_lookup_reuseport(net, sk, skb, doff, saddr, sport, daddr, hnum, - inet_ehashfn); + ehashfn); if (reuse_sk) sk = reuse_sk; return sk; @@ -438,9 +436,11 @@ struct sock *__inet_lookup_listener(struct net *net, unsigned int hash2; /* Lookup redirect from BPF */ - if (static_branch_unlikely(&bpf_sk_lookup_enabled)) { - result = inet_lookup_run_bpf(net, hashinfo, skb, doff, - saddr, sport, daddr, hnum, dif); + if (static_branch_unlikely(&bpf_sk_lookup_enabled) && + hashinfo == net->ipv4.tcp_death_row.hashinfo) { + result = inet_lookup_run_sk_lookup(net, IPPROTO_TCP, skb, doff, + saddr, sport, daddr, hnum, dif, + inet_ehashfn); if (result) goto done; } diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index 7258edece691..eb79268f216d 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -464,30 +464,6 @@ static struct sock *udp4_lib_lookup2(struct net *net, return result; } -static struct sock *udp4_lookup_run_bpf(struct net *net, - struct udp_table *udptable, - struct sk_buff *skb, - __be32 saddr, __be16 sport, - __be32 daddr, u16 hnum, const int dif) -{ - struct sock *sk, *reuse_sk; - bool no_reuseport; - - if (udptable != net->ipv4.udp_table) - return NULL; /* only UDP is supported */ - - no_reuseport = bpf_sk_lookup_run_v4(net, IPPROTO_UDP, saddr, sport, - daddr, hnum, dif, &sk); - if (no_reuseport || IS_ERR_OR_NULL(sk)) - return sk; - - reuse_sk = inet_lookup_reuseport(net, sk, skb, sizeof(struct udphdr), - saddr, sport, daddr, hnum, udp_ehashfn); - if (reuse_sk) - sk = reuse_sk; - return sk; -} - /* UDP is nearly always wildcards out the wazoo, it makes no sense to try * harder than this. 
-DaveM */ @@ -512,9 +488,11 @@ struct sock *__udp4_lib_lookup(struct net *net, __be32 saddr, goto done; /* Lookup redirect from BPF */ - if (static_branch_unlikely(&bpf_sk_lookup_enabled)) { - sk = udp4_lookup_run_bpf(net, udptable, skb, - saddr, sport, daddr, hnum, dif); + if (static_branch_unlikely(&bpf_sk_lookup_enabled) && + udptable == net->ipv4.udp_table) { + sk = inet_lookup_run_sk_lookup(net, IPPROTO_UDP, skb, sizeof(struct udphdr), + saddr, sport, daddr, hnum, dif, + udp_ehashfn); if (sk) { result = sk; goto done; diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c index d37602fabc00..f9653675f577 100644 --- a/net/ipv6/inet6_hashtables.c +++ b/net/ipv6/inet6_hashtables.c @@ -176,31 +176,30 @@ static struct sock *inet6_lhash2_lookup(struct net *net, return result; } -static inline struct sock *inet6_lookup_run_bpf(struct net *net, - struct inet_hashinfo *hashinfo, - struct sk_buff *skb, int doff, - const struct in6_addr *saddr, - const __be16 sport, - const struct in6_addr *daddr, - const u16 hnum, const int dif) +struct sock *inet6_lookup_run_sk_lookup(struct net *net, + int protocol, + struct sk_buff *skb, int doff, + const struct in6_addr *saddr, + const __be16 sport, + const struct in6_addr *daddr, + const u16 hnum, const int dif, + inet6_ehashfn_t *ehashfn) { struct sock *sk, *reuse_sk; bool no_reuseport; - if (hashinfo != net->ipv4.tcp_death_row.hashinfo) - return NULL; /* only TCP is supported */ - - no_reuseport = bpf_sk_lookup_run_v6(net, IPPROTO_TCP, saddr, sport, + no_reuseport = bpf_sk_lookup_run_v6(net, protocol, saddr, sport, daddr, hnum, dif, &sk); if (no_reuseport || IS_ERR_OR_NULL(sk)) return sk; reuse_sk = inet6_lookup_reuseport(net, sk, skb, doff, - saddr, sport, daddr, hnum, inet6_ehashfn); + saddr, sport, daddr, hnum, ehashfn); if (reuse_sk) sk = reuse_sk; return sk; } +EXPORT_SYMBOL_GPL(inet6_lookup_run_sk_lookup); struct sock *inet6_lookup_listener(struct net *net, struct inet_hashinfo *hashinfo, @@ -214,9 +213,11 @@ struct sock *inet6_lookup_listener(struct net *net, unsigned int hash2; /* Lookup redirect from BPF */ - if (static_branch_unlikely(&bpf_sk_lookup_enabled)) { - result = inet6_lookup_run_bpf(net, hashinfo, skb, doff, - saddr, sport, daddr, hnum, dif); + if (static_branch_unlikely(&bpf_sk_lookup_enabled) && + hashinfo == net->ipv4.tcp_death_row.hashinfo) { + result = inet6_lookup_run_sk_lookup(net, IPPROTO_TCP, skb, doff, + saddr, sport, daddr, hnum, dif, + inet6_ehashfn); if (result) goto done; } diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index ebac9200b15c..8a6d94cabee0 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -205,32 +205,6 @@ static struct sock *udp6_lib_lookup2(struct net *net, return result; } -static inline struct sock *udp6_lookup_run_bpf(struct net *net, - struct udp_table *udptable, - struct sk_buff *skb, - const struct in6_addr *saddr, - __be16 sport, - const struct in6_addr *daddr, - u16 hnum, const int dif) -{ - struct sock *sk, *reuse_sk; - bool no_reuseport; - - if (udptable != net->ipv4.udp_table) - return NULL; /* only UDP is supported */ - - no_reuseport = bpf_sk_lookup_run_v6(net, IPPROTO_UDP, saddr, sport, - daddr, hnum, dif, &sk); - if (no_reuseport || IS_ERR_OR_NULL(sk)) - return sk; - - reuse_sk = inet6_lookup_reuseport(net, sk, skb, sizeof(struct udphdr), - saddr, sport, daddr, hnum, udp6_ehashfn); - if (reuse_sk) - sk = reuse_sk; - return sk; -} - /* rcu_read_lock() must be held */ struct sock *__udp6_lib_lookup(struct net *net, const struct in6_addr *saddr, __be16 sport, @@ -255,9 +229,11 
@@ struct sock *__udp6_lib_lookup(struct net *net, goto done; /* Lookup redirect from BPF */ - if (static_branch_unlikely(&bpf_sk_lookup_enabled)) { - sk = udp6_lookup_run_bpf(net, udptable, skb, - saddr, sport, daddr, hnum, dif); + if (static_branch_unlikely(&bpf_sk_lookup_enabled) && + udptable == net->ipv4.udp_table) { + sk = inet6_lookup_run_sk_lookup(net, IPPROTO_UDP, skb, sizeof(struct udphdr), + saddr, sport, daddr, hnum, dif, + udp6_ehashfn); if (sk) { result = sk; goto done; From patchwork Wed Jun 28 09:48:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenz Bauer X-Patchwork-Id: 13295491 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 529CEEB64D7 for ; Wed, 28 Jun 2023 10:05:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230350AbjF1KFv (ORCPT ); Wed, 28 Jun 2023 06:05:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52998 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231709AbjF1JwY (ORCPT ); Wed, 28 Jun 2023 05:52:24 -0400 Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com [IPv6:2a00:1450:4864:20::336]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2372D2D69 for ; Wed, 28 Jun 2023 02:48:44 -0700 (PDT) Received: by mail-wm1-x336.google.com with SMTP id 5b1f17b1804b1-3fa8cd4a113so34847705e9.2 for ; Wed, 28 Jun 2023 02:48:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=isovalent.com; s=google; t=1687945722; x=1690537722; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=ACu9x9znTzzrG5OOVX/Mf35kw5S9FPJgqrcruayEvUA=; b=b5HkArT3+z8GOWUG+9nyuz5QF4kb+DHqy6lgEmHJDfpdV0RJ0FFnJ2gQDonbVWHz5F SjPXWCDQ8sbGkRr/jZwrohskYiqkEfkkPCA1PliTY+y21BESwRBSPkHPskZt9Y54gWlq TTgi48gtMj2JzXSluJmVfIrueHm8HIZfLv80UBFGjsXv7Q8ssM4PGthyeFCyw51e+NLH pY4G8FjuojzuH2RALY0YO2eCL9xscPOkULZLV3jZHQWKHWw4ALbn3h2xTeZzqE1iiRx3 7zjfYolauoEuWndfK+cK2yX+iUGg66Pn0Q93mZ4RRRNSFzXuT/QXytZlsq8h5tbrepYm vmIQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687945722; x=1690537722; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ACu9x9znTzzrG5OOVX/Mf35kw5S9FPJgqrcruayEvUA=; b=jF6STmTD2U7/sM91KoXIyH9YZgl62HIsU+1R9y/4kV6i2Vu6VwUgxstDK8KeSLGFso VdOrJ/NHgzffy8C+jMsDFV60mLMM+vgeis249sb25jm7PftNTAnDnXkWm5p6LpEyfYat SuRPFs/5PcVNyQeMh8D0RLTxi/DvlesGPeyiTcn13eGfOBl/Gw5eOx1CDl4G450P0lzb ihhD1oDr5J0Dc5uAZLoNkTbwH6cB/YhX0gPtbDxg2tIUqUgbhBxUjyUwYD37OmwCFzWh HIqetep6vQKIrMEcZ1XGWqCT7WuF1ltLw8LHAKqb9nNY/+9EU6YCVQ/9ZlAbW6pm6WeM gVdw== X-Gm-Message-State: AC+VfDyvMTf07H9Om4zyZ1XrmHimVHWWY57wxmU8e3iwYz5TsHFm3x4M lLN7uhPXchHe0naxoTz3XBA0dg== X-Google-Smtp-Source: ACHHUZ45a0QJUGsm91jVXed9BAr4NffD2osdU9jn0fZa1wpkiy+BilWG1MqBEVXSDXOmzH1bJlZVxA== X-Received: by 2002:a05:600c:3786:b0:3f9:c0a9:3e22 with SMTP id o6-20020a05600c378600b003f9c0a93e22mr14927552wmr.12.1687945722587; Wed, 28 Jun 2023 02:48:42 -0700 (PDT) Received: from [192.168.1.193] (f.c.7.0.0.0.0.0.0.0.0.0.0.0.0.0.f.f.6.2.a.5.a.7.0.b.8.0.1.0.0.2.ip6.arpa. 
[2001:8b0:7a5a:26ff::7cf]) by smtp.gmail.com with ESMTPSA id k26-20020a7bc41a000000b003fbb1a9586esm1187613wmi.15.2023.06.28.02.48.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 28 Jun 2023 02:48:42 -0700 (PDT) From: Lorenz Bauer Date: Wed, 28 Jun 2023 10:48:21 +0100 Subject: [PATCH bpf-next v4 6/7] bpf, net: Support SO_REUSEPORT sockets with bpf_sk_assign MIME-Version: 1.0 Message-Id: <20230613-so-reuseport-v4-6-4ece76708bba@isovalent.com> References: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> In-Reply-To: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , David Ahern , Willem de Bruijn , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Joe Stringer , Mykola Lysenko , Shuah Khan , Kuniyuki Iwashima Cc: Hemanth Malla , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, Lorenz Bauer , Joe Stringer X-Mailer: b4 0.12.2 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org

Currently the bpf_sk_assign helper in tc BPF context refuses SO_REUSEPORT sockets. This means we can't use the helper to steer traffic to Envoy, which configures SO_REUSEPORT on its sockets. In turn, we're blocked from removing TPROXY from our setup. The reason that bpf_sk_assign refuses such sockets is that the bpf_sk_lookup helpers don't execute SK_REUSEPORT programs. Instead, one of the reuseport sockets is selected by hash. This could cause dispatch to the "wrong" socket:

  sk = bpf_sk_lookup_tcp(...) // select SO_REUSEPORT by hash
  bpf_sk_assign(skb, sk)      // SK_REUSEPORT wasn't executed

Unfortunately, fixing this isn't as simple as invoking SK_REUSEPORT from the lookup helpers: in the tc context, L2 headers are at the start of the skb, while SK_REUSEPORT expects L3 headers. Instead, we execute the SK_REUSEPORT program when the assigned socket is pulled out of the skb, further up the stack. This creates some trickiness with regard to refcounting, as bpf_sk_assign will put both refcounted and RCU-freed sockets in skb->sk. Reuseport sockets are RCU-freed. We can infer that the assigned socket is RCU-freed if the reuseport lookup succeeds, but convincing yourself of this fact isn't straightforward. Therefore we defensively check refcounting on the sk_assign sock even though it's probably not required in practice. 
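For illustration, a minimal tc ingress program using the now-permitted combination might look like the sketch below. The address, port, section and function names are assumptions for the example, not part of the patch; any SK_REUSEPORT program attached to the group runs later, when the stack extracts the assigned socket.

// SPDX-License-Identifier: GPL-2.0
/* Minimal tc ingress sketch: look up a (possibly SO_REUSEPORT) UDP socket
 * and steer the skb to it with bpf_sk_assign(). Assumes a UDP listener on
 * 127.0.0.1:53 in the current netns.
 */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("tc")
int steer_to_reuseport(struct __sk_buff *skb)
{
	struct bpf_sock_tuple tuple = {
		.ipv4.daddr = bpf_htonl(0x7f000001), /* 127.0.0.1, illustrative */
		.ipv4.dport = bpf_htons(53),
	};
	struct bpf_sock *sk;
	long err;

	sk = bpf_sk_lookup_udp(skb, &tuple, sizeof(tuple.ipv4),
			       BPF_F_CURRENT_NETNS, 0);
	if (!sk)
		return TC_ACT_OK;

	/* Previously failed with -ESOCKTNOSUPPORT for reuseport sockets. */
	err = bpf_sk_assign(skb, sk, 0);
	bpf_sk_release(sk);

	return err ? TC_ACT_SHOT : TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
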
Fixes: 8e368dc72e86 ("bpf: Fix use of sk->sk_reuseport from sk_assign") Fixes: cf7fbe660f2d ("bpf: Add socket assign support") Co-developed-by: Daniel Borkmann Signed-off-by: Daniel Borkmann Signed-off-by: Lorenz Bauer Cc: Joe Stringer Link: https://lore.kernel.org/bpf/CACAyw98+qycmpQzKupquhkxbvWK4OFyDuuLMBNROnfWMZxUWeA@mail.gmail.com/ Reviewed-by: Kuniyuki Iwashima --- include/net/inet6_hashtables.h | 56 ++++++++++++++++++++++++++++++++++++++---- include/net/inet_hashtables.h | 49 ++++++++++++++++++++++++++++++++++-- include/net/sock.h | 7 ++++-- include/uapi/linux/bpf.h | 3 --- net/core/filter.c | 2 -- net/ipv4/udp.c | 8 ++++-- net/ipv6/udp.c | 10 +++++--- tools/include/uapi/linux/bpf.h | 3 --- 8 files changed, 116 insertions(+), 22 deletions(-) diff --git a/include/net/inet6_hashtables.h b/include/net/inet6_hashtables.h index a6722d6ef80f..7d677b89f269 100644 --- a/include/net/inet6_hashtables.h +++ b/include/net/inet6_hashtables.h @@ -103,6 +103,46 @@ static inline struct sock *__inet6_lookup(struct net *net, daddr, hnum, dif, sdif); } +static inline +struct sock *inet6_steal_sock(struct net *net, struct sk_buff *skb, int doff, + const struct in6_addr *saddr, const __be16 sport, + const struct in6_addr *daddr, const __be16 dport, + bool *refcounted, inet6_ehashfn_t *ehashfn) +{ + struct sock *sk, *reuse_sk; + bool prefetched; + + sk = skb_steal_sock(skb, refcounted, &prefetched); + if (!sk) + return NULL; + + if (!prefetched) + return sk; + + if (sk->sk_protocol == IPPROTO_TCP) { + if (sk->sk_state != TCP_LISTEN) + return sk; + } else if (sk->sk_protocol == IPPROTO_UDP) { + if (sk->sk_state != TCP_CLOSE) + return sk; + } else { + return sk; + } + + reuse_sk = inet6_lookup_reuseport(net, sk, skb, doff, + saddr, sport, daddr, ntohs(dport), + ehashfn); + if (!reuse_sk || reuse_sk == sk) + return sk; + + /* We've chosen a new reuseport sock which is never refcounted. This + * implies that sk also isn't refcounted. 
+ */ + WARN_ON_ONCE(*refcounted); + + return reuse_sk; +} + static inline struct sock *__inet6_lookup_skb(struct inet_hashinfo *hashinfo, struct sk_buff *skb, int doff, const __be16 sport, @@ -110,14 +150,20 @@ static inline struct sock *__inet6_lookup_skb(struct inet_hashinfo *hashinfo, int iif, int sdif, bool *refcounted) { - struct sock *sk = skb_steal_sock(skb, refcounted); - + struct net *net = dev_net(skb_dst(skb)->dev); + const struct ipv6hdr *ip6h = ipv6_hdr(skb); + struct sock *sk; + + sk = inet6_steal_sock(net, skb, doff, &ip6h->saddr, sport, &ip6h->daddr, dport, + refcounted, inet6_ehashfn); + if (IS_ERR(sk)) + return NULL; if (sk) return sk; - return __inet6_lookup(dev_net(skb_dst(skb)->dev), hashinfo, skb, - doff, &ipv6_hdr(skb)->saddr, sport, - &ipv6_hdr(skb)->daddr, ntohs(dport), + return __inet6_lookup(net, hashinfo, skb, + doff, &ip6h->saddr, sport, + &ip6h->daddr, ntohs(dport), iif, sdif, refcounted); } diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h index c0532cc7587f..c6ae0af12ce0 100644 --- a/include/net/inet_hashtables.h +++ b/include/net/inet_hashtables.h @@ -449,6 +449,46 @@ static inline struct sock *inet_lookup(struct net *net, return sk; } +static inline +struct sock *inet_steal_sock(struct net *net, struct sk_buff *skb, int doff, + const __be32 saddr, const __be16 sport, + const __be32 daddr, const __be16 dport, + bool *refcounted, inet_ehashfn_t *ehashfn) +{ + struct sock *sk, *reuse_sk; + bool prefetched; + + sk = skb_steal_sock(skb, refcounted, &prefetched); + if (!sk) + return NULL; + + if (!prefetched) + return sk; + + if (sk->sk_protocol == IPPROTO_TCP) { + if (sk->sk_state != TCP_LISTEN) + return sk; + } else if (sk->sk_protocol == IPPROTO_UDP) { + if (sk->sk_state != TCP_CLOSE) + return sk; + } else { + return sk; + } + + reuse_sk = inet_lookup_reuseport(net, sk, skb, doff, + saddr, sport, daddr, ntohs(dport), + ehashfn); + if (!reuse_sk || reuse_sk == sk) + return sk; + + /* We've chosen a new reuseport sock which is never refcounted. This + * implies that sk also isn't refcounted. 
+ */ + WARN_ON_ONCE(*refcounted); + + return reuse_sk; +} + static inline struct sock *__inet_lookup_skb(struct inet_hashinfo *hashinfo, struct sk_buff *skb, int doff, @@ -457,13 +497,18 @@ static inline struct sock *__inet_lookup_skb(struct inet_hashinfo *hashinfo, const int sdif, bool *refcounted) { - struct sock *sk = skb_steal_sock(skb, refcounted); + struct net *net = dev_net(skb_dst(skb)->dev); const struct iphdr *iph = ip_hdr(skb); + struct sock *sk; + sk = inet_steal_sock(net, skb, doff, iph->saddr, sport, iph->daddr, dport, + refcounted, inet_ehashfn); + if (IS_ERR(sk)) + return NULL; if (sk) return sk; - return __inet_lookup(dev_net(skb_dst(skb)->dev), hashinfo, skb, + return __inet_lookup(net, hashinfo, skb, doff, iph->saddr, sport, iph->daddr, dport, inet_iif(skb), sdif, refcounted); diff --git a/include/net/sock.h b/include/net/sock.h index 656ea89f60ff..5645570c2a64 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -2806,20 +2806,23 @@ sk_is_refcounted(struct sock *sk) * skb_steal_sock - steal a socket from an sk_buff * @skb: sk_buff to steal the socket from * @refcounted: is set to true if the socket is reference-counted + * @prefetched: is set to true if the socket was assigned from bpf */ static inline struct sock * -skb_steal_sock(struct sk_buff *skb, bool *refcounted) +skb_steal_sock(struct sk_buff *skb, bool *refcounted, bool *prefetched) { if (skb->sk) { struct sock *sk = skb->sk; *refcounted = true; - if (skb_sk_is_prefetched(skb)) + *prefetched = skb_sk_is_prefetched(skb); + if (*prefetched) *refcounted = sk_is_refcounted(sk); skb->destructor = NULL; skb->sk = NULL; return sk; } + *prefetched = false; *refcounted = false; return NULL; } diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index a7b5e91dd768..d6fb6f43b0f3 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -4158,9 +4158,6 @@ union bpf_attr { * **-EOPNOTSUPP** if the operation is not supported, for example * a call from outside of TC ingress. * - * **-ESOCKTNOSUPPORT** if the socket type is not supported - * (reuseport). - * * long bpf_sk_assign(struct bpf_sk_lookup *ctx, struct bpf_sock *sk, u64 flags) * Description * Helper is overloaded depending on BPF program type. 
This diff --git a/net/core/filter.c b/net/core/filter.c index 428df050d021..d4be0a1d754c 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -7278,8 +7278,6 @@ BPF_CALL_3(bpf_sk_assign, struct sk_buff *, skb, struct sock *, sk, u64, flags) return -EOPNOTSUPP; if (unlikely(dev_net(skb->dev) != sock_net(sk))) return -ENETUNREACH; - if (unlikely(sk_fullsock(sk) && sk->sk_reuseport)) - return -ESOCKTNOSUPPORT; if (sk_is_refcounted(sk) && unlikely(!refcount_inc_not_zero(&sk->sk_refcnt))) return -ENOENT; diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index eb79268f216d..b256f1f73b4d 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -2388,7 +2388,11 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable, if (udp4_csum_init(skb, uh, proto)) goto csum_error; - sk = skb_steal_sock(skb, &refcounted); + sk = inet_steal_sock(net, skb, sizeof(struct udphdr), saddr, uh->source, daddr, uh->dest, + &refcounted, udp_ehashfn); + if (IS_ERR(sk)) + goto no_sk; + if (sk) { struct dst_entry *dst = skb_dst(skb); int ret; @@ -2409,7 +2413,7 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable, sk = __udp4_lib_lookup_skb(skb, uh->source, uh->dest, udptable); if (sk) return udp_unicast_rcv_skb(sk, skb, uh); - +no_sk: if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) goto drop; nf_reset_ct(skb); diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index 8a6d94cabee0..2d4c05bc322a 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -923,9 +923,9 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable, enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED; const struct in6_addr *saddr, *daddr; struct net *net = dev_net(skb->dev); + bool refcounted; struct udphdr *uh; struct sock *sk; - bool refcounted; u32 ulen = 0; if (!pskb_may_pull(skb, sizeof(struct udphdr))) @@ -962,7 +962,11 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable, goto csum_error; /* Check if the socket is already available, e.g. due to early demux */ - sk = skb_steal_sock(skb, &refcounted); + sk = inet6_steal_sock(net, skb, sizeof(struct udphdr), saddr, uh->source, daddr, uh->dest, + &refcounted, udp6_ehashfn); + if (IS_ERR(sk)) + goto no_sk; + if (sk) { struct dst_entry *dst = skb_dst(skb); int ret; @@ -996,7 +1000,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable, goto report_csum_error; return udp6_unicast_rcv_skb(sk, skb, uh); } - +no_sk: reason = SKB_DROP_REASON_NO_SOCKET; if (!uh->check) diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index a7b5e91dd768..d6fb6f43b0f3 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -4158,9 +4158,6 @@ union bpf_attr { * **-EOPNOTSUPP** if the operation is not supported, for example * a call from outside of TC ingress. * - * **-ESOCKTNOSUPPORT** if the socket type is not supported - * (reuseport). - * * long bpf_sk_assign(struct bpf_sk_lookup *ctx, struct bpf_sock *sk, u64 flags) * Description * Helper is overloaded depending on BPF program type. 
This From patchwork Wed Jun 28 09:48:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenz Bauer X-Patchwork-Id: 13295501 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 48A8BEB64D7 for ; Wed, 28 Jun 2023 10:06:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231168AbjF1KGI (ORCPT ); Wed, 28 Jun 2023 06:06:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52490 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231759AbjF1JwZ (ORCPT ); Wed, 28 Jun 2023 05:52:25 -0400 Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com [IPv6:2a00:1450:4864:20::332]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0C7A22D4B for ; Wed, 28 Jun 2023 02:48:45 -0700 (PDT) Received: by mail-wm1-x332.google.com with SMTP id 5b1f17b1804b1-3fa96fd7a04so36669855e9.2 for ; Wed, 28 Jun 2023 02:48:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=isovalent.com; s=google; t=1687945723; x=1690537723; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=Rrd+oVQwBiLqnLnpd+kxgQqY6NXFWO5dlCZVFiDqXJs=; b=iEES/poOKrLptVoHsJpDsv8Cs372Sd8KK4kv1/mhPfihmm3c6SnuvKjrfXX3Qs6HUW nJe8o7gXaiMNsNpm2cOu6si7noXu4ZL7G+31M8WB0NJvnI1+DPgsaFgDx46JlmRIv5RS yvCMiQbE8vj50UBnjrBNGOcpEVnPKiH23DfYUDqsM7tOaHLmyJ+QqGxN3V/qwo6cbW9N Z+DvmI4T/aUu5kE8Z6GDkIMXnvbnA+y+n1/KZ+sFjSiDMcB6lTMdrxbNjQC5mHgaFfan c6BfxxNXvn09W/+4U2HqtQAhKjhREqbF/WtyrX+6l8/3KwoeqVWShw0z+IHLfEEGqF0M RHZw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687945723; x=1690537723; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Rrd+oVQwBiLqnLnpd+kxgQqY6NXFWO5dlCZVFiDqXJs=; b=K3YR2Hl7IuZx36QwHs7Kyxbm6O0efyVgIvjVvw+YvkrTfMpV53j6pESslu+2dyFA/9 Ex5TjhvjSUvjXd7OcvLhzQQ+HlmIVb+kgG5gPwsUWoOUlwxVg5fIv6GbD/Zw7l+erXBM LhvH4j+5dyY4mzyC2XRNSW9zurnz+A2evXF11IyP/Hq8SnPpBFP6ByCTmRLp21DJZ6Br ycHD/5KZQdhxCb9DgVPut5JFShR8NnNlcNUqmmaomE7/8U/ntQ5hvvAj1a92Hv0yk5Nf XPEPekOIwns/tM9TYh6Lth0VnwGOL0vXZJGvHII4lXKnz5DRzj0shYelqrrxqyj93xaB 844Q== X-Gm-Message-State: AC+VfDzpH/hvBND9fjKCkL6JyDqB9SEHFEkl+oUx9dlgLpqpP8ip9HOu l19JVhCSjbaC5czIPcTAKjhzTg== X-Google-Smtp-Source: ACHHUZ6F6DU+I1zZoqKy0uQQiThIvfyFKpnnNjVjpk/AceBeqT24Hi/rUNCb6SlZOrpPOyEB+U1VKg== X-Received: by 2002:a7b:ca4c:0:b0:3f9:b30f:a013 with SMTP id m12-20020a7bca4c000000b003f9b30fa013mr20668470wml.6.1687945723591; Wed, 28 Jun 2023 02:48:43 -0700 (PDT) Received: from [192.168.1.193] (f.c.7.0.0.0.0.0.0.0.0.0.0.0.0.0.f.f.6.2.a.5.a.7.0.b.8.0.1.0.0.2.ip6.arpa. 
[2001:8b0:7a5a:26ff::7cf]) by smtp.gmail.com with ESMTPSA id k26-20020a7bc41a000000b003fbb1a9586esm1187613wmi.15.2023.06.28.02.48.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 28 Jun 2023 02:48:43 -0700 (PDT) From: Lorenz Bauer Date: Wed, 28 Jun 2023 10:48:22 +0100 Subject: [PATCH bpf-next v4 7/7] selftests/bpf: Test that SO_REUSEPORT can be used with sk_assign helper MIME-Version: 1.0 Message-Id: <20230613-so-reuseport-v4-7-4ece76708bba@isovalent.com> References: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> In-Reply-To: <20230613-so-reuseport-v4-0-4ece76708bba@isovalent.com> To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , David Ahern , Willem de Bruijn , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Joe Stringer , Mykola Lysenko , Shuah Khan , Kuniyuki Iwashima Cc: Hemanth Malla , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, Lorenz Bauer , Joe Stringer X-Mailer: b4 0.12.2 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org From: Daniel Borkmann We use two programs to check that the new reuseport logic is executed appropriately. The first is a TC clsact program which bpf_sk_assigns the skb to a UDP or TCP socket created by user space. Since the test communicates via lo we see both directions of packets in the eBPF program. Traffic ingressing to the reuseport socket is identified by looking at the destination port. For TCP, we additionally need to make sure that we only assign the initial SYN packets towards our listening socket. The network stack then creates a request socket which transitions to ESTABLISHED after the 3WHS. The second is a reuseport program which shares with user space the fact that it has been executed. This tells us that the delayed lookup mechanism is working.
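Roughly speaking, the socket setup that the test drives through its start_reuseport_server() and attach_reuseport() helpers corresponds to the following user-space sketch; the helper name and port here are illustrative only:

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* Illustrative sketch; the selftest below does the equivalent via
   * start_reuseport_server() and attach_reuseport().
   */
  static int make_reuseport_listener(int reuseport_prog_fd, unsigned short port)
  {
  	struct sockaddr_in addr = {
  		.sin_family = AF_INET,
  		.sin_port = htons(port),
  		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
  	};
  	int one = 1, fd = socket(AF_INET, SOCK_STREAM, 0);

  	if (fd < 0)
  		return -1;
  	/* Join the reuseport group before bind(). */
  	setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
  	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) || listen(fd, 1)) {
  		close(fd);
  		return -1;
  	}
  	/* The attached SK_REUSEPORT program then decides which group member
  	 * receives traffic, including skbs steered here via bpf_sk_assign().
  	 */
  	setsockopt(fd, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF,
  		   &reuseport_prog_fd, sizeof(reuseport_prog_fd));
  	return fd;
  }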
Signed-off-by: Daniel Borkmann Co-developed-by: Lorenz Bauer Signed-off-by: Lorenz Bauer Cc: Joe Stringer --- tools/testing/selftests/bpf/network_helpers.c | 3 + .../selftests/bpf/prog_tests/assign_reuse.c | 197 +++++++++++++++++++++ .../selftests/bpf/progs/test_assign_reuse.c | 142 +++++++++++++++ 3 files changed, 342 insertions(+) diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c index a105c0cd008a..8a33bcea97de 100644 --- a/tools/testing/selftests/bpf/network_helpers.c +++ b/tools/testing/selftests/bpf/network_helpers.c @@ -423,6 +423,9 @@ struct nstoken *open_netns(const char *name) void close_netns(struct nstoken *token) { + if (!token) + return; + ASSERT_OK(setns(token->orig_netns_fd, CLONE_NEWNET), "setns"); close(token->orig_netns_fd); free(token); diff --git a/tools/testing/selftests/bpf/prog_tests/assign_reuse.c b/tools/testing/selftests/bpf/prog_tests/assign_reuse.c new file mode 100644 index 000000000000..622f123410f4 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/assign_reuse.c @@ -0,0 +1,197 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Isovalent */ +#include +#include + +#include +#include + +#include "network_helpers.h" +#include "test_assign_reuse.skel.h" + +#define NS_TEST "assign_reuse" +#define LOOPBACK 1 +#define PORT 4443 + +static int attach_reuseport(int sock_fd, int prog_fd) +{ + return setsockopt(sock_fd, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, + &prog_fd, sizeof(prog_fd)); +} + +static __u64 cookie(int fd) +{ + __u64 cookie = 0; + socklen_t cookie_len = sizeof(cookie); + int ret; + + ret = getsockopt(fd, SOL_SOCKET, SO_COOKIE, &cookie, &cookie_len); + ASSERT_OK(ret, "cookie"); + ASSERT_GT(cookie, 0, "cookie_invalid"); + + return cookie; +} + +static int echo_test_udp(int fd_sv) +{ + struct sockaddr_storage addr = {}; + socklen_t len = sizeof(addr); + char buff[1] = {}; + int fd_cl = -1, ret; + + fd_cl = connect_to_fd(fd_sv, 100); + ASSERT_GT(fd_cl, 0, "create_client"); + ASSERT_EQ(getsockname(fd_cl, (void *)&addr, &len), 0, "getsockname"); + + ASSERT_EQ(send(fd_cl, buff, sizeof(buff), 0), 1, "send_client"); + + ret = recv(fd_sv, buff, sizeof(buff), 0); + if (ret < 0) + return errno; + + ASSERT_EQ(ret, 1, "recv_server"); + ASSERT_EQ(sendto(fd_sv, buff, sizeof(buff), 0, (void *)&addr, len), 1, "send_server"); + ASSERT_EQ(recv(fd_cl, buff, sizeof(buff), 0), 1, "recv_client"); + close(fd_cl); + return 0; +} + +static int echo_test_tcp(int fd_sv) +{ + char buff[1] = {}; + int fd_cl = -1, fd_sv_cl = -1; + + fd_cl = connect_to_fd(fd_sv, 100); + if (fd_cl < 0) + return errno; + + fd_sv_cl = accept(fd_sv, NULL, NULL); + ASSERT_GE(fd_sv_cl, 0, "accept_fd"); + + ASSERT_EQ(send(fd_cl, buff, sizeof(buff), 0), 1, "send_client"); + ASSERT_EQ(recv(fd_sv_cl, buff, sizeof(buff), 0), 1, "recv_server"); + ASSERT_EQ(send(fd_sv_cl, buff, sizeof(buff), 0), 1, "send_server"); + ASSERT_EQ(recv(fd_cl, buff, sizeof(buff), 0), 1, "recv_client"); + close(fd_sv_cl); + close(fd_cl); + return 0; +} + +void run_assign_reuse(int family, int sotype, const char *ip, __u16 port) +{ + DECLARE_LIBBPF_OPTS(bpf_tc_hook, tc_hook, + .ifindex = LOOPBACK, + .attach_point = BPF_TC_INGRESS, + ); + DECLARE_LIBBPF_OPTS(bpf_tc_opts, tc_opts, + .handle = 1, + .priority = 1, + ); + bool hook_created = false, tc_attached = false; + int ret, fd_tc, fd_accept, fd_drop, fd_map; + int *fd_sv = NULL; + __u64 fd_val; + struct test_assign_reuse *skel; + const int zero = 0; + + skel = test_assign_reuse__open(); + if (!ASSERT_OK_PTR(skel, 
"skel_open")) + goto cleanup; + + skel->rodata->dest_port = port; + + ret = test_assign_reuse__load(skel); + if (!ASSERT_OK(ret, "skel_load")) + goto cleanup; + + ASSERT_EQ(skel->bss->sk_cookie_seen, 0, "cookie_init"); + + fd_tc = bpf_program__fd(skel->progs.tc_main); + fd_accept = bpf_program__fd(skel->progs.reuse_accept); + fd_drop = bpf_program__fd(skel->progs.reuse_drop); + fd_map = bpf_map__fd(skel->maps.sk_map); + + fd_sv = start_reuseport_server(family, sotype, ip, port, 100, 1); + if (!ASSERT_NEQ(fd_sv, NULL, "start_reuseport_server")) + goto cleanup; + + ret = attach_reuseport(*fd_sv, fd_drop); + if (!ASSERT_OK(ret, "attach_reuseport")) + goto cleanup; + + fd_val = *fd_sv; + ret = bpf_map_update_elem(fd_map, &zero, &fd_val, BPF_NOEXIST); + if (!ASSERT_OK(ret, "bpf_sk_map")) + goto cleanup; + + ret = bpf_tc_hook_create(&tc_hook); + if (ret == 0) + hook_created = true; + ret = ret == -EEXIST ? 0 : ret; + if (!ASSERT_OK(ret, "bpf_tc_hook_create")) + goto cleanup; + + tc_opts.prog_fd = fd_tc; + ret = bpf_tc_attach(&tc_hook, &tc_opts); + if (!ASSERT_OK(ret, "bpf_tc_attach")) + goto cleanup; + tc_attached = true; + + if (sotype == SOCK_STREAM) + ASSERT_EQ(echo_test_tcp(*fd_sv), ECONNREFUSED, "drop_tcp"); + else + ASSERT_EQ(echo_test_udp(*fd_sv), EAGAIN, "drop_udp"); + ASSERT_EQ(skel->bss->reuseport_executed, 1, "program executed once"); + + skel->bss->sk_cookie_seen = 0; + skel->bss->reuseport_executed = 0; + ASSERT_OK(attach_reuseport(*fd_sv, fd_accept), "attach_reuseport(accept)"); + + if (sotype == SOCK_STREAM) + ASSERT_EQ(echo_test_tcp(*fd_sv), 0, "echo_tcp"); + else + ASSERT_EQ(echo_test_udp(*fd_sv), 0, "echo_udp"); + + ASSERT_EQ(skel->bss->sk_cookie_seen, cookie(*fd_sv), + "cookie_mismatch"); + ASSERT_EQ(skel->bss->reuseport_executed, 1, "program executed once"); +cleanup: + if (tc_attached) { + tc_opts.flags = tc_opts.prog_fd = tc_opts.prog_id = 0; + ret = bpf_tc_detach(&tc_hook, &tc_opts); + ASSERT_OK(ret, "bpf_tc_detach"); + } + if (hook_created) { + tc_hook.attach_point = BPF_TC_INGRESS | BPF_TC_EGRESS; + bpf_tc_hook_destroy(&tc_hook); + } + test_assign_reuse__destroy(skel); + free_fds(fd_sv, 1); +} + +void test_assign_reuse(void) +{ + struct nstoken *tok = NULL; + + SYS(out, "ip netns add %s", NS_TEST); + SYS(cleanup, "ip -net %s link set dev lo up", NS_TEST); + + tok = open_netns(NS_TEST); + if (!ASSERT_OK_PTR(tok, "netns token")) + return; + + if (test__start_subtest("tcpv4")) + run_assign_reuse(AF_INET, SOCK_STREAM, "127.0.0.1", PORT); + if (test__start_subtest("tcpv6")) + run_assign_reuse(AF_INET6, SOCK_STREAM, "::1", PORT); + if (test__start_subtest("udpv4")) + run_assign_reuse(AF_INET, SOCK_DGRAM, "127.0.0.1", PORT); + if (test__start_subtest("udpv6")) + run_assign_reuse(AF_INET6, SOCK_DGRAM, "::1", PORT); + +cleanup: + close_netns(tok); + SYS_NOFAIL("ip netns delete %s", NS_TEST); +out: + return; +} diff --git a/tools/testing/selftests/bpf/progs/test_assign_reuse.c b/tools/testing/selftests/bpf/progs/test_assign_reuse.c new file mode 100644 index 000000000000..4f2e2321ea06 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_assign_reuse.c @@ -0,0 +1,142 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Isovalent */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +char LICENSE[] SEC("license") = "GPL"; + +__u64 sk_cookie_seen; +__u64 reuseport_executed; +union { + struct tcphdr tcp; + struct udphdr udp; +} headers; + +const volatile __u16 dest_port; + +struct { + __uint(type, 
BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 1); + __type(key, __u32); + __type(value, __u64); +} sk_map SEC(".maps"); + +SEC("sk_reuseport") +int reuse_accept(struct sk_reuseport_md *ctx) +{ + reuseport_executed++; + + if (ctx->ip_protocol == IPPROTO_TCP) { + if (ctx->data + sizeof(headers.tcp) > ctx->data_end) + return SK_DROP; + + if (__builtin_memcmp(&headers.tcp, ctx->data, sizeof(headers.tcp)) != 0) + return SK_DROP; + } else if (ctx->ip_protocol == IPPROTO_UDP) { + if (ctx->data + sizeof(headers.udp) > ctx->data_end) + return SK_DROP; + + if (__builtin_memcmp(&headers.udp, ctx->data, sizeof(headers.udp)) != 0) + return SK_DROP; + } else { + return SK_DROP; + } + + sk_cookie_seen = bpf_get_socket_cookie(ctx->sk); + return SK_PASS; +} + +SEC("sk_reuseport") +int reuse_drop(struct sk_reuseport_md *ctx) +{ + reuseport_executed++; + sk_cookie_seen = 0; + return SK_DROP; +} + +static int +assign_sk(struct __sk_buff *skb) +{ + int zero = 0, ret = 0; + struct bpf_sock *sk; + + sk = bpf_map_lookup_elem(&sk_map, &zero); + if (!sk) + return TC_ACT_SHOT; + ret = bpf_sk_assign(skb, sk, 0); + bpf_sk_release(sk); + return ret ? TC_ACT_SHOT : TC_ACT_OK; +} + +static bool +maybe_assign_tcp(struct __sk_buff *skb, struct tcphdr *th) +{ + if (th + 1 > (void *)(long)(skb->data_end)) + return TC_ACT_SHOT; + + if (!th->syn || th->ack || th->dest != bpf_htons(dest_port)) + return TC_ACT_OK; + + __builtin_memcpy(&headers.tcp, th, sizeof(headers.tcp)); + return assign_sk(skb); +} + +static bool +maybe_assign_udp(struct __sk_buff *skb, struct udphdr *uh) +{ + if (uh + 1 > (void *)(long)(skb->data_end)) + return TC_ACT_SHOT; + + if (uh->dest != bpf_htons(dest_port)) + return TC_ACT_OK; + + __builtin_memcpy(&headers.udp, uh, sizeof(headers.udp)); + return assign_sk(skb); +} + +SEC("tc") +int tc_main(struct __sk_buff *skb) +{ + void *data_end = (void *)(long)skb->data_end; + void *data = (void *)(long)skb->data; + struct ethhdr *eth; + + eth = (struct ethhdr *)(data); + if (eth + 1 > data_end) + return TC_ACT_SHOT; + + if (eth->h_proto == bpf_htons(ETH_P_IP)) { + struct iphdr *iph = (struct iphdr *)(data + sizeof(*eth)); + + if (iph + 1 > data_end) + return TC_ACT_SHOT; + + if (iph->protocol == IPPROTO_TCP) + return maybe_assign_tcp(skb, (struct tcphdr *)(iph + 1)); + else if (iph->protocol == IPPROTO_UDP) + return maybe_assign_udp(skb, (struct udphdr *)(iph + 1)); + else + return TC_ACT_SHOT; + } else { + struct ipv6hdr *ip6h = (struct ipv6hdr *)(data + sizeof(*eth)); + + if (ip6h + 1 > data_end) + return TC_ACT_SHOT; + + if (ip6h->nexthdr == IPPROTO_TCP) + return maybe_assign_tcp(skb, (struct tcphdr *)(ip6h + 1)); + else if (ip6h->nexthdr == IPPROTO_UDP) + return maybe_assign_udp(skb, (struct udphdr *)(ip6h + 1)); + else + return TC_ACT_SHOT; + } +}
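Assuming the usual BPF selftest workflow, the new subtests should be runnable from tools/testing/selftests/bpf/ with something like ./test_progs -t assign_reuse, which exercises the tcpv4, tcpv6, udpv4 and udpv6 variants registered above.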