From patchwork Tue Aug 15 19:14:49 2023
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 13354206
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: David Ahern, Eric Dumazet, Paolo Abeni, Jakub Kicinski,
        "David S. Miller"
Cc: linux-kernel@vger.kernel.org, Dmitry Safonov, Andy Lutomirski,
        Ard Biesheuvel, Bob Gilligan, Dan Carpenter, David Laight,
        Dmitry Safonov <0x7f454c46@gmail.com>, Donald Cassidy, Eric Biggers,
        "Eric W. Biederman", Francesco Ruggeri, "Gaillardetz, Dominik",
        Herbert Xu, Hideaki YOSHIFUJI, Ivan Delalande, Leonard Crestez,
        "Nassiri, Mohammad", Salam Noureddine, Simon Horman,
        "Tetreault, Francois", netdev@vger.kernel.org
Subject: [PATCH v10 net-next 20/23] net/tcp: Add static_key for TCP-AO
Date: Tue, 15 Aug 2023 20:14:49 +0100
Message-ID: <20230815191455.1872316-21-dima@arista.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230815191455.1872316-1-dima@arista.com>
References: <20230815191455.1872316-1-dima@arista.com>

Similarly to TCP-MD5, add a static key to TCP-AO that is patched out
when there are no keys on the machine and is dynamically enabled when
the first setsockopt(TCP_AO) adds a key on any socket. The static key
is likewise dynamically disabled later, when the socket is destructed.

The lifetime of the enabled static key matches that of ao_info: it is
enabled on allocation, passed over from the full socket to the twsk and
disabled when ao_info is scheduled for destruction.

Signed-off-by: Dmitry Safonov
Acked-by: David Ahern
---
 include/net/tcp.h    |  3 +++
 include/net/tcp_ao.h |  2 ++
 net/ipv4/tcp_ao.c    | 22 +++++++++++++++++++++
 net/ipv4/tcp_input.c | 46 +++++++++++++++++++++++++++++---------------
 4 files changed, 57 insertions(+), 16 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 85e0f0b50261..b90ef7090dd6 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -2612,6 +2612,9 @@ static inline bool tcp_ao_required(struct sock *sk, const void *saddr,
 	struct tcp_ao_info *ao_info;
 	struct tcp_ao_key *ao_key;
 
+	if (!static_branch_unlikely(&tcp_ao_needed.key))
+		return false;
+
 	ao_info = rcu_dereference_check(tcp_sk(sk)->ao_info,
 					lockdep_sock_is_held(sk));
 	if (!ao_info)
diff --git a/include/net/tcp_ao.h b/include/net/tcp_ao.h
index 705e791c0a48..a56b1a68a883 100644
--- a/include/net/tcp_ao.h
+++ b/include/net/tcp_ao.h
@@ -151,6 +151,8 @@ do {	\
 
 #ifdef CONFIG_TCP_AO
 /* TCP-AO structures and functions */
+#include
+extern struct static_key_false_deferred tcp_ao_needed;
 
 struct tcp4_ao_context {
 	__be32		saddr;
diff --git a/net/ipv4/tcp_ao.c b/net/ipv4/tcp_ao.c
index 124e9489f202..88634fbb54f2 100644
--- a/net/ipv4/tcp_ao.c
+++ b/net/ipv4/tcp_ao.c
@@ -17,6 +17,8 @@
 #include
 #include
 
+DEFINE_STATIC_KEY_DEFERRED_FALSE(tcp_ao_needed, HZ);
+
 int tcp_ao_calc_traffic_key(struct tcp_ao_key *mkt, u8 *key, void *ctx,
 			    unsigned int len, struct tcp_sigpool *hp)
 {
@@ -50,6 +52,9 @@ bool tcp_ao_ignore_icmp(const struct sock *sk, int type, int code)
 	bool ignore_icmp = false;
 	struct tcp_ao_info *ao;
 
+	if (!static_branch_unlikely(&tcp_ao_needed.key))
+		return false;
+
 	/* RFC5925, 7.8:
 	 * >> A TCP-AO implementation MUST default to ignore incoming ICMPv4
 	 * messages of Type 3 (destination unreachable), Codes 2-4 (protocol
@@ -185,6 +190,9 @@ static struct tcp_ao_key *__tcp_ao_do_lookup(const struct sock *sk,
 	struct tcp_ao_key *key;
 	struct tcp_ao_info *ao;
 
+	if (!static_branch_unlikely(&tcp_ao_needed.key))
+		return NULL;
+
 	ao = rcu_dereference_check(tcp_sk(sk)->ao_info,
 				   lockdep_sock_is_held(sk));
 	if (!ao)
@@ -276,6 +284,7 @@ void tcp_ao_destroy_sock(struct sock *sk, bool twsk)
 	}
 
 	kfree_rcu(ao, rcu);
+	static_branch_slow_dec_deferred(&tcp_ao_needed);
 }
 
 void tcp_ao_time_wait(struct tcp_timewait_sock *tcptw, struct tcp_sock *tp)
@@ -1130,6 +1139,11 @@ int tcp_ao_copy_all_matching(const struct sock *sk, struct sock *newsk,
 		goto free_and_exit;
 	}
 
+	if (!static_key_fast_inc_not_disabled(&tcp_ao_needed.key.key)) {
+		ret = -EUSERS;
+		goto free_and_exit;
+	}
+
 	key_head = rcu_dereference(hlist_first_rcu(&new_ao->head));
 	first_key = hlist_entry_safe(key_head, struct tcp_ao_key, node);
 
@@ -1557,6 +1571,10 @@ static int tcp_ao_add_cmd(struct sock *sk, unsigned short int family,
 	tcp_ao_link_mkt(ao_info, key);
 	if (first) {
+		if (!static_branch_inc(&tcp_ao_needed.key)) {
+			ret = -EUSERS;
+			goto err_free_sock;
+		}
 		sk_gso_disable(sk);
 		rcu_assign_pointer(tcp_sk(sk)->ao_info, ao_info);
 	}
@@ -1825,6 +1843,10 @@ static int tcp_ao_info_cmd(struct sock *sk, unsigned short int family,
 	if (new_rnext)
 		WRITE_ONCE(ao_info->rnext_key, new_rnext);
 	if (first) {
+		if (!static_branch_inc(&tcp_ao_needed.key)) {
+			err = -EUSERS;
+			goto out;
+		}
 		sk_gso_disable(sk);
 		rcu_assign_pointer(tcp_sk(sk)->ao_info, ao_info);
 	}
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 78e7b04fac4f..c554c25eba52 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3528,41 +3528,55 @@ static inline bool tcp_may_update_window(const struct tcp_sock *tp,
 	 (ack_seq == tp->snd_wl1 && (nwin > tp->snd_wnd || !nwin));
 }
 
-/* If we update tp->snd_una, also update tp->bytes_acked */
-static void tcp_snd_una_update(struct tcp_sock *tp, u32 ack)
+static void tcp_snd_sne_update(struct tcp_sock *tp, u32 ack)
 {
-	u32 delta = ack - tp->snd_una;
 #ifdef CONFIG_TCP_AO
 	struct tcp_ao_info *ao;
-#endif
 
-	sock_owned_by_me((struct sock *)tp);
-	tp->bytes_acked += delta;
-#ifdef CONFIG_TCP_AO
+	if (!static_branch_unlikely(&tcp_ao_needed.key))
+		return;
+
 	ao = rcu_dereference_protected(tp->ao_info,
 				       lockdep_sock_is_held((struct sock *)tp));
 	if (ao && ack < tp->snd_una)
 		ao->snd_sne++;
 #endif
+}
+
+/* If we update tp->snd_una, also update tp->bytes_acked */
+static void tcp_snd_una_update(struct tcp_sock *tp, u32 ack)
+{
+	u32 delta = ack - tp->snd_una;
+
+	sock_owned_by_me((struct sock *)tp);
+	tp->bytes_acked += delta;
+	tcp_snd_sne_update(tp, ack);
 	tp->snd_una = ack;
 }
 
+static void tcp_rcv_sne_update(struct tcp_sock *tp, u32 seq)
+{
+#ifdef CONFIG_TCP_AO
+	struct tcp_ao_info *ao;
+
+	if (!static_branch_unlikely(&tcp_ao_needed.key))
+		return;
+
+	ao = rcu_dereference_protected(tp->ao_info,
+				       lockdep_sock_is_held((struct sock *)tp));
+	if (ao && seq < tp->rcv_nxt)
+		ao->rcv_sne++;
+#endif
+}
+
 /* If we update tp->rcv_nxt, also update tp->bytes_received */
 static void tcp_rcv_nxt_update(struct tcp_sock *tp, u32 seq)
 {
 	u32 delta = seq - tp->rcv_nxt;
-#ifdef CONFIG_TCP_AO
-	struct tcp_ao_info *ao;
-#endif
 
 	sock_owned_by_me((struct sock *)tp);
 	tp->bytes_received += delta;
-#ifdef CONFIG_TCP_AO
-	ao = rcu_dereference_protected(tp->ao_info,
-				       lockdep_sock_is_held((struct sock *)tp));
-	if (ao && seq < tp->rcv_nxt)
-		ao->rcv_sne++;
-#endif
+	tcp_rcv_sne_update(tp, seq);
 	WRITE_ONCE(tp->rcv_nxt, seq);
 }
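
For readers less familiar with the deferred static-key API the patch builds
on, a minimal sketch of the pattern is below. It is illustrative only and not
part of the patch; the example_* names are hypothetical. The idea matches the
commit message: the branch stays patched out (a NOP in the fast path) until
the first key is added, and disabling is deferred and rate-limited so that
rapid creation/destruction of AO sockets does not cause repeated text
patching.

	/* Sketch of the deferred static-key pattern (kernel context). */
	#include <linux/jump_label_ratelimit.h>

	/* Patched out by default; HZ rate-limits the deferred disable. */
	DEFINE_STATIC_KEY_DEFERRED_FALSE(example_needed, HZ);

	static bool example_fast_path(void)
	{
		/* Compiles to a NOP until the key has been enabled once. */
		if (!static_branch_unlikely(&example_needed.key))
			return false;

		/* ... slow path: look up per-socket state, verify hashes ... */
		return true;
	}

	static void example_first_key_added(void)
	{
		/* Enable the branch; pairs with the deferred decrement below. */
		static_branch_deferred_inc(&example_needed);
	}

	static void example_state_destroyed(void)
	{
		/*
		 * Drop the reference; the actual branch disable is deferred
		 * by the HZ timeout after the count reaches zero.
		 */
		static_branch_slow_dec_deferred(&example_needed);
	}

The -EUSERS paths in the patch itself appear to cover the case where the
key's reference count cannot be safely incremented any further (e.g.
static_key_fast_inc_not_disabled() failing), which the simplified sketch
above does not handle.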