From patchwork Mon Sep 11 21:03:40 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 13380285
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: David Ahern, Eric Dumazet, Paolo Abeni, Jakub Kicinski,
    "David S. Miller"
Cc: linux-kernel@vger.kernel.org, Dmitry Safonov, Andy Lutomirski,
    Ard Biesheuvel, Bob Gilligan, Dan Carpenter, David Laight,
    Dmitry Safonov <0x7f454c46@gmail.com>, Donald Cassidy, Eric Biggers,
    "Eric W. Biederman", Francesco Ruggeri, "Gaillardetz, Dominik",
    Herbert Xu, Hideaki YOSHIFUJI, Ivan Delalande, Leonard Crestez,
    "Nassiri, Mohammad", Salam Noureddine, Simon Horman,
    "Tetreault, Francois", netdev@vger.kernel.org
Biederman" , Francesco Ruggeri , "Gaillardetz, Dominik" , Herbert Xu , Hideaki YOSHIFUJI , Ivan Delalande , Leonard Crestez , "Nassiri, Mohammad" , Salam Noureddine , Simon Horman , "Tetreault, Francois" , netdev@vger.kernel.org Subject: [PATCH v11 net-next 20/23] net/tcp: Add static_key for TCP-AO Date: Mon, 11 Sep 2023 22:03:40 +0100 Message-ID: <20230911210346.301750-21-dima@arista.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230911210346.301750-1-dima@arista.com> References: <20230911210346.301750-1-dima@arista.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Similarly to TCP-MD5, add a static key to TCP-AO that is patched out when there are no keys on a machine and dynamically enabled with the first setsockopt(TCP_AO) adds a key on any socket. The static key is as well dynamically disabled later when the socket is destructed. The lifetime of enabled static key here is the same as ao_info: it is enabled on allocation, passed over from full socket to twsk and destructed when ao_info is scheduled for destruction. Signed-off-by: Dmitry Safonov Acked-by: David Ahern --- include/net/tcp.h | 3 +++ include/net/tcp_ao.h | 2 ++ net/ipv4/tcp_ao.c | 22 +++++++++++++++++++++ net/ipv4/tcp_input.c | 46 +++++++++++++++++++++++++++++--------------- 4 files changed, 57 insertions(+), 16 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index ac5f96f0ce19..fa100e4f2cde 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -2611,6 +2611,9 @@ static inline bool tcp_ao_required(struct sock *sk, const void *saddr, struct tcp_ao_info *ao_info; struct tcp_ao_key *ao_key; + if (!static_branch_unlikely(&tcp_ao_needed.key)) + return false; + ao_info = rcu_dereference_check(tcp_sk(sk)->ao_info, lockdep_sock_is_held(sk)); if (!ao_info) diff --git a/include/net/tcp_ao.h b/include/net/tcp_ao.h index 09cf9d216b3a..b97e1b3c6448 100644 --- a/include/net/tcp_ao.h +++ b/include/net/tcp_ao.h @@ -151,6 +151,8 @@ do { \ #ifdef CONFIG_TCP_AO /* TCP-AO structures and functions */ +#include +extern struct static_key_false_deferred tcp_ao_needed; struct tcp4_ao_context { __be32 saddr; diff --git a/net/ipv4/tcp_ao.c b/net/ipv4/tcp_ao.c index c5bde089916d..24fd8772deea 100644 --- a/net/ipv4/tcp_ao.c +++ b/net/ipv4/tcp_ao.c @@ -17,6 +17,8 @@ #include #include +DEFINE_STATIC_KEY_DEFERRED_FALSE(tcp_ao_needed, HZ); + int tcp_ao_calc_traffic_key(struct tcp_ao_key *mkt, u8 *key, void *ctx, unsigned int len, struct tcp_sigpool *hp) { @@ -50,6 +52,9 @@ bool tcp_ao_ignore_icmp(const struct sock *sk, int type, int code) bool ignore_icmp = false; struct tcp_ao_info *ao; + if (!static_branch_unlikely(&tcp_ao_needed.key)) + return false; + /* RFC5925, 7.8: * >> A TCP-AO implementation MUST default to ignore incoming ICMPv4 * messages of Type 3 (destination unreachable), Codes 2-4 (protocol @@ -185,6 +190,9 @@ static struct tcp_ao_key *__tcp_ao_do_lookup(const struct sock *sk, struct tcp_ao_key *key; struct tcp_ao_info *ao; + if (!static_branch_unlikely(&tcp_ao_needed.key)) + return NULL; + ao = rcu_dereference_check(tcp_sk(sk)->ao_info, lockdep_sock_is_held(sk)); if (!ao) @@ -276,6 +284,7 @@ void tcp_ao_destroy_sock(struct sock *sk, bool twsk) } kfree_rcu(ao, rcu); + static_branch_slow_dec_deferred(&tcp_ao_needed); } void tcp_ao_time_wait(struct tcp_timewait_sock *tcptw, struct tcp_sock *tp) @@ -1129,6 +1138,11 @@ int tcp_ao_copy_all_matching(const struct sock *sk, struct sock *newsk, goto free_and_exit; } + 
+        if (!static_key_fast_inc_not_disabled(&tcp_ao_needed.key.key)) {
+                ret = -EUSERS;
+                goto free_and_exit;
+        }
+
         key_head = rcu_dereference(hlist_first_rcu(&new_ao->head));
         first_key = hlist_entry_safe(key_head, struct tcp_ao_key, node);
 
@@ -1556,6 +1570,10 @@ static int tcp_ao_add_cmd(struct sock *sk, unsigned short int family,
         tcp_ao_link_mkt(ao_info, key);
         if (first) {
+                if (!static_branch_inc(&tcp_ao_needed.key)) {
+                        ret = -EUSERS;
+                        goto err_free_sock;
+                }
                 sk_gso_disable(sk);
                 rcu_assign_pointer(tcp_sk(sk)->ao_info, ao_info);
         }
@@ -1824,6 +1842,10 @@ static int tcp_ao_info_cmd(struct sock *sk, unsigned short int family,
         if (new_rnext)
                 WRITE_ONCE(ao_info->rnext_key, new_rnext);
         if (first) {
+                if (!static_branch_inc(&tcp_ao_needed.key)) {
+                        err = -EUSERS;
+                        goto out;
+                }
                 sk_gso_disable(sk);
                 rcu_assign_pointer(tcp_sk(sk)->ao_info, ao_info);
         }
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 1e9e423bb718..414c49d37390 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3528,41 +3528,55 @@ static inline bool tcp_may_update_window(const struct tcp_sock *tp,
                (ack_seq == tp->snd_wl1 && (nwin > tp->snd_wnd || !nwin));
 }
 
-/* If we update tp->snd_una, also update tp->bytes_acked */
-static void tcp_snd_una_update(struct tcp_sock *tp, u32 ack)
+static void tcp_snd_sne_update(struct tcp_sock *tp, u32 ack)
 {
-        u32 delta = ack - tp->snd_una;
 #ifdef CONFIG_TCP_AO
         struct tcp_ao_info *ao;
-#endif
 
-        sock_owned_by_me((struct sock *)tp);
-        tp->bytes_acked += delta;
-#ifdef CONFIG_TCP_AO
+        if (!static_branch_unlikely(&tcp_ao_needed.key))
+                return;
+
         ao = rcu_dereference_protected(tp->ao_info,
                                        lockdep_sock_is_held((struct sock *)tp));
         if (ao && ack < tp->snd_una)
                 ao->snd_sne++;
 #endif
+}
+
+/* If we update tp->snd_una, also update tp->bytes_acked */
+static void tcp_snd_una_update(struct tcp_sock *tp, u32 ack)
+{
+        u32 delta = ack - tp->snd_una;
+
+        sock_owned_by_me((struct sock *)tp);
+        tp->bytes_acked += delta;
+        tcp_snd_sne_update(tp, ack);
         tp->snd_una = ack;
 }
 
+static void tcp_rcv_sne_update(struct tcp_sock *tp, u32 seq)
+{
+#ifdef CONFIG_TCP_AO
+        struct tcp_ao_info *ao;
+
+        if (!static_branch_unlikely(&tcp_ao_needed.key))
+                return;
+
+        ao = rcu_dereference_protected(tp->ao_info,
+                                       lockdep_sock_is_held((struct sock *)tp));
+        if (ao && seq < tp->rcv_nxt)
+                ao->rcv_sne++;
+#endif
+}
+
 /* If we update tp->rcv_nxt, also update tp->bytes_received */
 static void tcp_rcv_nxt_update(struct tcp_sock *tp, u32 seq)
 {
         u32 delta = seq - tp->rcv_nxt;
-#ifdef CONFIG_TCP_AO
-        struct tcp_ao_info *ao;
-#endif
 
         sock_owned_by_me((struct sock *)tp);
         tp->bytes_received += delta;
-#ifdef CONFIG_TCP_AO
-        ao = rcu_dereference_protected(tp->ao_info,
-                                       lockdep_sock_is_held((struct sock *)tp));
-        if (ao && seq < tp->rcv_nxt)
-                ao->rcv_sne++;
-#endif
+        tcp_rcv_sne_update(tp, seq);
         WRITE_ONCE(tp->rcv_nxt, seq);
 }
 
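
For readers less familiar with deferred static keys, the stand-alone sketch
below illustrates the enable/disable pattern this patch applies to
tcp_ao_needed. It is not part of the patch: the key name demo_needed and the
helpers demo_add_key(), demo_fast_path() and demo_destroy() are hypothetical,
while DEFINE_STATIC_KEY_DEFERRED_FALSE(), static_branch_unlikely(),
static_key_fast_inc_not_disabled() and static_branch_slow_dec_deferred() are
the same kernel primitives the patch uses.

/* Illustrative sketch only -- not part of the patch. The names demo_needed,
 * demo_add_key(), demo_fast_path() and demo_destroy() are hypothetical.
 */
#include <linux/errno.h>
#include <linux/jump_label.h>
#include <linux/jump_label_ratelimit.h>

/* Patched out by default; the HZ argument rate-limits how often the branch
 * may be patched back out after the last reference is dropped.
 */
DEFINE_STATIC_KEY_DEFERRED_FALSE(demo_needed, HZ);

/* Called when the first key is added: take a reference and enable the
 * out-of-line path, refusing if the reference counter is saturated.
 */
static int demo_add_key(void)
{
        if (!static_key_fast_inc_not_disabled(&demo_needed.key.key))
                return -EUSERS;
        return 0;
}

/* Hot path: while no keys exist on the machine this check is a patched-in
 * NOP and the function returns immediately.
 */
static void demo_fast_path(void)
{
        if (!static_branch_unlikely(&demo_needed.key))
                return;

        /* ... work that is only needed once a key has been added ... */
}

/* Called when the owning object is destroyed: drop the reference; the
 * deferred variant batches the text-patching work.
 */
static void demo_destroy(void)
{
        static_branch_slow_dec_deferred(&demo_needed);
}

The deferral matters because toggling a static branch rewrites kernel text,
which is comparatively expensive; deferring the decrement avoids repeatedly
flipping the branch when sockets carrying keys are created and destroyed in
quick succession.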