From patchwork Wed Nov 23 17:38:55 2022
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 13054037
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Peter Zijlstra
Cc: Dmitry Safonov, Ard Biesheuvel, Bob Gilligan, "David S. Miller",
    Dmitry Safonov <0x7f454c46@gmail.com>, Francesco Ruggeri,
    Hideaki YOSHIFUJI, Jakub Kicinski, Jason Baron, Josh Poimboeuf,
    Paolo Abeni, Salam Noureddine, Steven Rostedt, netdev@vger.kernel.org
Subject: [PATCH v6 1/5] jump_label: Prevent key->enabled int overflow
Date: Wed, 23 Nov 2022 17:38:55 +0000
Message-Id: <20221123173859.473629-2-dima@arista.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221123173859.473629-1-dima@arista.com>
References: <20221123173859.473629-1-dima@arista.com>

1. With CONFIG_JUMP_LABEL=n, static_key_slow_inc() doesn't have any
   protection against key->enabled refcounter overflow.
2. With CONFIG_JUMP_LABEL=y, static_key_slow_inc_cpuslocked() may still
   turn the refcounter negative, as (v + 1) may overflow.

key->enabled is indeed a ref-counter, as documented in multiple places:
the top comment in jump_label.h, Documentation/staging/static-keys.rst,
etc.

As -1 is reserved for a static key that is in the process of being
enabled, functions would break with a negative key->enabled refcount:
- for CONFIG_JUMP_LABEL=n, a negative return of static_key_count()
  breaks static_key_false() and static_key_true();
- the ref counter may reach 0 from the negative side after too many
  static_key_slow_inc() calls, leading to use-after-free issues.

Because of these flaws, some users have had to introduce an additional
mutex and guard the reference counter against overflow themselves; see
bpf_enable_runtime_stats(), which checks the counter against INT_MAX / 2.

Prevent the reference counter overflow by checking whether (v + 1) > 0.
Change the functions' API to return whether the increment was successful.

Signed-off-by: Dmitry Safonov
Acked-by: Jakub Kicinski
Acked-by: Peter Zijlstra (Intel)
---
 include/linux/jump_label.h | 21 +++++++++++---
 kernel/jump_label.c        | 56 ++++++++++++++++++++++++++++++--------
 2 files changed, 61 insertions(+), 16 deletions(-)

diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 570831ca9951..4e968ebadce6 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -224,9 +224,10 @@ extern bool arch_jump_label_transform_queue(struct jump_entry *entry,
 					    enum jump_label_type type);
 extern void arch_jump_label_transform_apply(void);
 extern int jump_label_text_reserved(void *start, void *end);
-extern void static_key_slow_inc(struct static_key *key);
+extern bool static_key_slow_inc(struct static_key *key);
+extern bool static_key_fast_inc_not_disabled(struct static_key *key);
 extern void static_key_slow_dec(struct static_key *key);
-extern void static_key_slow_inc_cpuslocked(struct static_key *key);
+extern bool static_key_slow_inc_cpuslocked(struct static_key *key);
 extern void static_key_slow_dec_cpuslocked(struct static_key *key);
 extern int static_key_count(struct static_key *key);
 extern void static_key_enable(struct static_key *key);
@@ -278,11 +279,23 @@ static __always_inline bool static_key_true(struct static_key *key)
 	return false;
 }
 
-static inline void static_key_slow_inc(struct static_key *key)
+static inline bool static_key_fast_inc_not_disabled(struct static_key *key)
 {
+	int v;
+
 	STATIC_KEY_CHECK_USE(key);
-	atomic_inc(&key->enabled);
+	/*
+	 * Prevent key->enabled getting negative to follow the same semantics
+	 * as for CONFIG_JUMP_LABEL=y, see kernel/jump_label.c comment.
+	 */
+	v = atomic_read(&key->enabled);
+	do {
+		if (v < 0 || (v + 1) < 0)
+			return false;
+	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)));
+	return true;
 }
+#define static_key_slow_inc(key)	static_key_fast_inc_not_disabled(key)
 
 static inline void static_key_slow_dec(struct static_key *key)
 {
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 4d6c6f5f60db..d9c822bbffb8 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -113,9 +113,40 @@ int static_key_count(struct static_key *key)
 }
 EXPORT_SYMBOL_GPL(static_key_count);
 
-void static_key_slow_inc_cpuslocked(struct static_key *key)
+/*
+ * static_key_fast_inc_not_disabled - adds a user for a static key
+ * @key: static key that must be already enabled
+ *
+ * The caller must make sure that the static key can't get disabled while
+ * in this function. It doesn't patch jump labels, only adds a user to
+ * an already enabled static key.
+ *
+ * Returns true if the increment was done. Unlike refcount_t the ref counter
+ * is not saturated, but will fail to increment on overflow.
+ */
+bool static_key_fast_inc_not_disabled(struct static_key *key)
 {
+	int v;
+
 	STATIC_KEY_CHECK_USE(key);
+	/*
+	 * Negative key->enabled has a special meaning: it sends
+	 * static_key_slow_inc() down the slow path, and it is non-zero
+	 * so it counts as "enabled" in jump_label_update(). Note that
+	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
+	 */
+	v = atomic_read(&key->enabled);
+	do {
+		if (v <= 0 || (v + 1) < 0)
+			return false;
+	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)));
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(static_key_fast_inc_not_disabled);
+
+bool static_key_slow_inc_cpuslocked(struct static_key *key)
+{
 	lockdep_assert_cpus_held();
 
 	/*
@@ -124,15 +155,9 @@ void static_key_slow_inc_cpuslocked(struct static_key *key)
 	 * jump_label_update() process. At the same time, however,
 	 * the jump_label_update() call below wants to see
 	 * static_key_enabled(&key) for jumps to be updated properly.
-	 *
-	 * So give a special meaning to negative key->enabled: it sends
-	 * static_key_slow_inc() down the slow path, and it is non-zero
-	 * so it counts as "enabled" in jump_label_update(). Note that
-	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
 	 */
-	for (int v = atomic_read(&key->enabled); v > 0; )
-		if (likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)))
-			return;
+	if (static_key_fast_inc_not_disabled(key))
+		return true;
 
 	jump_label_lock();
 	if (atomic_read(&key->enabled) == 0) {
@@ -144,16 +169,23 @@ void static_key_slow_inc_cpuslocked(struct static_key *key)
 		 */
 		atomic_set_release(&key->enabled, 1);
 	} else {
-		atomic_inc(&key->enabled);
+		if (WARN_ON_ONCE(!static_key_fast_inc_not_disabled(key))) {
+			jump_label_unlock();
+			return false;
+		}
 	}
 	jump_label_unlock();
+	return true;
 }
 
-void static_key_slow_inc(struct static_key *key)
+bool static_key_slow_inc(struct static_key *key)
 {
+	bool ret;
+
 	cpus_read_lock();
-	static_key_slow_inc_cpuslocked(key);
+	ret = static_key_slow_inc_cpuslocked(key);
 	cpus_read_unlock();
+	return ret;
 }
 EXPORT_SYMBOL_GPL(static_key_slow_inc);
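For readers who want to poke at the increment-unless-overflow pattern in
isolation, here is a minimal userspace sketch of the same loop using C11
atomics. This is illustrative only, not part of the patch: the function
name is made up, and v == INT_MAX stands in for the kernel's (v + 1) < 0
test, since signed overflow is undefined behaviour in standard C.

#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int enabled;	/* stands in for key->enabled */

static bool fast_inc_not_disabled(void)
{
	int v = atomic_load(&enabled);

	do {
		/* Refuse on "disabled" (<= 0) and on would-be overflow. */
		if (v <= 0 || v == INT_MAX)
			return false;
	} while (!atomic_compare_exchange_weak(&enabled, &v, v + 1));

	return true;
}

int main(void)
{
	atomic_store(&enabled, INT_MAX - 1);
	/* The first call takes the counter to INT_MAX... */
	printf("inc #1: %s\n", fast_inc_not_disabled() ? "ok" : "refused");
	/* ...the second is refused instead of wrapping negative. */
	printf("inc #2: %s\n", fast_inc_not_disabled() ? "ok" : "refused");
	return 0;
}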
From patchwork Wed Nov 23 17:38:56 2022
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 13054039
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Peter Zijlstra
Cc: Dmitry Safonov, Ard Biesheuvel, Bob Gilligan, "David S. Miller",
    Dmitry Safonov <0x7f454c46@gmail.com>, Francesco Ruggeri,
    Hideaki YOSHIFUJI, Jakub Kicinski, Jason Baron, Josh Poimboeuf,
    Paolo Abeni, Salam Noureddine, Steven Rostedt, netdev@vger.kernel.org
Subject: [PATCH v6 2/5] net/tcp: Separate tcp_md5sig_info allocation into tcp_md5sig_info_add()
Date: Wed, 23 Nov 2022 17:38:56 +0000
Message-Id: <20221123173859.473629-3-dima@arista.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221123173859.473629-1-dima@arista.com>
References: <20221123173859.473629-1-dima@arista.com>

Add a helper to allocate tcp_md5sig_info; it will later make it possible
to do/allocate things once per socket, at the point where the info is
first allocated.

Signed-off-by: Dmitry Safonov
Reviewed-by: Eric Dumazet
Acked-by: Jakub Kicinski
---
 net/ipv4/tcp_ipv4.c | 30 +++++++++++++++++++++---------
 1 file changed, 21 insertions(+), 9 deletions(-)

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index f0343538d1f8..2d76d50b8ae8 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1172,6 +1172,24 @@ struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk,
 }
 EXPORT_SYMBOL(tcp_v4_md5_lookup);
 
+static int tcp_md5sig_info_add(struct sock *sk, gfp_t gfp)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+	struct tcp_md5sig_info *md5sig;
+
+	if (rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk)))
+		return 0;
+
+	md5sig = kmalloc(sizeof(*md5sig), gfp);
+	if (!md5sig)
+		return -ENOMEM;
+
+	sk_gso_disable(sk);
+	INIT_HLIST_HEAD(&md5sig->head);
+	rcu_assign_pointer(tp->md5sig_info, md5sig);
+	return 0;
+}
+
 /* This can be called on a newly created socket, from other files */
 int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
 		   int family, u8 prefixlen, int l3index, u8 flags,
@@ -1202,17 +1220,11 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
 		return 0;
 	}
 
+	if (tcp_md5sig_info_add(sk, gfp))
+		return -ENOMEM;
+
 	md5sig = rcu_dereference_protected(tp->md5sig_info,
 					   lockdep_sock_is_held(sk));
-	if (!md5sig) {
-		md5sig = kmalloc(sizeof(*md5sig), gfp);
-		if (!md5sig)
-			return -ENOMEM;
-
-		sk_gso_disable(sk);
-		INIT_HLIST_HEAD(&md5sig->head);
-		rcu_assign_pointer(tp->md5sig_info, md5sig);
-	}
 
 	key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO);
 	if (!key)
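The helper above is the usual RCU publish-once idiom: initialize the
object fully, publish it with rcu_assign_pointer(), and let repeated
calls return early once the pointer is set. A condensed sketch of the
same idiom follows; all demo_* names are hypothetical, and only the
shape mirrors tcp_md5sig_info_add().

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_info {
	struct hlist_head head;
};

struct demo_sock {
	struct mutex lock;		/* plays the role of the socket lock */
	struct demo_info __rcu *info;
};

/* Allocate and publish ->info exactly once; callers hold ds->lock. */
static int demo_info_add(struct demo_sock *ds, gfp_t gfp)
{
	struct demo_info *info;

	if (rcu_dereference_protected(ds->info, lockdep_is_held(&ds->lock)))
		return 0;	/* already allocated by an earlier call */

	info = kmalloc(sizeof(*info), gfp);
	if (!info)
		return -ENOMEM;

	INIT_HLIST_HEAD(&info->head);
	/* Publish last, so readers that see the pointer see it initialized. */
	rcu_assign_pointer(ds->info, info);
	return 0;
}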
Miller" , Dmitry Safonov <0x7f454c46@gmail.com>, Francesco Ruggeri , Hideaki YOSHIFUJI , Jakub Kicinski , Jason Baron , Josh Poimboeuf , Paolo Abeni , Salam Noureddine , Steven Rostedt , netdev@vger.kernel.org Subject: [PATCH v6 3/5] net/tcp: Disable TCP-MD5 static key on tcp_md5sig_info destruction Date: Wed, 23 Nov 2022 17:38:57 +0000 Message-Id: <20221123173859.473629-4-dima@arista.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221123173859.473629-1-dima@arista.com> References: <20221123173859.473629-1-dima@arista.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org To do that, separate two scenarios: - where it's the first MD5 key on the system, which means that enabling of the static key may need to sleep; - copying of an existing key from a listening socket to the request socket upon receiving a signed TCP segment, where static key was already enabled (when the key was added to the listening socket). Now the life-time of the static branch for TCP-MD5 is until: - last tcp_md5sig_info is destroyed - last socket in time-wait state with MD5 key is closed. Which means that after all sockets with TCP-MD5 keys are gone, the system gets back the performance of disabled md5-key static branch. While at here, provide static_key_fast_inc() helper that does ref counter increment in atomic fashion (without grabbing cpus_read_lock() on CONFIG_JUMP_LABEL=y). This is needed to add a new user for a static_key when the caller controls the lifetime of another user. Signed-off-by: Dmitry Safonov Acked-by: Jakub Kicinski Reviewed-by: Eric Dumazet --- include/net/tcp.h | 10 ++++-- net/ipv4/tcp.c | 5 +-- net/ipv4/tcp_ipv4.c | 71 ++++++++++++++++++++++++++++++++-------- net/ipv4/tcp_minisocks.c | 16 ++++++--- net/ipv4/tcp_output.c | 4 +-- net/ipv6/tcp_ipv6.c | 10 +++--- 6 files changed, 84 insertions(+), 32 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index 6b814e788f00..f925377066fe 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1675,7 +1675,11 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key, const struct sock *sk, const struct sk_buff *skb); int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, int family, u8 prefixlen, int l3index, u8 flags, - const u8 *newkey, u8 newkeylen, gfp_t gfp); + const u8 *newkey, u8 newkeylen); +int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr, + int family, u8 prefixlen, int l3index, + struct tcp_md5sig_key *key); + int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family, u8 prefixlen, int l3index, u8 flags); struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk, @@ -1683,7 +1687,7 @@ struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk, #ifdef CONFIG_TCP_MD5SIG #include -extern struct static_key_false tcp_md5_needed; +extern struct static_key_false_deferred tcp_md5_needed; struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index, const union tcp_md5_addr *addr, int family); @@ -1691,7 +1695,7 @@ static inline struct tcp_md5sig_key * tcp_md5_do_lookup(const struct sock *sk, int l3index, const union tcp_md5_addr *addr, int family) { - if (!static_branch_unlikely(&tcp_md5_needed)) + if (!static_branch_unlikely(&tcp_md5_needed.key)) return NULL; return __tcp_md5_do_lookup(sk, l3index, addr, family); } diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 4a69c5fcfedc..267406f199bc 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -4465,11 +4465,8 @@ 
bool tcp_alloc_md5sig_pool(void) if (unlikely(!READ_ONCE(tcp_md5sig_pool_populated))) { mutex_lock(&tcp_md5sig_mutex); - if (!tcp_md5sig_pool_populated) { + if (!tcp_md5sig_pool_populated) __tcp_alloc_md5sig_pool(); - if (tcp_md5sig_pool_populated) - static_branch_inc(&tcp_md5_needed); - } mutex_unlock(&tcp_md5sig_mutex); } diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 2d76d50b8ae8..2ae6a061f36e 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1064,7 +1064,7 @@ static void tcp_v4_reqsk_destructor(struct request_sock *req) * We need to maintain these in the sk structure. */ -DEFINE_STATIC_KEY_FALSE(tcp_md5_needed); +DEFINE_STATIC_KEY_DEFERRED_FALSE(tcp_md5_needed, HZ); EXPORT_SYMBOL(tcp_md5_needed); static bool better_md5_match(struct tcp_md5sig_key *old, struct tcp_md5sig_key *new) @@ -1177,9 +1177,6 @@ static int tcp_md5sig_info_add(struct sock *sk, gfp_t gfp) struct tcp_sock *tp = tcp_sk(sk); struct tcp_md5sig_info *md5sig; - if (rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk))) - return 0; - md5sig = kmalloc(sizeof(*md5sig), gfp); if (!md5sig) return -ENOMEM; @@ -1191,9 +1188,9 @@ static int tcp_md5sig_info_add(struct sock *sk, gfp_t gfp) } /* This can be called on a newly created socket, from other files */ -int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, - int family, u8 prefixlen, int l3index, u8 flags, - const u8 *newkey, u8 newkeylen, gfp_t gfp) +static int __tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, + int family, u8 prefixlen, int l3index, u8 flags, + const u8 *newkey, u8 newkeylen, gfp_t gfp) { /* Add Key to the list */ struct tcp_md5sig_key *key; @@ -1220,9 +1217,6 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, return 0; } - if (tcp_md5sig_info_add(sk, gfp)) - return -ENOMEM; - md5sig = rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk)); @@ -1246,8 +1240,59 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, hlist_add_head_rcu(&key->node, &md5sig->head); return 0; } + +int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, + int family, u8 prefixlen, int l3index, u8 flags, + const u8 *newkey, u8 newkeylen) +{ + struct tcp_sock *tp = tcp_sk(sk); + + if (!rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk))) { + if (tcp_md5sig_info_add(sk, GFP_KERNEL)) + return -ENOMEM; + + if (!static_branch_inc(&tcp_md5_needed.key)) { + struct tcp_md5sig_info *md5sig; + + md5sig = rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk)); + rcu_assign_pointer(tp->md5sig_info, NULL); + kfree_rcu(md5sig); + return -EUSERS; + } + } + + return __tcp_md5_do_add(sk, addr, family, prefixlen, l3index, flags, + newkey, newkeylen, GFP_KERNEL); +} EXPORT_SYMBOL(tcp_md5_do_add); +int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr, + int family, u8 prefixlen, int l3index, + struct tcp_md5sig_key *key) +{ + struct tcp_sock *tp = tcp_sk(sk); + + if (!rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk))) { + if (tcp_md5sig_info_add(sk, sk_gfp_mask(sk, GFP_ATOMIC))) + return -ENOMEM; + + if (!static_key_fast_inc_not_disabled(&tcp_md5_needed.key.key)) { + struct tcp_md5sig_info *md5sig; + + md5sig = rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk)); + net_warn_ratelimited("Too many TCP-MD5 keys in the system\n"); + rcu_assign_pointer(tp->md5sig_info, NULL); + kfree_rcu(md5sig); + return -EUSERS; + } + } + + return __tcp_md5_do_add(sk, addr, family, prefixlen, l3index, + 
key->flags, key->key, key->keylen, + sk_gfp_mask(sk, GFP_ATOMIC)); +} +EXPORT_SYMBOL(tcp_md5_key_copy); + int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family, u8 prefixlen, int l3index, u8 flags) { @@ -1334,7 +1379,7 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname, return -EINVAL; return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, flags, - cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); + cmd.tcpm_key, cmd.tcpm_keylen); } static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp, @@ -1591,8 +1636,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, * memory, then we end up not copying the key * across. Shucks. */ - tcp_md5_do_add(newsk, addr, AF_INET, 32, l3index, key->flags, - key->key, key->keylen, GFP_ATOMIC); + tcp_md5_key_copy(newsk, addr, AF_INET, 32, l3index, key); sk_gso_disable(newsk); } #endif @@ -2284,6 +2328,7 @@ void tcp_v4_destroy_sock(struct sock *sk) tcp_clear_md5_list(sk); kfree_rcu(rcu_dereference_protected(tp->md5sig_info, 1), rcu); tp->md5sig_info = NULL; + static_branch_slow_dec_deferred(&tcp_md5_needed); } #endif diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index c375f603a16c..6908812d50d3 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -291,13 +291,19 @@ void tcp_time_wait(struct sock *sk, int state, int timeo) */ do { tcptw->tw_md5_key = NULL; - if (static_branch_unlikely(&tcp_md5_needed)) { + if (static_branch_unlikely(&tcp_md5_needed.key)) { struct tcp_md5sig_key *key; key = tp->af_specific->md5_lookup(sk, sk); if (key) { tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC); - BUG_ON(tcptw->tw_md5_key && !tcp_alloc_md5sig_pool()); + if (!tcptw->tw_md5_key) + break; + BUG_ON(!tcp_alloc_md5sig_pool()); + if (!static_key_fast_inc_not_disabled(&tcp_md5_needed.key.key)) { + kfree(tcptw->tw_md5_key); + tcptw->tw_md5_key = NULL; + } } } } while (0); @@ -337,11 +343,13 @@ EXPORT_SYMBOL(tcp_time_wait); void tcp_twsk_destructor(struct sock *sk) { #ifdef CONFIG_TCP_MD5SIG - if (static_branch_unlikely(&tcp_md5_needed)) { + if (static_branch_unlikely(&tcp_md5_needed.key)) { struct tcp_timewait_sock *twsk = tcp_twsk(sk); - if (twsk->tw_md5_key) + if (twsk->tw_md5_key) { kfree_rcu(twsk->tw_md5_key, rcu); + static_branch_slow_dec_deferred(&tcp_md5_needed); + } } #endif } diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 894410dc9293..71d01cf3c13e 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -766,7 +766,7 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb, *md5 = NULL; #ifdef CONFIG_TCP_MD5SIG - if (static_branch_unlikely(&tcp_md5_needed) && + if (static_branch_unlikely(&tcp_md5_needed.key) && rcu_access_pointer(tp->md5sig_info)) { *md5 = tp->af_specific->md5_lookup(sk, sk); if (*md5) { @@ -922,7 +922,7 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb *md5 = NULL; #ifdef CONFIG_TCP_MD5SIG - if (static_branch_unlikely(&tcp_md5_needed) && + if (static_branch_unlikely(&tcp_md5_needed.key) && rcu_access_pointer(tp->md5sig_info)) { *md5 = tp->af_specific->md5_lookup(sk, sk); if (*md5) { diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index f676be14e6b6..83304d6a6bd0 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -677,12 +677,11 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, if (ipv6_addr_v4mapped(&sin6->sin6_addr)) return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3], AF_INET, prefixlen, 
l3index, flags, - cmd.tcpm_key, cmd.tcpm_keylen, - GFP_KERNEL); + cmd.tcpm_key, cmd.tcpm_keylen); return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr, AF_INET6, prefixlen, l3index, flags, - cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); + cmd.tcpm_key, cmd.tcpm_keylen); } static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp, @@ -1382,9 +1381,8 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff * * memory, then we end up not copying the key * across. Shucks. */ - tcp_md5_do_add(newsk, (union tcp_md5_addr *)&newsk->sk_v6_daddr, - AF_INET6, 128, l3index, key->flags, key->key, key->keylen, - sk_gfp_mask(sk, GFP_ATOMIC)); + tcp_md5_key_copy(newsk, (union tcp_md5_addr *)&newsk->sk_v6_daddr, + AF_INET6, 128, l3index, key); } #endif From patchwork Wed Nov 23 17:38:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Safonov X-Patchwork-Id: 13054042 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5E429C3A59F for ; Wed, 23 Nov 2022 17:39:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239180AbiKWRj4 (ORCPT ); Wed, 23 Nov 2022 12:39:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58832 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238995AbiKWRjW (ORCPT ); Wed, 23 Nov 2022 12:39:22 -0500 Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com [IPv6:2a00:1450:4864:20::331]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 46BE69B39B for ; Wed, 23 Nov 2022 09:39:14 -0800 (PST) Received: by mail-wm1-x331.google.com with SMTP id v7so13628707wmn.0 for ; Wed, 23 Nov 2022 09:39:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arista.com; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SmDkIrBVfUMbGOBltM7PoGg9g03wcI36X1ZwAu/jynQ=; b=LD4MRU/DCHliVq/YHIZWlJjin2QNdAQoGk6lgZc7akHPAmqjUFE+ut9mFMd5QKCTyf PJf2edhebPqnUqt4K8K299b4GrfwP7I5Copin0aWgc4XFCrjzWezRndf8zYDKvqNkCKw HdbrVFAo/kax408LxOrsyGVtsD9aBrZCeMyT3xf5JTdnUFpwUGLHhTCm8fpesn0qb5T/ N3xgajw6m/aydB1XrJQaiBdcYpFNlAw1O8ABAZeXPMQAoBFMV4pxPD6PnCV7qJ7xHEn0 Uj1z+rJP0hrEReE8EHSMJDn4spd8ohJxIRV08XYc0y6yrZ8jHJsjibrjIdViRC2PraV5 jiiA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SmDkIrBVfUMbGOBltM7PoGg9g03wcI36X1ZwAu/jynQ=; b=y8dG94HcCqf+3ZTjpSIQqGKW5D84VbI/1AouuRzY+8QobpB145bt/1KTdYX60LAVj1 7NrrtMw/dhEuCMMNIyxEr0WDvJncP5iBsd7QRI8fqtRb/8DHtBROAWvSwZXal3zxL+eW ImIdpigS0u8ApkL2kWJQgFB5LoIvM0TNnoHxKYWoUomTYbYWN9XqM1Xe7weAvvNpYfU7 NIQVRzIFzJbXkC0O7d/bcLpjm1w7MjfHOTTt17Bf5nFOY+jqA1d34cTbwJG2aObD/Bb/ dO77RzOcqWQ1801x958ReTyxtyQ/f6NearHBiu9OK/opsyfPnO+nYM3JDihftWzlG04U TKYQ== X-Gm-Message-State: ANoB5pm3+Kt3GeYX9diiFbnyaPXRHcx69YOTYf1ytMNKuopkBhp2yyFd JlZdT2mj9j+EqWGuTfogmtIw7A== X-Google-Smtp-Source: AA0mqf7ymQbNbUEzGQzIwWAlN38NB9tz8tam6wmSGZCHApcPTqMq/u7sSBgUVZX6mslnPfL8DK6IeA== X-Received: by 2002:a05:600c:2296:b0:3cf:baa6:8ca5 with SMTP id 
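The two scenarios in the commit message map onto two different increment
primitives. The following sketch shows how a hypothetical user of a
deferred static key would pick between them; all demo_* names are made
up, while the primitives themselves are the ones this series uses.

#include <linux/jump_label_ratelimit.h>

static DEFINE_STATIC_KEY_DEFERRED_FALSE(demo_needed, HZ);

/* First user: process context, may sleep, may patch the branch in. */
static int demo_add_first_user(void)
{
	if (!static_branch_inc(&demo_needed.key))
		return -EUSERS;		/* refcount would overflow */
	return 0;
}

/* Extra user in atomic context: only valid while another user is
 * guaranteed to keep the key enabled (e.g. the listener's key). */
static int demo_add_user_atomic(void)
{
	if (!static_key_fast_inc_not_disabled(&demo_needed.key.key))
		return -EUSERS;
	return 0;
}

/* Drop a user; the branch is patched out HZ after the last one. */
static void demo_put_user(void)
{
	static_branch_slow_dec_deferred(&demo_needed);
}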
From patchwork Wed Nov 23 17:38:58 2022
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 13054042
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Peter Zijlstra
Cc: Dmitry Safonov, Ard Biesheuvel, Bob Gilligan, "David S. Miller",
    Dmitry Safonov <0x7f454c46@gmail.com>, Francesco Ruggeri,
    Hideaki YOSHIFUJI, Jakub Kicinski, Jason Baron, Josh Poimboeuf,
    Paolo Abeni, Salam Noureddine, Steven Rostedt, netdev@vger.kernel.org
Subject: [PATCH v6 4/5] net/tcp: Do cleanup on tcp_md5_key_copy() failure
Date: Wed, 23 Nov 2022 17:38:58 +0000
Message-Id: <20221123173859.473629-5-dima@arista.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221123173859.473629-1-dima@arista.com>
References: <20221123173859.473629-1-dima@arista.com>

If the kernel is short on (atomic) memory and fails to allocate the key,
don't proceed with creation of the request socket. Otherwise the socket
would be unsigned, and userspace likely doesn't expect the TCP connection
to no longer be MD5-signed.

Signed-off-by: Dmitry Safonov
Acked-by: Jakub Kicinski
Reviewed-by: Eric Dumazet
---
 net/ipv4/tcp_ipv4.c |  9 ++-------
 net/ipv6/tcp_ipv6.c | 15 ++++++++-------
 2 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 2ae6a061f36e..e214098087fe 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1630,13 +1630,8 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 	addr = (union tcp_md5_addr *)&newinet->inet_daddr;
 	key = tcp_md5_do_lookup(sk, l3index, addr, AF_INET);
 	if (key) {
-		/*
-		 * We're using one, so create a matching key
-		 * on the newsk structure. If we fail to get
-		 * memory, then we end up not copying the key
-		 * across. Shucks.
-		 */
-		tcp_md5_key_copy(newsk, addr, AF_INET, 32, l3index, key);
+		if (tcp_md5_key_copy(newsk, addr, AF_INET, 32, l3index, key))
+			goto put_and_exit;
 		sk_gso_disable(newsk);
 	}
 #endif
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 83304d6a6bd0..21486b4a9774 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1376,13 +1376,14 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 	/* Copy over the MD5 key from the original socket */
 	key = tcp_v6_md5_do_lookup(sk, &newsk->sk_v6_daddr, l3index);
 	if (key) {
-		/* We're using one, so create a matching key
-		 * on the newsk structure. If we fail to get
-		 * memory, then we end up not copying the key
-		 * across. Shucks.
-		 */
-		tcp_md5_key_copy(newsk, (union tcp_md5_addr *)&newsk->sk_v6_daddr,
-				 AF_INET6, 128, l3index, key);
+		const union tcp_md5_addr *addr;
+
+		addr = (union tcp_md5_addr *)&newsk->sk_v6_daddr;
+		if (tcp_md5_key_copy(newsk, addr, AF_INET6, 128, l3index, key)) {
+			inet_csk_prepare_forced_close(newsk);
+			tcp_done(newsk);
+			goto out;
+		}
 	}
 #endif
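Condensed, the new rule is: if the key cannot be copied, the child socket
dies instead of continuing unsigned. A hypothetical helper showing the
shape of the error path (the IPv4 path in the patch uses its existing
goto put_and_exit label; the explicit teardown below is what the IPv6
path does):

static struct sock *demo_inherit_md5(struct sock *newsk,
				     const union tcp_md5_addr *addr,
				     int l3index,
				     struct tcp_md5sig_key *key)
{
	if (tcp_md5_key_copy(newsk, addr, AF_INET, 32, l3index, key)) {
		inet_csk_prepare_forced_close(newsk);
		tcp_done(newsk);
		return NULL;
	}
	return newsk;
}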
From patchwork Wed Nov 23 17:38:59 2022
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 13054041
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Peter Zijlstra
Cc: Dmitry Safonov, Ard Biesheuvel, Bob Gilligan, "David S. Miller",
    Dmitry Safonov <0x7f454c46@gmail.com>, Francesco Ruggeri,
    Hideaki YOSHIFUJI, Jakub Kicinski, Jason Baron, Josh Poimboeuf,
    Paolo Abeni, Salam Noureddine, Steven Rostedt, netdev@vger.kernel.org
Subject: [PATCH v6 5/5] net/tcp: Separate initialization of twsk
Date: Wed, 23 Nov 2022 17:38:59 +0000
Message-Id: <20221123173859.473629-6-dima@arista.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221123173859.473629-1-dima@arista.com>
References: <20221123173859.473629-1-dima@arista.com>

Separate the initialization of the time-wait socket's MD5 key into a new
tcp_time_wait_init() helper. Convert BUG_ON() to WARN_ON_ONCE() and warn
as well on the unlikely static-key int overflow error path.

Signed-off-by: Dmitry Safonov
Acked-by: Jakub Kicinski
Reviewed-by: Eric Dumazet
---
 net/ipv4/tcp_minisocks.c | 61 +++++++++++++++++++++++-----------------
 1 file changed, 35 insertions(+), 26 deletions(-)

diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 6908812d50d3..e002f2e1d4f2 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -240,6 +240,40 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb,
 }
 EXPORT_SYMBOL(tcp_timewait_state_process);
 
+static void tcp_time_wait_init(struct sock *sk, struct tcp_timewait_sock *tcptw)
+{
+#ifdef CONFIG_TCP_MD5SIG
+	const struct tcp_sock *tp = tcp_sk(sk);
+	struct tcp_md5sig_key *key;
+
+	/*
+	 * The timewait bucket does not have the key DB from the
+	 * sock structure. We just make a quick copy of the
+	 * md5 key being used (if indeed we are using one)
+	 * so the timewait ack generating code has the key.
+	 */
+	tcptw->tw_md5_key = NULL;
+	if (!static_branch_unlikely(&tcp_md5_needed.key))
+		return;
+
+	key = tp->af_specific->md5_lookup(sk, sk);
+	if (key) {
+		tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC);
+		if (!tcptw->tw_md5_key)
+			return;
+		if (!tcp_alloc_md5sig_pool())
+			goto out_free;
+		if (!static_key_fast_inc_not_disabled(&tcp_md5_needed.key.key))
+			goto out_free;
+	}
+	return;
+out_free:
+	WARN_ON_ONCE(1);
+	kfree(tcptw->tw_md5_key);
+	tcptw->tw_md5_key = NULL;
+#endif
+}
+
 /*
  * Move a socket to time-wait or dead fin-wait-2 state.
  */
@@ -282,32 +316,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
 	}
 #endif
 
-#ifdef CONFIG_TCP_MD5SIG
-	/*
-	 * The timewait bucket does not have the key DB from the
-	 * sock structure. We just make a quick copy of the
-	 * md5 key being used (if indeed we are using one)
-	 * so the timewait ack generating code has the key.
-	 */
-	do {
-		tcptw->tw_md5_key = NULL;
-		if (static_branch_unlikely(&tcp_md5_needed.key)) {
-			struct tcp_md5sig_key *key;
-
-			key = tp->af_specific->md5_lookup(sk, sk);
-			if (key) {
-				tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC);
-				if (!tcptw->tw_md5_key)
-					break;
-				BUG_ON(!tcp_alloc_md5sig_pool());
-				if (!static_key_fast_inc_not_disabled(&tcp_md5_needed.key.key)) {
-					kfree(tcptw->tw_md5_key);
-					tcptw->tw_md5_key = NULL;
-				}
-			}
-		}
-	} while (0);
-#endif
+	tcp_time_wait_init(sk, tcptw);
 
 	/* Get the TIME_WAIT timeout firing. */
 	if (timeo < rto)
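For clarity, here is how the references now pair up over a time-wait
socket's life; this is an illustrative outline, not code from the patch.

/*
 * tcp_time_wait()                    tcp_twsk_destructor()
 *   tcp_time_wait_init()               kfree_rcu(tw_md5_key, rcu)
 *     kmemdup() the md5 key            static_branch_slow_dec_deferred()
 *     static_key_fast_inc_not_disabled()
 *
 * On any failure inside tcp_time_wait_init(), tw_md5_key is left NULL,
 * so the destructor skips both the free and the paired decrement.
 */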