From patchwork Fri Nov 5 01:49:49 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 12603941
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org
Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov,
 Andy Lutomirski, David Ahern, "David S. Miller", Eric Dumazet,
 Francesco Ruggeri, Jakub Kicinski, Herbert Xu, Hideaki YOSHIFUJI,
 Leonard Crestez, linux-crypto@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 1/5] tcp/md5: Don't BUG_ON() failed kmemdup()
Date: Fri, 5 Nov 2021 01:49:49 +0000
Message-Id: <20211105014953.972946-2-dima@arista.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211105014953.972946-1-dima@arista.com>
References: <20211105014953.972946-1-dima@arista.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

static_branch_unlikely(&tcp_md5_needed) is enabled by
tcp_alloc_md5sig_pool(), so as long as the code doesn't change,
tcp_md5sig_pool has already been populated whenever this code runs.

If the tcptw->tw_md5_key allocation fails, there is no reason to crash
the kernel: tcp_{v4,v6}_send_ack() will send an unsigned segment and the
connection won't be established. That is bad enough, but in an OOM
situation it is entirely acceptable and better than a kernel crash.

Introduce a tcp_md5sig_pool_ready() helper. Calling
tcp_alloc_md5sig_pool() is intentionally avoided here: this is a fast
path, and the call is a sanity check rather than the point of actual
pool allocation. That will later allow a generic slow-path allocator
for the TCP crypto pool.
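For illustration, the new failure handling can be modelled in userspace. Everything below (the struct, kmemdup_model(), tcp_time_wait_model()) is a simplified stand-in invented for this sketch, not the kernel code:

```c
/* Userspace model of the tcp_time_wait() change: a failed key
 * duplication now degrades to an unsigned ACK instead of BUG_ON(). */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

struct timewait { void *tw_md5_key; };

/* oom != 0 models a GFP_ATOMIC allocation failure */
static void *kmemdup_model(const void *src, size_t len, int oom)
{
	void *p = oom ? NULL : malloc(len);

	if (p)
		memcpy(p, src, len);
	return p;
}

/* returns how the later ACK will be sent: "signed" or "unsigned" */
static const char *tcp_time_wait_model(struct timewait *tw,
				       const void *key, size_t len, int oom)
{
	tw->tw_md5_key = kmemdup_model(key, len, oom);
	/* the pre-patch code would BUG_ON() here; now it only warns */
	return tw->tw_md5_key ? "signed" : "unsigned";
}
```

Under simulated OOM the key stays NULL and the segment simply goes out unsigned, which is the behaviour the commit message argues for.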
Signed-off-by: Dmitry Safonov
---
 include/net/tcp.h        | 1 +
 net/ipv4/tcp.c           | 5 +++++
 net/ipv4/tcp_minisocks.c | 5 +++--
 3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 4da22b41bde6..3e5423a10a74 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1672,6 +1672,7 @@ tcp_md5_do_lookup(const struct sock *sk, int l3index,
 #endif
 
 bool tcp_alloc_md5sig_pool(void);
+bool tcp_md5sig_pool_ready(void);
 struct tcp_md5sig_pool *tcp_get_md5sig_pool(void);
 
 static inline void tcp_put_md5sig_pool(void)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index b7796b4cf0a0..c0856a6af9f5 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -4314,6 +4314,11 @@ bool tcp_alloc_md5sig_pool(void)
 }
 EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
 
+bool tcp_md5sig_pool_ready(void)
+{
+	return tcp_md5sig_pool_populated;
+}
+EXPORT_SYMBOL(tcp_md5sig_pool_ready);
 
 /**
  * tcp_get_md5sig_pool - get md5sig_pool for this user
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index cf913a66df17..c99cdb529902 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -293,11 +293,12 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
 		tcptw->tw_md5_key = NULL;
 		if (static_branch_unlikely(&tcp_md5_needed)) {
 			struct tcp_md5sig_key *key;
+			bool err = WARN_ON(!tcp_md5sig_pool_ready());
 
 			key = tp->af_specific->md5_lookup(sk, sk);
-			if (key) {
+			if (key && !err) {
 				tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC);
-				BUG_ON(tcptw->tw_md5_key && !tcp_alloc_md5sig_pool());
+				WARN_ON_ONCE(tcptw->tw_md5_key == NULL);
 			}
 		}
 	} while (0);

From patchwork Fri Nov 5 01:49:50 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 12603943
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org
Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov,
 Andy Lutomirski, David Ahern, "David S. Miller", Eric Dumazet,
 Francesco Ruggeri, Jakub Kicinski, Herbert Xu, Hideaki YOSHIFUJI,
 Leonard Crestez, linux-crypto@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 2/5] tcp/md5: Don't leak ahash in OOM
Date: Fri, 5 Nov 2021 01:49:50 +0000
Message-Id: <20211105014953.972946-3-dima@arista.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211105014953.972946-1-dima@arista.com>
References: <20211105014953.972946-1-dima@arista.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

In the quite unlikely scenario where __tcp_alloc_md5sig_pool() succeeds
in crypto_alloc_ahash() but later fails to allocate a per-cpu request or
scratch area, the ahash will be leaked. In theory this can happen
repeatedly under OOM, once for every setsockopt(TCP_MD5SIG{,_EXT}) call.

Add a clean-up path that frees the ahash.
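The unwind pattern the patch adds can be sketched outside the kernel. NCPUS, pools[], alloc_req() and pool_alloc() are made-up names modelling the per-cpu slots, with an injectable allocation failure:

```c
/* Model of the goto out_free unwind: on a partial failure, free
 * everything allocated so far instead of returning and leaking. */
#include <assert.h>
#include <stdlib.h>

#define NCPUS 4

struct pool { void *req; };
static struct pool pools[NCPUS];

/* fail_at lets a caller inject an allocation failure on a given "cpu" */
static void *alloc_req(int cpu, int fail_at)
{
	return (cpu == fail_at) ? NULL : malloc(16);
}

/* returns 0 on success, -1 after unwinding all partial allocations */
static int pool_alloc(int fail_at)
{
	int cpu;

	for (cpu = 0; cpu < NCPUS; cpu++) {
		pools[cpu].req = alloc_req(cpu, fail_at);
		if (!pools[cpu].req)
			goto out_free;
	}
	return 0;

out_free:
	/* mirror the patch: walk the per-cpu slots until the first hole */
	for (cpu = 0; cpu < NCPUS; cpu++) {
		if (pools[cpu].req == NULL)
			break;
		free(pools[cpu].req);
		pools[cpu].req = NULL;
	}
	return -1;
}
```

Stopping the free loop at the first NULL slot works because the allocation loop fills the slots in order, which is the same invariant the patch relies on for md5_req.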
Signed-off-by: Dmitry Safonov
---
 net/ipv4/tcp.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index c0856a6af9f5..eb478028b1ea 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -4276,15 +4276,13 @@ static void __tcp_alloc_md5sig_pool(void)
 					       GFP_KERNEL, cpu_to_node(cpu));
 			if (!scratch)
-				return;
+				goto out_free;
 			per_cpu(tcp_md5sig_pool, cpu).scratch = scratch;
 		}
-		if (per_cpu(tcp_md5sig_pool, cpu).md5_req)
-			continue;
 		req = ahash_request_alloc(hash, GFP_KERNEL);
 		if (!req)
-			return;
+			goto out_free;
 
 		ahash_request_set_callback(req, 0, NULL, NULL);
 
@@ -4295,6 +4293,16 @@ static void __tcp_alloc_md5sig_pool(void)
 	 */
 	smp_wmb();
 	tcp_md5sig_pool_populated = true;
+	return;
+
+out_free:
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(tcp_md5sig_pool, cpu).md5_req == NULL)
+			break;
+		ahash_request_free(per_cpu(tcp_md5sig_pool, cpu).md5_req);
+		per_cpu(tcp_md5sig_pool, cpu).md5_req = NULL;
+	}
+	crypto_free_ahash(hash);
 }
 
 bool tcp_alloc_md5sig_pool(void)

From patchwork Fri Nov 5 01:49:51 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 12603945
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org
Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov,
 Andy Lutomirski, David Ahern, "David S. Miller", Eric Dumazet,
 Francesco Ruggeri, Jakub Kicinski, Herbert Xu, Hideaki YOSHIFUJI,
 Leonard Crestez, linux-crypto@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 3/5] tcp/md5: Alloc tcp_md5sig_pool only in setsockopt()
Date: Fri, 5 Nov 2021 01:49:51 +0000
Message-Id: <20211105014953.972946-4-dima@arista.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211105014953.972946-1-dima@arista.com>
References: <20211105014953.972946-1-dima@arista.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

Besides setsockopt(), tcp_md5_do_add() can be called from
tcp_v4_syn_recv_sock()/tcp_v6_syn_recv_sock(). When it is called from
there, tcp_md5_do_lookup() has already succeeded, which means the md5
static key is enabled. That makes the tcp_alloc_md5sig_pool() call a
nop for every caller except tcp_v4_parse_md5_keys()/
tcp_v6_parse_md5_keys().

tcp_alloc_md5sig_pool() can sleep if tcp_md5sig_pool hasn't been
populated yet, so if anything changes, tcp_md5_do_add() may start
sleeping in atomic context.

Keep a check for tcp_md5sig_pool in tcp_md5_do_add(), but intentionally
call tcp_alloc_md5sig_pool() only from the sleepable setsockopt()
syscall context.
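From userspace, the sleepable path in question is the plain TCP_MD5SIG socket option. A minimal sketch follows; the peer address (loopback) and key are arbitrary, and install_md5_key() is a name made up for this example:

```c
/* Install a TCP-MD5 key for a given peer on a TCP socket.  After this
 * patch, this setsockopt() path is where the per-cpu crypto pool is
 * allocated (returning -ENOMEM on failure). */
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <linux/tcp.h>		/* struct tcp_md5sig, TCP_MD5SIG */

static int install_md5_key(int fd, const char *key, int keylen)
{
	struct tcp_md5sig md5;
	struct sockaddr_in *peer = (struct sockaddr_in *)&md5.tcpm_addr;

	memset(&md5, 0, sizeof(md5));
	peer->sin_family = AF_INET;
	peer->sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	md5.tcpm_keylen = keylen;
	memcpy(md5.tcpm_key, key, keylen);

	return setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG, &md5, sizeof(md5));
}
```

On kernels built without CONFIG_TCP_MD5SIG the call fails with ENOPROTOOPT; with it, every later segment to that peer carries the MD5 signature option.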
Signed-off-by: Dmitry Safonov
---
 net/ipv4/tcp_ipv4.c | 5 ++++-
 net/ipv6/tcp_ipv6.c | 3 +++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 13d868c43284..6a8ff9ab1cbc 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1190,7 +1190,7 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
 	key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO);
 	if (!key)
 		return -ENOMEM;
-	if (!tcp_alloc_md5sig_pool()) {
+	if (WARN_ON_ONCE(!tcp_md5sig_pool_ready())) {
 		sock_kfree_s(sk, key, sizeof(*key));
 		return -ENOMEM;
 	}
@@ -1294,6 +1294,9 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname,
 	if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN)
 		return -EINVAL;
 
+	if (!tcp_alloc_md5sig_pool())
+		return -ENOMEM;
+
 	return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, flags,
 			      cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL);
 }
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 2cc9b0e53ad1..3af13bd6fed0 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -654,6 +654,9 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname,
 	if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN)
 		return -EINVAL;
 
+	if (!tcp_alloc_md5sig_pool())
+		return -ENOMEM;
+
 	if (ipv6_addr_v4mapped(&sin6->sin6_addr))
 		return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3],
 				      AF_INET, prefixlen, l3index, flags,

From patchwork Fri Nov 5 01:49:52 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 12603947
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org
Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov,
 Andy Lutomirski, David Ahern, "David S. Miller", Eric Dumazet,
 Francesco Ruggeri, Jakub Kicinski, Herbert Xu, Hideaki YOSHIFUJI,
 Leonard Crestez, linux-crypto@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 4/5] tcp/md5: Use tcp_md5sig_pool_* naming scheme
Date: Fri, 5 Nov 2021 01:49:52 +0000
Message-Id: <20211105014953.972946-5-dima@arista.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211105014953.972946-1-dima@arista.com>
References: <20211105014953.972946-1-dima@arista.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

Use a common tcp_md5sig_pool_* prefix for operations on
(struct tcp_md5sig_pool).
Signed-off-by: Dmitry Safonov
---
 include/net/tcp.h   |  6 +++---
 net/ipv4/tcp.c      | 18 +++++++++---------
 net/ipv4/tcp_ipv4.c | 14 +++++++-------
 net/ipv6/tcp_ipv6.c | 14 +++++++-------
 4 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 3e5423a10a74..27eb71dd7ff8 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1671,11 +1671,11 @@ tcp_md5_do_lookup(const struct sock *sk, int l3index,
 #define tcp_twsk_md5_key(twsk) NULL
 #endif
 
-bool tcp_alloc_md5sig_pool(void);
+bool tcp_md5sig_pool_alloc(void);
 bool tcp_md5sig_pool_ready(void);
-struct tcp_md5sig_pool *tcp_get_md5sig_pool(void);
 
-static inline void tcp_put_md5sig_pool(void)
+struct tcp_md5sig_pool *tcp_md5sig_pool_get(void);
+static inline void tcp_md5sig_pool_put(void)
 {
 	local_bh_enable();
 }
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index eb478028b1ea..8d8692fc9cd5 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -4257,7 +4257,7 @@ static DEFINE_PER_CPU(struct tcp_md5sig_pool, tcp_md5sig_pool);
 static DEFINE_MUTEX(tcp_md5sig_mutex);
 static bool tcp_md5sig_pool_populated = false;
 
-static void __tcp_alloc_md5sig_pool(void)
+static void __tcp_md5sig_pool_alloc(void)
 {
 	struct crypto_ahash *hash;
 	int cpu;
@@ -4289,7 +4289,7 @@ static void __tcp_alloc_md5sig_pool(void)
 		per_cpu(tcp_md5sig_pool, cpu).md5_req = req;
 	}
 	/* before setting tcp_md5sig_pool_populated, we must commit all writes
-	 * to memory. See smp_rmb() in tcp_get_md5sig_pool()
+	 * to memory. See smp_rmb() in tcp_md5sig_pool_get()
 	 */
 	smp_wmb();
 	tcp_md5sig_pool_populated = true;
@@ -4305,13 +4305,13 @@ static void __tcp_alloc_md5sig_pool(void)
 	crypto_free_ahash(hash);
 }
 
-bool tcp_alloc_md5sig_pool(void)
+bool tcp_md5sig_pool_alloc(void)
 {
 	if (unlikely(!tcp_md5sig_pool_populated)) {
 		mutex_lock(&tcp_md5sig_mutex);
 
 		if (!tcp_md5sig_pool_populated) {
-			__tcp_alloc_md5sig_pool();
+			__tcp_md5sig_pool_alloc();
 			if (tcp_md5sig_pool_populated)
 				static_branch_inc(&tcp_md5_needed);
 		}
@@ -4320,7 +4320,7 @@ bool tcp_alloc_md5sig_pool(void)
 	}
 	return tcp_md5sig_pool_populated;
 }
-EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
+EXPORT_SYMBOL(tcp_md5sig_pool_alloc);
 
 bool tcp_md5sig_pool_ready(void)
 {
@@ -4329,25 +4329,25 @@ bool tcp_md5sig_pool_ready(void)
 EXPORT_SYMBOL(tcp_md5sig_pool_ready);
 
 /**
- * tcp_get_md5sig_pool - get md5sig_pool for this user
+ * tcp_md5sig_pool_get - get md5sig_pool for this user
  *
  * We use percpu structure, so if we succeed, we exit with preemption
  * and BH disabled, to make sure another thread or softirq handling
  * wont try to get same context.
  */
-struct tcp_md5sig_pool *tcp_get_md5sig_pool(void)
+struct tcp_md5sig_pool *tcp_md5sig_pool_get(void)
 {
 	local_bh_disable();
 
 	if (tcp_md5sig_pool_populated) {
-		/* coupled with smp_wmb() in __tcp_alloc_md5sig_pool() */
+		/* coupled with smp_wmb() in __tcp_md5sig_pool_alloc() */
 		smp_rmb();
 		return this_cpu_ptr(&tcp_md5sig_pool);
 	}
 	local_bh_enable();
 	return NULL;
 }
-EXPORT_SYMBOL(tcp_get_md5sig_pool);
+EXPORT_SYMBOL(tcp_md5sig_pool_get);
 
 int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, const struct sk_buff *skb,
 			  unsigned int header_len)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 6a8ff9ab1cbc..44db9afa17fc 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1294,7 +1294,7 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname,
 	if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN)
 		return -EINVAL;
 
-	if (!tcp_alloc_md5sig_pool())
+	if (!tcp_md5sig_pool_alloc())
 		return -ENOMEM;
 
 	return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, flags,
@@ -1332,7 +1332,7 @@ static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
 	struct tcp_md5sig_pool *hp;
 	struct ahash_request *req;
 
-	hp = tcp_get_md5sig_pool();
+	hp = tcp_md5sig_pool_get();
 	if (!hp)
 		goto clear_hash_noput;
 	req = hp->md5_req;
@@ -1347,11 +1347,11 @@ static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
 	if (crypto_ahash_final(req))
 		goto clear_hash;
 
-	tcp_put_md5sig_pool();
+	tcp_md5sig_pool_put();
 	return 0;
 
 clear_hash:
-	tcp_put_md5sig_pool();
+	tcp_md5sig_pool_put();
 clear_hash_noput:
 	memset(md5_hash, 0, 16);
 	return 1;
@@ -1375,7 +1375,7 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
 		daddr = iph->daddr;
 	}
 
-	hp = tcp_get_md5sig_pool();
+	hp = tcp_md5sig_pool_get();
 	if (!hp)
 		goto clear_hash_noput;
 	req = hp->md5_req;
@@ -1393,11 +1393,11 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
 	if (crypto_ahash_final(req))
 		goto clear_hash;
 
-	tcp_put_md5sig_pool();
+	tcp_md5sig_pool_put();
 	return 0;
 
 clear_hash:
-	tcp_put_md5sig_pool();
+	tcp_md5sig_pool_put();
 clear_hash_noput:
 	memset(md5_hash, 0, 16);
 	return 1;
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 3af13bd6fed0..9147f9f69196 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -654,7 +654,7 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname,
 	if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN)
 		return -EINVAL;
 
-	if (!tcp_alloc_md5sig_pool())
+	if (!tcp_md5sig_pool_alloc())
 		return -ENOMEM;
 
 	if (ipv6_addr_v4mapped(&sin6->sin6_addr))
@@ -701,7 +701,7 @@ static int tcp_v6_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
 	struct tcp_md5sig_pool *hp;
 	struct ahash_request *req;
 
-	hp = tcp_get_md5sig_pool();
+	hp = tcp_md5sig_pool_get();
 	if (!hp)
 		goto clear_hash_noput;
 	req = hp->md5_req;
@@ -716,11 +716,11 @@ static int tcp_v6_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
 	if (crypto_ahash_final(req))
 		goto clear_hash;
 
-	tcp_put_md5sig_pool();
+	tcp_md5sig_pool_put();
 	return 0;
 
 clear_hash:
-	tcp_put_md5sig_pool();
+	tcp_md5sig_pool_put();
 clear_hash_noput:
 	memset(md5_hash, 0, 16);
 	return 1;
@@ -745,7 +745,7 @@ static int tcp_v6_md5_hash_skb(char *md5_hash,
 		daddr = &ip6h->daddr;
 	}
 
-	hp = tcp_get_md5sig_pool();
+	hp = tcp_md5sig_pool_get();
 	if (!hp)
 		goto clear_hash_noput;
 	req = hp->md5_req;
@@ -763,11 +763,11 @@ static int tcp_v6_md5_hash_skb(char *md5_hash,
 	if (crypto_ahash_final(req))
 		goto clear_hash;
 
-	tcp_put_md5sig_pool();
+	tcp_md5sig_pool_put();
 	return 0;
 
 clear_hash:
-	tcp_put_md5sig_pool();
+	tcp_md5sig_pool_put();
 clear_hash_noput:
 	memset(md5_hash, 0, 16);
 	return 1;

From patchwork Fri Nov 5 01:49:53 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 12603949
X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org
Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov,
 Andy Lutomirski, David Ahern, "David S. Miller", Eric Dumazet,
 Francesco Ruggeri, Jakub Kicinski, Herbert Xu, Hideaki YOSHIFUJI,
 Leonard Crestez, linux-crypto@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 5/5] tcp/md5: Make more generic tcp_sig_pool
Date: Fri, 5 Nov 2021 01:49:53 +0000
Message-Id: <20211105014953.972946-6-dima@arista.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211105014953.972946-1-dima@arista.com>
References: <20211105014953.972946-1-dima@arista.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

Convert tcp_md5sig_pool to a more generic tcp_sig_pool.
tcp_sig_pool_alloc(const char *alg) can now be used to allocate a
per-cpu ahash request for hashing algorithms other than md5.
tcp_sig_pool_get() and tcp_sig_pool_put() should be used to get the
ahash_request and scratch area.

This makes tcp_sig_pool reusable for TCP Authentication Option support
(TCP-AO, RFC 5925), where RFC 5926 [1] requires at least HMAC-SHA1 and
AES-128-CMAC hashing.
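The id-based registry this converts to can be modelled in userspace as follows. sig_pool_alloc()/sig_pool_ready() only mimic the API shape (name-to-id lookup, one slot per algorithm, MD5 pinned to id 0) and skip the real crypto_alloc_ahash() work:

```c
/* Userspace model of the tcp_sig_pool registry: fixed slots, one per
 * algorithm name, allocated once and then looked up by id. */
#include <assert.h>
#include <errno.h>
#include <string.h>

#define TCP_SIG_POOL_MAX 8
#define TCP_MD5_SIG_ID   0

struct sig_crypto {
	const char *alg;
	int ready;
};

static struct sig_crypto cryptos[TCP_SIG_POOL_MAX] = {
	[TCP_MD5_SIG_ID] = { .alg = "md5" },
};
static unsigned int cryptos_nr = 1;

/* returns the id for `alg`, registering a new slot if needed */
static int sig_pool_alloc(const char *alg)
{
	unsigned int i;

	for (i = 0; i < cryptos_nr; i++) {
		if (!strcmp(cryptos[i].alg, alg)) {
			cryptos[i].ready = 1;
			return i;
		}
	}
	if (cryptos_nr >= TCP_SIG_POOL_MAX)
		return -ENOMEM;
	cryptos[cryptos_nr].alg = alg;
	cryptos[cryptos_nr].ready = 1;	/* crypto_alloc_ahash() goes here */
	return cryptos_nr++;
}

static int sig_pool_ready(unsigned int id)
{
	return id < cryptos_nr && cryptos[id].ready;
}
```

Repeated allocation of the same algorithm returns the same id, so callers can cache it, which is what lets the MD5 code keep using a constant TCP_MD5_SIG_ID.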
Signed-off-by: Dmitry Safonov
---
 include/net/tcp.h        |  24 +++--
 net/ipv4/tcp.c           | 192 +++++++++++++++++++++++++++------------
 net/ipv4/tcp_ipv4.c      |  46 +++++-----
 net/ipv4/tcp_minisocks.c |   4 +-
 net/ipv6/tcp_ipv6.c      |  44 ++++-----
 5 files changed, 197 insertions(+), 113 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 27eb71dd7ff8..2d868ce8736d 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1628,10 +1628,10 @@ union tcp_md5sum_block {
 #endif
 };
 
-/* - pool: digest algorithm, hash description and scratch buffer */
-struct tcp_md5sig_pool {
-	struct ahash_request	*md5_req;
+struct tcp_sig_pool {
+	struct ahash_request	*req;
 	void			*scratch;
+#define SCRATCH_SIZE (sizeof(union tcp_md5sum_block) + sizeof(struct tcphdr))
 };
 
 /* - functions */
@@ -1671,18 +1671,24 @@ tcp_md5_do_lookup(const struct sock *sk, int l3index,
 #define tcp_twsk_md5_key(twsk) NULL
 #endif
 
-bool tcp_md5sig_pool_alloc(void);
-bool tcp_md5sig_pool_ready(void);
-struct tcp_md5sig_pool *tcp_md5sig_pool_get(void);
-static inline void tcp_md5sig_pool_put(void)
+/* TCP MD5 supports only one hash function, set MD5 id in stone
+ * to avoid needless storing MD5 id in (struct tcp_md5sig_info).
+ */
+#define TCP_MD5_SIG_ID	0
+
+int tcp_sig_pool_alloc(const char *alg);
+bool tcp_sig_pool_ready(unsigned int id);
+
+int tcp_sig_pool_get(unsigned int id, struct tcp_sig_pool *tsp);
+static inline void tcp_sig_pool_put(void)
 {
 	local_bh_enable();
 }
 
-int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *, const struct sk_buff *,
+int tcp_md5_hash_skb_data(struct tcp_sig_pool *, const struct sk_buff *,
 			  unsigned int header_len);
-int tcp_md5_hash_key(struct tcp_md5sig_pool *hp,
+int tcp_md5_hash_key(struct tcp_sig_pool *hp,
 		     const struct tcp_md5sig_key *key);
 
 /* From tcp_fastopen.c */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 8d8692fc9cd5..8307c7b91d09 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -4253,32 +4253,79 @@ int tcp_getsockopt(struct sock *sk, int level, int optname, char __user *optval,
 EXPORT_SYMBOL(tcp_getsockopt);
 
 #ifdef CONFIG_TCP_MD5SIG
-static DEFINE_PER_CPU(struct tcp_md5sig_pool, tcp_md5sig_pool);
-static DEFINE_MUTEX(tcp_md5sig_mutex);
-static bool tcp_md5sig_pool_populated = false;
+struct tcp_sig_crypto {
+	struct ahash_request * __percpu	*req;
+	const char			*alg;
+	bool				ready;
+};
+
+#define TCP_SIG_POOL_MAX	8
+static struct tcp_sig_pool_priv_t {
+	struct tcp_sig_crypto	cryptos[TCP_SIG_POOL_MAX];
+	unsigned int		cryptos_nr;
+} tcp_sig_pool_priv = {
+	.cryptos_nr = 1,
+	.cryptos[TCP_MD5_SIG_ID].alg = "md5",
+};
+
+static DEFINE_PER_CPU(void *, tcp_sig_pool_scratch);
+static DEFINE_MUTEX(tcp_sig_pool_mutex);
 
-static void __tcp_md5sig_pool_alloc(void)
+static int tcp_sig_pool_alloc_scratch(void)
 {
-	struct crypto_ahash *hash;
 	int cpu;
 
-	hash = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(hash))
-		return;
+	for_each_possible_cpu(cpu) {
+		void *scratch = per_cpu(tcp_sig_pool_scratch, cpu);
+
+		if (scratch)
+			continue;
+
+		scratch = kmalloc_node(SCRATCH_SIZE, GFP_KERNEL,
+				       cpu_to_node(cpu));
+		if (!scratch)
+			return -ENOMEM;
+		per_cpu(tcp_sig_pool_scratch, cpu) = scratch;
+	}
+	return 0;
+}
+
+bool tcp_sig_pool_ready(unsigned
int id) +{ + return tcp_sig_pool_priv.cryptos[id].ready; +} +EXPORT_SYMBOL(tcp_sig_pool_ready); + +static int __tcp_sig_pool_alloc(struct tcp_sig_crypto *crypto, const char *alg) +{ + struct crypto_ahash *hash; + bool needs_key; + int cpu, ret = -ENOMEM; + + crypto->alg = kstrdup(alg, GFP_KERNEL); + if (!crypto->alg) + return -ENOMEM; + + crypto->req = alloc_percpu(struct ahash_request *); + if (!crypto->req) + goto out_free_alg; + + hash = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(hash)) { + ret = PTR_ERR(hash); + goto out_free_req; + } + + /* If hash has .setkey(), allocate ahash per-cpu, not only request */ + needs_key = crypto_ahash_get_flags(hash) & CRYPTO_TFM_NEED_KEY; for_each_possible_cpu(cpu) { - void *scratch = per_cpu(tcp_md5sig_pool, cpu).scratch; struct ahash_request *req; - if (!scratch) { - scratch = kmalloc_node(sizeof(union tcp_md5sum_block) + - sizeof(struct tcphdr), - GFP_KERNEL, - cpu_to_node(cpu)); - if (!scratch) - goto out_free; - per_cpu(tcp_md5sig_pool, cpu).scratch = scratch; - } + if (hash == NULL) + hash = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(hash)) + goto out_free; req = ahash_request_alloc(hash, GFP_KERNEL); if (!req) @@ -4286,75 +4333,106 @@ static void __tcp_md5sig_pool_alloc(void) ahash_request_set_callback(req, 0, NULL, NULL); - per_cpu(tcp_md5sig_pool, cpu).md5_req = req; + *per_cpu_ptr(crypto->req, cpu) = req; + + if (needs_key) + hash = NULL; } - /* before setting tcp_md5sig_pool_populated, we must commit all writes - * to memory. See smp_rmb() in tcp_md5sig_pool_get() + /* before setting crypto->ready, we must commit all writes + * to memory. 
See smp_rmb() in tcp_sig_pool_get() */ smp_wmb(); - tcp_md5sig_pool_populated = true; - return; + crypto->ready = true; + return 0; out_free: + if (!IS_ERR_OR_NULL(hash) && needs_key) + crypto_free_ahash(hash); + for_each_possible_cpu(cpu) { - if (per_cpu(tcp_md5sig_pool, cpu).md5_req == NULL) + if (*per_cpu_ptr(crypto->req, cpu) == NULL) break; - ahash_request_free(per_cpu(tcp_md5sig_pool, cpu).md5_req); - per_cpu(tcp_md5sig_pool, cpu).md5_req = NULL; + hash = crypto_ahash_reqtfm(*per_cpu_ptr(crypto->req, cpu)); + ahash_request_free(*per_cpu_ptr(crypto->req, cpu)); + if (needs_key) { + crypto_free_ahash(hash); + hash = NULL; + } } - crypto_free_ahash(hash); + + if (hash) + crypto_free_ahash(hash); +out_free_req: + free_percpu(crypto->req); +out_free_alg: + kfree(crypto->alg); + return ret; } -bool tcp_md5sig_pool_alloc(void) +int tcp_sig_pool_alloc(const char *alg) { - if (unlikely(!tcp_md5sig_pool_populated)) { - mutex_lock(&tcp_md5sig_mutex); + unsigned int i, err; - if (!tcp_md5sig_pool_populated) { - __tcp_md5sig_pool_alloc(); - if (tcp_md5sig_pool_populated) - static_branch_inc(&tcp_md5_needed); - } + /* slow-path: once per setsockopt() */ + mutex_lock(&tcp_sig_pool_mutex); - mutex_unlock(&tcp_md5sig_mutex); + err = tcp_sig_pool_alloc_scrath(); + if (err) + goto out; + + for (i = 0; i < tcp_sig_pool_priv.cryptos_nr; i++) { + if (!strcmp(tcp_sig_pool_priv.cryptos[i].alg, alg)) + break; } - return tcp_md5sig_pool_populated; -} -EXPORT_SYMBOL(tcp_md5sig_pool_alloc); -bool tcp_md5sig_pool_ready(void) -{ - return tcp_md5sig_pool_populated; + if (i >= TCP_SIG_POOL_MAX) { + i = -ENOSPC; + goto out; + } + + if (tcp_sig_pool_priv.cryptos[i].ready) + goto out; + + err = __tcp_sig_pool_alloc(&tcp_sig_pool_priv.cryptos[i], alg); + if (!err && i == TCP_MD5_SIG_ID) + static_branch_inc(&tcp_md5_needed); + +out: + mutex_unlock(&tcp_sig_pool_mutex); + return err ?: i; } -EXPORT_SYMBOL(tcp_md5sig_pool_ready); +EXPORT_SYMBOL(tcp_sig_pool_alloc); /** - * tcp_md5sig_pool_get - 
get md5sig_pool for this user + * tcp_sig_pool_get - get tcp_sig_pool for this user * * We use percpu structure, so if we succeed, we exit with preemption * and BH disabled, to make sure another thread or softirq handling * wont try to get same context. */ -struct tcp_md5sig_pool *tcp_md5sig_pool_get(void) +int tcp_sig_pool_get(unsigned int id, struct tcp_sig_pool *tsp) { local_bh_disable(); - if (tcp_md5sig_pool_populated) { - /* coupled with smp_wmb() in __tcp_md5sig_pool_alloc() */ - smp_rmb(); - return this_cpu_ptr(&tcp_md5sig_pool); + if (id > tcp_sig_pool_priv.cryptos_nr || !tcp_sig_pool_ready(id)) { + local_bh_enable(); + return -EINVAL; } - local_bh_enable(); - return NULL; + + /* coupled with smp_wmb() in __tcp_sig_pool_alloc() */ + smp_rmb(); + tsp->req = *this_cpu_ptr(tcp_sig_pool_priv.cryptos[id].req); + tsp->scratch = this_cpu_ptr(&tcp_sig_pool_scratch); + return 0; } -EXPORT_SYMBOL(tcp_md5sig_pool_get); +EXPORT_SYMBOL(tcp_sig_pool_get); -int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, +int tcp_md5_hash_skb_data(struct tcp_sig_pool *hp, const struct sk_buff *skb, unsigned int header_len) { struct scatterlist sg; const struct tcphdr *tp = tcp_hdr(skb); - struct ahash_request *req = hp->md5_req; + struct ahash_request *req = hp->req; unsigned int i; const unsigned int head_data_len = skb_headlen(skb) > header_len ? 
skb_headlen(skb) - header_len : 0; @@ -4388,16 +4466,16 @@ int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, } EXPORT_SYMBOL(tcp_md5_hash_skb_data); -int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, const struct tcp_md5sig_key *key) +int tcp_md5_hash_key(struct tcp_sig_pool *hp, const struct tcp_md5sig_key *key) { u8 keylen = READ_ONCE(key->keylen); /* paired with WRITE_ONCE() in tcp_md5_do_add */ struct scatterlist sg; sg_init_one(&sg, key->key, keylen); - ahash_request_set_crypt(hp->md5_req, &sg, NULL, keylen); + ahash_request_set_crypt(hp->req, &sg, NULL, keylen); /* We use data_race() because tcp_md5_do_add() might change key->key under us */ - return data_race(crypto_ahash_update(hp->md5_req)); + return data_race(crypto_ahash_update(hp->req)); } EXPORT_SYMBOL(tcp_md5_hash_key); diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 44db9afa17fc..7e18806c6d82 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1190,7 +1190,7 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO); if (!key) return -ENOMEM; - if (WARN_ON_ONCE(!tcp_md5sig_pool_ready())) { + if (WARN_ON_ONCE(!tcp_sig_pool_ready(TCP_MD5_SIG_ID))) { sock_kfree_s(sk, key, sizeof(*key)); return -ENOMEM; } @@ -1249,6 +1249,7 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname, u8 prefixlen = 32; int l3index = 0; u8 flags; + int err; if (optlen < sizeof(cmd)) return -EINVAL; @@ -1294,14 +1295,15 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname, if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN) return -EINVAL; - if (!tcp_md5sig_pool_alloc()) - return -ENOMEM; + err = tcp_sig_pool_alloc("md5"); + if (err < 0 || WARN_ON_ONCE(err != TCP_MD5_SIG_ID)) + return err; return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, flags, cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); } -static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp, +static int tcp_v4_md5_hash_headers(struct tcp_sig_pool 
*hp, __be32 daddr, __be32 saddr, const struct tcphdr *th, int nbytes) { @@ -1321,37 +1323,36 @@ static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp, _th->check = 0; sg_init_one(&sg, bp, sizeof(*bp) + sizeof(*th)); - ahash_request_set_crypt(hp->md5_req, &sg, NULL, + ahash_request_set_crypt(hp->req, &sg, NULL, sizeof(*bp) + sizeof(*th)); - return crypto_ahash_update(hp->md5_req); + return crypto_ahash_update(hp->req); } static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key, __be32 daddr, __be32 saddr, const struct tcphdr *th) { - struct tcp_md5sig_pool *hp; + struct tcp_sig_pool hp; struct ahash_request *req; - hp = tcp_md5sig_pool_get(); - if (!hp) + if (tcp_sig_pool_get(TCP_MD5_SIG_ID, &hp)) goto clear_hash_noput; - req = hp->md5_req; + req = hp.req; if (crypto_ahash_init(req)) goto clear_hash; - if (tcp_v4_md5_hash_headers(hp, daddr, saddr, th, th->doff << 2)) + if (tcp_v4_md5_hash_headers(&hp, daddr, saddr, th, th->doff << 2)) goto clear_hash; - if (tcp_md5_hash_key(hp, key)) + if (tcp_md5_hash_key(&hp, key)) goto clear_hash; ahash_request_set_crypt(req, NULL, md5_hash, 0); if (crypto_ahash_final(req)) goto clear_hash; - tcp_md5sig_pool_put(); + tcp_sig_pool_put(); return 0; clear_hash: - tcp_md5sig_pool_put(); + tcp_sig_pool_put(); clear_hash_noput: memset(md5_hash, 0, 16); return 1; @@ -1361,7 +1362,7 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key, const struct sock *sk, const struct sk_buff *skb) { - struct tcp_md5sig_pool *hp; + struct tcp_sig_pool hp; struct ahash_request *req; const struct tcphdr *th = tcp_hdr(skb); __be32 saddr, daddr; @@ -1375,29 +1376,28 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key, daddr = iph->daddr; } - hp = tcp_md5sig_pool_get(); - if (!hp) + if (tcp_sig_pool_get(TCP_MD5_SIG_ID, &hp)) goto clear_hash_noput; - req = hp->md5_req; + req = hp.req; if (crypto_ahash_init(req)) goto clear_hash; - if (tcp_v4_md5_hash_headers(hp, daddr, saddr, th, 
skb->len)) + if (tcp_v4_md5_hash_headers(&hp, daddr, saddr, th, skb->len)) goto clear_hash; - if (tcp_md5_hash_skb_data(hp, skb, th->doff << 2)) + if (tcp_md5_hash_skb_data(&hp, skb, th->doff << 2)) goto clear_hash; - if (tcp_md5_hash_key(hp, key)) + if (tcp_md5_hash_key(&hp, key)) goto clear_hash; ahash_request_set_crypt(req, NULL, md5_hash, 0); if (crypto_ahash_final(req)) goto clear_hash; - tcp_md5sig_pool_put(); + tcp_sig_pool_put(); return 0; clear_hash: - tcp_md5sig_pool_put(); + tcp_sig_pool_put(); clear_hash_noput: memset(md5_hash, 0, 16); return 1; diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index c99cdb529902..7bfc5a42ce98 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -293,10 +293,10 @@ void tcp_time_wait(struct sock *sk, int state, int timeo) tcptw->tw_md5_key = NULL; if (static_branch_unlikely(&tcp_md5_needed)) { struct tcp_md5sig_key *key; - bool err = WARN_ON(!tcp_md5sig_pool_ready()); + bool err = !tcp_sig_pool_ready(TCP_MD5_SIG_ID); key = tp->af_specific->md5_lookup(sk, sk); - if (key && !err) { + if (key && !WARN_ON(err)) { tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC); WARN_ON_ONCE(tcptw->tw_md5_key == NULL); } diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 9147f9f69196..0abc28937058 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -603,6 +603,7 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, int l3index = 0; u8 prefixlen; u8 flags; + int err; if (optlen < sizeof(cmd)) return -EINVAL; @@ -654,8 +655,9 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN) return -EINVAL; - if (!tcp_md5sig_pool_alloc()) - return -ENOMEM; + err = tcp_sig_pool_alloc("md5"); + if (err < 0 || WARN_ON_ONCE(err != TCP_MD5_SIG_ID)) + return err; if (ipv6_addr_v4mapped(&sin6->sin6_addr)) return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3], @@ -668,7 +670,7 @@ static int 
tcp_v6_parse_md5_keys(struct sock *sk, int optname, cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); } -static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp, +static int tcp_v6_md5_hash_headers(struct tcp_sig_pool *hp, const struct in6_addr *daddr, const struct in6_addr *saddr, const struct tcphdr *th, int nbytes) @@ -689,38 +691,37 @@ static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp, _th->check = 0; sg_init_one(&sg, bp, sizeof(*bp) + sizeof(*th)); - ahash_request_set_crypt(hp->md5_req, &sg, NULL, + ahash_request_set_crypt(hp->req, &sg, NULL, sizeof(*bp) + sizeof(*th)); - return crypto_ahash_update(hp->md5_req); + return crypto_ahash_update(hp->req); } static int tcp_v6_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key, const struct in6_addr *daddr, struct in6_addr *saddr, const struct tcphdr *th) { - struct tcp_md5sig_pool *hp; + struct tcp_sig_pool hp; struct ahash_request *req; - hp = tcp_md5sig_pool_get(); - if (!hp) + if (tcp_sig_pool_get(TCP_MD5_SIG_ID, &hp)) goto clear_hash_noput; - req = hp->md5_req; + req = hp.req; if (crypto_ahash_init(req)) goto clear_hash; - if (tcp_v6_md5_hash_headers(hp, daddr, saddr, th, th->doff << 2)) + if (tcp_v6_md5_hash_headers(&hp, daddr, saddr, th, th->doff << 2)) goto clear_hash; - if (tcp_md5_hash_key(hp, key)) + if (tcp_md5_hash_key(&hp, key)) goto clear_hash; ahash_request_set_crypt(req, NULL, md5_hash, 0); if (crypto_ahash_final(req)) goto clear_hash; - tcp_md5sig_pool_put(); + tcp_sig_pool_put(); return 0; clear_hash: - tcp_md5sig_pool_put(); + tcp_sig_pool_put(); clear_hash_noput: memset(md5_hash, 0, 16); return 1; @@ -732,7 +733,7 @@ static int tcp_v6_md5_hash_skb(char *md5_hash, const struct sk_buff *skb) { const struct in6_addr *saddr, *daddr; - struct tcp_md5sig_pool *hp; + struct tcp_sig_pool hp; struct ahash_request *req; const struct tcphdr *th = tcp_hdr(skb); @@ -745,29 +746,28 @@ static int tcp_v6_md5_hash_skb(char *md5_hash, daddr = &ip6h->daddr; } - hp = tcp_md5sig_pool_get(); - 
if (!hp) + if (tcp_sig_pool_get(TCP_MD5_SIG_ID, &hp)) goto clear_hash_noput; - req = hp->md5_req; + req = hp.req; if (crypto_ahash_init(req)) goto clear_hash; - if (tcp_v6_md5_hash_headers(hp, daddr, saddr, th, skb->len)) + if (tcp_v6_md5_hash_headers(&hp, daddr, saddr, th, skb->len)) goto clear_hash; - if (tcp_md5_hash_skb_data(hp, skb, th->doff << 2)) + if (tcp_md5_hash_skb_data(&hp, skb, th->doff << 2)) goto clear_hash; - if (tcp_md5_hash_key(hp, key)) + if (tcp_md5_hash_key(&hp, key)) goto clear_hash; ahash_request_set_crypt(req, NULL, md5_hash, 0); if (crypto_ahash_final(req)) goto clear_hash; - tcp_md5sig_pool_put(); + tcp_sig_pool_put(); return 0; clear_hash: - tcp_md5sig_pool_put(); + tcp_sig_pool_put(); clear_hash_noput: memset(md5_hash, 0, 16); return 1;