From patchwork Tue Jul 26 20:15:55 2022 X-Patchwork-Submitter: Dmitry Safonov X-Patchwork-Id: 12929792 X-Patchwork-Delegate: kuba@kernel.org From: Dmitry Safonov To: linux-kernel@vger.kernel.org Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov , Andy Lutomirski , Ard Biesheuvel , David Ahern , "David S.
Miller" , Eric Biggers , Eric Dumazet , Francesco Ruggeri , Herbert Xu , Hideaki YOSHIFUJI , Jakub Kicinski , Leonard Crestez , Paolo Abeni , Salam Noureddine , netdev@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH 1/6] crypto: Introduce crypto_pool Date: Tue, 26 Jul 2022 21:15:55 +0100 Message-Id: <20220726201600.1715505-2-dima@arista.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220726201600.1715505-1-dima@arista.com> References: <20220726201600.1715505-1-dima@arista.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Introduce a per-CPU pool of async crypto requests that can be used in bh-disabled contexts (designed with net RX/TX softirqs as users in mind). Allocation can sleep and is a slow-path. Initial implementation has only ahash as a backend and a fix-sized array of possible algorithms used in parallel. Signed-off-by: Dmitry Safonov --- crypto/Kconfig | 6 + crypto/Makefile | 1 + crypto/crypto_pool.c | 287 ++++++++++++++++++++++++++++++++++++++++++ include/crypto/pool.h | 34 +++++ 4 files changed, 328 insertions(+) create mode 100644 crypto/crypto_pool.c create mode 100644 include/crypto/pool.h diff --git a/crypto/Kconfig b/crypto/Kconfig index bb427a835e44..aeddaa3dcc77 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -2128,6 +2128,12 @@ config CRYPTO_STATS config CRYPTO_HASH_INFO bool +config CRYPTO_POOL + tristate "Per-CPU crypto pool" + default n + help + Per-CPU pool of crypto requests ready for usage in atomic contexts. + source "drivers/crypto/Kconfig" source "crypto/asymmetric_keys/Kconfig" source "certs/Kconfig" diff --git a/crypto/Makefile b/crypto/Makefile index 167c004dbf4f..6d1d9801b76b 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -63,6 +63,7 @@ obj-$(CONFIG_CRYPTO_ACOMP2) += crypto_acompress.o cryptomgr-y := algboss.o testmgr.o obj-$(CONFIG_CRYPTO_MANAGER2) += cryptomgr.o +obj-$(CONFIG_CRYPTO_POOL) += crypto_pool.o obj-$(CONFIG_CRYPTO_USER) += crypto_user.o crypto_user-y := crypto_user_base.o crypto_user-$(CONFIG_CRYPTO_STATS) += crypto_user_stat.o diff --git a/crypto/crypto_pool.c b/crypto/crypto_pool.c new file mode 100644 index 000000000000..c668c02499b7 --- /dev/null +++ b/crypto/crypto_pool.c @@ -0,0 +1,287 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#include +#include +#include +#include +#include +#include + +static unsigned long scratch_size = DEFAULT_CRYPTO_POOL_SCRATCH_SZ; +static DEFINE_PER_CPU(void *, crypto_pool_scratch); + +struct crypto_pool_entry { + struct ahash_request * __percpu *req; + const char *alg; + struct kref kref; + bool needs_key; +}; + +#define CPOOL_SIZE (PAGE_SIZE/sizeof(struct crypto_pool_entry)) +static struct crypto_pool_entry cpool[CPOOL_SIZE]; +static int last_allocated; +static DEFINE_MUTEX(cpool_mutex); + +static int crypto_pool_scratch_alloc(void) +{ + int cpu; + + lockdep_assert_held(&cpool_mutex); + + for_each_possible_cpu(cpu) { + void *scratch = per_cpu(crypto_pool_scratch, cpu); + + if (scratch) + continue; + + scratch = kmalloc_node(scratch_size, GFP_KERNEL, + cpu_to_node(cpu)); + if (!scratch) + return -ENOMEM; + per_cpu(crypto_pool_scratch, cpu) = scratch; + } + return 0; +} + +static void crypto_pool_scratch_free(void) +{ + int cpu; + + lockdep_assert_held(&cpool_mutex); + + for_each_possible_cpu(cpu) { + void *scratch = per_cpu(crypto_pool_scratch, cpu); + + if (!scratch) + continue; + per_cpu(crypto_pool_scratch, cpu) = NULL; + kfree(scratch); + } +} + +static int __cpool_alloc_ahash(struct crypto_pool_entry *e, const char *alg) +{ + struct 
crypto_ahash *hash; + int cpu, ret = -ENOMEM; + + e->alg = kstrdup(alg, GFP_KERNEL); + if (!e->alg) + return -ENOMEM; + + e->req = alloc_percpu(struct ahash_request *); + if (!e->req) + goto out_free_alg; + + hash = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(hash)) { + ret = PTR_ERR(hash); + goto out_free_req; + } + + /* If hash has .setkey(), allocate ahash per-cpu, not only request */ + e->needs_key = crypto_ahash_get_flags(hash) & CRYPTO_TFM_NEED_KEY; + + for_each_possible_cpu(cpu) { + struct ahash_request *req; + + if (!hash) + hash = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC); + if (IS_ERR(hash)) + goto out_free; + + req = ahash_request_alloc(hash, GFP_KERNEL); + if (!req) + goto out_free; + + ahash_request_set_callback(req, 0, NULL, NULL); + + *per_cpu_ptr(e->req, cpu) = req; + + if (e->needs_key) + hash = NULL; + } + kref_init(&e->kref); + return 0; + +out_free: + if (!IS_ERR_OR_NULL(hash) && e->needs_key) + crypto_free_ahash(hash); + + for_each_possible_cpu(cpu) { + if (*per_cpu_ptr(e->req, cpu) == NULL) + break; + hash = crypto_ahash_reqtfm(*per_cpu_ptr(e->req, cpu)); + ahash_request_free(*per_cpu_ptr(e->req, cpu)); + if (e->needs_key) { + crypto_free_ahash(hash); + hash = NULL; + } + } + + if (hash) + crypto_free_ahash(hash); +out_free_req: + free_percpu(e->req); +out_free_alg: + kfree(e->alg); + e->alg = NULL; + return ret; +} + +/** + * crypto_pool_alloc_ahash - allocates pool for ahash requests + * @alg: name of async hash algorithm + */ +int crypto_pool_alloc_ahash(const char *alg) +{ + unsigned int i; + int err; + + /* slow-path */ + mutex_lock(&cpool_mutex); + err = crypto_pool_scratch_alloc(); + if (err) + goto out; + + for (i = 0; i < last_allocated; i++) { + if (cpool[i].alg && !strcmp(cpool[i].alg, alg)) { + kref_get(&cpool[i].kref); + goto out; + } + } + + for (i = 0; i < last_allocated; i++) { + if (!cpool[i].alg) + break; + } + if (i >= CPOOL_SIZE) { + err = -ENOSPC; + goto out; + } + + err = __cpool_alloc_ahash(&cpool[i], alg); + if (!err && last_allocated <= i) + last_allocated++; +out: + mutex_unlock(&cpool_mutex); + return err ?: (int)i; +} +EXPORT_SYMBOL_GPL(crypto_pool_alloc_ahash); + +static void __cpool_free_entry(struct crypto_pool_entry *e) +{ + struct crypto_ahash *hash = NULL; + int cpu; + + for_each_possible_cpu(cpu) { + if (*per_cpu_ptr(e->req, cpu) == NULL) + continue; + + hash = crypto_ahash_reqtfm(*per_cpu_ptr(e->req, cpu)); + ahash_request_free(*per_cpu_ptr(e->req, cpu)); + if (e->needs_key) { + crypto_free_ahash(hash); + hash = NULL; + } + } + if (hash) + crypto_free_ahash(hash); + free_percpu(e->req); + kfree(e->alg); + memset(e, 0, sizeof(*e)); +} + +static void cpool_cleanup_work_cb(struct work_struct *work) +{ + unsigned int i; + bool free_scratch = true; + + mutex_lock(&cpool_mutex); + for (i = 0; i < last_allocated; i++) { + if (kref_read(&cpool[i].kref) > 0) { + free_scratch = false; + continue; + } + if (!cpool[i].alg) + continue; + __cpool_free_entry(&cpool[i]); + } + if (free_scratch) + crypto_pool_scratch_free(); + mutex_unlock(&cpool_mutex); +} + +static DECLARE_WORK(cpool_cleanup_work, cpool_cleanup_work_cb); +static void cpool_schedule_cleanup(struct kref *kref) +{ + schedule_work(&cpool_cleanup_work); +} + +/** + * crypto_pool_release - decreases number of users for a pool. If it was + * the last user of the pool, releases any memory that was consumed. 
+ * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash() + */ +void crypto_pool_release(unsigned int id) +{ + if (WARN_ON_ONCE(id > last_allocated || !cpool[id].alg)) + return; + + /* slow-path */ + kref_put(&cpool[id].kref, cpool_schedule_cleanup); +} +EXPORT_SYMBOL_GPL(crypto_pool_release); + +/** + * crypto_pool_add - increases number of users (refcounter) for a pool + * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash() + */ +void crypto_pool_add(unsigned int id) +{ + if (WARN_ON_ONCE(id > last_allocated || !cpool[id].alg)) + return; + kref_get(&cpool[id].kref); +} +EXPORT_SYMBOL_GPL(crypto_pool_add); + +/** + * crypto_pool_get - disable bh and start using crypto_pool + * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash() + * @c: returned crypto_pool for usage (uninitialized on failure) + */ +int crypto_pool_get(unsigned int id, struct crypto_pool *c) +{ + struct crypto_pool_ahash *ret = (struct crypto_pool_ahash *)c; + + local_bh_disable(); + if (WARN_ON_ONCE(id > last_allocated || !cpool[id].alg)) { + local_bh_enable(); + return -EINVAL; + } + ret->req = *this_cpu_ptr(cpool[id].req); + ret->base.scratch = this_cpu_read(crypto_pool_scratch); + return 0; +} +EXPORT_SYMBOL_GPL(crypto_pool_get); + +/** + * crypto_pool_algo - return algorithm of crypto_pool + * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash() + * @buf: buffer to return name of algorithm + * @buf_len: size of @buf + */ +size_t crypto_pool_algo(unsigned int id, char *buf, size_t buf_len) +{ + size_t ret = 0; + + /* slow-path */ + mutex_lock(&cpool_mutex); + if (cpool[id].alg) + ret = strscpy(buf, cpool[id].alg, buf_len); + mutex_unlock(&cpool_mutex); + return ret; +} +EXPORT_SYMBOL_GPL(crypto_pool_algo); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Per-CPU pool of crypto requests"); diff --git a/include/crypto/pool.h b/include/crypto/pool.h new file mode 100644 index 000000000000..2c61aa45faff --- /dev/null +++ b/include/crypto/pool.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _CRYPTO_POOL_H +#define _CRYPTO_POOL_H + +#include + +#define DEFAULT_CRYPTO_POOL_SCRATCH_SZ 128 + +struct crypto_pool { + void *scratch; +}; + +/* + * struct crypto_pool_ahash - per-CPU pool of ahash_requests + * @base: common members that can be used by any async crypto ops + * @req: pre-allocated ahash request + */ +struct crypto_pool_ahash { + struct crypto_pool base; + struct ahash_request *req; +}; + +int crypto_pool_alloc_ahash(const char *alg); +void crypto_pool_add(unsigned int id); +void crypto_pool_release(unsigned int id); + +int crypto_pool_get(unsigned int id, struct crypto_pool *c); +static inline void crypto_pool_put(void) +{ + local_bh_enable(); +} +size_t crypto_pool_algo(unsigned int id, char *buf, size_t buf_len); + +#endif /* _CRYPTO_POOL_H */ From patchwork Tue Jul 26 20:15:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Safonov X-Patchwork-Id: 12929793 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 13EB9C00144 for ; Tue, 26 Jul 2022 20:16:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239883AbiGZUQX (ORCPT ); Tue, 26 Jul 2022 16:16:23 -0400 Received: from 
lindbergh.monkeyblade.net ([23.128.96.19]:51796 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239838AbiGZUQQ (ORCPT ); Tue, 26 Jul 2022 16:16:16 -0400 Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com [IPv6:2a00:1450:4864:20::32d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3609231DC9 for ; Tue, 26 Jul 2022 13:16:11 -0700 (PDT) Received: by mail-wm1-x32d.google.com with SMTP id j29-20020a05600c1c1d00b003a2fdafdefbso19053wms.2 for ; Tue, 26 Jul 2022 13:16:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arista.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=GWR8XbBS1oy+lmPSjXOdFIQ7ZMLcTchWH7fRG8D5Oe4=; b=LyKNGMWZNujT6s9/QUWJ2JVBwtpyqqQwZJlDQaKuI+upIlpRhAy3Nj9WwePSSu8pUl yA3q3wn8gtvE5mz8rffc9gM4xJMJrCu8GPLJvcn6Mn+O9ZZAgFc3GCzL64tC11ct6TmK PtQjdgAMGzKrapZECA0xSYFxrtDcXAsfd/IzTcjArOvGuVkEP4LGMSY4fWdrEnyQW4fN J2BOgLkTRSIKF+kKYCweRPV0ABuWIEOrUjWwqN3/Aaegd6rAWAOw6wGm0RtJdgPX/pJv 0RLBc24wp3AJaUS1Jd7q4puaPs4XkhODkccEz8DTMVRjj1HEvj3e5YP5EqWuR+eX339l li5A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=GWR8XbBS1oy+lmPSjXOdFIQ7ZMLcTchWH7fRG8D5Oe4=; b=rFDuLJhTxc9jzMOD6aAJrKfb3mNYpDpaXHanLNVzhMeL91gvejPlx3Q7uERPmAm4nF VNeWOaALsyKmXZj5nPN4PGqf63tw+dm7+fscIzpicWPtY0hX95hXSwWwEai0mc6NcNQk w83itlQZo9zf4Vo1RVSH5t4b3Oqr8KVB3EjLNmjW8hzps9Qh6+i0wqN3QE2aoJe8ZQG+ DOBTguDro/1F6fs9ylLoZXeeIQy/lCEYkht9GncXxUZkdzTOiuGSjcIxrjbkgDf4yQP/ vzMhAtOUH+9tZd1V5hGf5QPJYJQgDLYIigouFHegNgY/NICKSfeFmCRr/J21gTCGm00l Pg3Q== X-Gm-Message-State: AJIora923qtVmrsGWP562hayJ8uvKbmCnuVI4NTCjBT1qPQhQeQWyNez 91yjkknZShoBeltX4uC9b+J6iA== X-Google-Smtp-Source: AGRyM1sTDJOnzvOn5GuKOaCCikjaJbFUgi0rTM+scygEVQdwUbMQzlcukhe0K/rXy0i8Vr8g6LcDsg== X-Received: by 2002:a7b:c7d1:0:b0:3a3:1890:3495 with SMTP id z17-20020a7bc7d1000000b003a318903495mr591785wmk.18.1658866569657; Tue, 26 Jul 2022 13:16:09 -0700 (PDT) Received: from Mindolluin.ire.aristanetworks.com ([217.173.96.166]) by smtp.gmail.com with ESMTPSA id m6-20020a05600c3b0600b003a320e6f011sm28073wms.1.2022.07.26.13.16.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Jul 2022 13:16:09 -0700 (PDT) From: Dmitry Safonov To: linux-kernel@vger.kernel.org Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov , Andy Lutomirski , Ard Biesheuvel , David Ahern , "David S. Miller" , Eric Biggers , Eric Dumazet , Francesco Ruggeri , Herbert Xu , Hideaki YOSHIFUJI , Jakub Kicinski , Leonard Crestez , Paolo Abeni , Salam Noureddine , netdev@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH 2/6] crypto_pool: Add crypto_pool_reserve_scratch() Date: Tue, 26 Jul 2022 21:15:56 +0100 Message-Id: <20220726201600.1715505-3-dima@arista.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220726201600.1715505-1-dima@arista.com> References: <20220726201600.1715505-1-dima@arista.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Instead of having build-time hardcoded constant, reallocate scratch area, if needed by user. Different algos, different users may need different size of temp per-CPU buffer. Only up-sizing supported for simplicity. 
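For illustration only (not part of the patch): a hypothetical module could combine the new crypto_pool_reserve_scratch() with the pool API from the previous patch roughly as below. The scratch size and the example_* names are assumptions made for this sketch.

#include <crypto/pool.h>

/* Illustrative only: size the shared per-CPU scratch for this user's
 * pseudo-header, then allocate (or reuse) an "md5" request pool.
 */
#define EXAMPLE_SCRATCH_SIZE	64	/* assumed size, not from the patch */

static int example_pool_id;

static int example_init(void)
{
	int ret;

	/* Only up-sizing; a smaller size still ensures per-CPU buffers exist. */
	ret = crypto_pool_reserve_scratch(EXAMPLE_SCRATCH_SIZE);
	if (ret)
		return ret;

	ret = crypto_pool_alloc_ahash("md5");	/* returns pool id or -errno */
	if (ret < 0)
		return ret;
	example_pool_id = ret;			/* id is shared by all "md5" users */
	return 0;
}

static void example_exit(void)
{
	crypto_pool_release(example_pool_id);
}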
Signed-off-by: Dmitry Safonov --- crypto/Kconfig | 6 ++++ crypto/crypto_pool.c | 65 +++++++++++++++++++++++++++++++------------ include/crypto/pool.h | 3 +- 3 files changed, 54 insertions(+), 20 deletions(-) diff --git a/crypto/Kconfig b/crypto/Kconfig index aeddaa3dcc77..e5865be483be 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -2134,6 +2134,12 @@ config CRYPTO_POOL help Per-CPU pool of crypto requests ready for usage in atomic contexts. +config CRYPTO_POOL_DEFAULT_SCRATCH_SIZE + hex "Per-CPU default scratch area size" + depends on CRYPTO_POOL + default 0x100 + range 0x100 0x10000 + source "drivers/crypto/Kconfig" source "crypto/asymmetric_keys/Kconfig" source "certs/Kconfig" diff --git a/crypto/crypto_pool.c b/crypto/crypto_pool.c index c668c02499b7..8ad6415fa817 100644 --- a/crypto/crypto_pool.c +++ b/crypto/crypto_pool.c @@ -1,13 +1,14 @@ // SPDX-License-Identifier: GPL-2.0-or-later #include +#include #include #include #include #include #include -static unsigned long scratch_size = DEFAULT_CRYPTO_POOL_SCRATCH_SZ; +static unsigned long scratch_size = CONFIG_CRYPTO_POOL_DEFAULT_SCRATCH_SIZE; static DEFINE_PER_CPU(void *, crypto_pool_scratch); struct crypto_pool_entry { @@ -19,28 +20,60 @@ struct crypto_pool_entry { #define CPOOL_SIZE (PAGE_SIZE/sizeof(struct crypto_pool_entry)) static struct crypto_pool_entry cpool[CPOOL_SIZE]; -static int last_allocated; +static unsigned int last_allocated; static DEFINE_MUTEX(cpool_mutex); -static int crypto_pool_scratch_alloc(void) +static void __set_scratch(void *scratch) { - int cpu; + kfree(this_cpu_read(crypto_pool_scratch)); + this_cpu_write(crypto_pool_scratch, scratch); +} - lockdep_assert_held(&cpool_mutex); +/* Slow-path */ +/** + * crypto_pool_reserve_scratch - re-allocates scratch buffer, slow-path + * @size: request size for the scratch/temp buffer + */ +int crypto_pool_reserve_scratch(unsigned long size) +{ + int cpu, err = 0; + mutex_lock(&cpool_mutex); + if (size <= scratch_size) { + for_each_possible_cpu(cpu) { + if (per_cpu(crypto_pool_scratch, cpu)) + continue; + goto allocate_scratch; + } + mutex_unlock(&cpool_mutex); + return 0; + } +allocate_scratch: + cpus_read_lock(); for_each_possible_cpu(cpu) { - void *scratch = per_cpu(crypto_pool_scratch, cpu); + void *scratch; - if (scratch) - continue; + scratch = kmalloc_node(size, GFP_KERNEL, cpu_to_node(cpu)); + if (!scratch) { + err = -ENOMEM; + break; + } - scratch = kmalloc_node(scratch_size, GFP_KERNEL, - cpu_to_node(cpu)); - if (!scratch) - return -ENOMEM; - per_cpu(crypto_pool_scratch, cpu) = scratch; + if (!cpu_online(cpu)) { + kfree(per_cpu(crypto_pool_scratch, cpu)); + per_cpu(crypto_pool_scratch, cpu) = scratch; + continue; + } + err = smp_call_function_single(cpu, __set_scratch, scratch, 1); + if (err) { + kfree(scratch); + break; + } } - return 0; + + cpus_read_unlock(); + mutex_unlock(&cpool_mutex); + return err; } static void crypto_pool_scratch_free(void) @@ -139,10 +172,6 @@ int crypto_pool_alloc_ahash(const char *alg) /* slow-path */ mutex_lock(&cpool_mutex); - err = crypto_pool_scratch_alloc(); - if (err) - goto out; - for (i = 0; i < last_allocated; i++) { if (cpool[i].alg && !strcmp(cpool[i].alg, alg)) { kref_get(&cpool[i].kref); diff --git a/include/crypto/pool.h b/include/crypto/pool.h index 2c61aa45faff..c7d817860cc3 100644 --- a/include/crypto/pool.h +++ b/include/crypto/pool.h @@ -4,8 +4,6 @@ #include -#define DEFAULT_CRYPTO_POOL_SCRATCH_SZ 128 - struct crypto_pool { void *scratch; }; @@ -20,6 +18,7 @@ struct crypto_pool_ahash { struct ahash_request *req; 
}; +int crypto_pool_reserve_scratch(unsigned long size); +int crypto_pool_alloc_ahash(const char *alg); +void crypto_pool_add(unsigned int id); +void crypto_pool_release(unsigned int id); From patchwork Tue Jul 26 20:15:57 2022 X-Patchwork-Submitter: Dmitry Safonov X-Patchwork-Id: 12929794 X-Patchwork-Delegate: kuba@kernel.org From: Dmitry Safonov To: linux-kernel@vger.kernel.org Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov , Andy Lutomirski , Ard Biesheuvel , David Ahern , "David S.
Miller" , Eric Biggers , Eric Dumazet , Francesco Ruggeri , Herbert Xu , Hideaki YOSHIFUJI , Jakub Kicinski , Leonard Crestez , Paolo Abeni , Salam Noureddine , netdev@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH 3/6] net/tcp: Separate tcp_md5sig_info allocation into tcp_md5sig_info_add() Date: Tue, 26 Jul 2022 21:15:57 +0100 Message-Id: <20220726201600.1715505-4-dima@arista.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220726201600.1715505-1-dima@arista.com> References: <20220726201600.1715505-1-dima@arista.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Add a helper to allocate tcp_md5sig_info, that will help later to do/allocate things when info allocated, once per socket. Signed-off-by: Dmitry Safonov --- net/ipv4/tcp_ipv4.c | 30 +++++++++++++++++++++--------- 1 file changed, 21 insertions(+), 9 deletions(-) diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 0c83780dc9bf..55e4092209a5 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1152,6 +1152,24 @@ struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk, } EXPORT_SYMBOL(tcp_v4_md5_lookup); +static int tcp_md5sig_info_add(struct sock *sk, gfp_t gfp) +{ + struct tcp_sock *tp = tcp_sk(sk); + struct tcp_md5sig_info *md5sig; + + if (rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk))) + return 0; + + md5sig = kmalloc(sizeof(*md5sig), gfp); + if (!md5sig) + return -ENOMEM; + + sk_gso_disable(sk); + INIT_HLIST_HEAD(&md5sig->head); + rcu_assign_pointer(tp->md5sig_info, md5sig); + return 0; +} + /* This can be called on a newly created socket, from other files */ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, int family, u8 prefixlen, int l3index, u8 flags, @@ -1182,17 +1200,11 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, return 0; } + if (tcp_md5sig_info_add(sk, gfp)) + return -ENOMEM; + md5sig = rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk)); - if (!md5sig) { - md5sig = kmalloc(sizeof(*md5sig), gfp); - if (!md5sig) - return -ENOMEM; - - sk_gso_disable(sk); - INIT_HLIST_HEAD(&md5sig->head); - rcu_assign_pointer(tp->md5sig_info, md5sig); - } key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO); if (!key) From patchwork Tue Jul 26 20:15:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Safonov X-Patchwork-Id: 12929795 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96955C04A68 for ; Tue, 26 Jul 2022 20:16:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239908AbiGZUQf (ORCPT ); Tue, 26 Jul 2022 16:16:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239863AbiGZUQW (ORCPT ); Tue, 26 Jul 2022 16:16:22 -0400 Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com [IPv6:2a00:1450:4864:20::430]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 149A932B84 for ; Tue, 26 Jul 2022 13:16:14 -0700 (PDT) Received: by mail-wr1-x430.google.com with SMTP id g2so13550768wru.3 for ; Tue, 26 Jul 2022 13:16:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=arista.com; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=pmSO5o3Q44tqKwKFfKNMm0Ze8ef3CVpVV9CI9yXzviE=; b=D5lUTewwVLHHGjzMcNus5GsYhY5VOyJ5EbsBDumqrRq9zpSv8LL+NILNY+idc7V+jy gwssJlDW4770QDmmsH1d8Z2O7J/OOre7FLr4zdjpP6erd/N5/yx5S7DUdz/51yqjxP1F XkQpdIlAPcuuU9EK2Hp55iUVvyctYxw5oBCKEMSQJ7N5BNayUJNb8F5yjlmA8z28kUVP A/AzsOtOML497PIrxw9l3CfdJCNw9/gT3tP0R6Td5iUkT08C2NtmMEKETjiSeQZku+3s IoRefq1rlqZP+SVAhReLcNqCIZfD07wbzR0r9SkPpyEzb0AaR/oVqFPa8A4oqfVyQGOU pNtg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=pmSO5o3Q44tqKwKFfKNMm0Ze8ef3CVpVV9CI9yXzviE=; b=o5lzk3DN/UlK1LayBI4X5LqtWeEPIDbuwCb2aW7LWGNDL9NevMaZ67L7ms6Q6aamgb Wt55zY5Pc9xQtoUCigYldBSf/HBM1WwiY78jEjQ6TtE6/CURLX+a99U+/giMHyS3wvSn t/b1vsBJwxas9wnp94pIphhcRF1hEVtaOQBUAwjJsWeSTGQfSyCU8Fd9gj7cKk/na2hL /25k9eJmhac5oqVtO88xWlj2e1tklBTZ4Ha8X2TWKxAaIa1FsGyws61LmGlcrY5pMT4E xLvsdqCMi0pD98cSvyeqtKqV5+8xR+loXv/1fkoyWSCFvPZ34i++Y55cXQ6/xnRKkIAs kPcw== X-Gm-Message-State: AJIora/sBFlcSHUZrpvw1paoF/yRXz0UE7EEnqXtll5QOP2hvwOfAzHe bgeS0tV5E9FRYDXC0i3qaeZIJA== X-Google-Smtp-Source: AGRyM1uB79Y9Ri2seTNWcO7Gq0vlx1PC/AF/5zeWVVl++TFNfM6zhF2Jc/Q/fwlASGpGWU8eiPG99Q== X-Received: by 2002:a05:6000:986:b0:21e:939b:a71 with SMTP id by6-20020a056000098600b0021e939b0a71mr5780694wrb.256.1658866572371; Tue, 26 Jul 2022 13:16:12 -0700 (PDT) Received: from Mindolluin.ire.aristanetworks.com ([217.173.96.166]) by smtp.gmail.com with ESMTPSA id m6-20020a05600c3b0600b003a320e6f011sm28073wms.1.2022.07.26.13.16.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Jul 2022 13:16:11 -0700 (PDT) From: Dmitry Safonov To: linux-kernel@vger.kernel.org Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov , Andy Lutomirski , Ard Biesheuvel , David Ahern , "David S. Miller" , Eric Biggers , Eric Dumazet , Francesco Ruggeri , Herbert Xu , Hideaki YOSHIFUJI , Jakub Kicinski , Leonard Crestez , Paolo Abeni , Salam Noureddine , netdev@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH 4/6] net/tcp: Disable TCP-MD5 static key on tcp_md5sig_info destruction Date: Tue, 26 Jul 2022 21:15:58 +0100 Message-Id: <20220726201600.1715505-5-dima@arista.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220726201600.1715505-1-dima@arista.com> References: <20220726201600.1715505-1-dima@arista.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org To do that, separate two scenarios: - where it's the first MD5 key on the system, which means that enabling of the static key may need to sleep; - copying of an existing key from a listening socket to the request socket upon receiving a signed TCP segment, where static key was already enabled (when the key was added to the listening socket). Now the life-time of the static branch for TCP-MD5 is until: - last tcp_md5sig_info is destroyed - last socket in time-wait state with MD5 key is closed. Which means that after all sockets with TCP-MD5 keys are gone, the system gets back the performance of disabled md5-key static branch. 
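The resulting lifetime follows the usual deferred static-key pattern. For reference, a sketch of that pattern using the jump-label API exactly as the hunks below do; the example_needed key and helper names are illustrative only:

#include <linux/jump_label_ratelimit.h>

/* Deferred static key: enabled on the first user, disabled (rate-limited,
 * here to once per HZ) some time after the last user is gone.
 */
DEFINE_STATIC_KEY_DEFERRED_FALSE(example_needed, HZ);

static void example_first_user(void)		/* sleepable path, e.g. setsockopt() */
{
	static_branch_inc(&example_needed.key);
}

static void example_last_user_gone(void)	/* destruction path */
{
	static_branch_slow_dec_deferred(&example_needed);
}

static bool example_fast_path(void)
{
	/* Compiles to a patched-out branch while the key is disabled. */
	return static_branch_unlikely(&example_needed.key);
}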
Signed-off-by: Dmitry Safonov Reported-by: kernel test robot Reported-by: kernel test robot --- include/net/tcp.h | 10 ++++++--- net/ipv4/tcp.c | 5 +---- net/ipv4/tcp_ipv4.c | 45 +++++++++++++++++++++++++++++++--------- net/ipv4/tcp_minisocks.c | 9 +++++--- net/ipv4/tcp_output.c | 4 ++-- net/ipv6/tcp_ipv6.c | 10 ++++----- 6 files changed, 55 insertions(+), 28 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index f366626cbbba..aa735f963723 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1663,7 +1663,11 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key, const struct sock *sk, const struct sk_buff *skb); int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, int family, u8 prefixlen, int l3index, u8 flags, - const u8 *newkey, u8 newkeylen, gfp_t gfp); + const u8 *newkey, u8 newkeylen); +int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr, + int family, u8 prefixlen, int l3index, + struct tcp_md5sig_key *key); + int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family, u8 prefixlen, int l3index, u8 flags); struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk, @@ -1671,7 +1675,7 @@ struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk, #ifdef CONFIG_TCP_MD5SIG #include -extern struct static_key_false tcp_md5_needed; +extern struct static_key_false_deferred tcp_md5_needed; struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index, const union tcp_md5_addr *addr, int family); @@ -1679,7 +1683,7 @@ static inline struct tcp_md5sig_key * tcp_md5_do_lookup(const struct sock *sk, int l3index, const union tcp_md5_addr *addr, int family) { - if (!static_branch_unlikely(&tcp_md5_needed)) + if (!static_branch_unlikely(&tcp_md5_needed.key)) return NULL; return __tcp_md5_do_lookup(sk, l3index, addr, family); } diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 635c6782cdbb..300056008e90 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -4404,11 +4404,8 @@ bool tcp_alloc_md5sig_pool(void) if (unlikely(!tcp_md5sig_pool_populated)) { mutex_lock(&tcp_md5sig_mutex); - if (!tcp_md5sig_pool_populated) { + if (!tcp_md5sig_pool_populated) __tcp_alloc_md5sig_pool(); - if (tcp_md5sig_pool_populated) - static_branch_inc(&tcp_md5_needed); - } mutex_unlock(&tcp_md5sig_mutex); } diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 55e4092209a5..5b0caaea5029 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1044,7 +1044,7 @@ static void tcp_v4_reqsk_destructor(struct request_sock *req) * We need to maintain these in the sk structure. 
*/ -DEFINE_STATIC_KEY_FALSE(tcp_md5_needed); +DEFINE_STATIC_KEY_DEFERRED_FALSE(tcp_md5_needed, HZ); EXPORT_SYMBOL(tcp_md5_needed); static bool better_md5_match(struct tcp_md5sig_key *old, struct tcp_md5sig_key *new) @@ -1171,9 +1171,9 @@ static int tcp_md5sig_info_add(struct sock *sk, gfp_t gfp) } /* This can be called on a newly created socket, from other files */ -int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, - int family, u8 prefixlen, int l3index, u8 flags, - const u8 *newkey, u8 newkeylen, gfp_t gfp) +int __tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, + int family, u8 prefixlen, int l3index, u8 flags, + const u8 *newkey, u8 newkeylen, gfp_t gfp) { /* Add Key to the list */ struct tcp_md5sig_key *key; @@ -1200,9 +1200,6 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, return 0; } - if (tcp_md5sig_info_add(sk, gfp)) - return -ENOMEM; - md5sig = rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk)); @@ -1226,8 +1223,36 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, hlist_add_head_rcu(&key->node, &md5sig->head); return 0; } + +int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, + int family, u8 prefixlen, int l3index, u8 flags, + const u8 *newkey, u8 newkeylen) +{ + if (tcp_md5sig_info_add(sk, GFP_KERNEL)) + return -ENOMEM; + + static_branch_inc(&tcp_md5_needed.key); + + return __tcp_md5_do_add(sk, addr, family, prefixlen, l3index, flags, + newkey, newkeylen, GFP_KERNEL); +} EXPORT_SYMBOL(tcp_md5_do_add); +int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr, + int family, u8 prefixlen, int l3index, + struct tcp_md5sig_key *key) +{ + if (tcp_md5sig_info_add(sk, sk_gfp_mask(sk, GFP_ATOMIC))) + return -ENOMEM; + + atomic_inc(&tcp_md5_needed.key.key.enabled); + + return __tcp_md5_do_add(sk, addr, family, prefixlen, l3index, + key->flags, key->key, key->keylen, + sk_gfp_mask(sk, GFP_ATOMIC)); +} +EXPORT_SYMBOL(tcp_md5_key_copy); + int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family, u8 prefixlen, int l3index, u8 flags) { @@ -1314,7 +1339,7 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname, return -EINVAL; return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, flags, - cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); + cmd.tcpm_key, cmd.tcpm_keylen); } static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp, @@ -1571,8 +1596,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, * memory, then we end up not copying the key * across. Shucks. 
*/ - tcp_md5_do_add(newsk, addr, AF_INET, 32, l3index, key->flags, - key->key, key->keylen, GFP_ATOMIC); + tcp_md5_key_copy(newsk, addr, AF_INET, 32, l3index, key); sk_gso_disable(newsk); } #endif @@ -2260,6 +2284,7 @@ void tcp_v4_destroy_sock(struct sock *sk) tcp_clear_md5_list(sk); kfree_rcu(rcu_dereference_protected(tp->md5sig_info, 1), rcu); tp->md5sig_info = NULL; + static_branch_slow_dec_deferred(&tcp_md5_needed); } #endif diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index cb95d88497ae..5d475a45a478 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -291,13 +291,14 @@ void tcp_time_wait(struct sock *sk, int state, int timeo) */ do { tcptw->tw_md5_key = NULL; - if (static_branch_unlikely(&tcp_md5_needed)) { + if (static_branch_unlikely(&tcp_md5_needed.key)) { struct tcp_md5sig_key *key; key = tp->af_specific->md5_lookup(sk, sk); if (key) { tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC); BUG_ON(tcptw->tw_md5_key && !tcp_alloc_md5sig_pool()); + atomic_inc(&tcp_md5_needed.key.key.enabled); } } } while (0); @@ -337,11 +338,13 @@ EXPORT_SYMBOL(tcp_time_wait); void tcp_twsk_destructor(struct sock *sk) { #ifdef CONFIG_TCP_MD5SIG - if (static_branch_unlikely(&tcp_md5_needed)) { + if (static_branch_unlikely(&tcp_md5_needed.key)) { struct tcp_timewait_sock *twsk = tcp_twsk(sk); - if (twsk->tw_md5_key) + if (twsk->tw_md5_key) { kfree_rcu(twsk->tw_md5_key, rcu); + static_branch_slow_dec_deferred(&tcp_md5_needed); + } } #endif } diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 78b654ff421b..9e12845a8758 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -766,7 +766,7 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb, *md5 = NULL; #ifdef CONFIG_TCP_MD5SIG - if (static_branch_unlikely(&tcp_md5_needed) && + if (static_branch_unlikely(&tcp_md5_needed.key) && rcu_access_pointer(tp->md5sig_info)) { *md5 = tp->af_specific->md5_lookup(sk, sk); if (*md5) { @@ -922,7 +922,7 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb *md5 = NULL; #ifdef CONFIG_TCP_MD5SIG - if (static_branch_unlikely(&tcp_md5_needed) && + if (static_branch_unlikely(&tcp_md5_needed.key) && rcu_access_pointer(tp->md5sig_info)) { *md5 = tp->af_specific->md5_lookup(sk, sk); if (*md5) { diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index e54eee80ce5f..cb891a71db0d 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -658,12 +658,11 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, if (ipv6_addr_v4mapped(&sin6->sin6_addr)) return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3], AF_INET, prefixlen, l3index, flags, - cmd.tcpm_key, cmd.tcpm_keylen, - GFP_KERNEL); + cmd.tcpm_key, cmd.tcpm_keylen); return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr, AF_INET6, prefixlen, l3index, flags, - cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); + cmd.tcpm_key, cmd.tcpm_keylen); } static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp, @@ -1359,9 +1358,8 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff * * memory, then we end up not copying the key * across. Shucks. 
*/ - tcp_md5_do_add(newsk, (union tcp_md5_addr *)&newsk->sk_v6_daddr, - AF_INET6, 128, l3index, key->flags, key->key, key->keylen, - sk_gfp_mask(sk, GFP_ATOMIC)); + tcp_md5_key_copy(newsk, (union tcp_md5_addr *)&newsk->sk_v6_daddr, + AF_INET6, 128, l3index, key); } #endif From patchwork Tue Jul 26 20:15:59 2022 X-Patchwork-Submitter: Dmitry Safonov X-Patchwork-Id: 12929796 X-Patchwork-Delegate: kuba@kernel.org From: Dmitry Safonov To: linux-kernel@vger.kernel.org Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov , Andy Lutomirski , Ard Biesheuvel , David Ahern , "David S.
Miller" , Eric Biggers , Eric Dumazet , Francesco Ruggeri , Herbert Xu , Hideaki YOSHIFUJI , Jakub Kicinski , Leonard Crestez , Paolo Abeni , Salam Noureddine , netdev@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH 5/6] net/tcp: Use crypto_pool for TCP-MD5 Date: Tue, 26 Jul 2022 21:15:59 +0100 Message-Id: <20220726201600.1715505-6-dima@arista.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220726201600.1715505-1-dima@arista.com> References: <20220726201600.1715505-1-dima@arista.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Use crypto_pool API that was designed with tcp_md5sig_pool in mind. The conversion to use crypto_pool will allow: - to reuse ahash_request(s) for different users - to allocate only one per-CPU scratch buffer rather than a new one for each user - to have a common API for net/ users that need ahash on RX/TX fast path Signed-off-by: Dmitry Safonov --- include/net/tcp.h | 22 +++------ net/ipv4/Kconfig | 2 +- net/ipv4/tcp.c | 99 +++++++++++----------------------------- net/ipv4/tcp_ipv4.c | 90 +++++++++++++++++++++--------------- net/ipv4/tcp_minisocks.c | 22 +++++++-- net/ipv6/tcp_ipv6.c | 53 ++++++++++----------- 6 files changed, 127 insertions(+), 161 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index aa735f963723..7060dd1cf6cd 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1652,12 +1652,6 @@ union tcp_md5sum_block { #endif }; -/* - pool: digest algorithm, hash description and scratch buffer */ -struct tcp_md5sig_pool { - struct ahash_request *md5_req; - void *scratch; -}; - /* - functions */ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key, const struct sock *sk, const struct sk_buff *skb); @@ -1713,17 +1707,15 @@ tcp_inbound_md5_hash(const struct sock *sk, const struct sk_buff *skb, #define tcp_twsk_md5_key(twsk) NULL #endif -bool tcp_alloc_md5sig_pool(void); - -struct tcp_md5sig_pool *tcp_get_md5sig_pool(void); -static inline void tcp_put_md5sig_pool(void) -{ - local_bh_enable(); -} +struct crypto_pool_ahash; +int tcp_md5_alloc_crypto_pool(void); +void tcp_md5_release_crypto_pool(void); +void tcp_md5_add_crypto_pool(void); +extern int tcp_md5_crypto_pool_id; -int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *, const struct sk_buff *, +int tcp_md5_hash_skb_data(struct crypto_pool_ahash *, const struct sk_buff *, unsigned int header_len); -int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, +int tcp_md5_hash_key(struct crypto_pool_ahash *hp, const struct tcp_md5sig_key *key); /* From tcp_fastopen.c */ diff --git a/net/ipv4/Kconfig b/net/ipv4/Kconfig index e983bb0c5012..c341864e4398 100644 --- a/net/ipv4/Kconfig +++ b/net/ipv4/Kconfig @@ -733,7 +733,7 @@ config DEFAULT_TCP_CONG config TCP_MD5SIG bool "TCP: MD5 Signature Option support (RFC2385)" - select CRYPTO + select CRYPTO_POOL select CRYPTO_MD5 help RFC2385 specifies a method of giving MD5 protection to TCP sessions. 
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 300056008e90..73abb00e12bf 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -244,6 +244,7 @@ #define pr_fmt(fmt) "TCP: " fmt #include +#include #include #include #include @@ -4355,92 +4356,43 @@ int tcp_getsockopt(struct sock *sk, int level, int optname, char __user *optval, EXPORT_SYMBOL(tcp_getsockopt); #ifdef CONFIG_TCP_MD5SIG -static DEFINE_PER_CPU(struct tcp_md5sig_pool, tcp_md5sig_pool); -static DEFINE_MUTEX(tcp_md5sig_mutex); -static bool tcp_md5sig_pool_populated = false; +int tcp_md5_crypto_pool_id = -1; +EXPORT_SYMBOL(tcp_md5_crypto_pool_id); -static void __tcp_alloc_md5sig_pool(void) +int tcp_md5_alloc_crypto_pool(void) { - struct crypto_ahash *hash; - int cpu; - - hash = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC); - if (IS_ERR(hash)) - return; - - for_each_possible_cpu(cpu) { - void *scratch = per_cpu(tcp_md5sig_pool, cpu).scratch; - struct ahash_request *req; - - if (!scratch) { - scratch = kmalloc_node(sizeof(union tcp_md5sum_block) + - sizeof(struct tcphdr), - GFP_KERNEL, - cpu_to_node(cpu)); - if (!scratch) - return; - per_cpu(tcp_md5sig_pool, cpu).scratch = scratch; - } - if (per_cpu(tcp_md5sig_pool, cpu).md5_req) - continue; - - req = ahash_request_alloc(hash, GFP_KERNEL); - if (!req) - return; + int ret; - ahash_request_set_callback(req, 0, NULL, NULL); + ret = crypto_pool_reserve_scratch(sizeof(union tcp_md5sum_block) + + sizeof(struct tcphdr)); + if (ret) + return ret; - per_cpu(tcp_md5sig_pool, cpu).md5_req = req; - } - /* before setting tcp_md5sig_pool_populated, we must commit all writes - * to memory. See smp_rmb() in tcp_get_md5sig_pool() - */ - smp_wmb(); - tcp_md5sig_pool_populated = true; + ret = crypto_pool_alloc_ahash("md5"); + if (ret >= 0) + tcp_md5_crypto_pool_id = ret; + return ret; } +EXPORT_SYMBOL(tcp_md5_alloc_crypto_pool); -bool tcp_alloc_md5sig_pool(void) +void tcp_md5_release_crypto_pool(void) { - if (unlikely(!tcp_md5sig_pool_populated)) { - mutex_lock(&tcp_md5sig_mutex); - - if (!tcp_md5sig_pool_populated) - __tcp_alloc_md5sig_pool(); - - mutex_unlock(&tcp_md5sig_mutex); - } - return tcp_md5sig_pool_populated; + crypto_pool_release(tcp_md5_crypto_pool_id); } -EXPORT_SYMBOL(tcp_alloc_md5sig_pool); +EXPORT_SYMBOL(tcp_md5_release_crypto_pool); - -/** - * tcp_get_md5sig_pool - get md5sig_pool for this user - * - * We use percpu structure, so if we succeed, we exit with preemption - * and BH disabled, to make sure another thread or softirq handling - * wont try to get same context. - */ -struct tcp_md5sig_pool *tcp_get_md5sig_pool(void) +void tcp_md5_add_crypto_pool(void) { - local_bh_disable(); - - if (tcp_md5sig_pool_populated) { - /* coupled with smp_wmb() in __tcp_alloc_md5sig_pool() */ - smp_rmb(); - return this_cpu_ptr(&tcp_md5sig_pool); - } - local_bh_enable(); - return NULL; + crypto_pool_add(tcp_md5_crypto_pool_id); } -EXPORT_SYMBOL(tcp_get_md5sig_pool); +EXPORT_SYMBOL(tcp_md5_add_crypto_pool); -int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, +int tcp_md5_hash_skb_data(struct crypto_pool_ahash *hp, const struct sk_buff *skb, unsigned int header_len) { struct scatterlist sg; const struct tcphdr *tp = tcp_hdr(skb); - struct ahash_request *req = hp->md5_req; + struct ahash_request *req = hp->req; unsigned int i; const unsigned int head_data_len = skb_headlen(skb) > header_len ? 
skb_headlen(skb) - header_len : 0; @@ -4474,16 +4426,17 @@ int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, } EXPORT_SYMBOL(tcp_md5_hash_skb_data); -int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, const struct tcp_md5sig_key *key) +int tcp_md5_hash_key(struct crypto_pool_ahash *hp, + const struct tcp_md5sig_key *key) { u8 keylen = READ_ONCE(key->keylen); /* paired with WRITE_ONCE() in tcp_md5_do_add */ struct scatterlist sg; sg_init_one(&sg, key->key, keylen); - ahash_request_set_crypt(hp->md5_req, &sg, NULL, keylen); + ahash_request_set_crypt(hp->req, &sg, NULL, keylen); /* We use data_race() because tcp_md5_do_add() might change key->key under us */ - return data_race(crypto_ahash_update(hp->md5_req)); + return data_race(crypto_ahash_update(hp->req)); } EXPORT_SYMBOL(tcp_md5_hash_key); diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 5b0caaea5029..100e142ed03a 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -79,6 +79,7 @@ #include #include +#include #include #include @@ -1206,10 +1207,6 @@ int __tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO); if (!key) return -ENOMEM; - if (!tcp_alloc_md5sig_pool()) { - sock_kfree_s(sk, key, sizeof(*key)); - return -ENOMEM; - } memcpy(key->key, newkey, newkeylen); key->keylen = newkeylen; @@ -1228,8 +1225,13 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, int family, u8 prefixlen, int l3index, u8 flags, const u8 *newkey, u8 newkeylen) { - if (tcp_md5sig_info_add(sk, GFP_KERNEL)) + if (tcp_md5_alloc_crypto_pool()) + return -ENOMEM; + + if (tcp_md5sig_info_add(sk, GFP_KERNEL)) { + tcp_md5_release_crypto_pool(); return -ENOMEM; + } static_branch_inc(&tcp_md5_needed.key); @@ -1242,8 +1244,12 @@ int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr, int family, u8 prefixlen, int l3index, struct tcp_md5sig_key *key) { - if (tcp_md5sig_info_add(sk, sk_gfp_mask(sk, GFP_ATOMIC))) + tcp_md5_add_crypto_pool(); + + if (tcp_md5sig_info_add(sk, sk_gfp_mask(sk, GFP_ATOMIC))) { + tcp_md5_release_crypto_pool(); return -ENOMEM; + } atomic_inc(&tcp_md5_needed.key.key.enabled); @@ -1342,7 +1348,7 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname, cmd.tcpm_key, cmd.tcpm_keylen); } -static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp, +static int tcp_v4_md5_hash_headers(struct crypto_pool_ahash *hp, __be32 daddr, __be32 saddr, const struct tcphdr *th, int nbytes) { @@ -1350,7 +1356,7 @@ static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp, struct scatterlist sg; struct tcphdr *_th; - bp = hp->scratch; + bp = hp->base.scratch; bp->saddr = saddr; bp->daddr = daddr; bp->pad = 0; @@ -1362,37 +1368,34 @@ static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp, _th->check = 0; sg_init_one(&sg, bp, sizeof(*bp) + sizeof(*th)); - ahash_request_set_crypt(hp->md5_req, &sg, NULL, + ahash_request_set_crypt(hp->req, &sg, NULL, sizeof(*bp) + sizeof(*th)); - return crypto_ahash_update(hp->md5_req); + return crypto_ahash_update(hp->req); } static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key, __be32 daddr, __be32 saddr, const struct tcphdr *th) { - struct tcp_md5sig_pool *hp; - struct ahash_request *req; + struct crypto_pool_ahash hp; - hp = tcp_get_md5sig_pool(); - if (!hp) + if (crypto_pool_get(tcp_md5_crypto_pool_id, (struct crypto_pool *)&hp)) goto clear_hash_noput; - req = hp->md5_req; - if (crypto_ahash_init(req)) + if (crypto_ahash_init(hp.req)) goto clear_hash; - if 
(tcp_v4_md5_hash_headers(hp, daddr, saddr, th, th->doff << 2)) + if (tcp_v4_md5_hash_headers(&hp, daddr, saddr, th, th->doff << 2)) goto clear_hash; - if (tcp_md5_hash_key(hp, key)) + if (tcp_md5_hash_key(&hp, key)) goto clear_hash; - ahash_request_set_crypt(req, NULL, md5_hash, 0); - if (crypto_ahash_final(req)) + ahash_request_set_crypt(hp.req, NULL, md5_hash, 0); + if (crypto_ahash_final(hp.req)) goto clear_hash; - tcp_put_md5sig_pool(); + crypto_pool_put(); return 0; clear_hash: - tcp_put_md5sig_pool(); + crypto_pool_put(); clear_hash_noput: memset(md5_hash, 0, 16); return 1; @@ -1402,8 +1405,7 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key, const struct sock *sk, const struct sk_buff *skb) { - struct tcp_md5sig_pool *hp; - struct ahash_request *req; + struct crypto_pool_ahash hp; const struct tcphdr *th = tcp_hdr(skb); __be32 saddr, daddr; @@ -1416,29 +1418,27 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key, daddr = iph->daddr; } - hp = tcp_get_md5sig_pool(); - if (!hp) + if (crypto_pool_get(tcp_md5_crypto_pool_id, (struct crypto_pool *)&hp)) goto clear_hash_noput; - req = hp->md5_req; - if (crypto_ahash_init(req)) + if (crypto_ahash_init(hp.req)) goto clear_hash; - if (tcp_v4_md5_hash_headers(hp, daddr, saddr, th, skb->len)) + if (tcp_v4_md5_hash_headers(&hp, daddr, saddr, th, skb->len)) goto clear_hash; - if (tcp_md5_hash_skb_data(hp, skb, th->doff << 2)) + if (tcp_md5_hash_skb_data(&hp, skb, th->doff << 2)) goto clear_hash; - if (tcp_md5_hash_key(hp, key)) + if (tcp_md5_hash_key(&hp, key)) goto clear_hash; - ahash_request_set_crypt(req, NULL, md5_hash, 0); - if (crypto_ahash_final(req)) + ahash_request_set_crypt(hp.req, NULL, md5_hash, 0); + if (crypto_ahash_final(hp.req)) goto clear_hash; - tcp_put_md5sig_pool(); + crypto_pool_put(); return 0; clear_hash: - tcp_put_md5sig_pool(); + crypto_pool_put(); clear_hash_noput: memset(md5_hash, 0, 16); return 1; @@ -2257,6 +2257,18 @@ static int tcp_v4_init_sock(struct sock *sk) return 0; } +#ifdef CONFIG_TCP_MD5SIG +static void tcp_md5sig_info_free_rcu(struct rcu_head *head) +{ + struct tcp_md5sig_info *md5sig; + + md5sig = container_of(head, struct tcp_md5sig_info, rcu); + kfree(md5sig); + static_branch_slow_dec_deferred(&tcp_md5_needed); + tcp_md5_release_crypto_pool(); +} +#endif + void tcp_v4_destroy_sock(struct sock *sk) { struct tcp_sock *tp = tcp_sk(sk); @@ -2281,10 +2293,12 @@ void tcp_v4_destroy_sock(struct sock *sk) #ifdef CONFIG_TCP_MD5SIG /* Clean up the MD5 key list, if any */ if (tp->md5sig_info) { + struct tcp_md5sig_info *md5sig; + + md5sig = rcu_dereference_protected(tp->md5sig_info, 1); tcp_clear_md5_list(sk); - kfree_rcu(rcu_dereference_protected(tp->md5sig_info, 1), rcu); - tp->md5sig_info = NULL; - static_branch_slow_dec_deferred(&tcp_md5_needed); + call_rcu(&md5sig->rcu, tcp_md5sig_info_free_rcu); + rcu_assign_pointer(tp->md5sig_info, NULL); } #endif diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index 5d475a45a478..d1d30337ffec 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -297,8 +297,10 @@ void tcp_time_wait(struct sock *sk, int state, int timeo) key = tp->af_specific->md5_lookup(sk, sk); if (key) { tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC); - BUG_ON(tcptw->tw_md5_key && !tcp_alloc_md5sig_pool()); + if (WARN_ON(!tcptw->tw_md5_key)) + break; atomic_inc(&tcp_md5_needed.key.key.enabled); + tcp_md5_add_crypto_pool(); } } } while (0); @@ -335,16 +337,26 @@ void tcp_time_wait(struct sock *sk, int state, 
int timeo) } EXPORT_SYMBOL(tcp_time_wait); +#ifdef CONFIG_TCP_MD5SIG +static void tcp_md5_twsk_free_rcu(struct rcu_head *head) +{ + struct tcp_md5sig_key *key; + + key = container_of(head, struct tcp_md5sig_key, rcu); + kfree(key); + static_branch_slow_dec_deferred(&tcp_md5_needed); + tcp_md5_release_crypto_pool(); +} +#endif + void tcp_twsk_destructor(struct sock *sk) { #ifdef CONFIG_TCP_MD5SIG if (static_branch_unlikely(&tcp_md5_needed.key)) { struct tcp_timewait_sock *twsk = tcp_twsk(sk); - if (twsk->tw_md5_key) { - kfree_rcu(twsk->tw_md5_key, rcu); - static_branch_slow_dec_deferred(&tcp_md5_needed); - } + if (twsk->tw_md5_key) + call_rcu(&twsk->tw_md5_key->rcu, tcp_md5_twsk_free_rcu); } #endif } diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index cb891a71db0d..f75569f889e7 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -64,6 +64,7 @@ #include #include +#include #include #include @@ -665,7 +666,7 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname, cmd.tcpm_key, cmd.tcpm_keylen); } -static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp, +static int tcp_v6_md5_hash_headers(struct crypto_pool_ahash *hp, const struct in6_addr *daddr, const struct in6_addr *saddr, const struct tcphdr *th, int nbytes) @@ -674,7 +675,7 @@ static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp, struct scatterlist sg; struct tcphdr *_th; - bp = hp->scratch; + bp = hp->base.scratch; /* 1. TCP pseudo-header (RFC2460) */ bp->saddr = *saddr; bp->daddr = *daddr; @@ -686,38 +687,35 @@ static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp, _th->check = 0; sg_init_one(&sg, bp, sizeof(*bp) + sizeof(*th)); - ahash_request_set_crypt(hp->md5_req, &sg, NULL, + ahash_request_set_crypt(hp->req, &sg, NULL, sizeof(*bp) + sizeof(*th)); - return crypto_ahash_update(hp->md5_req); + return crypto_ahash_update(hp->req); } static int tcp_v6_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key, const struct in6_addr *daddr, struct in6_addr *saddr, const struct tcphdr *th) { - struct tcp_md5sig_pool *hp; - struct ahash_request *req; + struct crypto_pool_ahash hp; - hp = tcp_get_md5sig_pool(); - if (!hp) + if (crypto_pool_get(tcp_md5_crypto_pool_id, (struct crypto_pool *)&hp)) goto clear_hash_noput; - req = hp->md5_req; - if (crypto_ahash_init(req)) + if (crypto_ahash_init(hp.req)) goto clear_hash; - if (tcp_v6_md5_hash_headers(hp, daddr, saddr, th, th->doff << 2)) + if (tcp_v6_md5_hash_headers(&hp, daddr, saddr, th, th->doff << 2)) goto clear_hash; - if (tcp_md5_hash_key(hp, key)) + if (tcp_md5_hash_key(&hp, key)) goto clear_hash; - ahash_request_set_crypt(req, NULL, md5_hash, 0); - if (crypto_ahash_final(req)) + ahash_request_set_crypt(hp.req, NULL, md5_hash, 0); + if (crypto_ahash_final(hp.req)) goto clear_hash; - tcp_put_md5sig_pool(); + crypto_pool_put(); return 0; clear_hash: - tcp_put_md5sig_pool(); + crypto_pool_put(); clear_hash_noput: memset(md5_hash, 0, 16); return 1; @@ -729,8 +727,7 @@ static int tcp_v6_md5_hash_skb(char *md5_hash, const struct sk_buff *skb) { const struct in6_addr *saddr, *daddr; - struct tcp_md5sig_pool *hp; - struct ahash_request *req; + struct crypto_pool_ahash hp; const struct tcphdr *th = tcp_hdr(skb); if (sk) { /* valid for establish/request sockets */ @@ -742,29 +739,27 @@ static int tcp_v6_md5_hash_skb(char *md5_hash, daddr = &ip6h->daddr; } - hp = tcp_get_md5sig_pool(); - if (!hp) + if (crypto_pool_get(tcp_md5_crypto_pool_id, (struct crypto_pool *)&hp)) goto clear_hash_noput; - req = hp->md5_req; - if (crypto_ahash_init(req)) + 
if (crypto_ahash_init(hp.req)) goto clear_hash; - if (tcp_v6_md5_hash_headers(hp, daddr, saddr, th, skb->len)) + if (tcp_v6_md5_hash_headers(&hp, daddr, saddr, th, skb->len)) goto clear_hash; - if (tcp_md5_hash_skb_data(hp, skb, th->doff << 2)) + if (tcp_md5_hash_skb_data(&hp, skb, th->doff << 2)) goto clear_hash; - if (tcp_md5_hash_key(hp, key)) + if (tcp_md5_hash_key(&hp, key)) goto clear_hash; - ahash_request_set_crypt(req, NULL, md5_hash, 0); - if (crypto_ahash_final(req)) + ahash_request_set_crypt(hp.req, NULL, md5_hash, 0); + if (crypto_ahash_final(hp.req)) goto clear_hash; - tcp_put_md5sig_pool(); + crypto_pool_put(); return 0; clear_hash: - tcp_put_md5sig_pool(); + crypto_pool_put(); clear_hash_noput: memset(md5_hash, 0, 16); return 1;
From patchwork Tue Jul 26 20:16:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Safonov X-Patchwork-Id: 12929797 X-Patchwork-Delegate: kuba@kernel.org
From: Dmitry Safonov To: linux-kernel@vger.kernel.org Cc: Dmitry Safonov <0x7f454c46@gmail.com>, Dmitry Safonov , Andy Lutomirski , Ard Biesheuvel , David Ahern , "David S. Miller" , Eric Biggers , Eric Dumazet , Francesco Ruggeri , Herbert Xu , Hideaki YOSHIFUJI , Jakub Kicinski , Leonard Crestez , Paolo Abeni , Salam Noureddine , netdev@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH 6/6] net/ipv6: sr: Switch to using crypto_pool Date: Tue, 26 Jul 2022 21:16:00 +0100 Message-Id: <20220726201600.1715505-7-dima@arista.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220726201600.1715505-1-dima@arista.com> References: <20220726201600.1715505-1-dima@arista.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org The conversion to use crypto_pool has the following upsides: - now SR uses the asynchronous API, which may potentially free CPU cycles and improve performance for off-CPU crypto algorithm providers; - hash descriptors now don't have to be allocated on boot, but only at the moment SR starts using HMAC and until the last HMAC secret is deleted; - potentially reuse ahash_request(s) for different users - allocate only one per-CPU scratch buffer rather than a new one for each user - have a common API for net/ users that need ahash on RX/TX fast path Signed-off-by: Dmitry Safonov --- include/net/seg6_hmac.h | 7 -- net/ipv6/Kconfig | 2 +- net/ipv6/seg6.c | 3 - net/ipv6/seg6_hmac.c | 204 ++++++++++++++++------------------ 4 files changed, 80 insertions(+), 136 deletions(-) diff --git a/include/net/seg6_hmac.h b/include/net/seg6_hmac.h index 2b5d2ee5613e..d6b7820ecda2 100644 --- a/include/net/seg6_hmac.h +++ b/include/net/seg6_hmac.h @@ -32,13 +32,6 @@ struct seg6_hmac_info { u8 alg_id; }; -struct seg6_hmac_algo { - u8 alg_id; - char name[64]; - struct crypto_shash * __percpu *tfms; - struct shash_desc * __percpu *shashs; -}; - extern int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, struct in6_addr *saddr, u8 *output); diff --git a/net/ipv6/Kconfig b/net/ipv6/Kconfig index 658bfed1df8b..5be1dab0f178 100644 --- a/net/ipv6/Kconfig +++ b/net/ipv6/Kconfig @@ -304,7 +304,7 @@ config IPV6_SEG6_LWTUNNEL config IPV6_SEG6_HMAC bool "IPv6: Segment Routing HMAC support" depends on IPV6 - select CRYPTO + select CRYPTO_POOL select CRYPTO_HMAC select CRYPTO_SHA1 select CRYPTO_SHA256 diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c index 73aaabf0e966..96b80e1d04c1 100644 --- a/net/ipv6/seg6.c +++ b/net/ipv6/seg6.c @@ -552,9 +552,6 @@ int __init seg6_init(void) void seg6_exit(void) { -#ifdef CONFIG_IPV6_SEG6_HMAC - seg6_hmac_exit(); -#endif #ifdef CONFIG_IPV6_SEG6_LWTUNNEL seg6_iptunnel_exit(); #endif diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c index d43c50a7310d..3732dd993925 100644 --- a/net/ipv6/seg6_hmac.c +++ b/net/ipv6/seg6_hmac.c @@ -35,6 +35,7 @@ #include #include +#include #include #include #include @@ -70,6 +71,12 @@ static const struct rhashtable_params rht_params = { .obj_cmpfn = seg6_hmac_cmpfn, }; +struct seg6_hmac_algo { + u8 alg_id; + char name[64]; + int crypto_pool_id; +}; + static struct seg6_hmac_algo hmac_algos[] = { { .alg_id = SEG6_HMAC_ALGO_SHA1, @@ -115,55 +122,17 @@ static struct seg6_hmac_algo *__hmac_get_algo(u8 alg_id) return NULL; }
-static int __do_hmac(struct seg6_hmac_info *hinfo, const char *text, u8 psize, - u8 *output, int outlen) -{ - struct seg6_hmac_algo *algo; - struct crypto_shash *tfm; - struct shash_desc *shash; - int ret, dgsize; - - algo = __hmac_get_algo(hinfo->alg_id); - if (!algo) - return -ENOENT; - - tfm = *this_cpu_ptr(algo->tfms); - - dgsize = crypto_shash_digestsize(tfm); - if (dgsize > outlen) { - pr_debug("sr-ipv6: __do_hmac: digest size too big (%d / %d)\n", - dgsize, outlen); - return -ENOMEM; - } - - ret = crypto_shash_setkey(tfm, hinfo->secret, hinfo->slen); - if (ret < 0) { - pr_debug("sr-ipv6: crypto_shash_setkey failed: err %d\n", ret); - goto failed; - } - - shash = *this_cpu_ptr(algo->shashs); - shash->tfm = tfm; - - ret = crypto_shash_digest(shash, text, psize, output); - if (ret < 0) { - pr_debug("sr-ipv6: crypto_shash_digest failed: err %d\n", ret); - goto failed; - } - - return dgsize; - -failed: - return ret; -} - int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, struct in6_addr *saddr, u8 *output) { __be32 hmackeyid = cpu_to_be32(hinfo->hmackeyid); - u8 tmp_out[SEG6_HMAC_MAX_DIGESTSIZE]; + struct crypto_pool_ahash hp; + struct seg6_hmac_algo *algo; int plen, i, dgsize, wrsize; + struct crypto_ahash *tfm; + struct scatterlist sg; char *ring, *off; + int err; /* a 160-byte buffer for digest output allows to store highest known * hash function (RadioGatun) with up to 1216 bits @@ -176,6 +145,10 @@ int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, if (plen >= SEG6_HMAC_RING_SIZE) return -EMSGSIZE; + algo = __hmac_get_algo(hinfo->alg_id); + if (!algo) + return -ENOENT; + /* Let's build the HMAC text on the ring buffer. The text is composed * as follows, in order: * @@ -186,8 +159,36 @@ int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, * 5. 
All segments in the segments list (n * 128 bits) */ - local_bh_disable(); + err = crypto_pool_get(algo->crypto_pool_id, (struct crypto_pool *)&hp); + if (err) + return err; + ring = this_cpu_ptr(hmac_ring); + + sg_init_one(&sg, ring, plen); + + tfm = crypto_ahash_reqtfm(hp.req); + dgsize = crypto_ahash_digestsize(tfm); + if (dgsize > SEG6_HMAC_MAX_DIGESTSIZE) { + pr_debug("digest size too big (%d / %d)\n", + dgsize, SEG6_HMAC_MAX_DIGESTSIZE); + err = -ENOMEM; + goto err_put_pool; + } + + err = crypto_ahash_setkey(tfm, hinfo->secret, hinfo->slen); + if (err) { + pr_debug("crypto_ahash_setkey failed: err %d\n", err); + goto err_put_pool; + } + + err = crypto_ahash_init(hp.req); + if (err) + goto err_put_pool; + + ahash_request_set_crypt(hp.req, &sg, + hp.base.scratch, SEG6_HMAC_MAX_DIGESTSIZE); + off = ring; /* source address */ @@ -210,21 +211,25 @@ int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr, off += 16; } - dgsize = __do_hmac(hinfo, ring, plen, tmp_out, - SEG6_HMAC_MAX_DIGESTSIZE); - local_bh_enable(); + err = crypto_ahash_update(hp.req); + if (err) + goto err_put_pool; - if (dgsize < 0) - return dgsize; + err = crypto_ahash_final(hp.req); + if (err) + goto err_put_pool; wrsize = SEG6_HMAC_FIELD_LEN; if (wrsize > dgsize) wrsize = dgsize; memset(output, 0, SEG6_HMAC_FIELD_LEN); - memcpy(output, tmp_out, wrsize); + memcpy(output, hp.base.scratch, wrsize); - return 0; +err_put_pool: + crypto_pool_put(); + + return err; } EXPORT_SYMBOL(seg6_hmac_compute); @@ -291,12 +296,24 @@ EXPORT_SYMBOL(seg6_hmac_info_lookup); int seg6_hmac_info_add(struct net *net, u32 key, struct seg6_hmac_info *hinfo) { struct seg6_pernet_data *sdata = seg6_pernet(net); - int err; + struct seg6_hmac_algo *algo; + int ret; - err = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node, + algo = __hmac_get_algo(hinfo->alg_id); + if (!algo) + return -ENOENT; + + ret = crypto_pool_alloc_ahash(algo->name); + if (ret < 0) + return ret; + algo->crypto_pool_id = ret; + + ret = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node, rht_params); + if (ret) + crypto_pool_release(algo->crypto_pool_id); - return err; + return ret; } EXPORT_SYMBOL(seg6_hmac_info_add); @@ -304,6 +321,7 @@ int seg6_hmac_info_del(struct net *net, u32 key) { struct seg6_pernet_data *sdata = seg6_pernet(net); struct seg6_hmac_info *hinfo; + struct seg6_hmac_algo *algo; int err = -ENOENT; hinfo = rhashtable_lookup_fast(&sdata->hmac_infos, &key, rht_params); @@ -315,6 +333,12 @@ int seg6_hmac_info_del(struct net *net, u32 key) if (err) goto out; + algo = __hmac_get_algo(hinfo->alg_id); + if (algo) + crypto_pool_release(algo->crypto_pool_id); + else + WARN_ON_ONCE(1); + seg6_hinfo_release(hinfo); out: @@ -348,56 +372,9 @@ int seg6_push_hmac(struct net *net, struct in6_addr *saddr, } EXPORT_SYMBOL(seg6_push_hmac); -static int seg6_hmac_init_algo(void) -{ - struct seg6_hmac_algo *algo; - struct crypto_shash *tfm; - struct shash_desc *shash; - int i, alg_count, cpu; - - alg_count = ARRAY_SIZE(hmac_algos); - - for (i = 0; i < alg_count; i++) { - struct crypto_shash **p_tfm; - int shsize; - - algo = &hmac_algos[i]; - algo->tfms = alloc_percpu(struct crypto_shash *); - if (!algo->tfms) - return -ENOMEM; - - for_each_possible_cpu(cpu) { - tfm = crypto_alloc_shash(algo->name, 0, 0); - if (IS_ERR(tfm)) - return PTR_ERR(tfm); - p_tfm = per_cpu_ptr(algo->tfms, cpu); - *p_tfm = tfm; - } - - p_tfm = raw_cpu_ptr(algo->tfms); - tfm = *p_tfm; - - shsize = sizeof(*shash) + crypto_shash_descsize(tfm); - - algo->shashs = 
alloc_percpu(struct shash_desc *); - if (!algo->shashs) - return -ENOMEM; - - for_each_possible_cpu(cpu) { - shash = kzalloc_node(shsize, GFP_KERNEL, - cpu_to_node(cpu)); - if (!shash) - return -ENOMEM; - *per_cpu_ptr(algo->shashs, cpu) = shash; - } - } - - return 0; -} - int __init seg6_hmac_init(void) { - return seg6_hmac_init_algo(); + return crypto_pool_reserve_scratch(SEG6_HMAC_MAX_DIGESTSIZE); } int __net_init seg6_hmac_net_init(struct net *net) @@ -407,29 +384,6 @@ int __net_init seg6_hmac_net_init(struct net *net) return rhashtable_init(&sdata->hmac_infos, &rht_params); } -void seg6_hmac_exit(void) -{ - struct seg6_hmac_algo *algo = NULL; - int i, alg_count, cpu; - - alg_count = ARRAY_SIZE(hmac_algos); - for (i = 0; i < alg_count; i++) { - algo = &hmac_algos[i]; - for_each_possible_cpu(cpu) { - struct crypto_shash *tfm; - struct shash_desc *shash; - - shash = *per_cpu_ptr(algo->shashs, cpu); - kfree(shash); - tfm = *per_cpu_ptr(algo->tfms, cpu); - crypto_free_shash(tfm); - } - free_percpu(algo->tfms); - free_percpu(algo->shashs); - } -} -EXPORT_SYMBOL(seg6_hmac_exit); - void __net_exit seg6_hmac_net_exit(struct net *net) { struct seg6_pernet_data *sdata = seg6_pernet(net);
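
Taken together, the TCP-MD5 and seg6 conversions above reduce every user to the same call sequence: allocate a pool once on the slow path, borrow the per-CPU request on the bh-disabled fast path, and release the pool on teardown. Below is a minimal sketch of that sequence, assuming the crypto_pool API proposed in patch 1/6 (crypto_pool_alloc_ahash(), crypto_pool_get(), crypto_pool_put(), crypto_pool_release() and struct crypto_pool_ahash); the example_* names and the "hmac(sha256)" choice are illustrative only, not part of the series.

/* Sketch only: example_* identifiers are hypothetical; the crypto_pool_*()
 * calls follow the API used by the patches above.
 */
#include <crypto/hash.h>
#include <crypto/pool.h>
#include <linux/scatterlist.h>

static int example_pool_id = -1;

/* Slow path, may sleep: allocate the per-CPU ahash requests once. */
static int example_hash_init(void)
{
	int ret = crypto_pool_alloc_ahash("hmac(sha256)");

	if (ret < 0)
		return ret;
	example_pool_id = ret;
	return 0;
}

/* Fast path, bh-disabled: borrow the per-CPU request, hash, return it.
 * @data must be addressable by a scatterlist (i.e. not on the stack).
 */
static int example_hash(const u8 *key, unsigned int keylen,
			const void *data, unsigned int len, u8 *out)
{
	struct crypto_pool_ahash hp;
	struct scatterlist sg;
	int err;

	if (crypto_pool_get(example_pool_id, (struct crypto_pool *)&hp))
		return -EINVAL;

	err = crypto_ahash_setkey(crypto_ahash_reqtfm(hp.req), key, keylen);
	if (err)
		goto out_put;
	err = crypto_ahash_init(hp.req);
	if (err)
		goto out_put;

	sg_init_one(&sg, data, len);
	ahash_request_set_crypt(hp.req, &sg, NULL, len);
	err = crypto_ahash_update(hp.req);
	if (err)
		goto out_put;

	ahash_request_set_crypt(hp.req, NULL, out, 0);
	err = crypto_ahash_final(hp.req);
out_put:
	crypto_pool_put();
	return err;
}

/* Teardown: drop the pool reference taken in example_hash_init(). */
static void example_hash_exit(void)
{
	crypto_pool_release(example_pool_id);
}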
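
The tcp_v4_destroy_sock() and tcp_twsk_destructor() hunks above also change kfree_rcu() into call_rcu() with an explicit callback, so that the crypto_pool reference and the tcp_md5_needed static branch are dropped only after the RCU grace period, when no reader can still be hashing with the pool. A minimal sketch of that deferred-release pattern follows, assuming the tcp_md5_release_crypto_pool() helper added by this series; the example_* names are illustrative, and a generic user would call crypto_pool_release() instead.

/* Sketch only: the struct and function names are hypothetical. */
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example_md5_info {
	struct rcu_head rcu;
	/* ... MD5 key list ... */
};

static void example_md5_info_free_rcu(struct rcu_head *head)
{
	struct example_md5_info *info;

	info = container_of(head, struct example_md5_info, rcu);
	kfree(info);
	/*
	 * Only now, after the grace period, is it safe to drop the pool
	 * reference (and, in the TCP case, the tcp_md5_needed static
	 * branch): concurrent readers may have been using the pool until
	 * the grace period ended.
	 */
	tcp_md5_release_crypto_pool();
}

static void example_destroy(struct example_md5_info __rcu **slot)
{
	struct example_md5_info *info = rcu_dereference_protected(*slot, 1);

	if (!info)
		return;
	rcu_assign_pointer(*slot, NULL);
	call_rcu(&info->rcu, example_md5_info_free_rcu);
}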