From patchwork Mon Jan 15 22:08:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vadim Fedorenko X-Patchwork-Id: 13520228 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AB0EA1B596; Mon, 15 Jan 2024 22:08:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=meta.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=meta.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=meta.com header.i=@meta.com header.b="fQ9rvWKq" Received: from pps.filterd (m0044010.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 40FEOQSs002602; Mon, 15 Jan 2024 14:08:15 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=meta.com; h=from : to : cc : subject : date : message-id : mime-version : content-transfer-encoding : content-type; s=s2048-2021-q4; bh=168IDPBiTcoUW8SKfBkkujyxEWMifF1n933AvjBRx5c=; b=fQ9rvWKqR9l17HcBO+bevL1ANzOuQV2meJiXU5sLUqrG90K967+xw5l7REweejBDYlhf bGxGlyPDV9FwRhM63wl7P29Qj6GZ0EwjSEfV2shYSj65tNAwqHsO2sxJob5rvDLqmbjr 7oea9qwnHgL+J9YyGtb61J2cV4/vySStuC6sHTHzTYnBVFgxXY9p7bON/zu66yFMygOn xjCeHG0N2QQ+RJH/rer53lnUurWyd8PszyTrlemW5Ow/iu7Q84g/El2A9Sw/Y+j8u1gH jDobU7MyEydCjMYHXOPeBKtGSoZnsdfvhdA3/6KzMBdYOVfiLxyZoOoJs3mkDOlsh6kl WA== Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3vn5ve2fn7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT); Mon, 15 Jan 2024 14:08:15 -0800 Received: from devvm4158.cln0.facebook.com (2620:10d:c085:208::11) by mail.thefacebook.com (2620:10d:c085:21d::8) with Microsoft SMTP Server id 15.1.2507.35; Mon, 15 Jan 2024 14:08:11 -0800 From: Vadim Fedorenko To: Vadim Fedorenko , Jakub Kicinski , Martin KaFai Lau , Andrii Nakryiko , Alexei Starovoitov , Mykola Lysenko , Herbert Xu CC: Vadim Fedorenko , , , , Victor Stewart Subject: [PATCH bpf-next v8 1/3] bpf: make common crypto API for TC/XDP programs Date: Mon, 15 Jan 2024 14:08:01 -0800 Message-ID: <20240115220803.1973440-1-vadfed@meta.com> X-Mailer: git-send-email 2.39.3 Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: TgyBaaR8feTtPZaCcbMoI9eoGRXYbomQ X-Proofpoint-ORIG-GUID: TgyBaaR8feTtPZaCcbMoI9eoGRXYbomQ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2024-01-15_15,2024-01-15_03,2023-05-22_02 Add crypto API support to BPF to be able to decrypt or encrypt packets in TC/XDP BPF programs. Special care should be taken for initialization part of crypto algo because crypto alloc) doesn't work with preemtion disabled, it can be run only in sleepable BPF program. Also async crypto is not supported because of the very same issue - TC/XDP BPF programs are not sleepable. 
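To make the sleepable/non-sleepable split concrete, a minimal sketch of the setup side follows: a sleepable program creates the context and parks it in a map as a kptr, so that non-sleepable TC/XDP programs can later look it up and call the encrypt/decrypt kfuncs. This is a sketch only, following the pattern of the selftests in patch 3; the map name, program name and the "skcipher"/"ecb(aes)" strings are illustrative, the kfunc externs mirror crypto_common.h, and vmlinux.h is assumed to come from a kernel with this series applied.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfunc declarations as seen from BPF programs (mirroring crypto_common.h) */
struct bpf_crypto_ctx *bpf_crypto_ctx_create(const char *type__str, const char *algo__str,
					     const struct bpf_dynptr *pkey,
					     unsigned int authsize, int *err) __ksym;
void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx) __ksym;

struct ctx_val {
	struct bpf_crypto_ctx __kptr *ctx;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__type(key, int);
	__type(value, struct ctx_val);
	__uint(max_entries, 1);
} ctx_map SEC(".maps");

unsigned char key_bytes[16]; /* filled by the loader before the program runs */

/* sleepable program: bpf_crypto_ctx_create() may sleep while allocating the tfm */
SEC("fentry.s/bpf_fentry_test1")
int BPF_PROG(crypto_setup)
{
	struct bpf_crypto_ctx *cctx, *old;
	struct bpf_dynptr key;
	struct ctx_val *v;
	int err = 0, zero = 0;

	bpf_dynptr_from_mem(key_bytes, sizeof(key_bytes), 0, &key);
	cctx = bpf_crypto_ctx_create("skcipher", "ecb(aes)", &key, 0, &err);
	if (!cctx)
		return 0; /* err holds the failure reason */

	v = bpf_map_lookup_elem(&ctx_map, &zero);
	if (!v) {
		bpf_crypto_ctx_release(cctx);
		return 0;
	}
	/* stash the context as a kptr; TC/XDP programs only look it up */
	old = bpf_kptr_xchg(&v->ctx, cctx);
	if (old)
		bpf_crypto_ctx_release(old);
	return 0;
}

char _license[] SEC("license") = "GPL";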
Signed-off-by: Vadim Fedorenko --- v7 -> v8: - add statesize ops to bpf crypto type as some ciphers are now stateful - improve error path in bpf_crypto_create v6 -> v7: - style fixes v5 -> v6: - replace lskcipher with infrastructure to provide pluggable cipher types - add BPF skcipher as plug-in module in a separate patch v4 -> v5: - replace crypto API to use lskcipher (suggested by Herbert Xu) - remove SG list usage and provide raw buffers v3 -> v4: - reuse __bpf_dynptr_data and remove own implementation - use const __str to provide algorithm name - use kfunc macroses to avoid compilator warnings v2 -> v3: - fix kdoc issues v1 -> v2: - use kmalloc in sleepable func, suggested by Alexei - use __bpf_dynptr_is_rdonly() to check destination, suggested by Jakub - use __bpf_dynptr_data_ptr() for all dynptr accesses --- include/linux/bpf.h | 1 + include/linux/bpf_crypto.h | 24 +++ kernel/bpf/Makefile | 3 + kernel/bpf/crypto.c | 366 +++++++++++++++++++++++++++++++++++++ kernel/bpf/helpers.c | 2 +- kernel/bpf/verifier.c | 1 + 6 files changed, 396 insertions(+), 1 deletion(-) create mode 100644 include/linux/bpf_crypto.h create mode 100644 kernel/bpf/crypto.c diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 377857b232c6..54fc30c64d19 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1263,6 +1263,7 @@ int bpf_dynptr_check_size(u32 size); u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr); const void *__bpf_dynptr_data(const struct bpf_dynptr_kern *ptr, u32 len); void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len); +bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr); #ifdef CONFIG_BPF_JIT int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr); diff --git a/include/linux/bpf_crypto.h b/include/linux/bpf_crypto.h new file mode 100644 index 000000000000..8456b7477e1d --- /dev/null +++ b/include/linux/bpf_crypto.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ +#ifndef _BPF_CRYPTO_H +#define _BPF_CRYPTO_H + +struct bpf_crypto_type { + void *(*alloc_tfm)(const char *algo); + void (*free_tfm)(void *tfm); + int (*has_algo)(const char *algo); + int (*setkey)(void *tfm, const u8 *key, unsigned int keylen); + int (*setauthsize)(void *tfm, unsigned int authsize); + int (*encrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv); + int (*decrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv); + unsigned int (*ivsize)(void *tfm); + unsigned int (*statesize)(void *tfm); + u32 (*get_flags)(void *tfm); + struct module *owner; + char name[14]; +}; + +int bpf_crypto_register_type(const struct bpf_crypto_type *type); +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type); + +#endif /* _BPF_CRYPTO_H */ diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile index f526b7573e97..bcde762bb2c2 100644 --- a/kernel/bpf/Makefile +++ b/kernel/bpf/Makefile @@ -41,6 +41,9 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o obj-$(CONFIG_BPF_SYSCALL) += cpumask.o obj-${CONFIG_BPF_LSM} += bpf_lsm.o endif +ifeq ($(CONFIG_CRYPTO),y) +obj-$(CONFIG_BPF_SYSCALL) += crypto.o +endif obj-$(CONFIG_BPF_PRELOAD) += preload/ obj-$(CONFIG_BPF_SYSCALL) += relo_core.o diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c new file mode 100644 index 000000000000..74b06e7122d2 --- /dev/null +++ b/kernel/bpf/crypto.c @@ -0,0 +1,366 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2023 Meta, Inc */ +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct bpf_crypto_type_list { + const struct bpf_crypto_type *type; + struct list_head list; +}; + +static LIST_HEAD(bpf_crypto_types); +static DECLARE_RWSEM(bpf_crypto_types_sem); + +/** + * struct bpf_crypto_ctx - refcounted BPF crypto context structure + * @type: The pointer to bpf crypto type + * @tfm: The pointer to instance of crypto API struct. + * @rcu: The RCU head used to free the crypto context with RCU safety. + * @usage: Object reference counter. When the refcount goes to 0, the + * memory is released back to the BPF allocator, which provides + * RCU safety. 
+ */ +struct bpf_crypto_ctx { + const struct bpf_crypto_type *type; + void *tfm; + struct rcu_head rcu; + refcount_t usage; +}; + +int bpf_crypto_register_type(const struct bpf_crypto_type *type) +{ + struct bpf_crypto_type_list *node; + int err = -EEXIST; + + down_write(&bpf_crypto_types_sem); + list_for_each_entry(node, &bpf_crypto_types, list) { + if (!strcmp(node->type->name, type->name)) + goto unlock; + } + + node = kmalloc(sizeof(*node), GFP_KERNEL); + err = -ENOMEM; + if (!node) + goto unlock; + + node->type = type; + list_add(&node->list, &bpf_crypto_types); + err = 0; + +unlock: + up_write(&bpf_crypto_types_sem); + + return err; +} +EXPORT_SYMBOL_GPL(bpf_crypto_register_type); + +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type) +{ + struct bpf_crypto_type_list *node; + int err = -ENOENT; + + down_write(&bpf_crypto_types_sem); + list_for_each_entry(node, &bpf_crypto_types, list) { + if (strcmp(node->type->name, type->name)) + continue; + + list_del(&node->list); + kfree(node); + err = 0; + break; + } + up_write(&bpf_crypto_types_sem); + + return err; +} +EXPORT_SYMBOL_GPL(bpf_crypto_unregister_type); + +static const struct bpf_crypto_type *bpf_crypto_get_type(const char *name) +{ + const struct bpf_crypto_type *type = ERR_PTR(-ENOENT); + struct bpf_crypto_type_list *node; + + down_read(&bpf_crypto_types_sem); + list_for_each_entry(node, &bpf_crypto_types, list) { + if (strcmp(node->type->name, name)) + continue; + + if (try_module_get(node->type->owner)) + type = node->type; + break; + } + up_read(&bpf_crypto_types_sem); + + return type; +} + +__bpf_kfunc_start_defs(); + +/** + * bpf_crypto_ctx_create() - Create a mutable BPF crypto context. + * + * Allocates a crypto context that can be used, acquired, and released by + * a BPF program. The crypto context returned by this function must either + * be embedded in a map as a kptr, or freed with bpf_crypto_ctx_release(). + * As crypto API functions use GFP_KERNEL allocations, this function can + * only be used in sleepable BPF programs. + * + * bpf_crypto_ctx_create() allocates memory for crypto context. + * It may return NULL if no memory is available. + * @type__str: pointer to string representation of crypto type. + * @algo__str: pointer to string representation of algorithm. + * @pkey: bpf_dynptr which holds cipher key to do crypto. 
+ * @authsize: the size of authentication data in case of AEAD transformation + * @err: integer to store error code when NULL is returned + */ +__bpf_kfunc struct bpf_crypto_ctx * +bpf_crypto_ctx_create(const char *type__str, const char *algo__str, + const struct bpf_dynptr_kern *pkey, + unsigned int authsize, int *err) +{ + const struct bpf_crypto_type *type; + struct bpf_crypto_ctx *ctx; + const u8 *key; + u32 key_len; + + type = bpf_crypto_get_type(type__str); + if (IS_ERR(type)) { + *err = PTR_ERR(type); + return NULL; + } + + if (!type->has_algo(algo__str)) { + *err = -EOPNOTSUPP; + goto err_module_put; + } + + if (!authsize && type->setauthsize) { + *err = -EOPNOTSUPP; + goto err_module_put; + } + + if (authsize && !type->setauthsize) { + *err = -EOPNOTSUPP; + goto err_module_put; + } + + key_len = __bpf_dynptr_size(pkey); + if (!key_len) { + *err = -EINVAL; + goto err_module_put; + } + key = __bpf_dynptr_data(pkey, key_len); + if (!key) { + *err = -EINVAL; + goto err_module_put; + } + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) { + *err = -ENOMEM; + goto err_module_put; + } + + ctx->type = type; + ctx->tfm = type->alloc_tfm(algo__str); + if (IS_ERR(ctx->tfm)) { + *err = PTR_ERR(ctx->tfm); + goto err_free_ctx; + } + + if (authsize) { + *err = type->setauthsize(ctx->tfm, authsize); + if (*err) + goto err_free_tfm; + } + + *err = type->setkey(ctx->tfm, key, key_len); + if (*err) + goto err_free_tfm; + + refcount_set(&ctx->usage, 1); + + return ctx; + +err_free_tfm: + type->free_tfm(ctx->tfm); +err_free_ctx: + kfree(ctx); +err_module_put: + module_put(type->owner); + + return NULL; +} + +static void crypto_free_cb(struct rcu_head *head) +{ + struct bpf_crypto_ctx *ctx; + + ctx = container_of(head, struct bpf_crypto_ctx, rcu); + ctx->type->free_tfm(ctx->tfm); + module_put(ctx->type->owner); + kfree(ctx); +} + +/** + * bpf_crypto_ctx_acquire() - Acquire a reference to a BPF crypto context. + * @ctx: The BPF crypto context being acquired. The ctx must be a trusted + * pointer. + * + * Acquires a reference to a BPF crypto context. The context returned by this function + * must either be embedded in a map as a kptr, or freed with + * bpf_crypto_ctx_release(). + */ +__bpf_kfunc struct bpf_crypto_ctx * +bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx) +{ + refcount_inc(&ctx->usage); + return ctx; +} + +/** + * bpf_crypto_ctx_release() - Release a previously acquired BPF crypto context. + * @ctx: The crypto context being released. + * + * Releases a previously acquired reference to a BPF crypto context. When the final + * reference of the BPF crypto context has been released, it is subsequently freed in + * an RCU callback in the BPF memory allocator.
+ */ +__bpf_kfunc void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx) +{ + if (refcount_dec_and_test(&ctx->usage)) + call_rcu(&ctx->rcu, crypto_free_cb); +} + +static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx, + const struct bpf_dynptr_kern *src, + struct bpf_dynptr_kern *dst, + const struct bpf_dynptr_kern *siv, + bool decrypt) +{ + u32 src_len, dst_len, siv_len; + const u8 *psrc; + u8 *pdst, *piv; + int err; + + if (ctx->type->get_flags(ctx->tfm) & CRYPTO_TFM_NEED_KEY) + return -EINVAL; + + if (__bpf_dynptr_is_rdonly(dst)) + return -EINVAL; + + siv_len = __bpf_dynptr_size(siv); + src_len = __bpf_dynptr_size(src); + dst_len = __bpf_dynptr_size(dst); + if (!src_len || !dst_len) + return -EINVAL; + + if (siv_len != (ctx->type->ivsize(ctx->tfm) + ctx->type->statesize(ctx->tfm))) + return -EINVAL; + + psrc = __bpf_dynptr_data(src, src_len); + if (!psrc) + return -EINVAL; + pdst = __bpf_dynptr_data_rw(dst, dst_len); + if (!pdst) + return -EINVAL; + + piv = siv_len ? __bpf_dynptr_data_rw(siv, siv_len) : NULL; + if (siv_len && !piv) + return -EINVAL; + + err = decrypt ? ctx->type->decrypt(ctx->tfm, psrc, pdst, src_len, piv) + : ctx->type->encrypt(ctx->tfm, psrc, pdst, src_len, piv); + + return err; +} + +/** + * bpf_crypto_decrypt() - Decrypt buffer using configured context and IV provided. + * @ctx: The crypto context being used. The ctx must be a trusted pointer. + * @src: bpf_dynptr to the encrypted data. Must be a trusted pointer. + * @dst: bpf_dynptr to the buffer where to store the result. Must be a trusted pointer. + * @siv: bpf_dynptr to IV data and state data to be used by decryptor. + * + * Decrypts provided buffer using IV data and the crypto context. Crypto context must be configured. + */ +__bpf_kfunc int bpf_crypto_decrypt(struct bpf_crypto_ctx *ctx, + const struct bpf_dynptr_kern *src, + struct bpf_dynptr_kern *dst, + struct bpf_dynptr_kern *siv) +{ + return bpf_crypto_crypt(ctx, src, dst, siv, true); +} + +/** + * bpf_crypto_encrypt() - Encrypt buffer using configured context and IV provided. + * @ctx: The crypto context being used. The ctx must be a trusted pointer. + * @src: bpf_dynptr to the plain data. Must be a trusted pointer. + * @dst: bpf_dynptr to buffer where to store the result. Must be a trusted pointer. + * @siv: bpf_dynptr to IV data and state data to be used by decryptor. + * + * Encrypts provided buffer using IV data and the crypto context. Crypto context must be configured. 
+ */ +__bpf_kfunc int bpf_crypto_encrypt(struct bpf_crypto_ctx *ctx, + const struct bpf_dynptr_kern *src, + struct bpf_dynptr_kern *dst, + struct bpf_dynptr_kern *siv) +{ + return bpf_crypto_crypt(ctx, src, dst, siv, false); +} + +__bpf_kfunc_end_defs(); + +BTF_SET8_START(crypt_init_kfunc_btf_ids) +BTF_ID_FLAGS(func, bpf_crypto_ctx_create, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE) +BTF_ID_FLAGS(func, bpf_crypto_ctx_release, KF_RELEASE) +BTF_ID_FLAGS(func, bpf_crypto_ctx_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL) +BTF_SET8_END(crypt_init_kfunc_btf_ids) + +static const struct btf_kfunc_id_set crypt_init_kfunc_set = { + .owner = THIS_MODULE, + .set = &crypt_init_kfunc_btf_ids, +}; + +BTF_SET8_START(crypt_kfunc_btf_ids) +BTF_ID_FLAGS(func, bpf_crypto_decrypt, KF_RCU) +BTF_ID_FLAGS(func, bpf_crypto_encrypt, KF_RCU) +BTF_SET8_END(crypt_kfunc_btf_ids) + +static const struct btf_kfunc_id_set crypt_kfunc_set = { + .owner = THIS_MODULE, + .set = &crypt_kfunc_btf_ids, +}; + +BTF_ID_LIST(bpf_crypto_dtor_ids) +BTF_ID(struct, bpf_crypto_ctx) +BTF_ID(func, bpf_crypto_ctx_release) + +static int __init crypto_kfunc_init(void) +{ + int ret; + const struct btf_id_dtor_kfunc bpf_crypto_dtors[] = { + { + .btf_id = bpf_crypto_dtor_ids[0], + .kfunc_btf_id = bpf_crypto_dtor_ids[1] + }, + }; + + ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &crypt_kfunc_set); + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_ACT, &crypt_kfunc_set); + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &crypt_kfunc_set); + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC, + &crypt_init_kfunc_set); + return ret ?: register_btf_id_dtor_kfuncs(bpf_crypto_dtors, + ARRAY_SIZE(bpf_crypto_dtors), + THIS_MODULE); +} + +late_initcall(crypto_kfunc_init); diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index e04ca1af8927..593adf036ec0 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -1440,7 +1440,7 @@ static const struct bpf_func_proto bpf_kptr_xchg_proto = { #define DYNPTR_SIZE_MASK 0xFFFFFF #define DYNPTR_RDONLY_BIT BIT(31) -static bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr) +bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr) { return ptr->size & DYNPTR_RDONLY_BIT; } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 9507800026cf..74b24df05a3c 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5265,6 +5265,7 @@ BTF_ID(struct, cgroup) #endif BTF_ID(struct, bpf_cpumask) BTF_ID(struct, task_struct) +BTF_ID(struct, bpf_crypto_ctx) BTF_SET_END(rcu_protected_types) static bool rcu_protected_object(const struct btf *btf, u32 btf_id) From patchwork Mon Jan 15 22:08:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vadim Fedorenko X-Patchwork-Id: 13520229 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 669261B5AA; Mon, 15 Jan 2024 22:08:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=meta.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=meta.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=meta.com header.i=@meta.com header.b="EWLhrreO" Received: from pps.filterd (m0044010.ppops.net 
[127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 40FEOQP0002617; Mon, 15 Jan 2024 14:08:19 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=meta.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=s2048-2021-q4; bh=PbW9lm3Hm509IJ92WErvuFAfIoq3sCqG7oXnAe9h1v4=; b=EWLhrreOl6x848TvjWaxeJ+HZ2naZwL1y04OSaMT4bmcHD65eHlG3OuoQnldcXqLgkzB dhr2G+egvDds7y0AFRMhV3r6CX9RKRL6o1feS5y44BzUeGFXP30XjXZmmWlEACc0aC74 eCC6iH+BIqFY2xVriebVC+srPh97wfZaZYQRk6MDQ0f7eQLDqIu7BYMSdlWAg2SVfHnO N6cRKBHgsDyzD2UVSStzMmDFugnKoSuNMmp7pVr65p7F0ubec8uW4PJ2mkbrvsNjpqKh SjiWQPfzaP+SGbmT3p7I+QlFBjdPZw95yOz2KCVFKqi5GkB9STlVip1a82WqHKdvRaNy fQ== Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3vn5ve2fn9-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT); Mon, 15 Jan 2024 14:08:18 -0800 Received: from devvm4158.cln0.facebook.com (2620:10d:c085:208::11) by mail.thefacebook.com (2620:10d:c085:21d::8) with Microsoft SMTP Server id 15.1.2507.35; Mon, 15 Jan 2024 14:08:14 -0800 From: Vadim Fedorenko To: Vadim Fedorenko , Jakub Kicinski , Martin KaFai Lau , Andrii Nakryiko , Alexei Starovoitov , Mykola Lysenko , Herbert Xu CC: Vadim Fedorenko , , , , Victor Stewart Subject: [PATCH bpf-next v8 2/3] bpf: crypto: add skcipher to bpf crypto Date: Mon, 15 Jan 2024 14:08:02 -0800 Message-ID: <20240115220803.1973440-2-vadfed@meta.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20240115220803.1973440-1-vadfed@meta.com> References: <20240115220803.1973440-1-vadfed@meta.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: QpYG2gH4EkdA3tQEZooDtzMHgyvUboKI X-Proofpoint-ORIG-GUID: QpYG2gH4EkdA3tQEZooDtzMHgyvUboKI X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2024-01-15_15,2024-01-15_03,2023-05-22_02 Implement skcipher crypto in BPF crypto framework. Signed-off-by: Vadim Fedorenko Acked-by: Herbert Xu --- v7 -> v8: - Move bpf_crypto_skcipher.c to crypto and make it part of skcipher module. This way looks more natural and makes bpf crypto proper modular. MAINTAINERS files is adjusted to make bpf part belong to BPF maintainers. 
v6 - v7: - style issues v6: - introduce new file --- MAINTAINERS | 8 ++++ crypto/Makefile | 3 ++ crypto/bpf_crypto_skcipher.c | 82 ++++++++++++++++++++++++++++++++++++ 3 files changed, 93 insertions(+) create mode 100644 crypto/bpf_crypto_skcipher.c diff --git a/MAINTAINERS b/MAINTAINERS index c36618d4659e..ae788357c56d 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3753,6 +3753,14 @@ F: kernel/bpf/tnum.c F: kernel/bpf/trampoline.c F: kernel/bpf/verifier.c +BPF [CRYPTO] +M: Vadim Fedorenko +L: bpf@vger.kernel.org +S: Maintained +F: crypto/bpf_crypto_skcipher.c +F: include/linux/bpf_crypto.h +F: kernel/bpf/crypto.c + BPF [DOCUMENTATION] (Related to Standardization) R: David Vernet L: bpf@vger.kernel.org diff --git a/crypto/Makefile b/crypto/Makefile index 408f0a1f9ab9..538124f8bf8a 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -20,6 +20,9 @@ crypto_skcipher-y += lskcipher.o crypto_skcipher-y += skcipher.o obj-$(CONFIG_CRYPTO_SKCIPHER2) += crypto_skcipher.o +ifeq ($(CONFIG_BPF_SYSCALL),y) +obj-$(CONFIG_CRYPTO_SKCIPHER2) += bpf_crypto_skcipher.o +endif obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o diff --git a/crypto/bpf_crypto_skcipher.c b/crypto/bpf_crypto_skcipher.c new file mode 100644 index 000000000000..e0b32cf7f002 --- /dev/null +++ b/crypto/bpf_crypto_skcipher.c @@ -0,0 +1,82 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2023 Meta, Inc */ +#include +#include +#include +#include + +static void *bpf_crypto_lskcipher_alloc_tfm(const char *algo) +{ + return crypto_alloc_lskcipher(algo, 0, 0); +} + +static void bpf_crypto_lskcipher_free_tfm(void *tfm) +{ + crypto_free_lskcipher(tfm); +} + +static int bpf_crypto_lskcipher_has_algo(const char *algo) +{ + return crypto_has_skcipher(algo, CRYPTO_ALG_TYPE_LSKCIPHER, CRYPTO_ALG_TYPE_MASK); +} + +static int bpf_crypto_lskcipher_setkey(void *tfm, const u8 *key, unsigned int keylen) +{ + return crypto_lskcipher_setkey(tfm, key, keylen); +} + +static u32 bpf_crypto_lskcipher_get_flags(void *tfm) +{ + return crypto_lskcipher_get_flags(tfm); +} + +static unsigned int bpf_crypto_lskcipher_ivsize(void *tfm) +{ + return crypto_lskcipher_ivsize(tfm); +} + +static unsigned int bpf_crypto_lskcipher_statesize(void *tfm) +{ + return crypto_lskcipher_statesize(tfm); +} + +static int bpf_crypto_lskcipher_encrypt(void *tfm, const u8 *src, u8 *dst, + unsigned int len, u8 *siv) +{ + return crypto_lskcipher_encrypt(tfm, src, dst, len, siv); +} + +static int bpf_crypto_lskcipher_decrypt(void *tfm, const u8 *src, u8 *dst, + unsigned int len, u8 *siv) +{ + return crypto_lskcipher_decrypt(tfm, src, dst, len, siv); +} + +static const struct bpf_crypto_type bpf_crypto_lskcipher_type = { + .alloc_tfm = bpf_crypto_lskcipher_alloc_tfm, + .free_tfm = bpf_crypto_lskcipher_free_tfm, + .has_algo = bpf_crypto_lskcipher_has_algo, + .setkey = bpf_crypto_lskcipher_setkey, + .encrypt = bpf_crypto_lskcipher_encrypt, + .decrypt = bpf_crypto_lskcipher_decrypt, + .ivsize = bpf_crypto_lskcipher_ivsize, + .statesize = bpf_crypto_lskcipher_statesize, + .get_flags = bpf_crypto_lskcipher_get_flags, + .owner = THIS_MODULE, + .name = "skcipher", +}; + +static int __init bpf_crypto_skcipher_init(void) +{ + return bpf_crypto_register_type(&bpf_crypto_lskcipher_type); +} + +static void __exit bpf_crypto_skcipher_exit(void) +{ + int err = bpf_crypto_unregister_type(&bpf_crypto_lskcipher_type); + WARN_ON_ONCE(err); +} + +module_init(bpf_crypto_skcipher_init); +module_exit(bpf_crypto_skcipher_exit); +MODULE_LICENSE("GPL"); From patchwork 
Mon Jan 15 22:08:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vadim Fedorenko X-Patchwork-Id: 13520227 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 184121865A; Mon, 15 Jan 2024 22:08:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=meta.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=meta.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=meta.com header.i=@meta.com header.b="l5dKffgs" Received: from pps.filterd (m0044010.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 40FEOQP2002617; Mon, 15 Jan 2024 14:08:20 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=meta.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=s2048-2021-q4; bh=qUAbAl2fxN10CFm519eKRFOf54oJysZiqOT5RK2YGO8=; b=l5dKffgs72bTj3anKxs7MXqbIeoS0JfEiAOs17WIRTUMWlI7UWIlRbGS0i2SgdXKB83d 3KFCCzoft7oKQYBMAmY9TilMitmcNtKIRNiANZq8LJMj+MAvGjEmMAB5dyHcejw2DZhd FRDd/014HM+rkq0bEvX4No1nE/1/NDRO5cizzW8MuO+9BZwq794qUA3jKUTb95uL/bTy wy3QlI1aBQJAXJPwQikqZvzRjkgM9g0szpXPAcKK00PZzf5xngxGXtRTDKtCldE8KRzY IOYWCQ90W/kW5LNIS3TLkEiATR46wRhAwlYuJwRPooaW1vEJ4tMb3gfCHXJbaVmtEWI8 Jg== Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3vn5ve2fn9-4 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT); Mon, 15 Jan 2024 14:08:20 -0800 Received: from devvm4158.cln0.facebook.com (2620:10d:c085:208::11) by mail.thefacebook.com (2620:10d:c085:21d::8) with Microsoft SMTP Server id 15.1.2507.35; Mon, 15 Jan 2024 14:08:17 -0800 From: Vadim Fedorenko To: Vadim Fedorenko , Jakub Kicinski , Martin KaFai Lau , Andrii Nakryiko , Alexei Starovoitov , Mykola Lysenko , Herbert Xu CC: Vadim Fedorenko , , , , Victor Stewart Subject: [PATCH bpf-next v8 3/3] selftests: bpf: crypto skcipher algo selftests Date: Mon, 15 Jan 2024 14:08:03 -0800 Message-ID: <20240115220803.1973440-3-vadfed@meta.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20240115220803.1973440-1-vadfed@meta.com> References: <20240115220803.1973440-1-vadfed@meta.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Proofpoint-GUID: zYJ7D-jecQ-esI6rsJGtDXHmJRHbQ2fF X-Proofpoint-ORIG-GUID: zYJ7D-jecQ-esI6rsJGtDXHmJRHbQ2fF X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.997,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2024-01-15_15,2024-01-15_03,2023-05-22_02 Add simple tc hook selftests to show the way to work with new crypto BPF API. Some tricky dynptr initialization is used to provide empty iv dynptr. Simple AES-ECB algo is used to demonstrate encryption and decryption of fixed size buffers. 
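The "tricky dynptr initialization" refers to the IV argument: bpf_crypto_encrypt()/bpf_crypto_decrypt() expect an siv dynptr whose size equals the transform's ivsize + statesize, which is 0 for ecb(aes), yet the dynptr still has to be backed by a valid memory region. A condensed sketch of that datapath pattern is shown below; it reuses crypto_common.h from this series, the buffer and program names are illustrative, and a real program would build the source dynptr from the skb as the selftest below does.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include "crypto_common.h" /* kptr map helpers + kfunc externs from this series */

unsigned char src_buf[16]; /* illustrative: one AES block to encrypt */
unsigned char dst_buf[16];
int status;

SEC("tc")
int encrypt_one_block(struct __sk_buff *skb)
{
	struct bpf_dynptr psrc, pdst, iv;
	struct __crypto_ctx_value *v;
	struct bpf_crypto_ctx *ctx;

	v = crypto_ctx_value_lookup();
	if (!v)
		return 0;
	ctx = v->ctx;
	if (!ctx)
		return 0;

	bpf_dynptr_from_mem(src_buf, sizeof(src_buf), 0, &psrc);
	bpf_dynptr_from_mem(dst_buf, sizeof(dst_buf), 0, &pdst);
	/* ecb(aes) needs no IV (ivsize + statesize == 0), so the siv dynptr
	 * must have size 0 but still point at valid memory
	 */
	bpf_dynptr_from_mem(dst_buf, 0, 0, &iv);

	status = bpf_crypto_encrypt(ctx, &psrc, &pdst, &iv);
	return 0;
}

char _license[] SEC("license") = "GPL";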
Signed-off-by: Vadim Fedorenko --- v7 -> v8: - use sizeof for all constant buffer operations - make local functions static - initialize crypto_key value via access to bss data - add bpf_skb_pull_data to be sure that data is linear - some comments around tricky dynptr initialization v6 -> v7: - style issues v5 -> v6: - use AF_ALG socket to confirm proper algorithm test - adjust test kernel config to include AF_ALG v4 -> v5: - adjust selftests to use new naming - restore tests on aarch64 and s390 as no sg lists are used v3 -> v4: - adjust selftests to use new syntax of helpers - add tests for acquire and release v2 -> v3: - disable tests on s390 and aarch64 because of unknown Fatal exception in sg_init_one v1 -> v2: - add CONFIG_CRYPTO_AES and CONFIG_CRYPTO_ECB to selftest build config suggested by Daniel --- tools/testing/selftests/bpf/config | 5 + .../selftests/bpf/prog_tests/crypto_sanity.c | 217 ++++++++++++++++++ .../selftests/bpf/progs/crypto_common.h | 67 ++++++ .../selftests/bpf/progs/crypto_sanity.c | 200 ++++++++++++++++ 4 files changed, 489 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/crypto_sanity.c create mode 100644 tools/testing/selftests/bpf/progs/crypto_common.h create mode 100644 tools/testing/selftests/bpf/progs/crypto_sanity.c diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config index c125c441abc7..2221994a36d6 100644 --- a/tools/testing/selftests/bpf/config +++ b/tools/testing/selftests/bpf/config @@ -13,7 +13,12 @@ CONFIG_BPF_SYSCALL=y CONFIG_CGROUP_BPF=y CONFIG_CRYPTO_HMAC=y CONFIG_CRYPTO_SHA256=y +CONFIG_CRYPTO_USER_API=y CONFIG_CRYPTO_USER_API_HASH=y +CONFIG_CRYPTO_USER_API_SKCIPHER=y +CONFIG_CRYPTO_SKCIPHER=y +CONFIG_CRYPTO_ECB=y +CONFIG_CRYPTO_AES=y CONFIG_DEBUG_INFO=y CONFIG_DEBUG_INFO_BTF=y CONFIG_DEBUG_INFO_DWARF4=y diff --git a/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c new file mode 100644 index 000000000000..70bde9640651 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c @@ -0,0 +1,217 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include +#include +#include +#include +#include + +#include "test_progs.h" +#include "network_helpers.h" +#include "crypto_sanity.skel.h" + +#define NS_TEST "crypto_sanity_ns" +#define IPV6_IFACE_ADDR "face::1" +#define UDP_TEST_PORT 7777 +static const unsigned char crypto_key[] = "testtest12345678"; +static const char plain_text[] = "stringtoencrypt0"; +static int opfd = -1, tfmfd = -1; + +static int init_afalg(void) +{ + struct sockaddr_alg sa = { + .salg_family = AF_ALG, + .salg_type = "skcipher", + .salg_name = "ecb(aes)" + }; + + tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0); + if (tfmfd == -1) + return errno; + if (bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa)) == -1) + return errno; + if (setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, crypto_key, 16) == -1) + return errno; + opfd = accept(tfmfd, NULL, 0); + if (opfd == -1) + return errno; + return 0; +} + +static void deinit_afalg(void) +{ + if (tfmfd) + close(tfmfd); + if (opfd) + close(opfd); +} + +static void do_crypt_afalg(const void *src, void *dst, int size, bool encrypt) +{ + struct msghdr msg = {}; + struct cmsghdr *cmsg; + char cbuf[CMSG_SPACE(4)] = {0}; + struct iovec iov; + + msg.msg_control = cbuf; + msg.msg_controllen = sizeof(cbuf); + + cmsg = CMSG_FIRSTHDR(&msg); + cmsg->cmsg_level = SOL_ALG; + cmsg->cmsg_type = ALG_SET_OP; + cmsg->cmsg_len = CMSG_LEN(4); + *(__u32 *)CMSG_DATA(cmsg) = encrypt ? ALG_OP_ENCRYPT : ALG_OP_DECRYPT; + + iov.iov_base = (char *)src; + iov.iov_len = size; + + msg.msg_iov = &iov; + msg.msg_iovlen = 1; + + sendmsg(opfd, &msg, 0); + read(opfd, dst, size); +} + +void test_crypto_sanity(void) +{ + LIBBPF_OPTS(bpf_tc_hook, qdisc_hook, .attach_point = BPF_TC_EGRESS); + LIBBPF_OPTS(bpf_tc_opts, tc_attach_enc); + LIBBPF_OPTS(bpf_tc_opts, tc_attach_dec); + LIBBPF_OPTS(bpf_test_run_opts, opts, + .repeat = 1, + ); + struct nstoken *nstoken = NULL; + struct crypto_sanity *skel; + char afalg_plain[16] = {0}; + char afalg_dst[16] = {0}; + struct sockaddr_in6 addr; + int sockfd, err, pfd; + socklen_t addrlen; + + skel = crypto_sanity__open(); + if (!ASSERT_OK_PTR(skel, "skel open")) + return; + + bpf_program__set_autoload(skel->progs.crypto_acquire, true); + + err = crypto_sanity__load(skel); + if (!ASSERT_ERR(err, "crypto_acquire unexpected load success")) + goto fail; + + crypto_sanity__destroy(skel); + + skel = crypto_sanity__open(); + if (!ASSERT_OK_PTR(skel, "skel open")) + return; + + bpf_program__set_autoload(skel->progs.crypto_acquire, false); + + SYS(fail, "ip netns add %s", NS_TEST); + SYS(fail, "ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR); + SYS(fail, "ip -net %s link set dev lo up", NS_TEST); + + err = crypto_sanity__load(skel); + if (!ASSERT_OK(err, "crypto_sanity__load")) + goto fail; + + memcpy(skel->bss->crypto_key, crypto_key, sizeof(crypto_key)); + + nstoken = open_netns(NS_TEST); + if (!ASSERT_OK_PTR(nstoken, "open_netns")) + goto fail; + + err = init_afalg(); + if (!ASSERT_OK(err, "AF_ALG init fail")) + goto fail; + + qdisc_hook.ifindex = if_nametoindex("lo"); + if (!ASSERT_GT(qdisc_hook.ifindex, 0, "if_nametoindex lo")) + goto fail; + + err = crypto_sanity__attach(skel); + if (!ASSERT_OK(err, "crypto_sanity__attach")) + goto fail; + + pfd = bpf_program__fd(skel->progs.crypto_release); + if (!ASSERT_GT(pfd, 0, "crypto_release fd")) + goto fail; + + err = bpf_prog_test_run_opts(pfd, &opts); + if (!ASSERT_OK(err, "crypto_release") || + !ASSERT_OK(opts.retval, "crypto_release retval")) + goto fail; + + pfd = bpf_program__fd(skel->progs.skb_crypto_setup); + if 
(!ASSERT_GT(pfd, 0, "skb_crypto_setup fd")) + goto fail; + + err = bpf_prog_test_run_opts(pfd, &opts); + if (!ASSERT_OK(err, "skb_crypto_setup") || + !ASSERT_OK(opts.retval, "skb_crypto_setup retval")) + goto fail; + + if (!ASSERT_OK(skel->bss->status, "skb_crypto_setup status")) + goto fail; + + err = bpf_tc_hook_create(&qdisc_hook); + if (!ASSERT_OK(err, "create qdisc hook")) + goto fail; + + addrlen = sizeof(addr); + err = make_sockaddr(AF_INET6, IPV6_IFACE_ADDR, UDP_TEST_PORT, + (void *)&addr, &addrlen); + if (!ASSERT_OK(err, "make_sockaddr")) + goto fail; + + tc_attach_enc.prog_fd = bpf_program__fd(skel->progs.encrypt_sanity); + err = bpf_tc_attach(&qdisc_hook, &tc_attach_enc); + if (!ASSERT_OK(err, "attach encrypt filter")) + goto fail; + + sockfd = socket(AF_INET6, SOCK_DGRAM, 0); + if (!ASSERT_NEQ(sockfd, -1, "encrypt socket")) + goto fail; + err = sendto(sockfd, plain_text, sizeof(plain_text), 0, (void *)&addr, addrlen); + close(sockfd); + if (!ASSERT_EQ(err, sizeof(plain_text), "encrypt send")) + goto fail; + + do_crypt_afalg(plain_text, afalg_dst, sizeof(afalg_dst), true); + + bpf_tc_detach(&qdisc_hook, &tc_attach_enc); + if (!ASSERT_OK(skel->bss->status, "encrypt status")) + goto fail; + if (!ASSERT_STRNEQ(skel->bss->dst, afalg_dst, sizeof(afalg_dst), "encrypt AF_ALG")) + goto fail; + + tc_attach_dec.prog_fd = bpf_program__fd(skel->progs.decrypt_sanity); + err = bpf_tc_attach(&qdisc_hook, &tc_attach_dec); + if (!ASSERT_OK(err, "attach decrypt filter")) + goto fail; + + sockfd = socket(AF_INET6, SOCK_DGRAM, 0); + if (!ASSERT_NEQ(sockfd, -1, "decrypt socket")) + goto fail; + err = sendto(sockfd, afalg_dst, sizeof(afalg_dst), 0, (void *)&addr, addrlen); + close(sockfd); + if (!ASSERT_EQ(err, sizeof(afalg_dst), "decrypt send")) + goto fail; + + do_crypt_afalg(afalg_dst, afalg_plain, sizeof(afalg_plain), false); + + bpf_tc_detach(&qdisc_hook, &tc_attach_dec); + if (!ASSERT_OK(skel->bss->status, "decrypt status")) + goto fail; + if (!ASSERT_STRNEQ(skel->bss->dst, afalg_plain, sizeof(afalg_plain), "decrypt AF_ALG")) + goto fail; + +fail: + if (nstoken) { + bpf_tc_hook_destroy(&qdisc_hook); + close_netns(nstoken); + } + deinit_afalg(); + SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null"); + crypto_sanity__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/progs/crypto_common.h b/tools/testing/selftests/bpf/progs/crypto_common.h new file mode 100644 index 000000000000..260b9a0fb4ed --- /dev/null +++ b/tools/testing/selftests/bpf/progs/crypto_common.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#ifndef _CRYPTO_COMMON_H +#define _CRYPTO_COMMON_H + +#include "errno.h" +#include + +struct bpf_crypto_ctx *bpf_crypto_ctx_create(const char *type__str, const char *algo__str, + const struct bpf_dynptr *pkey, + unsigned int authsize, int *err) __ksym; +struct bpf_crypto_ctx *bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx) __ksym; +void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx) __ksym; +int bpf_crypto_encrypt(struct bpf_crypto_ctx *ctx, const struct bpf_dynptr *src, + struct bpf_dynptr *dst, struct bpf_dynptr *iv) __ksym; +int bpf_crypto_decrypt(struct bpf_crypto_ctx *ctx, const struct bpf_dynptr *src, + struct bpf_dynptr *dst, struct bpf_dynptr *iv) __ksym; + +struct __crypto_ctx_value { + struct bpf_crypto_ctx __kptr * ctx; +}; + +struct array_map { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, int); + __type(value, struct __crypto_ctx_value); + __uint(max_entries, 1); +} __crypto_ctx_map SEC(".maps"); + +static inline struct __crypto_ctx_value *crypto_ctx_value_lookup(void) +{ + u32 key = 0; + + return bpf_map_lookup_elem(&__crypto_ctx_map, &key); +} + +static inline int crypto_ctx_insert(struct bpf_crypto_ctx *ctx) +{ + struct __crypto_ctx_value local, *v; + struct bpf_crypto_ctx *old; + u32 key = 0; + int err; + + local.ctx = NULL; + err = bpf_map_update_elem(&__crypto_ctx_map, &key, &local, 0); + if (err) { + bpf_crypto_ctx_release(ctx); + return err; + } + + v = bpf_map_lookup_elem(&__crypto_ctx_map, &key); + if (!v) { + bpf_crypto_ctx_release(ctx); + return -ENOENT; + } + + old = bpf_kptr_xchg(&v->ctx, ctx); + if (old) { + bpf_crypto_ctx_release(old); + return -EEXIST; + } + + return 0; +} + +#endif /* _CRYPTO_COMMON_H */ diff --git a/tools/testing/selftests/bpf/progs/crypto_sanity.c b/tools/testing/selftests/bpf/progs/crypto_sanity.c new file mode 100644 index 000000000000..8bf5b6b87410 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/crypto_sanity.c @@ -0,0 +1,200 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include "vmlinux.h" +#include "bpf_tracing_net.h" +#include +#include +#include +#include "bpf_misc.h" +#include "bpf_kfuncs.h" +#include "crypto_common.h" + +#define UDP_TEST_PORT 7777 +unsigned char crypto_key[16]; +char dst[16] = {}; +int status; + +static int skb_dynptr_validate(struct __sk_buff *skb, struct bpf_dynptr *psrc) +{ + struct ipv6hdr ip6h; + struct udphdr udph; + u32 offset; + + if (skb->protocol != __bpf_constant_htons(ETH_P_IPV6)) + return -1; + + if (bpf_skb_load_bytes(skb, ETH_HLEN, &ip6h, sizeof(ip6h))) + return -1; + + if (ip6h.nexthdr != IPPROTO_UDP) + return -1; + + if (bpf_skb_load_bytes(skb, ETH_HLEN + sizeof(ip6h), &udph, sizeof(udph))) + return -1; + + if (udph.dest != __bpf_constant_htons(UDP_TEST_PORT)) + return -1; + + offset = ETH_HLEN + sizeof(ip6h) + sizeof(udph); + if (skb->len < offset + 16) + return -1; + + /* let's make sure that 16 bytes of payload are in the linear part of skb */ + bpf_skb_pull_data(skb, offset + 16); + bpf_dynptr_from_skb(skb, 0, psrc); + bpf_dynptr_adjust(psrc, offset, offset + 16); + + return 0; +} + +SEC("fentry.s/bpf_fentry_test1") +int BPF_PROG(skb_crypto_setup) +{ + struct bpf_crypto_ctx *cctx; + struct bpf_dynptr key = {}; + int err = 0; + + status = 0; + + bpf_dynptr_from_mem(crypto_key, sizeof(crypto_key), 0, &key); + cctx = bpf_crypto_ctx_create("skcipher", "ecb(aes)", &key, 0, &err); + + if (!cctx) { + status = err; + return 0; + } + + err = crypto_ctx_insert(cctx); + if (err && err != -EEXIST) + status = err; + + return 0; +} + +SEC("fentry.s/bpf_fentry_test1") +int BPF_PROG(crypto_release) +{ + struct bpf_crypto_ctx *cctx; + struct bpf_dynptr key = {}; + int err = 0; + + status = 0; + + bpf_dynptr_from_mem(crypto_key, sizeof(crypto_key), 0, &key); + cctx = bpf_crypto_ctx_create("skcipher", "ecb(aes)", &key, 0, &err); + + if (!cctx) { + status = err; + return 0; + } + + bpf_crypto_ctx_release(cctx); + + return 0; +} + +SEC("?fentry.s/bpf_fentry_test1") +__failure __msg("Unreleased reference") +int BPF_PROG(crypto_acquire) +{ + struct bpf_crypto_ctx *cctx; + struct bpf_dynptr key = {}; + int err = 0; + + status = 0; + + bpf_dynptr_from_mem(crypto_key, sizeof(crypto_key), 0, &key); + cctx = bpf_crypto_ctx_create("skcipher", "ecb(aes)", &key, 0, &err); + + if (!cctx) { + status = err; + return 0; + } + + cctx = bpf_crypto_ctx_acquire(cctx); + if (!cctx) + return -EINVAL; + + bpf_crypto_ctx_release(cctx); + + return 0; +} + +SEC("tc") +int decrypt_sanity(struct __sk_buff *skb) +{ + struct __crypto_ctx_value *v; + struct bpf_crypto_ctx *ctx; + struct bpf_dynptr psrc, pdst, iv; + int err; + + err = skb_dynptr_validate(skb, &psrc); + if (err < 0) { + status = err; + return TC_ACT_SHOT; + } + + v = crypto_ctx_value_lookup(); + if (!v) { + status = -ENOENT; + return TC_ACT_SHOT; + } + + ctx = v->ctx; + if (!ctx) { + status = -ENOENT; + return TC_ACT_SHOT; + } + + bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst); + /* iv dynptr has to be initialized with 0 size, but proper memory region + * has to be provided anyway + */ + bpf_dynptr_from_mem(dst, 0, 0, &iv); + + status = bpf_crypto_decrypt(ctx, &psrc, &pdst, &iv); + + return TC_ACT_SHOT; +} + +SEC("tc") +int encrypt_sanity(struct __sk_buff *skb) +{ + struct __crypto_ctx_value *v; + struct bpf_crypto_ctx *ctx; + struct bpf_dynptr psrc, pdst, iv; + int err; + + status = 0; + + err = skb_dynptr_validate(skb, &psrc); + if (err < 0) { + status = err; + return TC_ACT_SHOT; + } + + v = crypto_ctx_value_lookup(); + if (!v) { + status = -ENOENT; + return TC_ACT_SHOT; + } + + ctx = 
v->ctx; + if (!ctx) { + status = -ENOENT; + return TC_ACT_SHOT; + } + + bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst); + /* iv dynptr has to be initialized with 0 size, but proper memory region + * has to be provided anyway + */ + bpf_dynptr_from_mem(dst, 0, 0, &iv); + + status = bpf_crypto_encrypt(ctx, &psrc, &pdst, &iv); + + return TC_ACT_SHOT; +} + +char __license[] SEC("license") = "GPL";
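Assuming the usual BPF selftest workflow, the new test should be runnable from tools/testing/selftests/bpf with "./test_progs -t crypto_sanity", provided the CONFIG_CRYPTO_USER_API_SKCIPHER, CONFIG_CRYPTO_ECB and CONFIG_CRYPTO_AES options added to the selftest config above are enabled in the test kernel.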