
[bpf-next,v9,1/4] bpf: make common crypto API for TC/XDP programs

Message ID 20240416204004.3942393-2-vadfed@meta.com (mailing list archive)
State Superseded
Delegated to: BPF
Series BPF crypto API framework

Checks

Context Check Description
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-PR fail PR summary
bpf/vmtest-bpf-next-VM_Test-4 fail Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 fail Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-12 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-9 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-10 success Logs for s390x-gcc / test
bpf/vmtest-bpf-next-VM_Test-13 fail Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for x86_64-gcc / test
bpf/vmtest-bpf-next-VM_Test-16 success Logs for x86_64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-18 fail Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-17 fail Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-14 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-llvm-18 / veristat
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-llvm-17 / test
bpf/vmtest-bpf-next-VM_Test-22 fail Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-21 fail Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-llvm-18 / test
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit fail Errors and warnings before: 7517 this patch: 7523
netdev/build_tools success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers warning 9 maintainers not CCed: yonghong.song@linux.dev john.fastabend@gmail.com jolsa@kernel.org kpsingh@kernel.org sdf@google.com daniel@iogearbox.net haoluo@google.com eddyz87@gmail.com song@kernel.org
netdev/build_clang fail Errors and warnings before: 1228 this patch: 1229
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 7900 this patch: 7906
netdev/checkpatch warning WARNING: Prefer __aligned(8) over __attribute__((aligned(8))) WARNING: added, moved or deleted file(s), does MAINTAINERS need updating? WARNING: line length of 100 exceeds 80 columns WARNING: line length of 81 exceeds 80 columns WARNING: line length of 82 exceeds 80 columns WARNING: line length of 83 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns WARNING: line length of 85 exceeds 80 columns WARNING: line length of 86 exceeds 80 columns WARNING: line length of 90 exceeds 80 columns WARNING: line length of 94 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc fail Errors and warnings before: 13 this patch: 18
netdev/source_inline success Was 0 now: 0

Commit Message

Vadim Fedorenko April 16, 2024, 8:40 p.m. UTC
Add crypto API support to BPF to be able to decrypt or encrypt packets
in TC/XDP BPF programs. Special care should be taken for the
initialization part of the crypto algorithm, because crypto allocation
doesn't work with preemption disabled and can therefore only be run in
a sleepable BPF program. Also, async crypto is not supported because of
the very same issue - TC/XDP BPF programs are not sleepable.

Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
---
v8 -> v9:
- improve the initialization API to provide more data and fully initialize
  the crypto context in a single call, ready for crypto operations
- improve fast path to avoid indirect calls
- make bpf_crypto_create runnable from syscall type programs
v7 -> v8:
- add statesize ops to bpf crypto type as some ciphers are now stateful
- improve error path in bpf_crypto_create
v6 -> v7:
- style fixes
v5 -> v6:
- replace lskcipher with infrastructure to provide pluggable cipher
  types
- add BPF skcipher as plug-in module in a separate patch
v4 -> v5:
- replace crypto API to use lskcipher (suggested by Herbert Xu)
- remove SG list usage and provide raw buffers
v3 -> v4:
- reuse __bpf_dynptr_data and remove own implementation
- use const __str to provide algorithm name
- use kfunc macros to avoid compiler warnings
v2 -> v3:
- fix kdoc issues
v1 -> v2:
- use kmalloc in sleepable func, suggested by Alexei
- use __bpf_dynptr_is_rdonly() to check destination, suggested by Jakub
- use __bpf_dynptr_data_ptr() for all dynptr accesses
---
 include/linux/bpf.h        |   1 +
 include/linux/bpf_crypto.h |  24 +++
 kernel/bpf/Makefile        |   3 +
 kernel/bpf/crypto.c        | 377 +++++++++++++++++++++++++++++++++++++
 kernel/bpf/helpers.c       |   2 +-
 kernel/bpf/verifier.c      |   1 +
 6 files changed, 407 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/bpf_crypto.h
 create mode 100644 kernel/bpf/crypto.c
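
To make the sleepable/non-sleepable split described above concrete, a minimal
BPF-side sketch could look roughly like the following (hypothetical program; it
assumes the "skcipher" type plugged in later in this series, and that
struct bpf_crypto_params and the kfuncs are visible to the program via
vmlinux.h and __ksym declarations):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern struct bpf_crypto_ctx *
bpf_crypto_ctx_create(const struct bpf_crypto_params *params, int *err) __ksym;
extern void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx) __ksym;

struct ctx_value {
	struct bpf_crypto_ctx __kptr *ctx;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct ctx_value);
} crypto_ctx_map SEC(".maps");

/* Sleepable part: allocate the tfm and publish the context as a map kptr. */
SEC("syscall")
int crypto_setup(void *args)
{
	struct bpf_crypto_params params = {
		.type = "skcipher",
		.algo = "ecb(aes)",
		.key_len = 16,	/* key left all-zero purely for this sketch; a real
				 * program would copy it in with __builtin_memcpy()
				 */
	};
	struct bpf_crypto_ctx *cctx, *old;
	struct ctx_value *v;
	int err = 0, key = 0;

	v = bpf_map_lookup_elem(&crypto_ctx_map, &key);
	if (!v)
		return -1;

	cctx = bpf_crypto_ctx_create(&params, &err);
	if (!cctx)
		return err;

	old = bpf_kptr_xchg(&v->ctx, cctx);
	if (old)
		bpf_crypto_ctx_release(old);
	return 0;
}

/* Non-sleepable fast path: a TC/XDP program looks the kptr up from
 * crypto_ctx_map, builds dynptrs over the packet payload and an IV buffer,
 * and calls bpf_crypto_decrypt()/bpf_crypto_encrypt() on them.
 */

char _license[] SEC("license") = "GPL";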

Comments

kernel test robot April 17, 2024, 6:17 a.m. UTC | #1
Hi Vadim,

kernel test robot noticed the following build warnings:

[auto build test WARNING on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Vadim-Fedorenko/bpf-make-common-crypto-API-for-TC-XDP-programs/20240417-044349
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20240416204004.3942393-2-vadfed%40meta.com
patch subject: [PATCH bpf-next v9 1/4] bpf: make common crypto API for TC/XDP programs
config: x86_64-rhel-8.3-rust (https://download.01.org/0day-ci/archive/20240417/202404171409.EgchVGya-lkp@intel.com/config)
compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240417/202404171409.EgchVGya-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202404171409.EgchVGya-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> kernel/bpf/crypto.c:53: warning: Function parameter or struct member 'siv_len' not described in 'bpf_crypto_ctx'


vim +53 kernel/bpf/crypto.c

    37	
    38	/**
    39	 * struct bpf_crypto_ctx - refcounted BPF crypto context structure
    40	 * @type:	The pointer to bpf crypto type
    41	 * @tfm:	The pointer to instance of crypto API struct.
    42	 * @rcu:	The RCU head used to free the crypto context with RCU safety.
    43	 * @usage:	Object reference counter. When the refcount goes to 0, the
    44	 *		memory is released back to the BPF allocator, which provides
    45	 *		RCU safety.
    46	 */
    47	struct bpf_crypto_ctx {
    48		const struct bpf_crypto_type *type;
    49		void *tfm;
    50		u32 siv_len;
    51		struct rcu_head rcu;
    52		refcount_t usage;
  > 53	};
    54
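
Presumably the warning only wants the new member documented in the kdoc block,
e.g. something along these lines (sketch):

 * @siv_len:	Size of IV plus internal cipher state, in bytes.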
Martin KaFai Lau April 19, 2024, 6:57 p.m. UTC | #2
On 4/16/24 1:40 PM, Vadim Fedorenko wrote:
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 5034c1b4ded7..acc479c13f52 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1265,6 +1265,7 @@ int bpf_dynptr_check_size(u32 size);
>   u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
>   const void *__bpf_dynptr_data(const struct bpf_dynptr_kern *ptr, u32 len);
>   void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
> +bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
>   
>   #ifdef CONFIG_BPF_JIT
>   int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
> diff --git a/include/linux/bpf_crypto.h b/include/linux/bpf_crypto.h
> new file mode 100644
> index 000000000000..a41e71d4e2d9
> --- /dev/null
> +++ b/include/linux/bpf_crypto.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
> +#ifndef _BPF_CRYPTO_H
> +#define _BPF_CRYPTO_H
> +
> +struct bpf_crypto_type {
> +	void *(*alloc_tfm)(const char *algo);
> +	void (*free_tfm)(void *tfm);
> +	int (*has_algo)(const char *algo);
> +	int (*setkey)(void *tfm, const u8 *key, unsigned int keylen);
> +	int (*setauthsize)(void *tfm, unsigned int authsize);
> +	int (*encrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv);
> +	int (*decrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv);
> +	unsigned int (*ivsize)(void *tfm);
> +	unsigned int (*statesize)(void *tfm);
> +	u32 (*get_flags)(void *tfm);
> +	struct module *owner;
> +	char name[14];
> +};
> +
> +int bpf_crypto_register_type(const struct bpf_crypto_type *type);
> +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type);
> +
> +#endif /* _BPF_CRYPTO_H */
> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> index 368c5d86b5b7..736bd22e5ce0 100644
> --- a/kernel/bpf/Makefile
> +++ b/kernel/bpf/Makefile
> @@ -44,6 +44,9 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
>   obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
>   obj-${CONFIG_BPF_LSM} += bpf_lsm.o
>   endif
> +ifeq ($(CONFIG_CRYPTO),y)
> +obj-$(CONFIG_BPF_SYSCALL) += crypto.o
> +endif
>   obj-$(CONFIG_BPF_PRELOAD) += preload/
>   
>   obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
> diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
> new file mode 100644
> index 000000000000..a76d80f37f55
> --- /dev/null
> +++ b/kernel/bpf/crypto.c
> @@ -0,0 +1,377 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright (c) 2024 Meta, Inc */
> +#include <linux/bpf.h>
> +#include <linux/bpf_crypto.h>
> +#include <linux/bpf_mem_alloc.h>
> +#include <linux/btf.h>
> +#include <linux/btf_ids.h>
> +#include <linux/filter.h>
> +#include <linux/scatterlist.h>
> +#include <linux/skbuff.h>
> +#include <crypto/skcipher.h>
> +
> +struct bpf_crypto_type_list {
> +	const struct bpf_crypto_type *type;
> +	struct list_head list;
> +};
> +
> +/* BPF crypto initialization parameters struct */
> +/**
> + * struct bpf_crypto_params - BPF crypto initialization parameters structure
> + * @type:	The string of crypto operation type.
> + * @algo:	The string of algorithm to initialize.
> + * @key:	The cipher key used to init crypto algorithm.
> + * @key_len:	The length of cipher key.
> + * @authsize:	The length of authentication tag used by algorithm.
> + */
> +struct bpf_crypto_params {
> +	char type[14];
> +	char algo[128];
> +	__u8 key[256];

It should have a two byte hole here. Add
	__u8 reserved[2];

and check for 0 in bpf_crypto_ctx_create() in case it could be reused later. The 
bpf_crypto_ctx_create() should not be called very often.

> +	__u32 key_len;
> +	__u32 authsize;

I don't think there is tail padding of this struct, so should be fine.

> +} __attribute__((aligned(8)));

Does it need aligned(8) here?
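
Putting both remarks together, the adjusted struct could look something like
this (sketch only; where exactly the padding check lives is up to the next
revision):

struct bpf_crypto_params {
	char type[14];
	char algo[128];
	__u8 key[256];
	__u8 reserved[2];	/* explicit padding, must be zero for now */
	__u32 key_len;
	__u32 authsize;
};	/* aligned(8) dropped as discussed above */

/* in bpf_crypto_ctx_create(), before anything else: refuse non-zero padding
 * so the two bytes can be reused for new fields later
 */
if (params->reserved[0] || params->reserved[1]) {
	*err = -EINVAL;
	return NULL;
}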

> +
> +static LIST_HEAD(bpf_crypto_types);
> +static DECLARE_RWSEM(bpf_crypto_types_sem);
> +
> +/**
> + * struct bpf_crypto_ctx - refcounted BPF crypto context structure
> + * @type:	The pointer to bpf crypto type
> + * @tfm:	The pointer to instance of crypto API struct.
> + * @rcu:	The RCU head used to free the crypto context with RCU safety.
> + * @usage:	Object reference counter. When the refcount goes to 0, the
> + *		memory is released back to the BPF allocator, which provides
> + *		RCU safety.
> + */
> +struct bpf_crypto_ctx {
> +	const struct bpf_crypto_type *type;
> +	void *tfm;
> +	u32 siv_len;
> +	struct rcu_head rcu;
> +	refcount_t usage;
> +};
> +
> +int bpf_crypto_register_type(const struct bpf_crypto_type *type)
> +{
> +	struct bpf_crypto_type_list *node;
> +	int err = -EEXIST;
> +
> +	down_write(&bpf_crypto_types_sem);
> +	list_for_each_entry(node, &bpf_crypto_types, list) {
> +		if (!strcmp(node->type->name, type->name))
> +			goto unlock;
> +	}
> +
> +	node = kmalloc(sizeof(*node), GFP_KERNEL);
> +	err = -ENOMEM;
> +	if (!node)
> +		goto unlock;
> +
> +	node->type = type;
> +	list_add(&node->list, &bpf_crypto_types);
> +	err = 0;
> +
> +unlock:
> +	up_write(&bpf_crypto_types_sem);
> +
> +	return err;
> +}
> +EXPORT_SYMBOL_GPL(bpf_crypto_register_type);
> +
> +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type)
> +{
> +	struct bpf_crypto_type_list *node;
> +	int err = -ENOENT;
> +
> +	down_write(&bpf_crypto_types_sem);
> +	list_for_each_entry(node, &bpf_crypto_types, list) {
> +		if (strcmp(node->type->name, type->name))
> +			continue;
> +
> +		list_del(&node->list);
> +		kfree(node);
> +		err = 0;
> +		break;
> +	}
> +	up_write(&bpf_crypto_types_sem);
> +
> +	return err;
> +}
> +EXPORT_SYMBOL_GPL(bpf_crypto_unregister_type);
> +
> +static const struct bpf_crypto_type *bpf_crypto_get_type(const char *name)
> +{
> +	const struct bpf_crypto_type *type = ERR_PTR(-ENOENT);
> +	struct bpf_crypto_type_list *node;
> +
> +	down_read(&bpf_crypto_types_sem);
> +	list_for_each_entry(node, &bpf_crypto_types, list) {
> +		if (strcmp(node->type->name, name))
> +			continue;
> +
> +		if (try_module_get(node->type->owner))
> +			type = node->type;
> +		break;
> +	}
> +	up_read(&bpf_crypto_types_sem);
> +
> +	return type;
> +}
> +
> +__bpf_kfunc_start_defs();
> +
> +/**
> + * bpf_crypto_ctx_create() - Create a mutable BPF crypto context.
> + *
> + * Allocates a crypto context that can be used, acquired, and released by
> + * a BPF program. The crypto context returned by this function must either
> + * be embedded in a map as a kptr, or freed with bpf_crypto_ctx_release().
> + * As crypto API functions use GFP_KERNEL allocations, this function can
> + * only be used in sleepable BPF programs.
> + *
> + * bpf_crypto_ctx_create() allocates memory for crypto context.
> + * It may return NULL if no memory is available.
> + * @params: pointer to struct bpf_crypto_params which contains all the
> + *          details needed to initialise crypto context.
> + * @err:    integer to store error code when NULL is returned.
> + */
> +__bpf_kfunc struct bpf_crypto_ctx *
> +bpf_crypto_ctx_create(const struct bpf_crypto_params *params, int *err)

Add a "u32 params__sz" arg in case that the params struct will have addition.
Take a look at how opts__sz is checked in nf_conntrack_bpf.c.
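A rough sketch of what that could look like here (illustrative; whether to
compare against a hard-coded constant or sizeof() is discussed further down
the thread):

__bpf_kfunc struct bpf_crypto_ctx *
bpf_crypto_ctx_create(const struct bpf_crypto_params *params, u32 params__sz,
		      int *err)
{
	/* reject callers compiled against a different layout of the params struct */
	if (params__sz != sizeof(struct bpf_crypto_params)) {
		*err = -EINVAL;
		return NULL;
	}
	/* ... rest unchanged from the patch ... */
}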

> +{
> +	const struct bpf_crypto_type *type;
> +	struct bpf_crypto_ctx *ctx;
> +
> +	type = bpf_crypto_get_type(params->type);
> +	if (IS_ERR(type)) {
> +		*err = PTR_ERR(type);
> +		return NULL;
> +	}
> +
> +	if (!type->has_algo(params->algo)) {
> +		*err = -EOPNOTSUPP;
> +		goto err_module_put;
> +	}
> +
> +	if (!params->authsize && type->setauthsize) {
> +		*err = -EOPNOTSUPP;
> +		goto err_module_put;
> +	}
> +
> +	if (params->authsize && !type->setauthsize) {

nit. Together with the previous "if" test, replace them with one test like:

	if (!!params->authsize ^ !!type->setauthsize) {


> +		*err = -EOPNOTSUPP;
> +		goto err_module_put;
> +	}
> +
> +	if (!params->key_len) {

Also checks "|| params->key_len > sizeof(params->key)"

> +		*err = -EINVAL;
> +		goto err_module_put;
> +	}
> +
> +	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> +	if (!ctx) {
> +		*err = -ENOMEM;
> +		goto err_module_put;
> +	}
> +
> +	ctx->type = type;
> +	ctx->tfm = type->alloc_tfm(params->algo);
> +	if (IS_ERR(ctx->tfm)) {
> +		*err = PTR_ERR(ctx->tfm);
> +		goto err_free_ctx;
> +	}
> +
> +	if (params->authsize) {
> +		*err = type->setauthsize(ctx->tfm, params->authsize);
> +		if (*err)
> +			goto err_free_tfm;
> +	}
> +
> +	*err = type->setkey(ctx->tfm, params->key, params->key_len);
> +	if (*err)
> +		goto err_free_tfm;
> +
> +	if (type->get_flags(ctx->tfm) & CRYPTO_TFM_NEED_KEY) {
> +		*err = -EINVAL;
> +		goto err_free_tfm;
> +	}
> +
> +	ctx->siv_len = type->ivsize(ctx->tfm) + type->statesize(ctx->tfm);
> +
> +	refcount_set(&ctx->usage, 1);
> +
> +	return ctx;
> +
> +err_free_tfm:
> +	type->free_tfm(ctx->tfm);
> +err_free_ctx:
> +	kfree(ctx);
> +err_module_put:
> +	module_put(type->owner);
> +
> +	return NULL;
> +}
> +
> +static void crypto_free_cb(struct rcu_head *head)
> +{
> +	struct bpf_crypto_ctx *ctx;
> +
> +	ctx = container_of(head, struct bpf_crypto_ctx, rcu);
> +	ctx->type->free_tfm(ctx->tfm);
> +	module_put(ctx->type->owner);
> +	kfree(ctx);
> +}
> +
> +/**
> + * bpf_crypto_ctx_acquire() - Acquire a reference to a BPF crypto context.
> + * @ctx: The BPF crypto context being acquired. The ctx must be a trusted
> + *	     pointer.
> + *
> + * Acquires a reference to a BPF crypto context. The context returned by this function
> + * must either be embedded in a map as a kptr, or freed with
> + * bpf_crypto_skcipher_ctx_release().
> + */
> +__bpf_kfunc struct bpf_crypto_ctx *
> +bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx)
> +{
> +	if (!refcount_inc_not_zero(&ctx->usage))
> +		return NULL;
> +	return ctx;
> +}
> +
> +/**
> + * bpf_crypto_ctx_release() - Release a previously acquired BPF crypto context.
> + * @ctx: The crypto context being released.
> + *
> + * Releases a previously acquired reference to a BPF crypto context. When the final
> + * reference of the BPF crypto context has been released, it is subsequently freed in
> + * an RCU callback in the BPF memory allocator.
> + */
> +__bpf_kfunc void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx)
> +{
> +	if (refcount_dec_and_test(&ctx->usage))
> +		call_rcu(&ctx->rcu, crypto_free_cb);
> +}
> +
> +static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
> +			    const struct bpf_dynptr_kern *src,
> +			    struct bpf_dynptr_kern *dst,
> +			    const struct bpf_dynptr_kern *siv,
> +			    bool decrypt)
> +{
> +	u32 src_len, dst_len, siv_len;
> +	const u8 *psrc;
> +	u8 *pdst, *piv;
> +	int err;
> +
> +	if (__bpf_dynptr_is_rdonly(dst))
> +		return -EINVAL;
> +
> +	siv_len = __bpf_dynptr_size(siv);
> +	src_len = __bpf_dynptr_size(src);
> +	dst_len = __bpf_dynptr_size(dst);
> +	if (!src_len || !dst_len)
> +		return -EINVAL;
> +
> +	if (siv_len != ctx->siv_len)
> +		return -EINVAL;
> +
> +	psrc = __bpf_dynptr_data(src, src_len);
> +	if (!psrc)
> +		return -EINVAL;
> +	pdst = __bpf_dynptr_data_rw(dst, dst_len);
> +	if (!pdst)
> +		return -EINVAL;
> +
> +	piv = siv_len ? __bpf_dynptr_data_rw(siv, siv_len) : NULL;

It has been a while. I don't remember if it has already been brought up before.

The "const struct bpf_dynptr_kern *siv" here is essentially an optional pointer. 
Allowing NULL is a more intuitive usage instead of passing a 0-len dynptr. The 
verifier needs some changes to take __nullable suffix for "struct 
bpf_dynptr_kern *siv__nullable". This could be a follow up to relax the 
restriction to allow NULL and not necessary in this set.

> +	if (siv_len && !piv)
> +		return -EINVAL;
> +
> +	err = decrypt ? ctx->type->decrypt(ctx->tfm, psrc, pdst, src_len, piv)
> +		      : ctx->type->encrypt(ctx->tfm, psrc, pdst, src_len, piv);
> +
> +	return err;
> +}
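
For reference, a rough sketch of what such a relaxed signature could look like
once the verifier accepts __nullable dynptr arguments (hypothetical follow-up,
not part of this series):

__bpf_kfunc int bpf_crypto_decrypt(struct bpf_crypto_ctx *ctx,
				   const struct bpf_dynptr_kern *src,
				   struct bpf_dynptr_kern *dst,
				   struct bpf_dynptr_kern *siv__nullable)
{
	return bpf_crypto_crypt(ctx, src, dst, siv__nullable, true);
}

/* with bpf_crypto_crypt() treating a NULL siv as "no IV/state needed": */
siv_len = siv ? __bpf_dynptr_size(siv) : 0;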
Vadim Fedorenko April 20, 2024, 12:24 a.m. UTC | #3
On 19/04/2024 19:57, Martin KaFai Lau wrote:
> On 4/16/24 1:40 PM, Vadim Fedorenko wrote:
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index 5034c1b4ded7..acc479c13f52 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -1265,6 +1265,7 @@ int bpf_dynptr_check_size(u32 size);
>>   u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
>>   const void *__bpf_dynptr_data(const struct bpf_dynptr_kern *ptr, u32 
>> len);
>>   void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
>> +bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
>>   #ifdef CONFIG_BPF_JIT
>>   int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct 
>> bpf_trampoline *tr);
>> diff --git a/include/linux/bpf_crypto.h b/include/linux/bpf_crypto.h
>> new file mode 100644
>> index 000000000000..a41e71d4e2d9
>> --- /dev/null
>> +++ b/include/linux/bpf_crypto.h
>> @@ -0,0 +1,24 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
>> +#ifndef _BPF_CRYPTO_H
>> +#define _BPF_CRYPTO_H
>> +
>> +struct bpf_crypto_type {
>> +    void *(*alloc_tfm)(const char *algo);
>> +    void (*free_tfm)(void *tfm);
>> +    int (*has_algo)(const char *algo);
>> +    int (*setkey)(void *tfm, const u8 *key, unsigned int keylen);
>> +    int (*setauthsize)(void *tfm, unsigned int authsize);
>> +    int (*encrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int 
>> len, u8 *iv);
>> +    int (*decrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int 
>> len, u8 *iv);
>> +    unsigned int (*ivsize)(void *tfm);
>> +    unsigned int (*statesize)(void *tfm);
>> +    u32 (*get_flags)(void *tfm);
>> +    struct module *owner;
>> +    char name[14];
>> +};
>> +
>> +int bpf_crypto_register_type(const struct bpf_crypto_type *type);
>> +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type);
>> +
>> +#endif /* _BPF_CRYPTO_H */
>> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
>> index 368c5d86b5b7..736bd22e5ce0 100644
>> --- a/kernel/bpf/Makefile
>> +++ b/kernel/bpf/Makefile
>> @@ -44,6 +44,9 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
>>   obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
>>   obj-${CONFIG_BPF_LSM} += bpf_lsm.o
>>   endif
>> +ifeq ($(CONFIG_CRYPTO),y)
>> +obj-$(CONFIG_BPF_SYSCALL) += crypto.o
>> +endif
>>   obj-$(CONFIG_BPF_PRELOAD) += preload/
>>   obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
>> diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
>> new file mode 100644
>> index 000000000000..a76d80f37f55
>> --- /dev/null
>> +++ b/kernel/bpf/crypto.c
>> @@ -0,0 +1,377 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/* Copyright (c) 2024 Meta, Inc */
>> +#include <linux/bpf.h>
>> +#include <linux/bpf_crypto.h>
>> +#include <linux/bpf_mem_alloc.h>
>> +#include <linux/btf.h>
>> +#include <linux/btf_ids.h>
>> +#include <linux/filter.h>
>> +#include <linux/scatterlist.h>
>> +#include <linux/skbuff.h>
>> +#include <crypto/skcipher.h>
>> +
>> +struct bpf_crypto_type_list {
>> +    const struct bpf_crypto_type *type;
>> +    struct list_head list;
>> +};
>> +
>> +/* BPF crypto initialization parameters struct */
>> +/**
>> + * struct bpf_crypto_params - BPF crypto initialization parameters 
>> structure
>> + * @type:    The string of crypto operation type.
>> + * @algo:    The string of algorithm to initialize.
>> + * @key:    The cipher key used to init crypto algorithm.
>> + * @key_len:    The length of cipher key.
>> + * @authsize:    The length of authentication tag used by algorithm.
>> + */
>> +struct bpf_crypto_params {
>> +    char type[14];
>> +    char algo[128];
>> +    __u8 key[256];
> 
> It should have a two byte hole here. Add
>      __u8 reserved[2];
> 
> and check for 0 in bpf_crypto_ctx_create() in case it could be reused 
> later. The bpf_crypto_ctx_create() should not be called very often.
> 

Sure, I'll add it to have algo, key and other fields aligned to 8 bytes.

>> +    __u32 key_len;
>> +    __u32 authsize;
> 
> I don't think there is tail padding of this struct, so should be fine.
> 
>> +} __attribute__((aligned(8)));
> 
> Does it need aligned(8) here?

Nope, looks like a leftover from the uapi variant.

> 
>> +
>> +static LIST_HEAD(bpf_crypto_types);
>> +static DECLARE_RWSEM(bpf_crypto_types_sem);
>> +
>> +/**
>> + * struct bpf_crypto_ctx - refcounted BPF crypto context structure
>> + * @type:    The pointer to bpf crypto type
>> + * @tfm:    The pointer to instance of crypto API struct.
>> + * @rcu:    The RCU head used to free the crypto context with RCU 
>> safety.
>> + * @usage:    Object reference counter. When the refcount goes to 0, the
>> + *        memory is released back to the BPF allocator, which provides
>> + *        RCU safety.
>> + */
>> +struct bpf_crypto_ctx {
>> +    const struct bpf_crypto_type *type;
>> +    void *tfm;
>> +    u32 siv_len;
>> +    struct rcu_head rcu;
>> +    refcount_t usage;
>> +};
>> +
>> +int bpf_crypto_register_type(const struct bpf_crypto_type *type)
>> +{
>> +    struct bpf_crypto_type_list *node;
>> +    int err = -EEXIST;
>> +
>> +    down_write(&bpf_crypto_types_sem);
>> +    list_for_each_entry(node, &bpf_crypto_types, list) {
>> +        if (!strcmp(node->type->name, type->name))
>> +            goto unlock;
>> +    }
>> +
>> +    node = kmalloc(sizeof(*node), GFP_KERNEL);
>> +    err = -ENOMEM;
>> +    if (!node)
>> +        goto unlock;
>> +
>> +    node->type = type;
>> +    list_add(&node->list, &bpf_crypto_types);
>> +    err = 0;
>> +
>> +unlock:
>> +    up_write(&bpf_crypto_types_sem);
>> +
>> +    return err;
>> +}
>> +EXPORT_SYMBOL_GPL(bpf_crypto_register_type);
>> +
>> +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type)
>> +{
>> +    struct bpf_crypto_type_list *node;
>> +    int err = -ENOENT;
>> +
>> +    down_write(&bpf_crypto_types_sem);
>> +    list_for_each_entry(node, &bpf_crypto_types, list) {
>> +        if (strcmp(node->type->name, type->name))
>> +            continue;
>> +
>> +        list_del(&node->list);
>> +        kfree(node);
>> +        err = 0;
>> +        break;
>> +    }
>> +    up_write(&bpf_crypto_types_sem);
>> +
>> +    return err;
>> +}
>> +EXPORT_SYMBOL_GPL(bpf_crypto_unregister_type);
>> +
>> +static const struct bpf_crypto_type *bpf_crypto_get_type(const char 
>> *name)
>> +{
>> +    const struct bpf_crypto_type *type = ERR_PTR(-ENOENT);
>> +    struct bpf_crypto_type_list *node;
>> +
>> +    down_read(&bpf_crypto_types_sem);
>> +    list_for_each_entry(node, &bpf_crypto_types, list) {
>> +        if (strcmp(node->type->name, name))
>> +            continue;
>> +
>> +        if (try_module_get(node->type->owner))
>> +            type = node->type;
>> +        break;
>> +    }
>> +    up_read(&bpf_crypto_types_sem);
>> +
>> +    return type;
>> +}
>> +
>> +__bpf_kfunc_start_defs();
>> +
>> +/**
>> + * bpf_crypto_ctx_create() - Create a mutable BPF crypto context.
>> + *
>> + * Allocates a crypto context that can be used, acquired, and 
>> released by
>> + * a BPF program. The crypto context returned by this function must 
>> either
>> + * be embedded in a map as a kptr, or freed with 
>> bpf_crypto_ctx_release().
>> + * As crypto API functions use GFP_KERNEL allocations, this function can
>> + * only be used in sleepable BPF programs.
>> + *
>> + * bpf_crypto_ctx_create() allocates memory for crypto context.
>> + * It may return NULL if no memory is available.
>> + * @params: pointer to struct bpf_crypto_params which contains all the
>> + *          details needed to initialise crypto context.
>> + * @err:    integer to store error code when NULL is returned.
>> + */
>> +__bpf_kfunc struct bpf_crypto_ctx *
>> +bpf_crypto_ctx_create(const struct bpf_crypto_params *params, int *err)
> 
> Add a "u32 params__sz" arg in case that the params struct will have 
> addition.
> Take a look at how opts__sz is checked in nf_conntrack_bpf.c.
> 

nf_conntrack uses a hard-coded value, while the xfrm code uses
sizeof(struct bpf_xfrm_state_opts) - which one is better?

>> +{
>> +    const struct bpf_crypto_type *type;
>> +    struct bpf_crypto_ctx *ctx;
>> +
>> +    type = bpf_crypto_get_type(params->type);
>> +    if (IS_ERR(type)) {
>> +        *err = PTR_ERR(type);
>> +        return NULL;
>> +    }
>> +
>> +    if (!type->has_algo(params->algo)) {
>> +        *err = -EOPNOTSUPP;
>> +        goto err_module_put;
>> +    }
>> +
>> +    if (!params->authsize && type->setauthsize) {
>> +        *err = -EOPNOTSUPP;
>> +        goto err_module_put;
>> +    }
>> +
>> +    if (params->authsize && !type->setauthsize) {
> 
> nit. Together with the previous "if" test, replace them with one test like:
> 
>      if (!!params->authsize ^ !!type->setauthsize) {
> 

yep

> 
>> +        *err = -EOPNOTSUPP;
>> +        goto err_module_put;
>> +    }
>> +
>> +    if (!params->key_len) {
> 
> Also checks "|| params->key_len > sizeof(params->key)"

Sure

>> +        *err = -EINVAL;
>> +        goto err_module_put;
>> +    }
>> +
>> +    ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
>> +    if (!ctx) {
>> +        *err = -ENOMEM;
>> +        goto err_module_put;
>> +    }
>> +
>> +    ctx->type = type;
>> +    ctx->tfm = type->alloc_tfm(params->algo);
>> +    if (IS_ERR(ctx->tfm)) {
>> +        *err = PTR_ERR(ctx->tfm);
>> +        goto err_free_ctx;
>> +    }
>> +
>> +    if (params->authsize) {
>> +        *err = type->setauthsize(ctx->tfm, params->authsize);
>> +        if (*err)
>> +            goto err_free_tfm;
>> +    }
>> +
>> +    *err = type->setkey(ctx->tfm, params->key, params->key_len);
>> +    if (*err)
>> +        goto err_free_tfm;
>> +
>> +    if (type->get_flags(ctx->tfm) & CRYPTO_TFM_NEED_KEY) {
>> +        *err = -EINVAL;
>> +        goto err_free_tfm;
>> +    }
>> +
>> +    ctx->siv_len = type->ivsize(ctx->tfm) + type->statesize(ctx->tfm);
>> +
>> +    refcount_set(&ctx->usage, 1);
>> +
>> +    return ctx;
>> +
>> +err_free_tfm:
>> +    type->free_tfm(ctx->tfm);
>> +err_free_ctx:
>> +    kfree(ctx);
>> +err_module_put:
>> +    module_put(type->owner);
>> +
>> +    return NULL;
>> +}
>> +
>> +static void crypto_free_cb(struct rcu_head *head)
>> +{
>> +    struct bpf_crypto_ctx *ctx;
>> +
>> +    ctx = container_of(head, struct bpf_crypto_ctx, rcu);
>> +    ctx->type->free_tfm(ctx->tfm);
>> +    module_put(ctx->type->owner);
>> +    kfree(ctx);
>> +}
>> +
>> +/**
>> + * bpf_crypto_ctx_acquire() - Acquire a reference to a BPF crypto 
>> context.
>> + * @ctx: The BPF crypto context being acquired. The ctx must be a 
>> trusted
>> + *         pointer.
>> + *
>> + * Acquires a reference to a BPF crypto context. The context returned 
>> by this function
>> + * must either be embedded in a map as a kptr, or freed with
>> + * bpf_crypto_skcipher_ctx_release().
>> + */
>> +__bpf_kfunc struct bpf_crypto_ctx *
>> +bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx)
>> +{
>> +    if (!refcount_inc_not_zero(&ctx->usage))
>> +        return NULL;
>> +    return ctx;
>> +}
>> +
>> +/**
>> + * bpf_crypto_ctx_release() - Release a previously acquired BPF 
>> crypto context.
>> + * @ctx: The crypto context being released.
>> + *
>> + * Releases a previously acquired reference to a BPF crypto context. 
>> When the final
>> + * reference of the BPF crypto context has been released, it is 
>> subsequently freed in
>> + * an RCU callback in the BPF memory allocator.
>> + */
>> +__bpf_kfunc void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx)
>> +{
>> +    if (refcount_dec_and_test(&ctx->usage))
>> +        call_rcu(&ctx->rcu, crypto_free_cb);
>> +}
>> +
>> +static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
>> +                const struct bpf_dynptr_kern *src,
>> +                struct bpf_dynptr_kern *dst,
>> +                const struct bpf_dynptr_kern *siv,
>> +                bool decrypt)
>> +{
>> +    u32 src_len, dst_len, siv_len;
>> +    const u8 *psrc;
>> +    u8 *pdst, *piv;
>> +    int err;
>> +
>> +    if (__bpf_dynptr_is_rdonly(dst))
>> +        return -EINVAL;
>> +
>> +    siv_len = __bpf_dynptr_size(siv);
>> +    src_len = __bpf_dynptr_size(src);
>> +    dst_len = __bpf_dynptr_size(dst);
>> +    if (!src_len || !dst_len)
>> +        return -EINVAL;
>> +
>> +    if (siv_len != ctx->siv_len)
>> +        return -EINVAL;
>> +
>> +    psrc = __bpf_dynptr_data(src, src_len);
>> +    if (!psrc)
>> +        return -EINVAL;
>> +    pdst = __bpf_dynptr_data_rw(dst, dst_len);
>> +    if (!pdst)
>> +        return -EINVAL;
>> +
>> +    piv = siv_len ? __bpf_dynptr_data_rw(siv, siv_len) : NULL;
> 
> It has been a while. I don't remember if it has already been brought up 
> before.
> 
> The "const struct bpf_dynptr_kern *siv" here is essentially an optional 
> pointer. Allowing NULL is a more intuitive usage instead of passing a 
> 0-len dynptr. The verifier needs some changes to take __nullable suffix 
> for "struct bpf_dynptr_kern *siv__nullable". This could be a follow up 
> to relax the restriction to allow NULL and not necessary in this set.
>

I think we have already discussed this and agreed to have a follow-up once
this part is merged.


>> +    if (siv_len && !piv)
>> +        return -EINVAL;
>> +
>> +    err = decrypt ? ctx->type->decrypt(ctx->tfm, psrc, pdst, src_len, 
>> piv)
>> +              : ctx->type->encrypt(ctx->tfm, psrc, pdst, src_len, piv);
>> +
>> +    return err;
>> +}
> 
>
Martin KaFai Lau April 22, 2024, 9:42 p.m. UTC | #4
On 4/19/24 5:24 PM, Vadim Fedorenko wrote:
>>> +/**
>>> + * bpf_crypto_ctx_create() - Create a mutable BPF crypto context.
>>> + *
>>> + * Allocates a crypto context that can be used, acquired, and released by
>>> + * a BPF program. The crypto context returned by this function must either
>>> + * be embedded in a map as a kptr, or freed with bpf_crypto_ctx_release().
>>> + * As crypto API functions use GFP_KERNEL allocations, this function can
>>> + * only be used in sleepable BPF programs.
>>> + *
>>> + * bpf_crypto_ctx_create() allocates memory for crypto context.
>>> + * It may return NULL if no memory is available.
>>> + * @params: pointer to struct bpf_crypto_params which contains all the
>>> + *          details needed to initialise crypto context.
>>> + * @err:    integer to store error code when NULL is returned.
>>> + */
>>> +__bpf_kfunc struct bpf_crypto_ctx *
>>> +bpf_crypto_ctx_create(const struct bpf_crypto_params *params, int *err)
>>
>> Add a "u32 params__sz" arg in case that the params struct will have addition.
>> Take a look at how opts__sz is checked in nf_conntrack_bpf.c.
>>
> 
> nf_conntrack uses hard-coded value, while xfrm code uses
> sizeof(struct bpf_xfrm_state_opts), which one is better?

If it is about the enum NF_BPF_CT_OPTS_SZ in nf_conntrack, I don't think it is a 
must-have. bpf_core_type_size() should have the same effect for figuring out the 
size of a struct in the running kernel.

afaik, sizeof() should do.
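
So on the BPF program side the call would presumably just become something
like this (sketch, assuming the params__sz argument suggested above is added):

	struct bpf_crypto_ctx *cctx;
	int err = 0;

	cctx = bpf_crypto_ctx_create(&params, sizeof(params), &err);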

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 5034c1b4ded7..acc479c13f52 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1265,6 +1265,7 @@  int bpf_dynptr_check_size(u32 size);
 u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
 const void *__bpf_dynptr_data(const struct bpf_dynptr_kern *ptr, u32 len);
 void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
+bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
 
 #ifdef CONFIG_BPF_JIT
 int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
diff --git a/include/linux/bpf_crypto.h b/include/linux/bpf_crypto.h
new file mode 100644
index 000000000000..a41e71d4e2d9
--- /dev/null
+++ b/include/linux/bpf_crypto.h
@@ -0,0 +1,24 @@ 
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+#ifndef _BPF_CRYPTO_H
+#define _BPF_CRYPTO_H
+
+struct bpf_crypto_type {
+	void *(*alloc_tfm)(const char *algo);
+	void (*free_tfm)(void *tfm);
+	int (*has_algo)(const char *algo);
+	int (*setkey)(void *tfm, const u8 *key, unsigned int keylen);
+	int (*setauthsize)(void *tfm, unsigned int authsize);
+	int (*encrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv);
+	int (*decrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv);
+	unsigned int (*ivsize)(void *tfm);
+	unsigned int (*statesize)(void *tfm);
+	u32 (*get_flags)(void *tfm);
+	struct module *owner;
+	char name[14];
+};
+
+int bpf_crypto_register_type(const struct bpf_crypto_type *type);
+int bpf_crypto_unregister_type(const struct bpf_crypto_type *type);
+
+#endif /* _BPF_CRYPTO_H */
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 368c5d86b5b7..736bd22e5ce0 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -44,6 +44,9 @@  obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
 obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
 obj-${CONFIG_BPF_LSM} += bpf_lsm.o
 endif
+ifeq ($(CONFIG_CRYPTO),y)
+obj-$(CONFIG_BPF_SYSCALL) += crypto.o
+endif
 obj-$(CONFIG_BPF_PRELOAD) += preload/
 
 obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
new file mode 100644
index 000000000000..a76d80f37f55
--- /dev/null
+++ b/kernel/bpf/crypto.c
@@ -0,0 +1,377 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2024 Meta, Inc */
+#include <linux/bpf.h>
+#include <linux/bpf_crypto.h>
+#include <linux/bpf_mem_alloc.h>
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
+#include <linux/filter.h>
+#include <linux/scatterlist.h>
+#include <linux/skbuff.h>
+#include <crypto/skcipher.h>
+
+struct bpf_crypto_type_list {
+	const struct bpf_crypto_type *type;
+	struct list_head list;
+};
+
+/* BPF crypto initialization parameters struct */
+/**
+ * struct bpf_crypto_params - BPF crypto initialization parameters structure
+ * @type:	The string of crypto operation type.
+ * @algo:	The string of algorithm to initialize.
+ * @key:	The cipher key used to init crypto algorithm.
+ * @key_len:	The length of cipher key.
+ * @authsize:	The length of authentication tag used by algorithm.
+ */
+struct bpf_crypto_params {
+	char type[14];
+	char algo[128];
+	__u8 key[256];
+	__u32 key_len;
+	__u32 authsize;
+} __attribute__((aligned(8)));
+
+static LIST_HEAD(bpf_crypto_types);
+static DECLARE_RWSEM(bpf_crypto_types_sem);
+
+/**
+ * struct bpf_crypto_ctx - refcounted BPF crypto context structure
+ * @type:	The pointer to bpf crypto type
+ * @tfm:	The pointer to instance of crypto API struct.
+ * @rcu:	The RCU head used to free the crypto context with RCU safety.
+ * @usage:	Object reference counter. When the refcount goes to 0, the
+ *		memory is released back to the BPF allocator, which provides
+ *		RCU safety.
+ */
+struct bpf_crypto_ctx {
+	const struct bpf_crypto_type *type;
+	void *tfm;
+	u32 siv_len;
+	struct rcu_head rcu;
+	refcount_t usage;
+};
+
+int bpf_crypto_register_type(const struct bpf_crypto_type *type)
+{
+	struct bpf_crypto_type_list *node;
+	int err = -EEXIST;
+
+	down_write(&bpf_crypto_types_sem);
+	list_for_each_entry(node, &bpf_crypto_types, list) {
+		if (!strcmp(node->type->name, type->name))
+			goto unlock;
+	}
+
+	node = kmalloc(sizeof(*node), GFP_KERNEL);
+	err = -ENOMEM;
+	if (!node)
+		goto unlock;
+
+	node->type = type;
+	list_add(&node->list, &bpf_crypto_types);
+	err = 0;
+
+unlock:
+	up_write(&bpf_crypto_types_sem);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(bpf_crypto_register_type);
+
+int bpf_crypto_unregister_type(const struct bpf_crypto_type *type)
+{
+	struct bpf_crypto_type_list *node;
+	int err = -ENOENT;
+
+	down_write(&bpf_crypto_types_sem);
+	list_for_each_entry(node, &bpf_crypto_types, list) {
+		if (strcmp(node->type->name, type->name))
+			continue;
+
+		list_del(&node->list);
+		kfree(node);
+		err = 0;
+		break;
+	}
+	up_write(&bpf_crypto_types_sem);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(bpf_crypto_unregister_type);
+
+static const struct bpf_crypto_type *bpf_crypto_get_type(const char *name)
+{
+	const struct bpf_crypto_type *type = ERR_PTR(-ENOENT);
+	struct bpf_crypto_type_list *node;
+
+	down_read(&bpf_crypto_types_sem);
+	list_for_each_entry(node, &bpf_crypto_types, list) {
+		if (strcmp(node->type->name, name))
+			continue;
+
+		if (try_module_get(node->type->owner))
+			type = node->type;
+		break;
+	}
+	up_read(&bpf_crypto_types_sem);
+
+	return type;
+}
+
+__bpf_kfunc_start_defs();
+
+/**
+ * bpf_crypto_ctx_create() - Create a mutable BPF crypto context.
+ *
+ * Allocates a crypto context that can be used, acquired, and released by
+ * a BPF program. The crypto context returned by this function must either
+ * be embedded in a map as a kptr, or freed with bpf_crypto_ctx_release().
+ * As crypto API functions use GFP_KERNEL allocations, this function can
+ * only be used in sleepable BPF programs.
+ *
+ * bpf_crypto_ctx_create() allocates memory for crypto context.
+ * It may return NULL if no memory is available.
+ * @params: pointer to struct bpf_crypto_params which contains all the
+ *          details needed to initialise crypto context.
+ * @err:    integer to store error code when NULL is returned.
+ */
+__bpf_kfunc struct bpf_crypto_ctx *
+bpf_crypto_ctx_create(const struct bpf_crypto_params *params, int *err)
+{
+	const struct bpf_crypto_type *type;
+	struct bpf_crypto_ctx *ctx;
+
+	type = bpf_crypto_get_type(params->type);
+	if (IS_ERR(type)) {
+		*err = PTR_ERR(type);
+		return NULL;
+	}
+
+	if (!type->has_algo(params->algo)) {
+		*err = -EOPNOTSUPP;
+		goto err_module_put;
+	}
+
+	if (!params->authsize && type->setauthsize) {
+		*err = -EOPNOTSUPP;
+		goto err_module_put;
+	}
+
+	if (params->authsize && !type->setauthsize) {
+		*err = -EOPNOTSUPP;
+		goto err_module_put;
+	}
+
+	if (!params->key_len) {
+		*err = -EINVAL;
+		goto err_module_put;
+	}
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx) {
+		*err = -ENOMEM;
+		goto err_module_put;
+	}
+
+	ctx->type = type;
+	ctx->tfm = type->alloc_tfm(params->algo);
+	if (IS_ERR(ctx->tfm)) {
+		*err = PTR_ERR(ctx->tfm);
+		goto err_free_ctx;
+	}
+
+	if (params->authsize) {
+		*err = type->setauthsize(ctx->tfm, params->authsize);
+		if (*err)
+			goto err_free_tfm;
+	}
+
+	*err = type->setkey(ctx->tfm, params->key, params->key_len);
+	if (*err)
+		goto err_free_tfm;
+
+	if (type->get_flags(ctx->tfm) & CRYPTO_TFM_NEED_KEY) {
+		*err = -EINVAL;
+		goto err_free_tfm;
+	}
+
+	ctx->siv_len = type->ivsize(ctx->tfm) + type->statesize(ctx->tfm);
+
+	refcount_set(&ctx->usage, 1);
+
+	return ctx;
+
+err_free_tfm:
+	type->free_tfm(ctx->tfm);
+err_free_ctx:
+	kfree(ctx);
+err_module_put:
+	module_put(type->owner);
+
+	return NULL;
+}
+
+static void crypto_free_cb(struct rcu_head *head)
+{
+	struct bpf_crypto_ctx *ctx;
+
+	ctx = container_of(head, struct bpf_crypto_ctx, rcu);
+	ctx->type->free_tfm(ctx->tfm);
+	module_put(ctx->type->owner);
+	kfree(ctx);
+}
+
+/**
+ * bpf_crypto_ctx_acquire() - Acquire a reference to a BPF crypto context.
+ * @ctx: The BPF crypto context being acquired. The ctx must be a trusted
+ *	     pointer.
+ *
+ * Acquires a reference to a BPF crypto context. The context returned by this function
+ * must either be embedded in a map as a kptr, or freed with
+ * bpf_crypto_skcipher_ctx_release().
+ */
+__bpf_kfunc struct bpf_crypto_ctx *
+bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx)
+{
+	if (!refcount_inc_not_zero(&ctx->usage))
+		return NULL;
+	return ctx;
+}
+
+/**
+ * bpf_crypto_ctx_release() - Release a previously acquired BPF crypto context.
+ * @ctx: The crypto context being released.
+ *
+ * Releases a previously acquired reference to a BPF crypto context. When the final
+ * reference of the BPF crypto context has been released, it is subsequently freed in
+ * an RCU callback in the BPF memory allocator.
+ */
+__bpf_kfunc void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx)
+{
+	if (refcount_dec_and_test(&ctx->usage))
+		call_rcu(&ctx->rcu, crypto_free_cb);
+}
+
+static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
+			    const struct bpf_dynptr_kern *src,
+			    struct bpf_dynptr_kern *dst,
+			    const struct bpf_dynptr_kern *siv,
+			    bool decrypt)
+{
+	u32 src_len, dst_len, siv_len;
+	const u8 *psrc;
+	u8 *pdst, *piv;
+	int err;
+
+	if (__bpf_dynptr_is_rdonly(dst))
+		return -EINVAL;
+
+	siv_len = __bpf_dynptr_size(siv);
+	src_len = __bpf_dynptr_size(src);
+	dst_len = __bpf_dynptr_size(dst);
+	if (!src_len || !dst_len)
+		return -EINVAL;
+
+	if (siv_len != ctx->siv_len)
+		return -EINVAL;
+
+	psrc = __bpf_dynptr_data(src, src_len);
+	if (!psrc)
+		return -EINVAL;
+	pdst = __bpf_dynptr_data_rw(dst, dst_len);
+	if (!pdst)
+		return -EINVAL;
+
+	piv = siv_len ? __bpf_dynptr_data_rw(siv, siv_len) : NULL;
+	if (siv_len && !piv)
+		return -EINVAL;
+
+	err = decrypt ? ctx->type->decrypt(ctx->tfm, psrc, pdst, src_len, piv)
+		      : ctx->type->encrypt(ctx->tfm, psrc, pdst, src_len, piv);
+
+	return err;
+}
+
+/**
+ * bpf_crypto_decrypt() - Decrypt buffer using configured context and IV provided.
+ * @ctx:	The crypto context being used. The ctx must be a trusted pointer.
+ * @src:	bpf_dynptr to the encrypted data. Must be a trusted pointer.
+ * @dst:	bpf_dynptr to the buffer where to store the result. Must be a trusted pointer.
+ * @siv:	bpf_dynptr to IV data and state data to be used by decryptor.
+ *
+ * Decrypts provided buffer using IV data and the crypto context. Crypto context must be configured.
+ */
+__bpf_kfunc int bpf_crypto_decrypt(struct bpf_crypto_ctx *ctx,
+				   const struct bpf_dynptr_kern *src,
+				   struct bpf_dynptr_kern *dst,
+				   struct bpf_dynptr_kern *siv)
+{
+	return bpf_crypto_crypt(ctx, src, dst, siv, true);
+}
+
+/**
+ * bpf_crypto_encrypt() - Encrypt buffer using configured context and IV provided.
+ * @ctx:	The crypto context being used. The ctx must be a trusted pointer.
+ * @src:	bpf_dynptr to the plain data. Must be a trusted pointer.
+ * @dst:	bpf_dynptr to buffer where to store the result. Must be a trusted pointer.
+ * @siv:		bpf_dynptr to IV data and state data to be used by decryptor.
+ *
+ * Encrypts provided buffer using IV data and the crypto context. Crypto context must be configured.
+ */
+__bpf_kfunc int bpf_crypto_encrypt(struct bpf_crypto_ctx *ctx,
+				   const struct bpf_dynptr_kern *src,
+				   struct bpf_dynptr_kern *dst,
+				   struct bpf_dynptr_kern *siv)
+{
+	return bpf_crypto_crypt(ctx, src, dst, siv, false);
+}
+
+__bpf_kfunc_end_defs();
+
+BTF_KFUNCS_START(crypt_init_kfunc_btf_ids)
+BTF_ID_FLAGS(func, bpf_crypto_ctx_create, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_crypto_ctx_release, KF_RELEASE)
+BTF_ID_FLAGS(func, bpf_crypto_ctx_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
+BTF_KFUNCS_END(crypt_init_kfunc_btf_ids)
+
+static const struct btf_kfunc_id_set crypt_init_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set   = &crypt_init_kfunc_btf_ids,
+};
+
+BTF_KFUNCS_START(crypt_kfunc_btf_ids)
+BTF_ID_FLAGS(func, bpf_crypto_decrypt, KF_RCU)
+BTF_ID_FLAGS(func, bpf_crypto_encrypt, KF_RCU)
+BTF_KFUNCS_END(crypt_kfunc_btf_ids)
+
+static const struct btf_kfunc_id_set crypt_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set   = &crypt_kfunc_btf_ids,
+};
+
+BTF_ID_LIST(bpf_crypto_dtor_ids)
+BTF_ID(struct, bpf_crypto_ctx)
+BTF_ID(func, bpf_crypto_ctx_release)
+
+static int __init crypto_kfunc_init(void)
+{
+	int ret;
+	const struct btf_id_dtor_kfunc bpf_crypto_dtors[] = {
+		{
+			.btf_id	      = bpf_crypto_dtor_ids[0],
+			.kfunc_btf_id = bpf_crypto_dtor_ids[1]
+		},
+	};
+
+	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &crypt_kfunc_set);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_ACT, &crypt_kfunc_set);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &crypt_kfunc_set);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL,
+					       &crypt_init_kfunc_set);
+	return  ret ?: register_btf_id_dtor_kfuncs(bpf_crypto_dtors,
+						   ARRAY_SIZE(bpf_crypto_dtors),
+						   THIS_MODULE);
+}
+
+late_initcall(crypto_kfunc_init);
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 8cde717137bd..a67fa076b844 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1443,7 +1443,7 @@  static const struct bpf_func_proto bpf_kptr_xchg_proto = {
 #define DYNPTR_SIZE_MASK	0xFFFFFF
 #define DYNPTR_RDONLY_BIT	BIT(31)
 
-static bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr)
+bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr)
 {
 	return ptr->size & DYNPTR_RDONLY_BIT;
 }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2aad6d90550f..f83f537af60f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5295,6 +5295,7 @@  BTF_ID(struct, cgroup)
 BTF_ID(struct, bpf_cpumask)
 #endif
 BTF_ID(struct, task_struct)
+BTF_ID(struct, bpf_crypto_ctx)
 BTF_SET_END(rcu_protected_types)
 
 static bool rcu_protected_object(const struct btf *btf, u32 btf_id)