From patchwork Thu Oct 26 01:59:37 2023
X-Patchwork-Submitter: Vadim Fedorenko
X-Patchwork-Id: 13437163
X-Patchwork-Delegate: bpf@iogearbox.net
From: Vadim Fedorenko
To: Martin KaFai Lau, Andrii Nakryiko, Alexei Starovoitov, Mykola Lysenko
CC: Vadim Fedorenko
Subject: [PATCH bpf-next 1/2] bpf: add skcipher API support to TC/XDP programs
Date: Wed, 25 Oct 2023 18:59:37 -0700
Message-ID: <20231026015938.276743-1-vadfed@meta.com>
X-Mailing-List: bpf@vger.kernel.org

Add crypto API support to BPF so that TC/XDP BPF programs are able to
decrypt or encrypt packets. Only symmetric-key ciphers are supported for
now. Special care has to be taken with the initialization of the crypto
algorithm: crypto_alloc_sync_skcipher() cannot be called with preemption
disabled, so the crypto context can only be created from a sleepable BPF
program. Async crypto is not supported for the very same reason - TC/XDP
BPF programs are not sleepable.
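For reviewers, here is a minimal, hypothetical sketch of the usage pattern
described above: the context is created from a sleepable program and stashed
in a map as a kptr, so that non-sleepable TC/XDP programs can later look it
up and call the encrypt/decrypt kfuncs. The kfunc signatures follow the
declarations added in this patch; the map layout, section choice and naming
are illustrative assumptions only (patch 2 carries the real selftest wiring).

/* Illustrative sketch only, not part of the patch. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct bpf_crypto_skcipher_ctx *
bpf_crypto_skcipher_ctx_create(const struct bpf_dynptr *algo,
			       const struct bpf_dynptr *key, int *err) __ksym;
void bpf_crypto_skcipher_ctx_release(struct bpf_crypto_skcipher_ctx *ctx) __ksym;

struct ctx_value {
	struct bpf_crypto_skcipher_ctx __kptr *ctx;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__type(key, int);
	__type(value, struct ctx_value);
	__uint(max_entries, 1);
} ctx_map SEC(".maps");

char algo_str[9] = "ecb(aes)";
unsigned char key_bytes[16] = "0123456789abcdef";

/* Sleepable context: allocating the tfm may block, so do it here and hand
 * the result over to the datapath via a kptr stored in the map.
 */
SEC("?fentry.s/bpf_fentry_test1")
int BPF_PROG(crypto_ctx_setup)
{
	struct bpf_crypto_skcipher_ctx *cctx, *old;
	struct bpf_dynptr algo, key;
	int err = 0, zero = 0;
	struct ctx_value *v;

	bpf_dynptr_from_mem(algo_str, sizeof(algo_str), 0, &algo);
	bpf_dynptr_from_mem(key_bytes, sizeof(key_bytes), 0, &key);
	cctx = bpf_crypto_skcipher_ctx_create(&algo, &key, &err);
	if (!cctx)
		return 0;

	v = bpf_map_lookup_elem(&ctx_map, &zero);
	if (!v) {
		bpf_crypto_skcipher_ctx_release(cctx);
		return 0;
	}
	old = bpf_kptr_xchg(&v->ctx, cctx);
	if (old)
		bpf_crypto_skcipher_ctx_release(old);
	return 0;
}

A TC program would then read the kptr back from ctx_map and pass skb/memory
dynptrs to bpf_crypto_skcipher_decrypt()/_encrypt(), exactly as the selftest
in patch 2 does.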
Signed-off-by: Vadim Fedorenko
---
 include/linux/bpf.h   |   1 +
 kernel/bpf/Makefile   |   3 +
 kernel/bpf/crypto.c   | 278 ++++++++++++++++++++++++++++++++++++++++++
 kernel/bpf/helpers.c  |   2 +-
 kernel/bpf/verifier.c |   1 +
 5 files changed, 284 insertions(+), 1 deletion(-)
 create mode 100644 kernel/bpf/crypto.c

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d3c51a507508..17145738f176 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1222,6 +1222,7 @@ enum bpf_dynptr_type {
 
 int bpf_dynptr_check_size(u32 size);
 u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
+enum bpf_dynptr_type bpf_dynptr_get_type(const struct bpf_dynptr_kern *ptr);
 
 #ifdef CONFIG_BPF_JIT
 int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index f526b7573e97..e14b5834c477 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -41,6 +41,9 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
 obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
 obj-${CONFIG_BPF_LSM} += bpf_lsm.o
 endif
+ifeq ($(CONFIG_CRYPTO_SKCIPHER),y)
+obj-$(CONFIG_BPF_SYSCALL) += crypto.o
+endif
 obj-$(CONFIG_BPF_PRELOAD) += preload/
 
 obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
new file mode 100644
index 000000000000..f7803a5591b0
--- /dev/null
+++ b/kernel/bpf/crypto.c
@@ -0,0 +1,278 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2023 Meta, Inc */
+#include <linux/bpf.h>
+#include <linux/bpf_mem_alloc.h>
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
+#include <linux/filter.h>
+#include <linux/scatterlist.h>
+#include <linux/skbuff.h>
+#include <crypto/skcipher.h>
+
+/**
+ * struct bpf_crypto_skcipher_ctx - refcounted BPF sync skcipher context structure
+ * @tfm:   The pointer to crypto_sync_skcipher struct.
+ * @rcu:   The RCU head used to free the crypto context with RCU safety.
+ * @usage: Object reference counter. When the refcount goes to 0, the
+ *         memory is released back to the BPF allocator, which provides
+ *         RCU safety.
+ */
+
+struct bpf_crypto_skcipher_ctx {
+	struct crypto_sync_skcipher *tfm;
+	struct rcu_head rcu;
+	refcount_t usage;
+};
+
+static struct bpf_mem_alloc bpf_crypto_ctx_ma;
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+		  "Global kfuncs as their definitions will be in BTF");
+
+/**
+ * bpf_crypto_skcipher_ctx_create() - Create a mutable BPF crypto context.
+ * @algo: bpf_dynptr which holds string representation of algorithm.
+ * @key:  bpf_dynptr which holds cipher key to do crypto.
+ * @err:  integer to store the error code when NULL is returned.
+ *
+ * Allocates a crypto context that can be used, acquired, and released by
+ * a BPF program. The crypto context returned by this function must either
+ * be embedded in a map as a kptr, or freed with bpf_crypto_skcipher_ctx_release().
+ *
+ * bpf_crypto_skcipher_ctx_create() allocates memory using the BPF memory
+ * allocator, and will not block. It may return NULL if no memory is available.
+ */
+__bpf_kfunc struct bpf_crypto_skcipher_ctx *
+bpf_crypto_skcipher_ctx_create(const struct bpf_dynptr_kern *algo, const struct bpf_dynptr_kern *key,
+			       int *err)
+{
+	struct bpf_crypto_skcipher_ctx *ctx;
+
+	if (__bpf_dynptr_size(algo) > CRYPTO_MAX_ALG_NAME) {
+		*err = -EINVAL;
+		return NULL;
+	}
+
+	if (!crypto_has_skcipher(algo->data, CRYPTO_ALG_TYPE_SKCIPHER, CRYPTO_ALG_TYPE_MASK)) {
+		*err = -EOPNOTSUPP;
+		return NULL;
+	}
+
+	ctx = bpf_mem_cache_alloc(&bpf_crypto_ctx_ma);
+	if (!ctx) {
+		*err = -ENOMEM;
+		return NULL;
+	}
+
+	memset(ctx, 0, sizeof(*ctx));
+
+	ctx->tfm = crypto_alloc_sync_skcipher(algo->data, 0, 0);
+	if (IS_ERR(ctx->tfm)) {
+		*err = PTR_ERR(ctx->tfm);
+		ctx->tfm = NULL;
+		goto err;
+	}
+
+	*err = crypto_sync_skcipher_setkey(ctx->tfm, key->data, __bpf_dynptr_size(key));
+	if (*err)
+		goto err;
+
+	refcount_set(&ctx->usage, 1);
+
+	return ctx;
+err:
+	if (ctx->tfm)
+		crypto_free_sync_skcipher(ctx->tfm);
+	bpf_mem_cache_free(&bpf_crypto_ctx_ma, ctx);
+
+	return NULL;
+}
+
+static void crypto_free_sync_skcipher_cb(struct rcu_head *head)
+{
+	struct bpf_crypto_skcipher_ctx *ctx;
+
+	ctx = container_of(head, struct bpf_crypto_skcipher_ctx, rcu);
+	crypto_free_sync_skcipher(ctx->tfm);
+	migrate_disable();
+	bpf_mem_cache_free(&bpf_crypto_ctx_ma, ctx);
+	migrate_enable();
+}
+
+/**
+ * bpf_crypto_skcipher_ctx_acquire() - Acquire a reference to a BPF crypto context.
+ * @ctx: The BPF crypto context being acquired. The ctx must be a trusted
+ *	 pointer.
+ *
+ * Acquires a reference to a BPF crypto context. The context returned by this function
+ * must either be embedded in a map as a kptr, or freed with
+ * bpf_crypto_skcipher_ctx_release().
+ */
+__bpf_kfunc struct bpf_crypto_skcipher_ctx *
+bpf_crypto_skcipher_ctx_acquire(struct bpf_crypto_skcipher_ctx *ctx)
+{
+	refcount_inc(&ctx->usage);
+	return ctx;
+}
+
+/**
+ * bpf_crypto_skcipher_ctx_release() - Release a previously acquired BPF crypto context.
+ * @ctx: The crypto context being released.
+ *
+ * Releases a previously acquired reference to a BPF crypto context. When the final
+ * reference of the crypto context has been released, it is subsequently freed in
+ * an RCU callback in the BPF memory allocator.
+ */
+__bpf_kfunc void bpf_crypto_skcipher_ctx_release(struct bpf_crypto_skcipher_ctx *ctx)
+{
+	if (refcount_dec_and_test(&ctx->usage))
+		call_rcu(&ctx->rcu, crypto_free_sync_skcipher_cb);
+}
+
+static void *__bpf_dynptr_data_ptr(const struct bpf_dynptr_kern *ptr)
+{
+	enum bpf_dynptr_type type;
+
+	if (!ptr->data)
+		return NULL;
+
+	type = bpf_dynptr_get_type(ptr);
+
+	switch (type) {
+	case BPF_DYNPTR_TYPE_LOCAL:
+	case BPF_DYNPTR_TYPE_RINGBUF:
+		return ptr->data + ptr->offset;
+	case BPF_DYNPTR_TYPE_SKB:
+		return skb_pointer_if_linear(ptr->data, ptr->offset, __bpf_dynptr_size(ptr));
+	case BPF_DYNPTR_TYPE_XDP:
+	{
+		void *xdp_ptr = bpf_xdp_pointer(ptr->data, ptr->offset, __bpf_dynptr_size(ptr));
+
+		if (!IS_ERR_OR_NULL(xdp_ptr))
+			return xdp_ptr;
+
+		return NULL;
+	}
+	default:
+		WARN_ONCE(true, "unknown dynptr type %d\n", type);
+		return NULL;
+	}
+}
+
+static int bpf_crypto_skcipher_crypt(struct crypto_sync_skcipher *tfm,
+				     const struct bpf_dynptr_kern *src,
+				     struct bpf_dynptr_kern *dst,
+				     const struct bpf_dynptr_kern *iv,
+				     bool decrypt)
+{
+	struct skcipher_request *req = NULL;
+	struct scatterlist sgin, sgout;
+	int err;
+
+	if (crypto_sync_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+		return -EINVAL;
+
+	if (!__bpf_dynptr_size(dst) || !__bpf_dynptr_size(src))
+		return -EINVAL;
+
+	if (__bpf_dynptr_size(iv) != crypto_sync_skcipher_ivsize(tfm))
+		return -EINVAL;
+
+	req = skcipher_request_alloc(&tfm->base, GFP_ATOMIC);
+	if (!req)
+		return -ENOMEM;
+
+	sg_init_one(&sgin, __bpf_dynptr_data_ptr(src), __bpf_dynptr_size(src));
+	sg_init_one(&sgout, __bpf_dynptr_data_ptr(dst), __bpf_dynptr_size(dst));
+
+	skcipher_request_set_crypt(req, &sgin, &sgout, __bpf_dynptr_size(src),
+				   __bpf_dynptr_data_ptr(iv));
+
+	err = decrypt ? crypto_skcipher_decrypt(req) : crypto_skcipher_encrypt(req);
+
+	skcipher_request_free(req);
+
+	return err;
+}
+
+/**
+ * bpf_crypto_skcipher_decrypt() - Decrypt buffer using configured context and IV provided.
+ * @ctx: The crypto context being used. The ctx must be a trusted pointer.
+ * @src: bpf_dynptr to the encrypted data. Must be a trusted pointer.
+ * @dst: bpf_dynptr to the buffer where to store the result. Must be a trusted pointer.
+ * @iv:  bpf_dynptr to IV data to be used by decryptor.
+ *
+ * Decrypts provided buffer using IV data and the crypto context. Crypto context must be configured.
+ */
+__bpf_kfunc int bpf_crypto_skcipher_decrypt(struct bpf_crypto_skcipher_ctx *ctx,
+					    const struct bpf_dynptr_kern *src,
+					    struct bpf_dynptr_kern *dst,
+					    const struct bpf_dynptr_kern *iv)
+{
+	return bpf_crypto_skcipher_crypt(ctx->tfm, src, dst, iv, true);
+}
+
+/**
+ * bpf_crypto_skcipher_encrypt() - Encrypt buffer using configured context and IV provided.
+ * @ctx: The crypto context being used. The ctx must be a trusted pointer.
+ * @src: bpf_dynptr to the plain data. Must be a trusted pointer.
+ * @dst: bpf_dynptr to the buffer where to store the result. Must be a trusted pointer.
+ * @iv:  bpf_dynptr to IV data to be used by encryptor.
+ *
+ * Encrypts provided buffer using IV data and the crypto context. Crypto context must be configured.
+ */
+__bpf_kfunc int bpf_crypto_skcipher_encrypt(struct bpf_crypto_skcipher_ctx *ctx,
+					    const struct bpf_dynptr_kern *src,
+					    struct bpf_dynptr_kern *dst,
+					    const struct bpf_dynptr_kern *iv)
+{
+	return bpf_crypto_skcipher_crypt(ctx->tfm, src, dst, iv, false);
+}
+
+__diag_pop();
+
+BTF_SET8_START(crypt_skcipher_init_kfunc_btf_ids)
+BTF_ID_FLAGS(func, bpf_crypto_skcipher_ctx_create, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_crypto_skcipher_ctx_release, KF_RELEASE)
+BTF_ID_FLAGS(func, bpf_crypto_skcipher_ctx_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
+BTF_SET8_END(crypt_skcipher_init_kfunc_btf_ids)
+
+static const struct btf_kfunc_id_set crypt_skcipher_init_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set   = &crypt_skcipher_init_kfunc_btf_ids,
+};
+
+BTF_SET8_START(crypt_skcipher_kfunc_btf_ids)
+BTF_ID_FLAGS(func, bpf_crypto_skcipher_decrypt, KF_RCU)
+BTF_ID_FLAGS(func, bpf_crypto_skcipher_encrypt, KF_RCU)
+BTF_SET8_END(crypt_skcipher_kfunc_btf_ids)
+
+static const struct btf_kfunc_id_set crypt_skcipher_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set   = &crypt_skcipher_kfunc_btf_ids,
+};
+
+BTF_ID_LIST(crypto_skcipher_dtor_ids)
+BTF_ID(struct, bpf_crypto_skcipher_ctx)
+BTF_ID(func, bpf_crypto_skcipher_ctx_release)
+
+static int __init crypto_skcipher_kfunc_init(void)
+{
+	int ret;
+	const struct btf_id_dtor_kfunc crypto_skcipher_dtors[] = {
+		{
+			.btf_id	      = crypto_skcipher_dtor_ids[0],
+			.kfunc_btf_id = crypto_skcipher_dtor_ids[1]
+		},
+	};
+
+	ret = bpf_mem_alloc_init(&bpf_crypto_ctx_ma, sizeof(struct bpf_crypto_skcipher_ctx), false);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &crypt_skcipher_kfunc_set);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_ACT, &crypt_skcipher_kfunc_set);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &crypt_skcipher_kfunc_set);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC, &crypt_skcipher_init_kfunc_set);
+	return ret ?: register_btf_id_dtor_kfuncs(crypto_skcipher_dtors,
+						  ARRAY_SIZE(crypto_skcipher_dtors),
+						  THIS_MODULE);
+}
+
+late_initcall(crypto_skcipher_kfunc_init);
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 62a53ebfedf9..0c2a10ff5dfd 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1444,7 +1444,7 @@ static void bpf_dynptr_set_type(struct bpf_dynptr_kern *ptr, enum bpf_dynptr_typ
 	ptr->size |= type << DYNPTR_TYPE_SHIFT;
 }
 
-static enum bpf_dynptr_type bpf_dynptr_get_type(const struct bpf_dynptr_kern *ptr)
+enum bpf_dynptr_type bpf_dynptr_get_type(const struct bpf_dynptr_kern *ptr)
 {
 	return (ptr->size & ~(DYNPTR_RDONLY_BIT)) >> DYNPTR_TYPE_SHIFT;
 }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index bb58987e4844..75d2f47ca3cb 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5184,6 +5184,7 @@ BTF_ID(struct, prog_test_ref_kfunc)
 BTF_ID(struct, cgroup)
 BTF_ID(struct, bpf_cpumask)
 BTF_ID(struct, task_struct)
+BTF_ID(struct, bpf_crypto_skcipher_ctx)
 BTF_SET_END(rcu_protected_types)
 
 static bool rcu_protected_object(const struct btf *btf, u32 btf_id)

From patchwork Thu Oct 26 01:59:38 2023
X-Patchwork-Submitter: Vadim Fedorenko
X-Patchwork-Id: 13437164
X-Patchwork-Delegate: bpf@iogearbox.net
From: Vadim Fedorenko
To: Martin KaFai Lau, Andrii Nakryiko, Alexei Starovoitov, Mykola Lysenko
CC: Vadim Fedorenko
Subject: [PATCH bpf-next 2/2] selftests: bpf: crypto skcipher algo selftests
Date: Wed, 25 Oct 2023 18:59:38 -0700
Message-ID: <20231026015938.276743-2-vadfed@meta.com>
In-Reply-To: <20231026015938.276743-1-vadfed@meta.com>
References: <20231026015938.276743-1-vadfed@meta.com>
X-Mailing-List: bpf@vger.kernel.org

Add simple tc hook selftests that show how to work with the new crypto
BPF API. The setup program uses a somewhat unusual structure and map to
keep the verifier happy about dynptr initialization from memory. A simple
AES-ECB algorithm is used to demonstrate encryption and decryption of
fixed-size buffers.
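Side note for reviewers: the crypted_data constant used below can be
cross-checked against the plaintext/key pair in the BPF program with any AES
implementation. A small, standalone userspace sketch (assuming OpenSSL's EVP
API is available; this program is not part of the patch) that prints the
expected ecb(aes) ciphertext:

/* Recompute the expected AES-128-ECB ciphertext for the selftest constants.
 * Not part of the patch. Build with: gcc aes_ecb_ref.c -lcrypto
 */
#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
	const unsigned char key[16] = "testtest12345678";	/* matches crypto_key */
	const unsigned char pt[16]  = "stringtoencrypt0";	/* matches plain_text */
	unsigned char ct[16];
	int len = 0;

	EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
	if (!ctx)
		return 1;
	/* ECB takes no IV; disable padding since the input is exactly one block. */
	if (EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL) != 1)
		return 1;
	EVP_CIPHER_CTX_set_padding(ctx, 0);
	if (EVP_EncryptUpdate(ctx, ct, &len, pt, sizeof(pt)) != 1)
		return 1;
	EVP_CIPHER_CTX_free(ctx);

	for (int i = 0; i < 16; i++)
		printf("\\x%02X", ct[i]);
	printf("\n");
	return 0;
}

Because the buffer is exactly one AES block, no padding and no IV are
involved, which matches what the BPF side does.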
Signed-off-by: Vadim Fedorenko
---
 tools/testing/selftests/bpf/config            |   1 +
 .../selftests/bpf/prog_tests/crypto_sanity.c  | 129 +++++++++++++++
 .../selftests/bpf/progs/crypto_common.h       |  98 +++++++++++
 .../selftests/bpf/progs/crypto_sanity.c       | 154 ++++++++++++++++++
 4 files changed, 382 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
 create mode 100644 tools/testing/selftests/bpf/progs/crypto_common.h
 create mode 100644 tools/testing/selftests/bpf/progs/crypto_sanity.c

diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
index 02dd4409200e..2a5d6339831b 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -14,6 +14,7 @@ CONFIG_CGROUP_BPF=y
 CONFIG_CRYPTO_HMAC=y
 CONFIG_CRYPTO_SHA256=y
 CONFIG_CRYPTO_USER_API_HASH=y
+CONFIG_CRYPTO_SKCIPHER=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_BTF=y
 CONFIG_DEBUG_INFO_DWARF4=y
diff --git a/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
new file mode 100644
index 000000000000..a43969da6d15
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
@@ -0,0 +1,129 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <net/if.h>
+#include <linux/in6.h>
+
+#include "test_progs.h"
+#include "network_helpers.h"
+#include "crypto_sanity.skel.h"
+
+#define NS_TEST "crypto_sanity_ns"
+#define IPV6_IFACE_ADDR "face::1"
+#define UDP_TEST_PORT 7777
+static const char plain_text[] = "stringtoencrypt0";
+static const char crypted_data[] = "\x5B\x59\x39\xEA\xD9\x7A\x2D\xAD\xA7\xE0\x43" \
+				   "\x37\x8A\x77\x17\xB2";
+
+void test_crypto_sanity(void)
+{
+	LIBBPF_OPTS(bpf_tc_hook, qdisc_hook, .attach_point = BPF_TC_EGRESS);
+	LIBBPF_OPTS(bpf_tc_opts, tc_attach_enc);
+	LIBBPF_OPTS(bpf_tc_opts, tc_attach_dec);
+	LIBBPF_OPTS(bpf_test_run_opts, opts,
+		    .data_in = crypted_data,
+		    .data_size_in = sizeof(crypted_data),
+		    .repeat = 1,
+	);
+	struct nstoken *nstoken = NULL;
+	struct crypto_sanity *skel;
+	struct sockaddr_in6 addr;
+	int sockfd, err, pfd;
+	socklen_t addrlen;
+
+	skel = crypto_sanity__open();
+	if (!ASSERT_OK_PTR(skel, "skel open"))
+		return;
+
+	bpf_program__set_autoload(skel->progs.skb_crypto_setup, true);
+
+	SYS(fail, "ip netns add %s", NS_TEST);
+	SYS(fail, "ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
+	SYS(fail, "ip -net %s link set dev lo up", NS_TEST);
+
+	err = crypto_sanity__load(skel);
+	if (!ASSERT_OK(err, "crypto_sanity__load"))
+		goto fail;
+
+	nstoken = open_netns(NS_TEST);
+	if (!ASSERT_OK_PTR(nstoken, "open_netns"))
+		goto fail;
+
+	qdisc_hook.ifindex = if_nametoindex("lo");
+	if (!ASSERT_GT(qdisc_hook.ifindex, 0, "if_nametoindex lo"))
+		goto fail;
+
+	err = crypto_sanity__attach(skel);
+	if (!ASSERT_OK(err, "crypto_sanity__attach"))
+		goto fail;
+
+	pfd = bpf_program__fd(skel->progs.skb_crypto_setup);
+	if (!ASSERT_GT(pfd, 0, "skb_crypto_setup fd"))
+		goto fail;
+
+	err = bpf_prog_test_run_opts(pfd, &opts);
+	if (!ASSERT_OK(err, "skb_crypto_setup") ||
+	    !ASSERT_OK(opts.retval, "skb_crypto_setup retval"))
+		goto fail;
+
+	if (!ASSERT_OK(skel->bss->status, "skb_crypto_setup status"))
+		goto fail;
+
+	err = bpf_tc_hook_create(&qdisc_hook);
+	if (!ASSERT_OK(err, "create qdisc hook"))
+		goto fail;
+
+	addrlen = sizeof(addr);
+	err = make_sockaddr(AF_INET6, IPV6_IFACE_ADDR, UDP_TEST_PORT,
+			    (void *)&addr, &addrlen);
+	if (!ASSERT_OK(err, "make_sockaddr"))
+		goto fail;
+
+	tc_attach_dec.prog_fd = bpf_program__fd(skel->progs.decrypt_sanity);
+	err = bpf_tc_attach(&qdisc_hook, &tc_attach_dec);
+	if (!ASSERT_OK(err, "attach decrypt filter"))
+		goto fail;
+
+	sockfd = socket(AF_INET6, SOCK_DGRAM, 0);
+	if (!ASSERT_NEQ(sockfd, -1, "decrypt socket"))
+		goto fail;
+	err = sendto(sockfd, crypted_data, 16, 0, (void *)&addr, addrlen);
+	close(sockfd);
+	if (!ASSERT_EQ(err, 16, "decrypt send"))
+		goto fail;
+
+	bpf_tc_detach(&qdisc_hook, &tc_attach_dec);
+	if (!ASSERT_OK(skel->bss->status, "decrypt status"))
+		goto fail;
+	if (!ASSERT_STRNEQ(skel->bss->dst, plain_text, sizeof(plain_text), "decrypt"))
+		goto fail;
+
+	tc_attach_enc.prog_fd = bpf_program__fd(skel->progs.encrypt_sanity);
+	err = bpf_tc_attach(&qdisc_hook, &tc_attach_enc);
+	if (!ASSERT_OK(err, "attach encrypt filter"))
+		goto fail;
+
+	sockfd = socket(AF_INET6, SOCK_DGRAM, 0);
+	if (!ASSERT_NEQ(sockfd, -1, "encrypt socket"))
+		goto fail;
+	err = sendto(sockfd, plain_text, 16, 0, (void *)&addr, addrlen);
+	close(sockfd);
+	if (!ASSERT_EQ(err, 16, "encrypt send"))
+		goto fail;
+
+	bpf_tc_detach(&qdisc_hook, &tc_attach_enc);
+	if (!ASSERT_OK(skel->bss->status, "encrypt status"))
+		goto fail;
+	if (!ASSERT_STRNEQ(skel->bss->dst, crypted_data, sizeof(crypted_data), "encrypt"))
+		goto fail;
+
+fail:
+	if (nstoken) {
+		bpf_tc_hook_destroy(&qdisc_hook);
+		close_netns(nstoken);
+	}
+	SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
+	crypto_sanity__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/crypto_common.h b/tools/testing/selftests/bpf/progs/crypto_common.h
new file mode 100644
index 000000000000..c448f07d6e42
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/crypto_common.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#ifndef _CRYPTO_COMMON_H
+#define _CRYPTO_COMMON_H
+
+#include "errno.h"
+#include <bpf/bpf_helpers.h>
+
+#define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8)))
+private(CTX) static struct bpf_crypto_skcipher_ctx __kptr * global_crypto_ctx;
+
+struct bpf_crypto_skcipher_ctx *bpf_crypto_skcipher_ctx_create(const struct bpf_dynptr *algo, const struct bpf_dynptr *key, int *err) __ksym;
+struct bpf_crypto_skcipher_ctx *bpf_crypto_skcipher_ctx_acquire(struct bpf_crypto_skcipher_ctx *ctx) __ksym;
+void bpf_crypto_skcipher_ctx_release(struct bpf_crypto_skcipher_ctx *ctx) __ksym;
+int bpf_crypto_skcipher_encrypt(struct bpf_crypto_skcipher_ctx *ctx, const struct bpf_dynptr *src, struct bpf_dynptr *dst, const struct bpf_dynptr *iv) __ksym;
+int bpf_crypto_skcipher_decrypt(struct bpf_crypto_skcipher_ctx *ctx, const struct bpf_dynptr *src, struct bpf_dynptr *dst, const struct bpf_dynptr *iv) __ksym;
+
+struct __crypto_skcipher_ctx_value {
+	struct bpf_crypto_skcipher_ctx __kptr * ctx;
+};
+
+struct crypto_conf_value {
+	__u8 algo[32];
+	__u32 algo_size;
+	__u8 key[32];
+	__u32 key_size;
+	__u8 iv[32];
+	__u32 iv_size;
+	__u8 dst[32];
+	__u32 dst_size;
+};
+
+struct array_conf_map {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__type(key, int);
+	__type(value, struct crypto_conf_value);
+	__uint(max_entries, 1);
+} __crypto_conf_map SEC(".maps");
+
+struct array_map {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__type(key, int);
+	__type(value, struct __crypto_skcipher_ctx_value);
+	__uint(max_entries, 1);
+} __crypto_skcipher_ctx_map SEC(".maps");
+
+static inline struct crypto_conf_value *crypto_conf_lookup(void)
+{
+	struct crypto_conf_value *v, local = {};
+	u32 key = 0;
+
+	v = bpf_map_lookup_elem(&__crypto_conf_map, &key);
+	if (v)
+		return v;
+
+	bpf_map_update_elem(&__crypto_conf_map, &key, &local, 0);
+	return bpf_map_lookup_elem(&__crypto_conf_map, &key);
+}
+
+static inline struct __crypto_skcipher_ctx_value *crypto_skcipher_ctx_value_lookup(void)
+{
+	u32 key = 0;
+
+	return bpf_map_lookup_elem(&__crypto_skcipher_ctx_map, &key);
+}
+
+static inline int crypto_skcipher_ctx_insert(struct bpf_crypto_skcipher_ctx *ctx)
+{
+	struct __crypto_skcipher_ctx_value local, *v;
+	long status;
+	struct bpf_crypto_skcipher_ctx *old;
+	u32 key = 0;
+
+	local.ctx = NULL;
+	status = bpf_map_update_elem(&__crypto_skcipher_ctx_map, &key, &local, 0);
+	if (status) {
+		bpf_crypto_skcipher_ctx_release(ctx);
+		return status;
+	}
+
+	v = bpf_map_lookup_elem(&__crypto_skcipher_ctx_map, &key);
+	if (!v) {
+		bpf_crypto_skcipher_ctx_release(ctx);
+		return -ENOENT;
+	}
+
+	old = bpf_kptr_xchg(&v->ctx, ctx);
+	if (old) {
+		bpf_crypto_skcipher_ctx_release(old);
+		return -EEXIST;
+	}
+
+	return 0;
+}
+
+#endif /* _CRYPTO_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/crypto_sanity.c b/tools/testing/selftests/bpf/progs/crypto_sanity.c
new file mode 100644
index 000000000000..71a172d8d2a2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/crypto_sanity.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include "bpf_tracing_net.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+#include "bpf_misc.h"
+#include "bpf_kfuncs.h"
+#include "crypto_common.h"
+
+#define UDP_TEST_PORT 7777
+unsigned char crypto_key[16] = "testtest12345678";
+char crypto_algo[9] = "ecb(aes)";
+char dst[32] = {};
+int status;
+
+static inline int skb_validate_test(const struct __sk_buff *skb)
+{
+	struct ipv6hdr ip6h;
+	struct udphdr udph;
+	u32 offset;
+
+	if (skb->protocol != __bpf_constant_htons(ETH_P_IPV6))
+		return -1;
+
+	if (bpf_skb_load_bytes(skb, ETH_HLEN, &ip6h, sizeof(ip6h)))
+		return -1;
+
+	if (ip6h.nexthdr != IPPROTO_UDP)
+		return -1;
+
+	if (bpf_skb_load_bytes(skb, ETH_HLEN + sizeof(ip6h), &udph, sizeof(udph)))
+		return -1;
+
+	if (udph.dest != __bpf_constant_htons(UDP_TEST_PORT))
+		return -1;
+
+	offset = ETH_HLEN + sizeof(ip6h) + sizeof(udph);
+	if (skb->len < offset + 16)
+		return -1;
+
+	return offset;
+}
+
+SEC("?fentry.s/bpf_fentry_test1")
+int BPF_PROG(skb_crypto_setup)
+{
+	struct crypto_conf_value *c;
+	struct bpf_dynptr algo, key;
+	int err = 0;
+
+	status = 0;
+
+	c = crypto_conf_lookup();
+	if (!c) {
+		status = -EINVAL;
+		return 0;
+	}
+
+	bpf_dynptr_from_mem(crypto_algo, sizeof(crypto_algo), 0, &algo);
+	bpf_dynptr_from_mem(crypto_key, sizeof(crypto_key), 0, &key);
+	struct bpf_crypto_skcipher_ctx *cctx = bpf_crypto_skcipher_ctx_create(&algo, &key, &err);
+
+	if (!cctx) {
+		status = err;
+		return 0;
+	}
+
+	err = crypto_skcipher_ctx_insert(cctx);
+	if (err && err != -EEXIST)
+		status = err;
+
+	return 0;
+}
+
+SEC("tc")
+int decrypt_sanity(struct __sk_buff *skb)
+{
+	struct __crypto_skcipher_ctx_value *v;
+	struct bpf_crypto_skcipher_ctx *ctx;
+	struct bpf_dynptr psrc, pdst, iv;
+	int err;
+
+	err = skb_validate_test(skb);
+	if (err < 0) {
+		status = err;
+		return TC_ACT_SHOT;
+	}
+
+	v = crypto_skcipher_ctx_value_lookup();
+	if (!v) {
+		status = -ENOENT;
+		return TC_ACT_SHOT;
+	}
+
+	ctx = v->ctx;
+	if (!ctx) {
+		status = -ENOENT;
+		return TC_ACT_SHOT;
+	}
+
+	bpf_dynptr_from_skb(skb, 0, &psrc);
+	bpf_dynptr_adjust(&psrc, err, err + 16);
+	bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
+	bpf_dynptr_from_mem(dst, 0, 0, &iv);
+
+	bpf_crypto_skcipher_decrypt(ctx, &psrc, &pdst, &iv);
+
+	status = 0;
+
+	return TC_ACT_SHOT;
+}
+
+SEC("tc")
+int encrypt_sanity(struct __sk_buff *skb)
+{
+	struct __crypto_skcipher_ctx_value *v;
+	struct bpf_crypto_skcipher_ctx *ctx;
+	struct bpf_dynptr psrc, pdst, iv;
+	int err;
+
+	status = 0;
+
+	err = skb_validate_test(skb);
+	if (err < 0) {
+		status = err;
+		return TC_ACT_SHOT;
+	}
+
+	v = crypto_skcipher_ctx_value_lookup();
+	if (!v) {
+		status = -ENOENT;
+		return TC_ACT_SHOT;
+	}
+
+	ctx = v->ctx;
+	if (!ctx) {
+		status = -ENOENT;
+		return TC_ACT_SHOT;
+	}
+
+	bpf_dynptr_from_skb(skb, 0, &psrc);
+	bpf_dynptr_adjust(&psrc, err, err + 16);
+	bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
+	bpf_dynptr_from_mem(dst, 0, 0, &iv);
+
+	bpf_crypto_skcipher_encrypt(ctx, &psrc, &pdst, &iv);
+
+	return TC_ACT_SHOT;
+}
+
+char __license[] SEC("license") = "GPL";
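Usage note, not part of this series: the tc programs above pass a zero-sized
dynptr as the IV because ecb(aes) has an IV size of zero and
bpf_crypto_skcipher_crypt() requires the IV dynptr size to match
crypto_sync_skcipher_ivsize(). A cipher that actually consumes an IV, e.g.
cbc(aes), would be driven the same way, only with a 16-byte IV buffer. A
hypothetical, untested fragment, reusing the helpers defined in
crypto_common.h and crypto_sanity.c above and assuming a context created with
"cbc(aes)" was inserted into the map:

__u8 cbc_iv[16] = {};	/* must equal the cipher's IV size */
char cbc_dst[32] = {};

SEC("tc")
int decrypt_cbc_sketch(struct __sk_buff *skb)
{
	struct __crypto_skcipher_ctx_value *v;
	struct bpf_crypto_skcipher_ctx *ctx;
	struct bpf_dynptr psrc, pdst, iv;
	int off;

	off = skb_validate_test(skb);
	if (off < 0)
		return TC_ACT_SHOT;

	v = crypto_skcipher_ctx_value_lookup();
	if (!v)
		return TC_ACT_SHOT;
	ctx = v->ctx;
	if (!ctx)
		return TC_ACT_SHOT;

	bpf_dynptr_from_skb(skb, 0, &psrc);
	bpf_dynptr_adjust(&psrc, off, off + 16);
	bpf_dynptr_from_mem(cbc_dst, sizeof(cbc_dst), 0, &pdst);
	/* Unlike the ecb(aes) case, the IV dynptr is a full 16 bytes here. */
	bpf_dynptr_from_mem(cbc_iv, sizeof(cbc_iv), 0, &iv);

	bpf_crypto_skcipher_decrypt(ctx, &psrc, &pdst, &iv);

	/* Drop the packet, like the sanity tests above do. */
	return TC_ACT_SHOT;
}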