
[v10,5/9] bpf: Add bpf_lookup_*_key() and bpf_key_put() kfuncs

Message ID 20220810165932.2143413-6-roberto.sassu@huawei.com (mailing list archive)
State New
Series bpf: Add kfuncs for PKCS#7 signature verification

Commit Message

Roberto Sassu Aug. 10, 2022, 4:59 p.m. UTC
Add the bpf_lookup_user_key(), bpf_lookup_system_key() and bpf_key_put()
kfuncs, to respectively search a key with a given key handle serial number
and flags, obtain a key from a pre-determined ID defined in
include/linux/verification.h, and release a previously acquired key
reference.

Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
---
 include/linux/bpf.h      |   6 ++
 kernel/trace/bpf_trace.c | 146 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 152 insertions(+)

Comments

Alexei Starovoitov Aug. 10, 2022, 9:33 p.m. UTC | #1
On Wed, Aug 10, 2022 at 06:59:28PM +0200, Roberto Sassu wrote:
> +
> +static int __init bpf_key_sig_kfuncs_init(void)
> +{
> +	int ret;
> +
> +	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
> +					&bpf_key_sig_kfunc_set);
> +	if (!ret)
> +		return 0;
> +
> +	return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM,
> +					 &bpf_key_sig_kfunc_set);

Isn't this a watery water ?
Don't you have a patch 1 ?
What am I missing ?
Roberto Sassu Aug. 11, 2022, 7:46 a.m. UTC | #2
> From: Alexei Starovoitov [mailto:alexei.starovoitov@gmail.com]
> Sent: Wednesday, August 10, 2022 11:34 PM
> On Wed, Aug 10, 2022 at 06:59:28PM +0200, Roberto Sassu wrote:
> > +
> > +static int __init bpf_key_sig_kfuncs_init(void)
> > +{
> > +	int ret;
> > +
> > +	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
> > +					&bpf_key_sig_kfunc_set);
> > +	if (!ret)
> > +		return 0;
> > +
> > +	return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM,
> > +					 &bpf_key_sig_kfunc_set);
> 
> Isn't this a watery water ?
> Don't you have a patch 1 ?
> What am I missing ?

Uhm, yes. I had doubts too. That is also what KP did.

It makes sense to register once, since we mapped LSM to
TRACING.
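
Since kfunc lookups for BPF_PROG_TYPE_LSM are mapped to the TRACING hook,
the fallback registration is indeed redundant and the init can collapse to a
single call. A minimal standalone sketch of the simplified init, with the
kernel registration API stubbed out so the control flow compiles on its own
(the stub types and return value are assumptions, not the real kernel API):

```c
#include <stddef.h>

/* Stubbed kernel types, just enough to show the simplified init. */
enum bpf_prog_type { BPF_PROG_TYPE_TRACING, BPF_PROG_TYPE_LSM };

struct btf_kfunc_id_set { const void *set; };

/* Stub: the real function registers the ID set with the BTF core;
 * here it just pretends registration succeeds. */
static int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
				     const struct btf_kfunc_id_set *kset)
{
	(void)prog_type;
	(void)kset;
	return 0;
}

static const struct btf_kfunc_id_set bpf_key_sig_kfunc_set = { NULL };

/* One registration instead of the TRACING-then-LSM fallback: the LSM
 * program type resolves to the same kfunc hook as TRACING. */
static int bpf_key_sig_kfuncs_init(void)
{
	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
					 &bpf_key_sig_kfunc_set);
}
```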

Will resend only this patch. And I will figure out why CI failed.

Roberto
Roberto Sassu Aug. 11, 2022, 12:02 p.m. UTC | #3
> From: Roberto Sassu [mailto:roberto.sassu@huawei.com]
> Sent: Thursday, August 11, 2022 9:47 AM
> > From: Alexei Starovoitov [mailto:alexei.starovoitov@gmail.com]
> > Sent: Wednesday, August 10, 2022 11:34 PM
> > On Wed, Aug 10, 2022 at 06:59:28PM +0200, Roberto Sassu wrote:
> > > +
> > > +static int __init bpf_key_sig_kfuncs_init(void)
> > > +{
> > > +	int ret;
> > > +
> > > +	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
> > > +					&bpf_key_sig_kfunc_set);
> > > +	if (!ret)
> > > +		return 0;
> > > +
> > > +	return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM,
> > > +					 &bpf_key_sig_kfunc_set);
> >
> > Isn't this a watery water ?
> > Don't you have a patch 1 ?
> > What am I missing ?
> 
> Uhm, yes. I had doubts too. That was what also KP did.
> 
> It makes sense to register once, since we mapped LSM to
> TRACING.
> 
> Will resend only this patch. And I will figure out why CI failed.

Adding Daniel Müller in CC, who worked on this.

I think the issue is that some kernel options are set to =m.
This causes the CI to miss all kernel modules, since they are
not copied to the virtual machine that executes the tests.

I'm testing this patch:

https://github.com/robertosassu/libbpf-ci/commit/b665e001b58c4ddb792a2a68098ea5dc6936b15c

Roberto
Daniel Müller Aug. 11, 2022, 11:52 p.m. UTC | #4
On Thu, Aug 11, 2022 at 12:02:57PM +0000, Roberto Sassu wrote:
> > From: Roberto Sassu [mailto:roberto.sassu@huawei.com]
> > Sent: Thursday, August 11, 2022 9:47 AM
> > > From: Alexei Starovoitov [mailto:alexei.starovoitov@gmail.com]
> > > Sent: Wednesday, August 10, 2022 11:34 PM
> > > On Wed, Aug 10, 2022 at 06:59:28PM +0200, Roberto Sassu wrote:
> > > > +
> > > > +static int __init bpf_key_sig_kfuncs_init(void)
> > > > +{
> > > > +	int ret;
> > > > +
> > > > +	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
> > > > +					&bpf_key_sig_kfunc_set);
> > > > +	if (!ret)
> > > > +		return 0;
> > > > +
> > > > +	return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM,
> > > > +					 &bpf_key_sig_kfunc_set);
> > >
> > > Isn't this a watery water ?
> > > Don't you have a patch 1 ?
> > > What am I missing ?
> > 
> > Uhm, yes. I had doubts too. That was what also KP did.
> > 
> > It makes sense to register once, since we mapped LSM to
> > TRACING.
> > 
> > Will resend only this patch. And I will figure out why CI failed.
> 
> Adding in CC Daniel Müller, which worked on this.
> 
> I think the issue is that some kernel options are set to =m.
> This causes the CI to miss all kernel modules, since they are
> not copied to the virtual machine that executes the tests.
> 
> I'm testing this patch:
> 
> https://github.com/robertosassu/libbpf-ci/commit/b665e001b58c4ddb792a2a68098ea5dc6936b15c

I commented on the pull request. Would it make sense to adjust the
kernel configuration in this repository instead? I am worried that
otherwise everybody may need a similar workaround, depending on how
selftests are ultimately run.

Thanks,
Daniel
KP Singh Aug. 12, 2022, 12:49 a.m. UTC | #5
On Wed, Aug 10, 2022 at 7:01 PM Roberto Sassu <roberto.sassu@huawei.com> wrote:
>
> Add the bpf_lookup_user_key(), bpf_lookup_system_key() and bpf_key_put()
> kfuncs, to respectively search a key with a given serial and flags, obtain

nit: "with a given key handle serial number"

> a key from a pre-determined ID defined in include/linux/verification.h, and
> cleanup.
>
> Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
> ---
>  include/linux/bpf.h      |   6 ++
>  kernel/trace/bpf_trace.c | 146 +++++++++++++++++++++++++++++++++++++++
>  2 files changed, 152 insertions(+)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index a82f8c559ae2..d415e5e97551 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2573,4 +2573,10 @@ static inline void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype) {}
>  static inline void bpf_cgroup_atype_put(int cgroup_atype) {}
>  #endif /* CONFIG_BPF_LSM */
>
> +#ifdef CONFIG_KEYS
> +struct bpf_key {
> +       struct key *key;
> +       bool has_ref;
> +};
> +#endif /* CONFIG_KEYS */
>  #endif /* _LINUX_BPF_H */
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 68e5cdd24cef..a607bb0be738 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -20,6 +20,8 @@
>  #include <linux/fprobe.h>
>  #include <linux/bsearch.h>
>  #include <linux/sort.h>
> +#include <linux/key.h>
> +#include <linux/verification.h>
>
>  #include <net/bpf_sk_storage.h>
>
> @@ -1181,6 +1183,150 @@ static const struct bpf_func_proto bpf_get_func_arg_cnt_proto = {
>         .arg1_type      = ARG_PTR_TO_CTX,
>  };
>
> +#ifdef CONFIG_KEYS
> +__diag_push();
> +__diag_ignore_all("-Wmissing-prototypes",
> +                 "kfuncs which will be used in BPF programs");
> +
> +/**
> + * bpf_lookup_user_key - lookup a key by its serial
> + * @serial: key serial

nit: "key handle serial number"


> + * @flags: lookup-specific flags
> + *
> + * Search a key with a given *serial* and the provided *flags*. The
> + * returned key, if found, has the reference count incremented by
> + * one, and is stored in a bpf_key structure, returned to the caller.

nit: This can be made a little clearer with:

Search a key with a given *serial* and the provided *flags*.
If found, increment the reference count of the key by
one, and return it in the bpf_key structure.


> + * The bpf_key structure must be passed to bpf_key_put() when done
> + * with it, so that the key reference count is decremented and the
> + * bpf_key structure is freed.
> + *
> + * Permission checks are deferred to the time the key is used by
> + * one of the available key-specific kfuncs.
> + *
> + * Set *flags* with 1, to attempt creating a requested special
> + * keyring (e.g. session keyring), if it doesn't yet exist. Set
> + * *flags* with 2 to lookup a key without waiting for the key
> + * construction, and to retrieve uninstantiated keys (keys without
> + * data attached to them).

The 1 and 2 here are confusing; why not just use their actual names,
KEY_LOOKUP_CREATE and KEY_LOOKUP_PARTIAL?

> + *
> + * Return: a bpf_key pointer with a valid key pointer if the key is found, a
> + *         NULL pointer otherwise.
> + */
> +struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags)
> +{
> +       key_ref_t key_ref;
> +       struct bpf_key *bkey;
> +
> +       /* Keep in sync with include/linux/key.h. */

What does this comment mean? Does this mean that more flags may end up in this
check? If so, let's just put an inline function in include/linux/key.h.
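
One possible shape for such a helper — the name `key_lookup_flags_valid()` is
hypothetical, and the flag values are mirrored from include/linux/key.h so
this sketch compiles standalone:

```c
#include <stdbool.h>
#include <stdint.h>

/* Flag values mirrored from include/linux/key.h for this standalone
 * sketch; in the kernel the helper would use the real definitions. */
#define KEY_LOOKUP_CREATE	0x01
#define KEY_LOOKUP_PARTIAL	0x02

/* Hypothetical helper that could live in include/linux/key.h, keeping
 * the set of valid lookup flags next to the flag definitions so the
 * two cannot drift apart. */
static inline bool key_lookup_flags_valid(uint64_t flags)
{
	return !(flags & ~(uint64_t)(KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL));
}
```

bpf_lookup_user_key() could then open with a single
`if (!key_lookup_flags_valid(flags)) return NULL;` and the "keep in sync"
comment would no longer be needed.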

> +       if (flags & ~(KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL))
> +               return NULL;
> +
> +       /*
> +        * Permission check is deferred until actual kfunc using the key,
> +        * since here the intent of the caller is not yet known.
> +        *
> +        * We cannot trust the caller to provide the needed permission as
> +        * argument, since nothing prevents the caller from using the
> +        * obtained key for a different purpose than the one declared.
> +        */

nit: This can just be a simple comment.

Permission check is deferred until the key is used as the intent of the
caller is unknown here.

> +       key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK);
> +       if (IS_ERR(key_ref))
> +               return NULL;
> +
> +       bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
> +       if (!bkey) {
> +               key_put(key_ref_to_ptr(key_ref));
> +               return NULL;
> +       }
> +
> +       bkey->key = key_ref_to_ptr(key_ref);
> +       bkey->has_ref = true;
> +
> +       return bkey;
> +}
> +
> +/**
> + * bpf_lookup_system_key - lookup a key by a system-defined ID
> + * @id: key ID
> + *
> + * Obtain a bpf_key structure with a key pointer set to the passed key ID.
> + * The key pointer is marked as invalid, to prevent bpf_key_put() from
> + * attempting to decrement the key reference count on that pointer. The key
> + * pointer set in such a way is currently understood only by
> + * verify_pkcs7_signature().
> + *
> + * Set *id* to one of the values defined in include/linux/verification.h:
> + * 0 for the primary keyring (immutable keyring of system keys); 1 for both

Please use VERIFY_USE_PLATFORM_KEYRING
and VERIFY_USE_SECONDARY_KEYRING here instead of 0 and 1


> + * the primary and secondary keyring (where keys can be added only if they
> + * are vouched for by existing keys in those keyrings); 2 for the platform
> + * keyring (primarily used by the integrity subsystem to verify a kexec'ed
> + * kernel image and, possibly, the initramfs signature).
> + *
> + * Return: a bpf_key pointer with an invalid key pointer set from the
> + *         pre-determined ID on success, a NULL pointer otherwise
> + */
> +struct bpf_key *bpf_lookup_system_key(u64 id)
> +{
> +       struct bpf_key *bkey;
> +
> +       /* Keep in sync with defs in include/linux/verification.h. */

Here too, it's best to introduce a "MAX" value or a small inline helper
rather than this comment.
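
A possible shape for such a bound, again with the values mirrored from
include/linux/verification.h so the sketch stands alone; the
`VERIFY_USE_MAX_KEYRING` name and the helper are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirrored from include/linux/verification.h for this standalone
 * sketch: NULL (0) selects the primary keyring, while 1UL and 2UL are
 * magic pointer values for the secondary and platform keyrings. */
#define VERIFY_USE_SECONDARY_KEYRING	1UL
#define VERIFY_USE_PLATFORM_KEYRING	2UL

/* Hypothetical upper bound kept next to the definitions above, so a
 * new keyring ID cannot be added without the range check seeing it. */
#define VERIFY_USE_MAX_KEYRING		VERIFY_USE_PLATFORM_KEYRING

static inline bool system_keyring_id_valid(uint64_t id)
{
	return id <= VERIFY_USE_MAX_KEYRING;
}
```

bpf_lookup_system_key() would then reject the ID via the helper instead of
carrying a "keep in sync" comment next to an open-coded comparison.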

> +       if (id > (unsigned long)VERIFY_USE_PLATFORM_KEYRING)
> +               return NULL;
> +
> +       bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
> +       if (!bkey)
> +               return NULL;
> +
> +       bkey->key = (struct key *)(unsigned long)id;
> +       bkey->has_ref = false;
> +
> +       return bkey;
> +}
> +
> +/**
> + * bpf_key_put - decrement key reference count if key is valid and free bpf_key
> + * @bkey: bpf_key structure
> + *
> + * Decrement the reference count of the key inside *bkey*, if the pointer
> + * is valid, and free *bkey*.
> + */

This is more of a style thing, but your comment literally describes the small
function below. Do we really need this?

> +void bpf_key_put(struct bpf_key *bkey)
> +{
> +       if (bkey->has_ref)
> +               key_put(bkey->key);
> +
> +       kfree(bkey);
> +}
> +
> +__diag_pop();
> +
> +BTF_SET8_START(key_sig_kfunc_set)
> +BTF_ID_FLAGS(func, bpf_lookup_user_key, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE)
> +BTF_ID_FLAGS(func, bpf_lookup_system_key, KF_ACQUIRE | KF_RET_NULL)
> +BTF_ID_FLAGS(func, bpf_key_put, KF_RELEASE)
> +BTF_SET8_END(key_sig_kfunc_set)
> +
> +static const struct btf_kfunc_id_set bpf_key_sig_kfunc_set = {
> +       .owner = THIS_MODULE,
> +       .set = &key_sig_kfunc_set,
> +};
> +
> +static int __init bpf_key_sig_kfuncs_init(void)
> +{
> +       int ret;
> +
> +       ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
> +                                       &bpf_key_sig_kfunc_set);
> +       if (!ret)
> +               return 0;
> +
> +       return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM,
> +                                        &bpf_key_sig_kfunc_set);
> +}
> +
> +late_initcall(bpf_key_sig_kfuncs_init);
> +#endif /* CONFIG_KEYS */
> +
>  static const struct bpf_func_proto *
>  bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>  {
> --
> 2.25.1
>

[...]
Roberto Sassu Aug. 12, 2022, 8:11 a.m. UTC | #6
> From: Daniel Müller [mailto:deso@posteo.net]
> Sent: Friday, August 12, 2022 1:52 AM
> On Thu, Aug 11, 2022 at 12:02:57PM +0000, Roberto Sassu wrote:
> > > From: Roberto Sassu [mailto:roberto.sassu@huawei.com]
> > > Sent: Thursday, August 11, 2022 9:47 AM
> > > > From: Alexei Starovoitov [mailto:alexei.starovoitov@gmail.com]
> > > > Sent: Wednesday, August 10, 2022 11:34 PM
> > > > On Wed, Aug 10, 2022 at 06:59:28PM +0200, Roberto Sassu wrote:
> > > > > +
> > > > > +static int __init bpf_key_sig_kfuncs_init(void)
> > > > > +{
> > > > > +	int ret;
> > > > > +
> > > > > +	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
> > > > > +					&bpf_key_sig_kfunc_set);
> > > > > +	if (!ret)
> > > > > +		return 0;
> > > > > +
> > > > > +	return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM,
> > > > > +					 &bpf_key_sig_kfunc_set);
> > > >
> > > > Isn't this a watery water ?
> > > > Don't you have a patch 1 ?
> > > > What am I missing ?
> > >
> > > Uhm, yes. I had doubts too. That was what also KP did.
> > >
> > > It makes sense to register once, since we mapped LSM to
> > > TRACING.
> > >
> > > Will resend only this patch. And I will figure out why CI failed.
> >
> > Adding in CC Daniel Müller, which worked on this.
> >
> > I think the issue is that some kernel options are set to =m.
> > This causes the CI to miss all kernel modules, since they are
> > not copied to the virtual machine that executes the tests.
> >
> > I'm testing this patch:
> >
> > https://github.com/robertosassu/libbpf-ci/commit/b665e001b58c4ddb792a2a68098ea5dc6936b15c
> 
> I commented on the pull request. Would it make sense to adjust the
> kernel configuration in this repository instead? I am worried that
> otherwise everybody may need a similar work around, depending on how
> selftests are ultimately run.

The issue seems specific to the eBPF CI. Others might be able to use
kernel modules.

Either choice is fine for me.

Roberto
Daniel Müller Aug. 15, 2022, 4:22 p.m. UTC | #7
On Fri, Aug 12, 2022 at 08:11:00AM +0000, Roberto Sassu wrote:
> > From: Daniel Müller [mailto:deso@posteo.net]
> > Sent: Friday, August 12, 2022 1:52 AM
> > On Thu, Aug 11, 2022 at 12:02:57PM +0000, Roberto Sassu wrote:
> > > > From: Roberto Sassu [mailto:roberto.sassu@huawei.com]
> > > > Sent: Thursday, August 11, 2022 9:47 AM
> > > > > From: Alexei Starovoitov [mailto:alexei.starovoitov@gmail.com]
> > > > > Sent: Wednesday, August 10, 2022 11:34 PM
> > > > > On Wed, Aug 10, 2022 at 06:59:28PM +0200, Roberto Sassu wrote:
> > > > > > +
> > > > > > +static int __init bpf_key_sig_kfuncs_init(void)
> > > > > > +{
> > > > > > +	int ret;
> > > > > > +
> > > > > > +	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
> > > > > > +					&bpf_key_sig_kfunc_set);
> > > > > > +	if (!ret)
> > > > > > +		return 0;
> > > > > > +
> > > > > > +	return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM,
> > > > > > +					 &bpf_key_sig_kfunc_set);
> > > > >
> > > > > Isn't this a watery water ?
> > > > > Don't you have a patch 1 ?
> > > > > What am I missing ?
> > > >
> > > > Uhm, yes. I had doubts too. That was what also KP did.
> > > >
> > > > It makes sense to register once, since we mapped LSM to
> > > > TRACING.
> > > >
> > > > Will resend only this patch. And I will figure out why CI failed.
> > >
> > > Adding in CC Daniel Müller, which worked on this.
> > >
> > > I think the issue is that some kernel options are set to =m.
> > > This causes the CI to miss all kernel modules, since they are
> > > not copied to the virtual machine that executes the tests.
> > >
> > > I'm testing this patch:
> > >
> > > https://github.com/robertosassu/libbpf-ci/commit/b665e001b58c4ddb792a2a68098ea5dc6936b15c
> > 
> > I commented on the pull request. Would it make sense to adjust the
> > kernel configuration in this repository instead? I am worried that
> > otherwise everybody may need a similar work around, depending on how
> > selftests are ultimately run.
> 
> The issue seems specific of the eBPF CI. Others might be able to use
> kernel modules.
> 
> Either choice is fine for me.

I understand that depending on how tests are run, kernel modules may be
available to be loaded. My point is that I am not aware of anything that we
would lose by having the functionality built-in to begin with (others can
correct me). So it seems as if that's an easy way to sidestep any issues of that
sort from the start and, hence, would be my preference.

Thanks,
Daniel
Roberto Sassu Aug. 16, 2022, 8:40 a.m. UTC | #8
> From: KP Singh [mailto:kpsingh@kernel.org]
> Sent: Friday, August 12, 2022 2:50 AM

[...]

> > +/**
> > + * bpf_key_put - decrement key reference count if key is valid and free
> bpf_key
> > + * @bkey: bpf_key structure
> > + *
> > + * Decrement the reference count of the key inside *bkey*, if the pointer
> > + * is valid, and free *bkey*.
> > + */
> 
> This is more of a style thing but your comment literally describes the
> small function
> below. Do we really need this?

Thanks for the review, KP. I just kept this to follow the kernel
documentation style for functions.

Roberto
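
To make the acquire/use/release contract that the KF_ACQUIRE and KF_RELEASE
annotations enforce easier to follow, here is a userspace simulation of the
two lookup paths from the patch. Everything kernel-side (lookup_user_key(),
real key refcounting) is stubbed with a single key and a bare counter; only
the bpf_key lifecycle is reproduced, so this is an illustration, not the
kernel code:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Values mirrored from key.h / verification.h for this sketch. */
#define KEY_LOOKUP_CREATE		0x01ULL
#define KEY_LOOKUP_PARTIAL		0x02ULL
#define VERIFY_USE_PLATFORM_KEYRING	2ULL

struct key { int refcount; };
struct bpf_key { struct key *key; bool has_ref; };

static struct key stub_user_key = { .refcount = 1 };

/* Stands in for bpf_lookup_user_key(): reject unknown flags, take a
 * reference on the (stubbed) key, wrap it in a bpf_key. */
static struct bpf_key *lookup_user_key_sim(unsigned long long flags)
{
	struct bpf_key *bkey;

	if (flags & ~(KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL))
		return NULL;

	bkey = malloc(sizeof(*bkey));
	if (!bkey)
		return NULL;

	stub_user_key.refcount++;	/* reference taken on lookup */
	bkey->key = &stub_user_key;
	bkey->has_ref = true;
	return bkey;
}

/* Stands in for bpf_lookup_system_key(): the ID is stored as an
 * invalid marker pointer and no reference is taken. */
static struct bpf_key *lookup_system_key_sim(unsigned long long id)
{
	struct bpf_key *bkey;

	if (id > VERIFY_USE_PLATFORM_KEYRING)
		return NULL;

	bkey = malloc(sizeof(*bkey));
	if (!bkey)
		return NULL;

	bkey->key = (struct key *)(unsigned long)id;
	bkey->has_ref = false;
	return bkey;
}

/* Stands in for bpf_key_put(): drop the reference only if one was
 * actually taken, then free the wrapper. */
static void key_put_sim(struct bpf_key *bkey)
{
	if (bkey->has_ref)
		bkey->key->refcount--;
	free(bkey);
}
```

Every successful lookup must be paired with exactly one put, which is what
the verifier enforces on the real kfuncs via KF_ACQUIRE/KF_RELEASE.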

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a82f8c559ae2..d415e5e97551 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2573,4 +2573,10 @@  static inline void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype) {}
 static inline void bpf_cgroup_atype_put(int cgroup_atype) {}
 #endif /* CONFIG_BPF_LSM */
 
+#ifdef CONFIG_KEYS
+struct bpf_key {
+	struct key *key;
+	bool has_ref;
+};
+#endif /* CONFIG_KEYS */
 #endif /* _LINUX_BPF_H */
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 68e5cdd24cef..a607bb0be738 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -20,6 +20,8 @@ 
 #include <linux/fprobe.h>
 #include <linux/bsearch.h>
 #include <linux/sort.h>
+#include <linux/key.h>
+#include <linux/verification.h>
 
 #include <net/bpf_sk_storage.h>
 
@@ -1181,6 +1183,150 @@  static const struct bpf_func_proto bpf_get_func_arg_cnt_proto = {
 	.arg1_type	= ARG_PTR_TO_CTX,
 };
 
+#ifdef CONFIG_KEYS
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+		  "kfuncs which will be used in BPF programs");
+
+/**
+ * bpf_lookup_user_key - lookup a key by its serial
+ * @serial: key serial
+ * @flags: lookup-specific flags
+ *
+ * Search a key with a given *serial* and the provided *flags*. The
+ * returned key, if found, has the reference count incremented by
+ * one, and is stored in a bpf_key structure, returned to the caller.
+ * The bpf_key structure must be passed to bpf_key_put() when done
+ * with it, so that the key reference count is decremented and the
+ * bpf_key structure is freed.
+ *
+ * Permission checks are deferred to the time the key is used by
+ * one of the available key-specific kfuncs.
+ *
+ * Set *flags* with 1, to attempt creating a requested special
+ * keyring (e.g. session keyring), if it doesn't yet exist. Set
+ * *flags* with 2 to lookup a key without waiting for the key
+ * construction, and to retrieve uninstantiated keys (keys without
+ * data attached to them).
+ *
+ * Return: a bpf_key pointer with a valid key pointer if the key is found, a
+ *         NULL pointer otherwise.
+ */
+struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags)
+{
+	key_ref_t key_ref;
+	struct bpf_key *bkey;
+
+	/* Keep in sync with include/linux/key.h. */
+	if (flags & ~(KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL))
+		return NULL;
+
+	/*
+	 * Permission check is deferred until actual kfunc using the key,
+	 * since here the intent of the caller is not yet known.
+	 *
+	 * We cannot trust the caller to provide the needed permission as
+	 * argument, since nothing prevents the caller from using the
+	 * obtained key for a different purpose than the one declared.
+	 */
+	key_ref = lookup_user_key(serial, flags, KEY_DEFER_PERM_CHECK);
+	if (IS_ERR(key_ref))
+		return NULL;
+
+	bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
+	if (!bkey) {
+		key_put(key_ref_to_ptr(key_ref));
+		return NULL;
+	}
+
+	bkey->key = key_ref_to_ptr(key_ref);
+	bkey->has_ref = true;
+
+	return bkey;
+}
+
+/**
+ * bpf_lookup_system_key - lookup a key by a system-defined ID
+ * @id: key ID
+ *
+ * Obtain a bpf_key structure with a key pointer set to the passed key ID.
+ * The key pointer is marked as invalid, to prevent bpf_key_put() from
+ * attempting to decrement the key reference count on that pointer. The key
+ * pointer set in such a way is currently understood only by
+ * verify_pkcs7_signature().
+ *
+ * Set *id* to one of the values defined in include/linux/verification.h:
+ * 0 for the primary keyring (immutable keyring of system keys); 1 for both
+ * the primary and secondary keyring (where keys can be added only if they
+ * are vouched for by existing keys in those keyrings); 2 for the platform
+ * keyring (primarily used by the integrity subsystem to verify a kexec'ed
+ * kernel image and, possibly, the initramfs signature).
+ *
+ * Return: a bpf_key pointer with an invalid key pointer set from the
+ *         pre-determined ID on success, a NULL pointer otherwise
+ */
+struct bpf_key *bpf_lookup_system_key(u64 id)
+{
+	struct bpf_key *bkey;
+
+	/* Keep in sync with defs in include/linux/verification.h. */
+	if (id > (unsigned long)VERIFY_USE_PLATFORM_KEYRING)
+		return NULL;
+
+	bkey = kmalloc(sizeof(*bkey), GFP_ATOMIC);
+	if (!bkey)
+		return NULL;
+
+	bkey->key = (struct key *)(unsigned long)id;
+	bkey->has_ref = false;
+
+	return bkey;
+}
+
+/**
+ * bpf_key_put - decrement key reference count if key is valid and free bpf_key
+ * @bkey: bpf_key structure
+ *
+ * Decrement the reference count of the key inside *bkey*, if the pointer
+ * is valid, and free *bkey*.
+ */
+void bpf_key_put(struct bpf_key *bkey)
+{
+	if (bkey->has_ref)
+		key_put(bkey->key);
+
+	kfree(bkey);
+}
+
+__diag_pop();
+
+BTF_SET8_START(key_sig_kfunc_set)
+BTF_ID_FLAGS(func, bpf_lookup_user_key, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_lookup_system_key, KF_ACQUIRE | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_key_put, KF_RELEASE)
+BTF_SET8_END(key_sig_kfunc_set)
+
+static const struct btf_kfunc_id_set bpf_key_sig_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set = &key_sig_kfunc_set,
+};
+
+static int __init bpf_key_sig_kfuncs_init(void)
+{
+	int ret;
+
+	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
+					&bpf_key_sig_kfunc_set);
+	if (!ret)
+		return 0;
+
+	return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM,
+					 &bpf_key_sig_kfunc_set);
+}
+
+late_initcall(bpf_key_sig_kfuncs_init);
+#endif /* CONFIG_KEYS */
+
 static const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {