
[RESEND,v3,bpf-next,01/14] bpf: introduce BPF token object

Message ID: 20230629051832.897119-2-andrii@kernel.org (mailing list archive)
State: Changes Requested
Delegated to: Paul Moore
Series: BPF token

Commit Message

Andrii Nakryiko June 29, 2023, 5:18 a.m. UTC
Add a new kind of BPF kernel object, the BPF token. A BPF token is meant to
allow delegating privileged BPF functionality, like loading a BPF
program or creating a BPF map, from a privileged process to a *trusted*
unprivileged process, all while having a good amount of control over which
privileged operations can be performed using the provided BPF token.

This patch adds a new BPF_TOKEN_CREATE command to the bpf() syscall, which
allows creating a new BPF token object along with the set of commands
that the token makes available to unprivileged applications.
Currently only the BPF_TOKEN_CREATE command itself can be
delegated, but subsequent patches gradually add the ability to delegate
the BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.

The above means that new BPF tokens can be created using an existing BPF
token, provided the original privileged creator allowed the BPF_TOKEN_CREATE
command. A derived BPF token can never be more powerful than the original
BPF token.

Importantly, a BPF token is automatically pinned at the specified location
inside an instance of BPF FS and, unlike BPF prog/map/btf/link objects,
cannot be repinned using the BPF_OBJ_PIN command. This provides more
control over unintended sharing of BPF tokens through pinning in other
BPF FS instances.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf.h            |  47 ++++++++++
 include/uapi/linux/bpf.h       |  38 ++++++++
 kernel/bpf/Makefile            |   2 +-
 kernel/bpf/inode.c             |  46 +++++++--
 kernel/bpf/syscall.c           |  17 ++++
 kernel/bpf/token.c             | 167 +++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |  38 ++++++++
 7 files changed, 344 insertions(+), 11 deletions(-)
 create mode 100644 kernel/bpf/token.c

Comments

Christian Brauner July 4, 2023, 12:43 p.m. UTC | #1
On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
> Add new kind of BPF kernel object, BPF token. BPF token is meant to to
> allow delegating privileged BPF functionality, like loading a BPF
> program or creating a BPF map, from privileged process to a *trusted*
> unprivileged process, all while have a good amount of control over which
> privileged operations could be performed using provided BPF token.
> 
> This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
> allows to create a new BPF token object along with a set of allowed
> commands that such BPF token allows to unprivileged applications.
> Currently only BPF_TOKEN_CREATE command itself can be
> delegated, but other patches gradually add ability to delegate
> BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
> 
> The above means that new BPF tokens can be created using existing BPF
> token, if original privileged creator allowed BPF_TOKEN_CREATE command.
> New derived BPF token cannot be more powerful than the original BPF
> token.
> 
> Importantly, BPF token is automatically pinned at the specified location
> inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
> command, unlike BPF prog/map/btf/link. This provides more control over
> unintended sharing of BPF tokens through pinning it in another BPF FS
> instances.
> 
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---

The main issue I have with the token approach is that it is a completely
separate delegation vector on top of user namespaces. We mentioned this
during the conf and this was brought up on the thread here again as well.
Imho, that's a problem both security-wise and complexity-wise.

It's not great if each subsystem gets its own custom delegation
mechanism. This imposes such a taxing complexity on both kernel- and
userspace that it will quickly become a huge liability. So I would
really strongly encourage you to explore another direction.

I do think the spirit of your proposal is workable and that it can
mostly be kept intact.

As mentioned before, bpffs has all the means to be taught delegation:

        // In container's user namespace
        fd_fs = fsopen("bpffs");

        // Delegating task in host userns (systemd-bpfd whatever you want)
        ret = fsconfig(fd_fs, FSCONFIG_SET_FLAG, "delegate", ...);

        // In container's user namespace
        fd_mnt = fsmount(fd_fs, 0);

        ret = move_mount(fd_fs, "", -EBADF, "/my/fav/location", MOVE_MOUNT_F_EMPTY_PATH)

Roughly, this would mean:

(i) raise FS_USERNS_MOUNT on bpffs but guard it behind the "delegate"
    mount option. IOW, it's only possible to mount bpffs as an
    unprivileged user if a delegating process like systemd-bpfd with
    system-level privileges has marked it as delegatable.
(ii) add fine-grained delegation options that you want this
     bpffs instance to allow via new mount options. Idk,

     // allow usage of foo
     fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "foo");

     // also allow usage of bar
     fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "bar");

     // reset allowed options
     fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "");

     // allow usage of schmoo
     fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "schmoo");

This all seems more intuitive and integrates with user and mount
namespaces of the container. This can also work for restricting
non-userns bpf instances fwiw. You can also share instances via
bind-mount and so on. The userns of the bpffs instance can also be used
for permission checking provided a given functionality has been
delegated by e.g., systemd-bpfd or whatever.
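
To make the bind-mount sharing mentioned above concrete, here is a rough
userspace sketch; the source path, the target directory fd and the lack of
error handling are all just for illustration:

/* Clone an already-delegated bpffs instance and attach the copy
 * somewhere else, e.g. into another container's filesystem view.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/mount.h>
#include <sys/syscall.h>

#ifndef OPEN_TREE_CLONE
#define OPEN_TREE_CLONE 1
#endif
#ifndef MOVE_MOUNT_F_EMPTY_PATH
#define MOVE_MOUNT_F_EMPTY_PATH 0x00000004
#endif

static int share_delegated_bpffs(int target_dirfd)
{
	int fd_tree;

	/* detached copy (bind mount) of the delegated instance */
	fd_tree = syscall(SYS_open_tree, AT_FDCWD, "/sys/fs/bpf/delegated",
			  OPEN_TREE_CLONE);
	if (fd_tree < 0)
		return -1;

	/* attach the copy at the desired location */
	return syscall(SYS_move_mount, fd_tree, "", target_dirfd, "bpf",
		       MOVE_MOUNT_F_EMPTY_PATH);
}

The same detached-mount pattern can be reused to hand one preconfigured
instance to several containers.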

So roughly - untested and unfinished:

diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index b9b93b81af9a..c021b0a674bb 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -623,15 +623,24 @@ struct bpf_prog *bpf_prog_get_type_path(const char *name, enum bpf_prog_type typ
 }
 EXPORT_SYMBOL(bpf_prog_get_type_path);
 
+struct bpf_mount_opts {
+	umode_t mode;
+	bool delegate;
+	u64 abilities;
+};
+
 /*
  * Display the mount options in /proc/mounts.
  */
 static int bpf_show_options(struct seq_file *m, struct dentry *root)
 {
+	struct bpf_mount_opts *opts = root->d_sb->s_fs_info;
 	umode_t mode = d_inode(root)->i_mode & S_IALLUGO & ~S_ISVTX;
 
 	if (mode != S_IRWXUGO)
 		seq_printf(m, ",mode=%o", mode);
+	if (opts->delegate)
+		seq_printf(m, ",delegate");
 	return 0;
 }
 
@@ -655,17 +664,17 @@ static const struct super_operations bpf_super_ops = {
 
 enum {
 	OPT_MODE,
+	Opt_delegate,
+	Opt_abilities,
 };
 
 static const struct fs_parameter_spec bpf_fs_parameters[] = {
-	fsparam_u32oct	("mode",			OPT_MODE),
+	fsparam_u32oct	     ("mode",			OPT_MODE),
+	fsparam_flag_no	     ("delegate",		Opt_delegate),
+	fsparam_string       ("abilities",		Opt_abilities),
 	{}
 };
 
-struct bpf_mount_opts {
-	umode_t mode;
-};
-
 static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
 {
 	struct bpf_mount_opts *opts = fc->fs_private;
@@ -694,6 +703,16 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
 	case OPT_MODE:
 		opts->mode = result.uint_32 & S_IALLUGO;
 		break;
+	case Opt_delegate:
+		if (fc->user_ns != &init_user_ns && !capable(CAP_SYS_ADMIN))
+			return -EPERM;
+
+		if (!result.negated)
+			opts->delegate = true;
+		break;
+	case Opt_abilities:
+		// parse param->string to opts->abilities
+		break;
 	}
 
 	return 0;
@@ -768,10 +787,20 @@ static int populate_bpffs(struct dentry *parent)
 static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
 {
 	static const struct tree_descr bpf_rfiles[] = { { "" } };
-	struct bpf_mount_opts *opts = fc->fs_private;
+	struct bpf_mount_opts *opts = sb->s_fs_info;
 	struct inode *inode;
 	int ret;
 
+	if (fc->user_ns != &init_user_ns && !opts->delegate) {
+		errorfc(fc, "Can't mount bpffs without delegation permissions");
+		return -EPERM;
+	}
+
+	if (opts->abilities && !opts->delegate) {
+		errorfc(fc, "Specifying abilities without enabling delegation");
+		return -EINVAL;
+	}
+
 	ret = simple_fill_super(sb, BPF_FS_MAGIC, bpf_rfiles);
 	if (ret)
 		return ret;
@@ -793,7 +822,10 @@ static int bpf_get_tree(struct fs_context *fc)
 
 static void bpf_free_fc(struct fs_context *fc)
 {
-	kfree(fc->fs_private);
+	struct bpf_mount_opts *opts = fc->s_fs_info;
+
+	if (opts)
+		kfree(opts);
 }
 
 static const struct fs_context_operations bpf_context_ops = {
@@ -815,17 +847,30 @@ static int bpf_init_fs_context(struct fs_context *fc)
 
 	opts->mode = S_IRWXUGO;
 
-	fc->fs_private = opts;
+	/* If an instance is delegated it will start with no abilities. */
+	opts->delegate = false;
+	opts->abilities = 0;
+
+	fc->s_fs_info = opts;
 	fc->ops = &bpf_context_ops;
 	return 0;
 }
 
+static void bpf_kill_super(struct super_block *sb)
+{
+	struct bpf_mount_opts *opts = sb->s_fs_info;
+
+	kill_litter_super(sb);
+	kfree(opts);
+}
+
 static struct file_system_type bpf_fs_type = {
 	.owner		= THIS_MODULE,
 	.name		= "bpf",
 	.init_fs_context = bpf_init_fs_context,
 	.parameters	= bpf_fs_parameters,
-	.kill_sb	= kill_litter_super,
+	.kill_sb	= bpf_kill_super,
+	.fs_flags	= FS_USERNS_MOUNT,
 };
 
 static int __init bpf_init(void)
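
The Opt_abilities case above is left as a placeholder comment. A minimal
sketch of how such string-to-bitmask parsing could look is below; the
ability names and the BPF_ABILITY_* bits are invented purely for
illustration and are not part of the posted patches:

/* Hypothetical ability bits, for illustration only. */
#define BPF_ABILITY_MAP_CREATE	BIT_ULL(0)
#define BPF_ABILITY_PROG_LOAD	BIT_ULL(1)
#define BPF_ABILITY_BTF_LOAD	BIT_ULL(2)

static int bpf_parse_abilities(struct bpf_mount_opts *opts, const char *str)
{
	char *s, *p, *tok;
	int err = 0;

	s = kstrdup(str, GFP_KERNEL);
	if (!s)
		return -ENOMEM;

	p = s;
	while ((tok = strsep(&p, ",")) != NULL) {
		if (!*tok)
			continue;
		if (!strcmp(tok, "map_create"))
			opts->abilities |= BPF_ABILITY_MAP_CREATE;
		else if (!strcmp(tok, "prog_load"))
			opts->abilities |= BPF_ABILITY_PROG_LOAD;
		else if (!strcmp(tok, "btf_load"))
			opts->abilities |= BPF_ABILITY_BTF_LOAD;
		else {
			err = -EINVAL;
			break;
		}
	}

	kfree(s);
	return err;
}

With something like that in place, the Opt_abilities branch reduces to a
single bpf_parse_abilities(opts, param->string) call.
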
Christian Brauner July 4, 2023, 1:34 p.m. UTC | #2
On Tue, Jul 04, 2023 at 02:43:59PM +0200, Christian Brauner wrote:
> On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
> > Add new kind of BPF kernel object, BPF token. BPF token is meant to to
> > allow delegating privileged BPF functionality, like loading a BPF
> > program or creating a BPF map, from privileged process to a *trusted*
> > unprivileged process, all while have a good amount of control over which
> > privileged operations could be performed using provided BPF token.
> > 
> > This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
> > allows to create a new BPF token object along with a set of allowed
> > commands that such BPF token allows to unprivileged applications.
> > Currently only BPF_TOKEN_CREATE command itself can be
> > delegated, but other patches gradually add ability to delegate
> > BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
> > 
> > The above means that new BPF tokens can be created using existing BPF
> > token, if original privileged creator allowed BPF_TOKEN_CREATE command.
> > New derived BPF token cannot be more powerful than the original BPF
> > token.
> > 
> > Importantly, BPF token is automatically pinned at the specified location
> > inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
> > command, unlike BPF prog/map/btf/link. This provides more control over
> > unintended sharing of BPF tokens through pinning it in another BPF FS
> > instances.
> > 
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> 
> The main issue I have with the token approach is that it is a completely
> separate delegation vector on top of user namespaces. We mentioned this
> duringthe conf and this was brought up on the thread here again as well.
> Imho, that's a problem both security-wise and complexity-wise.
> 
> It's not great if each subsystem gets its own custom delegation
> mechanism. This imposes such a taxing complexity on both kernel- and
> userspace that it will quickly become a huge liability. So I would
> really strongly encourage you to explore another direction.
> 
> I do think the spirit of your proposal is workable and that it can
> mostly be kept in tact.
> 
> As mentioned before, bpffs has all the means to be taught delegation:
> 
>         // In container's user namespace
>         fd_fs = fsopen("bpffs");
> 
>         // Delegating task in host userns (systemd-bpfd whatever you want)
>         ret = fsconfig(fd_fs, FSCONFIG_SET_FLAG, "delegate", ...);
> 
>         // In container's user namespace
>         fd_mnt = fsmount(fd_fs, 0);
> 
>         ret = move_mount(fd_fs, "", -EBADF, "/my/fav/location", MOVE_MOUNT_F_EMPTY_PATH)
> 
> Roughly, this would mean:
> 
> (i) raise FS_USERNS_MOUNT on bpffs but guard it behind the "delegate"
>     mount option. IOW, it's only possibly to mount bpffs as an
>     unprivileged user if a delegating process like systemd-bpfd with
>     system-level privileges has marked it as delegatable.
> (ii) add fine-grained delegation options that you want this
>      bpffs instance to allow via new mount options. Idk,
> 
>      // allow usage of foo
>      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "foo");
> 
>      // also allow usage of bar
>      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "bar");
> 
>      // reset allowed options
>      fsconfig(fd_fs, FSCONFIG_SET_STRING, "");
> 
>      // allow usage of schmoo
>      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "schmoo");

This is really just one crummy way of doing this. It's of course possible to
make this a binary struct if you wanted to, of any form:

struct bpf_delegation_opts {
	u64 a;
	u64 b;
	u64 c;
	u32 d;
	u32 e;
};

and then

struct bpf_delegation_opts opts = {
	.a = SOMETHING_SOMETHING,
	.d = SOMETHING_SOMETHING_ELSE,
};

fsconfig(fd_fs, FSCONFIG_SET_BINARY, "abilities", &opts, sizeof(opts));

you'll get:

param->size == sizeof(opts);
param->blob = memdup_user_nul();

and then you can version this by size like we do for extensible structs
and change whatever you'd like to change in the future.
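
On the kernel side, a hedged sketch of that size-based versioning, reusing
the illustrative bpf_delegation_opts struct above and the bpf_mount_opts
from the earlier diff (neither is part of the posted patches):

/* Accept older (shorter) structs by zero-filling the missing tail,
 * and reject newer (longer) structs unless the extra tail is zero.
 */
static int bpf_parse_delegation_blob(struct bpf_mount_opts *opts,
				     struct fs_parameter *param)
{
	struct bpf_delegation_opts uopts = {};
	size_t usize = param->size;

	if (usize > sizeof(uopts)) {
		if (memchr_inv((const char *)param->blob + sizeof(uopts), 0,
			       usize - sizeof(uopts)))
			return -E2BIG;
		usize = sizeof(uopts);
	}
	memcpy(&uopts, param->blob, usize);

	/* interpret uopts.a/.b/... as whatever the delegation settings mean */
	opts->abilities = uopts.a;
	return 0;
}
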
Toke Høiland-Jørgensen July 4, 2023, 11:28 p.m. UTC | #3
Christian Brauner <brauner@kernel.org> writes:

> On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
>> Add new kind of BPF kernel object, BPF token. BPF token is meant to to
>> allow delegating privileged BPF functionality, like loading a BPF
>> program or creating a BPF map, from privileged process to a *trusted*
>> unprivileged process, all while have a good amount of control over which
>> privileged operations could be performed using provided BPF token.
>> 
>> This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
>> allows to create a new BPF token object along with a set of allowed
>> commands that such BPF token allows to unprivileged applications.
>> Currently only BPF_TOKEN_CREATE command itself can be
>> delegated, but other patches gradually add ability to delegate
>> BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
>> 
>> The above means that new BPF tokens can be created using existing BPF
>> token, if original privileged creator allowed BPF_TOKEN_CREATE command.
>> New derived BPF token cannot be more powerful than the original BPF
>> token.
>> 
>> Importantly, BPF token is automatically pinned at the specified location
>> inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
>> command, unlike BPF prog/map/btf/link. This provides more control over
>> unintended sharing of BPF tokens through pinning it in another BPF FS
>> instances.
>> 
>> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
>> ---
>
> The main issue I have with the token approach is that it is a completely
> separate delegation vector on top of user namespaces. We mentioned this
> duringthe conf and this was brought up on the thread here again as well.
> Imho, that's a problem both security-wise and complexity-wise.
>
> It's not great if each subsystem gets its own custom delegation
> mechanism. This imposes such a taxing complexity on both kernel- and
> userspace that it will quickly become a huge liability. So I would
> really strongly encourage you to explore another direction.

I share this concern as well, but I'm not quite sure I follow your
proposal here. IIUC, you're saying that instead of creating the token
using a BPF_TOKEN_CREATE command, the policy daemon should create a
bpffs instance and attach the token value directly to that, right? But
then what? Are you proposing that the calling process inside the
container open a filesystem reference (how? using fspick()?) and pass
that to the bpf syscall? Or is there some way to find the right
filesystem instance to extract this from at the time that the bpf()
syscall is issued inside the container?

-Toke
Daniel Borkmann July 5, 2023, 7:20 a.m. UTC | #4
On 7/5/23 1:28 AM, Toke Høiland-Jørgensen wrote:
> Christian Brauner <brauner@kernel.org> writes:
>> On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
>>> Add new kind of BPF kernel object, BPF token. BPF token is meant to to
>>> allow delegating privileged BPF functionality, like loading a BPF
>>> program or creating a BPF map, from privileged process to a *trusted*
>>> unprivileged process, all while have a good amount of control over which
>>> privileged operations could be performed using provided BPF token.
>>>
>>> This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
>>> allows to create a new BPF token object along with a set of allowed
>>> commands that such BPF token allows to unprivileged applications.
>>> Currently only BPF_TOKEN_CREATE command itself can be
>>> delegated, but other patches gradually add ability to delegate
>>> BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
>>>
>>> The above means that new BPF tokens can be created using existing BPF
>>> token, if original privileged creator allowed BPF_TOKEN_CREATE command.
>>> New derived BPF token cannot be more powerful than the original BPF
>>> token.
>>>
>>> Importantly, BPF token is automatically pinned at the specified location
>>> inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
>>> command, unlike BPF prog/map/btf/link. This provides more control over
>>> unintended sharing of BPF tokens through pinning it in another BPF FS
>>> instances.
>>>
>>> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
>>> ---
>>
>> The main issue I have with the token approach is that it is a completely
>> separate delegation vector on top of user namespaces. We mentioned this
>> duringthe conf and this was brought up on the thread here again as well.
>> Imho, that's a problem both security-wise and complexity-wise.
>>
>> It's not great if each subsystem gets its own custom delegation
>> mechanism. This imposes such a taxing complexity on both kernel- and
>> userspace that it will quickly become a huge liability. So I would
>> really strongly encourage you to explore another direction.
> 
> I share this concern as well, but I'm not quite sure I follow your
> proposal here. IIUC, you're saying that instead of creating the token
> using a BPF_TOKEN_CREATE command, the policy daemon should create a
> bpffs instance and attach the token value directly to that, right? But
> then what? Are you proposing that the calling process inside the
> container open a filesystem reference (how? using fspick()?) and pass
> that to the bpf syscall? Or is there some way to find the right
> filesystem instance to extract this from at the time that the bpf()
> syscall is issued inside the container?

Given there can be multiple bpffs instances, it would have to be similar
to what Andrii did, in that you need to pass the fd to bpf(2) for
prog/map creation in order to retrieve the opts->abilities from the super
block.
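
A hedged sketch of what that fd-to-superblock lookup could look like on the
kernel side; bpf_mount_opts and its abilities field refer to the draft diff
earlier in this thread, not to the posted patches:

static int bpf_get_delegated_abilities(int bpffs_fd, u64 *abilities)
{
	struct bpf_mount_opts *opts;
	struct super_block *sb;
	struct fd f;
	int err = 0;

	f = fdget(bpffs_fd);
	if (!f.file)
		return -EBADF;

	/* make sure the fd really refers to a bpffs instance */
	sb = f.file->f_path.mnt->mnt_sb;
	if (sb->s_magic != BPF_FS_MAGIC) {
		err = -EINVAL;
		goto out;
	}

	opts = sb->s_fs_info;
	*abilities = opts ? opts->abilities : 0;
out:
	fdput(f);
	return err;
}
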
Christian Brauner July 5, 2023, 8:45 a.m. UTC | #5
On Wed, Jul 05, 2023 at 09:20:28AM +0200, Daniel Borkmann wrote:
> On 7/5/23 1:28 AM, Toke Høiland-Jørgensen wrote:
> > Christian Brauner <brauner@kernel.org> writes:
> > > On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
> > > > Add new kind of BPF kernel object, BPF token. BPF token is meant to to
> > > > allow delegating privileged BPF functionality, like loading a BPF
> > > > program or creating a BPF map, from privileged process to a *trusted*
> > > > unprivileged process, all while have a good amount of control over which
> > > > privileged operations could be performed using provided BPF token.
> > > > 
> > > > This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
> > > > allows to create a new BPF token object along with a set of allowed
> > > > commands that such BPF token allows to unprivileged applications.
> > > > Currently only BPF_TOKEN_CREATE command itself can be
> > > > delegated, but other patches gradually add ability to delegate
> > > > BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
> > > > 
> > > > The above means that new BPF tokens can be created using existing BPF
> > > > token, if original privileged creator allowed BPF_TOKEN_CREATE command.
> > > > New derived BPF token cannot be more powerful than the original BPF
> > > > token.
> > > > 
> > > > Importantly, BPF token is automatically pinned at the specified location
> > > > inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
> > > > command, unlike BPF prog/map/btf/link. This provides more control over
> > > > unintended sharing of BPF tokens through pinning it in another BPF FS
> > > > instances.
> > > > 
> > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > > ---
> > > 
> > > The main issue I have with the token approach is that it is a completely
> > > separate delegation vector on top of user namespaces. We mentioned this
> > > duringthe conf and this was brought up on the thread here again as well.
> > > Imho, that's a problem both security-wise and complexity-wise.
> > > 
> > > It's not great if each subsystem gets its own custom delegation
> > > mechanism. This imposes such a taxing complexity on both kernel- and
> > > userspace that it will quickly become a huge liability. So I would
> > > really strongly encourage you to explore another direction.
> > 
> > I share this concern as well, but I'm not quite sure I follow your
> > proposal here. IIUC, you're saying that instead of creating the token
> > using a BPF_TOKEN_CREATE command, the policy daemon should create a
> > bpffs instance and attach the token value directly to that, right? But
> > then what? Are you proposing that the calling process inside the
> > container open a filesystem reference (how? using fspick()?) and pass
> > that to the bpf syscall? Or is there some way to find the right
> > filesystem instance to extract this from at the time that the bpf()
> > syscall is issued inside the container?
> 
> Given there can be multiple bpffs instances, it would have to be similar
> as to what Andrii did in that you need to pass the fd to the bpf(2) for
> prog/map creation in order to retrieve the opts->abilities from the super
> block.

I think it's pretty flexible what one can do here. Off the top of my
head there could be a dedicated file like /sys/fs/bpf/delegate which
only exists if delegation has been enabled. Though that might be just a
wasted inode. There could be a new ioctl() on the bpffs fd which has the same
effect.

Probably an ioctl() on the bpffs instance is easier to grok. You could
even take away rights granted by a bpffs instance from such an fd via
additional ioctl() on it.

For increased limitations, it's also possible to have an optional
write-time security check from within the bpf call itself, e.g.,

    sys_bpf(fd_delegate)
    {
                struct fd fd = fdget_raw(fd_delegate);

                /* That token is only valid within a single user namespace ... */
                if (fd.file->f_cred->user_ns != current_user_ns())
                        return -EINVAL;

                /* woah, no CAP_BPF? */
                if (!ns_capable(fd.file->f_cred->user_ns, CAP_BPF))
                        return -EPERM;

                /* now check abilities */

                return 0;
    }

I'm not claiming that this is the silver bullet but it fits within the
framework of this approach and explicitly ties it into bpffs right from
the get go since this is the delegation mechanism's core.

The systemd-bpfd approach that was once pushed could probably also work
and I'm not up to date on why this was rejected. The issue against
systemd is still open.
Toke Høiland-Jørgensen July 5, 2023, 12:34 p.m. UTC | #6
Christian Brauner <brauner@kernel.org> writes:

> On Wed, Jul 05, 2023 at 09:20:28AM +0200, Daniel Borkmann wrote:
>> On 7/5/23 1:28 AM, Toke Høiland-Jørgensen wrote:
>> > Christian Brauner <brauner@kernel.org> writes:
>> > > On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
>> > > > Add new kind of BPF kernel object, BPF token. BPF token is meant to to
>> > > > allow delegating privileged BPF functionality, like loading a BPF
>> > > > program or creating a BPF map, from privileged process to a *trusted*
>> > > > unprivileged process, all while have a good amount of control over which
>> > > > privileged operations could be performed using provided BPF token.
>> > > > 
>> > > > This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
>> > > > allows to create a new BPF token object along with a set of allowed
>> > > > commands that such BPF token allows to unprivileged applications.
>> > > > Currently only BPF_TOKEN_CREATE command itself can be
>> > > > delegated, but other patches gradually add ability to delegate
>> > > > BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
>> > > > 
>> > > > The above means that new BPF tokens can be created using existing BPF
>> > > > token, if original privileged creator allowed BPF_TOKEN_CREATE command.
>> > > > New derived BPF token cannot be more powerful than the original BPF
>> > > > token.
>> > > > 
>> > > > Importantly, BPF token is automatically pinned at the specified location
>> > > > inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
>> > > > command, unlike BPF prog/map/btf/link. This provides more control over
>> > > > unintended sharing of BPF tokens through pinning it in another BPF FS
>> > > > instances.
>> > > > 
>> > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
>> > > > ---
>> > > 
>> > > The main issue I have with the token approach is that it is a completely
>> > > separate delegation vector on top of user namespaces. We mentioned this
>> > > duringthe conf and this was brought up on the thread here again as well.
>> > > Imho, that's a problem both security-wise and complexity-wise.
>> > > 
>> > > It's not great if each subsystem gets its own custom delegation
>> > > mechanism. This imposes such a taxing complexity on both kernel- and
>> > > userspace that it will quickly become a huge liability. So I would
>> > > really strongly encourage you to explore another direction.
>> > 
>> > I share this concern as well, but I'm not quite sure I follow your
>> > proposal here. IIUC, you're saying that instead of creating the token
>> > using a BPF_TOKEN_CREATE command, the policy daemon should create a
>> > bpffs instance and attach the token value directly to that, right? But
>> > then what? Are you proposing that the calling process inside the
>> > container open a filesystem reference (how? using fspick()?) and pass
>> > that to the bpf syscall? Or is there some way to find the right
>> > filesystem instance to extract this from at the time that the bpf()
>> > syscall is issued inside the container?
>> 
>> Given there can be multiple bpffs instances, it would have to be similar
>> as to what Andrii did in that you need to pass the fd to the bpf(2) for
>> prog/map creation in order to retrieve the opts->abilities from the super
>> block.
>
> I think it's pretty flexible what one can do here. Off the top of my
> head there could be a dedicated file like /sys/fs/bpf/delegate which
> only exists if delegation has been enabled. Thought that might be just a
> wasted inode. There could be a new ioctl() on bpffsd which has the same
> effect.
>
> Probably an ioctl() on the bpffs instance is easier to grok. You could
> even take away rights granted by a bpffs instance from such an fd via
> additional ioctl() on it.

Right, gotcha; I was missing whether there was an existing mechanism to
obtain this; an ioctl makes sense. I can see the utility in attaching
this to the file system instance instead of as a separate object that's
pinned (but see my post in the other subthread about using the "ask
userspace" model instead).

-Toke
Paul Moore July 5, 2023, 2:16 p.m. UTC | #7
On Tue, Jul 4, 2023 at 8:44 AM Christian Brauner <brauner@kernel.org> wrote:
> On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
> > Add new kind of BPF kernel object, BPF token. BPF token is meant to to
> > allow delegating privileged BPF functionality, like loading a BPF
> > program or creating a BPF map, from privileged process to a *trusted*
> > unprivileged process, all while have a good amount of control over which
> > privileged operations could be performed using provided BPF token.
> >
> > This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
> > allows to create a new BPF token object along with a set of allowed
> > commands that such BPF token allows to unprivileged applications.
> > Currently only BPF_TOKEN_CREATE command itself can be
> > delegated, but other patches gradually add ability to delegate
> > BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
> >
> > The above means that new BPF tokens can be created using existing BPF
> > token, if original privileged creator allowed BPF_TOKEN_CREATE command.
> > New derived BPF token cannot be more powerful than the original BPF
> > token.
> >
> > Importantly, BPF token is automatically pinned at the specified location
> > inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
> > command, unlike BPF prog/map/btf/link. This provides more control over
> > unintended sharing of BPF tokens through pinning it in another BPF FS
> > instances.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
>
> The main issue I have with the token approach is that it is a completely
> separate delegation vector on top of user namespaces. We mentioned this
> duringthe conf and this was brought up on the thread here again as well.
> Imho, that's a problem both security-wise and complexity-wise.
>
> It's not great if each subsystem gets its own custom delegation
> mechanism. This imposes such a taxing complexity on both kernel- and
> userspace that it will quickly become a huge liability. So I would
> really strongly encourage you to explore another direction.
>
> I do think the spirit of your proposal is workable and that it can
> mostly be kept in tact.
>
> As mentioned before, bpffs has all the means to be taught delegation:
>
>         // In container's user namespace
>         fd_fs = fsopen("bpffs");
>
>         // Delegating task in host userns (systemd-bpfd whatever you want)
>         ret = fsconfig(fd_fs, FSCONFIG_SET_FLAG, "delegate", ...);
>
>         // In container's user namespace
>         fd_mnt = fsmount(fd_fs, 0);
>
>         ret = move_mount(fd_fs, "", -EBADF, "/my/fav/location", MOVE_MOUNT_F_EMPTY_PATH)
>
> Roughly, this would mean:
>
> (i) raise FS_USERNS_MOUNT on bpffs but guard it behind the "delegate"
>     mount option. IOW, it's only possibly to mount bpffs as an
>     unprivileged user if a delegating process like systemd-bpfd with
>     system-level privileges has marked it as delegatable.
> (ii) add fine-grained delegation options that you want this
>      bpffs instance to allow via new mount options. Idk,
>
>      // allow usage of foo
>      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "foo");
>
>      // also allow usage of bar
>      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "bar");
>
>      // reset allowed options
>      fsconfig(fd_fs, FSCONFIG_SET_STRING, "");
>
>      // allow usage of schmoo
>      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "schmoo");
>
> This all seems more intuitive and integrates with user and mount
> namespaces of the container. This can also work for restricting
> non-userns bpf instances fwiw. You can also share instances via
> bind-mount and so on. The userns of the bpffs instance can also be used
> for permission checking provided a given functionality has been
> delegated by e.g., systemd-bpfd or whatever.

I have no arguments against any of the above, and would prefer to see
something like this over a token-based mechanism.  However we do want
to make sure we have the proper LSM control points for either approach
so that admins who rely on LSM-based security policies can manage
delegation via their policies.

Using the fsconfig() approach described by Christian above, I believe
we should have the necessary hooks already in
security_fs_context_parse_param() and security_sb_mnt_opts(), but I'm
basing that on a quick look this morning; some additional checking
would need to be done.
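
For illustration, a sketch of what an LSM's fs_context_parse_param hook
could do to gate the (hypothetical) bpffs "delegate"/"abilities" options;
the capability check stands in for whatever policy a real LSM would apply:

static int example_fs_context_parse_param(struct fs_context *fc,
					  struct fs_parameter *param)
{
	/* only interested in bpffs delegation options */
	if (strcmp(fc->fs_type->name, "bpf") ||
	    (strcmp(param->key, "delegate") &&
	     strcmp(param->key, "abilities")))
		return -ENOPARAM;	/* not ours, let others look at it */

	/* placeholder policy decision */
	if (!ns_capable(current_user_ns(), CAP_MAC_ADMIN))
		return -EPERM;		/* fail the fsconfig() call */

	/* allowed, but still let the filesystem parse the parameter */
	return -ENOPARAM;
}
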
Christian Brauner July 5, 2023, 2:42 p.m. UTC | #8
On Wed, Jul 05, 2023 at 10:16:13AM -0400, Paul Moore wrote:
> On Tue, Jul 4, 2023 at 8:44 AM Christian Brauner <brauner@kernel.org> wrote:
> > On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
> > > Add new kind of BPF kernel object, BPF token. BPF token is meant to to
> > > allow delegating privileged BPF functionality, like loading a BPF
> > > program or creating a BPF map, from privileged process to a *trusted*
> > > unprivileged process, all while have a good amount of control over which
> > > privileged operations could be performed using provided BPF token.
> > >
> > > This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
> > > allows to create a new BPF token object along with a set of allowed
> > > commands that such BPF token allows to unprivileged applications.
> > > Currently only BPF_TOKEN_CREATE command itself can be
> > > delegated, but other patches gradually add ability to delegate
> > > BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
> > >
> > > The above means that new BPF tokens can be created using existing BPF
> > > token, if original privileged creator allowed BPF_TOKEN_CREATE command.
> > > New derived BPF token cannot be more powerful than the original BPF
> > > token.
> > >
> > > Importantly, BPF token is automatically pinned at the specified location
> > > inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
> > > command, unlike BPF prog/map/btf/link. This provides more control over
> > > unintended sharing of BPF tokens through pinning it in another BPF FS
> > > instances.
> > >
> > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > ---
> >
> > The main issue I have with the token approach is that it is a completely
> > separate delegation vector on top of user namespaces. We mentioned this
> > duringthe conf and this was brought up on the thread here again as well.
> > Imho, that's a problem both security-wise and complexity-wise.
> >
> > It's not great if each subsystem gets its own custom delegation
> > mechanism. This imposes such a taxing complexity on both kernel- and
> > userspace that it will quickly become a huge liability. So I would
> > really strongly encourage you to explore another direction.
> >
> > I do think the spirit of your proposal is workable and that it can
> > mostly be kept in tact.
> >
> > As mentioned before, bpffs has all the means to be taught delegation:
> >
> >         // In container's user namespace
> >         fd_fs = fsopen("bpffs");
> >
> >         // Delegating task in host userns (systemd-bpfd whatever you want)
> >         ret = fsconfig(fd_fs, FSCONFIG_SET_FLAG, "delegate", ...);
> >
> >         // In container's user namespace
> >         fd_mnt = fsmount(fd_fs, 0);
> >
> >         ret = move_mount(fd_fs, "", -EBADF, "/my/fav/location", MOVE_MOUNT_F_EMPTY_PATH)
> >
> > Roughly, this would mean:
> >
> > (i) raise FS_USERNS_MOUNT on bpffs but guard it behind the "delegate"
> >     mount option. IOW, it's only possibly to mount bpffs as an
> >     unprivileged user if a delegating process like systemd-bpfd with
> >     system-level privileges has marked it as delegatable.
> > (ii) add fine-grained delegation options that you want this
> >      bpffs instance to allow via new mount options. Idk,
> >
> >      // allow usage of foo
> >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "foo");
> >
> >      // also allow usage of bar
> >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "bar");
> >
> >      // reset allowed options
> >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "");
> >
> >      // allow usage of schmoo
> >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "schmoo");
> >
> > This all seems more intuitive and integrates with user and mount
> > namespaces of the container. This can also work for restricting
> > non-userns bpf instances fwiw. You can also share instances via
> > bind-mount and so on. The userns of the bpffs instance can also be used
> > for permission checking provided a given functionality has been
> > delegated by e.g., systemd-bpfd or whatever.
> 
> I have no arguments against any of the above, and would prefer to see
> something like this over a token-based mechanism.  However we do want
> to make sure we have the proper LSM control points for either approach
> so that admins who rely on LSM-based security policies can manage
> delegation via their policies.
> 
> Using the fsconfig() approach described by Christian above, I believe
> we should have the necessary hooks already in
> security_fs_context_parse_param() and security_sb_mnt_opts() but I'm
> basing that on a quick look this morning, some additional checking
> would need to be done.

I think what I outlined is even unnecessarily complicated. You don't
need that pointless "delegate" mount option at all actually. Permission
to delegate shouldn't be checked when the mount option is set. The
permissions should be checked when the superblock is created. That's the
right point in time. So something like:

diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index 4174f76133df..a2eb382f5457 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -746,6 +746,13 @@ static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
        struct inode *inode;
        int ret;

+       /*
+        * If you want to delegate this instance then you need to be
+        * privileged and know what you're doing. This isn't trust.
+        */
+       if ((fc->user_ns != &init_user_ns) && !capable(CAP_SYS_ADMIN))
+               return -EPERM;
+
        ret = simple_fill_super(sb, BPF_FS_MAGIC, bpf_rfiles);
        if (ret)
                return ret;
@@ -800,6 +807,7 @@ static struct file_system_type bpf_fs_type = {
        .init_fs_context = bpf_init_fs_context,
        .parameters     = bpf_fs_parameters,
        .kill_sb        = kill_litter_super,
+       .fs_flags       = FS_USERNS_MOUNT,
 };

 static int __init bpf_init(void)

In fact this is conceptually generalizable but I'd need to think about
that.
Paul Moore July 5, 2023, 4 p.m. UTC | #9
On Wed, Jul 5, 2023 at 10:42 AM Christian Brauner <brauner@kernel.org> wrote:
> On Wed, Jul 05, 2023 at 10:16:13AM -0400, Paul Moore wrote:
> > On Tue, Jul 4, 2023 at 8:44 AM Christian Brauner <brauner@kernel.org> wrote:
> > > On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
> > > > Add new kind of BPF kernel object, BPF token. BPF token is meant to to
> > > > allow delegating privileged BPF functionality, like loading a BPF
> > > > program or creating a BPF map, from privileged process to a *trusted*
> > > > unprivileged process, all while have a good amount of control over which
> > > > privileged operations could be performed using provided BPF token.
> > > >
> > > > This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
> > > > allows to create a new BPF token object along with a set of allowed
> > > > commands that such BPF token allows to unprivileged applications.
> > > > Currently only BPF_TOKEN_CREATE command itself can be
> > > > delegated, but other patches gradually add ability to delegate
> > > > BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
> > > >
> > > > The above means that new BPF tokens can be created using existing BPF
> > > > token, if original privileged creator allowed BPF_TOKEN_CREATE command.
> > > > New derived BPF token cannot be more powerful than the original BPF
> > > > token.
> > > >
> > > > Importantly, BPF token is automatically pinned at the specified location
> > > > inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
> > > > command, unlike BPF prog/map/btf/link. This provides more control over
> > > > unintended sharing of BPF tokens through pinning it in another BPF FS
> > > > instances.
> > > >
> > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > > ---
> > >
> > > The main issue I have with the token approach is that it is a completely
> > > separate delegation vector on top of user namespaces. We mentioned this
> > > duringthe conf and this was brought up on the thread here again as well.
> > > Imho, that's a problem both security-wise and complexity-wise.
> > >
> > > It's not great if each subsystem gets its own custom delegation
> > > mechanism. This imposes such a taxing complexity on both kernel- and
> > > userspace that it will quickly become a huge liability. So I would
> > > really strongly encourage you to explore another direction.
> > >
> > > I do think the spirit of your proposal is workable and that it can
> > > mostly be kept in tact.
> > >
> > > As mentioned before, bpffs has all the means to be taught delegation:
> > >
> > >         // In container's user namespace
> > >         fd_fs = fsopen("bpffs");
> > >
> > >         // Delegating task in host userns (systemd-bpfd whatever you want)
> > >         ret = fsconfig(fd_fs, FSCONFIG_SET_FLAG, "delegate", ...);
> > >
> > >         // In container's user namespace
> > >         fd_mnt = fsmount(fd_fs, 0);
> > >
> > >         ret = move_mount(fd_fs, "", -EBADF, "/my/fav/location", MOVE_MOUNT_F_EMPTY_PATH)
> > >
> > > Roughly, this would mean:
> > >
> > > (i) raise FS_USERNS_MOUNT on bpffs but guard it behind the "delegate"
> > >     mount option. IOW, it's only possibly to mount bpffs as an
> > >     unprivileged user if a delegating process like systemd-bpfd with
> > >     system-level privileges has marked it as delegatable.
> > > (ii) add fine-grained delegation options that you want this
> > >      bpffs instance to allow via new mount options. Idk,
> > >
> > >      // allow usage of foo
> > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "foo");
> > >
> > >      // also allow usage of bar
> > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "bar");
> > >
> > >      // reset allowed options
> > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "");
> > >
> > >      // allow usage of schmoo
> > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "schmoo");
> > >
> > > This all seems more intuitive and integrates with user and mount
> > > namespaces of the container. This can also work for restricting
> > > non-userns bpf instances fwiw. You can also share instances via
> > > bind-mount and so on. The userns of the bpffs instance can also be used
> > > for permission checking provided a given functionality has been
> > > delegated by e.g., systemd-bpfd or whatever.
> >
> > I have no arguments against any of the above, and would prefer to see
> > something like this over a token-based mechanism.  However we do want
> > to make sure we have the proper LSM control points for either approach
> > so that admins who rely on LSM-based security policies can manage
> > delegation via their policies.
> >
> > Using the fsconfig() approach described by Christian above, I believe
> > we should have the necessary hooks already in
> > security_fs_context_parse_param() and security_sb_mnt_opts() but I'm
> > basing that on a quick look this morning, some additional checking
> > would need to be done.
>
> I think what I outlined is even unnecessarily complicated. You don't
> need that pointless "delegate" mount option at all actually. Permission
> to delegate shouldn't be checked when the mount option is set. The
> permissions should be checked when the superblock is created.

From an LSM perspective I think we would want to have policy
enforcement points both when task A enables delegation and when task B
makes use of the delegation.  We would likely also want to be able to
add some additional delegation state to the superblock if delegation
was enabled in the first enforcement point.

I'm not too bothered by how that ends up looking from a userspace
perspective, but it seems like requiring an explicit "this fs can be
delegated" step would be a positive from a security perspective.  In
other words, just because a task *could* delegate a filesystem does
not mean it *wants* to delegate that filesystem.
Andrii Nakryiko July 5, 2023, 9:38 p.m. UTC | #10
On Wed, Jul 5, 2023 at 7:42 AM Christian Brauner <brauner@kernel.org> wrote:
>
> On Wed, Jul 05, 2023 at 10:16:13AM -0400, Paul Moore wrote:
> > On Tue, Jul 4, 2023 at 8:44 AM Christian Brauner <brauner@kernel.org> wrote:
> > > On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
> > > > Add new kind of BPF kernel object, BPF token. BPF token is meant to to
> > > > allow delegating privileged BPF functionality, like loading a BPF
> > > > program or creating a BPF map, from privileged process to a *trusted*
> > > > unprivileged process, all while have a good amount of control over which
> > > > privileged operations could be performed using provided BPF token.
> > > >
> > > > This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
> > > > allows to create a new BPF token object along with a set of allowed
> > > > commands that such BPF token allows to unprivileged applications.
> > > > Currently only BPF_TOKEN_CREATE command itself can be
> > > > delegated, but other patches gradually add ability to delegate
> > > > BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
> > > >
> > > > The above means that new BPF tokens can be created using existing BPF
> > > > token, if original privileged creator allowed BPF_TOKEN_CREATE command.
> > > > New derived BPF token cannot be more powerful than the original BPF
> > > > token.
> > > >
> > > > Importantly, BPF token is automatically pinned at the specified location
> > > > inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
> > > > command, unlike BPF prog/map/btf/link. This provides more control over
> > > > unintended sharing of BPF tokens through pinning it in another BPF FS
> > > > instances.
> > > >
> > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > > ---
> > >
> > > The main issue I have with the token approach is that it is a completely
> > > separate delegation vector on top of user namespaces. We mentioned this
> > > duringthe conf and this was brought up on the thread here again as well.
> > > Imho, that's a problem both security-wise and complexity-wise.
> > >
> > > It's not great if each subsystem gets its own custom delegation
> > > mechanism. This imposes such a taxing complexity on both kernel- and
> > > userspace that it will quickly become a huge liability. So I would
> > > really strongly encourage you to explore another direction.

Alright, thanks a lot for elaborating. I did want to keep everything
contained to bpf() for various reasons, but it seems like I won't be
able to get away with this. :)

> > >
> > > I do think the spirit of your proposal is workable and that it can
> > > mostly be kept in tact.

It's good to know that at least conceptually you support the idea of
BPF delegation. I have a few more specific questions below and I'd
appreciate your answers, as I have less familiarity with how exactly
container managers do things at the container bootstrapping stage.

But first, let's try to get some tentative agreement on design before
I go and implement the BPF-token-as-FS idea. I have basically just two
gripes with exact details of what you are proposing, so let me explain
which and why, and see if we can find some common ground.

First, the idea of coupling and bundling this "delegation" option with
BPF FS doesn't feel right. BPF FS is just a container of BPF objects,
so adding to it a new property that allows the use of privileged BPF
functionality seems a bit off.

Why not just create a new separate FS, let's code-name it "BPF Token
FS" for now (naming suggestions are welcome). Such BPF Token FS would
be dedicated to specifying everything about what's allowable through
BPF, just like my BPF token implementation. It can then be
mounted/bind-mounted inside BPF FS (or really, anywhere, it's just a
FS, right?). User application would open it (I'm guessing with
open_tree(), right?) and pass it as token_fd to bpf() syscall.

Having it as a separate single-purpose FS seems cleaner, because we
have use cases where we'd have one BPF FS instance created for a
container by our container manager, and then exposing a few separate
tokens with different sets of allowed functionality. E.g., one for
main intended workload, another for some BPF-based observability
tools, maybe yet another for more heavy-weight tools like bpftrace for
extra debugging. In the debugging case our container infrastructure
will be "evacuating" any other workloads on the same host to avoid
unnecessary consequences. The point is to not disturb
workload-under-human-debugging as much as possible, so we'd like to
keep userns intact, which is why mounting extra (more permissive) BPF
token inside already running containers is an important consideration.

With such goals, it seems nicer to have a single BPF FS, and few BPF
token FSs mounted inside it. Yes, we could bundle token functionality
with BPF FS, but separating those two seems cleaner to me. WDYT?

Second, mount options usage. I'm hearing stories from our production
folks about how some new mount options (on some other FS, not BPF FS) were
breaking tools unintentionally during kernel/tooling
upgrades/downgrades, so it makes me a bit hesitant to have these
complicated sets of mount options to specify the parameters of
BPF-token-as-FS. I've been thinking a bit, and I'm starting to lean
towards the idea of allowing all these allowed map/prog/attach types
to be set up (and modified as well) through special auto-created files
within the BPF token FS. Something like below:

# pwd
/sys/fs/bpf/workload-token
# ls
allowed_cmds allowed_map_types allowed_prog_types allowed_attach_types
# echo "BPF_PROG_LOAD" > allowed_cmds
# echo "BPF_PROG_TYPE_KPROBE" >> allowed_prog_types
...
# cat allowed_prog_types
BPF_PROG_TYPE_KPROBE,BPF_PROG_TYPE_TRACEPOINT


The above is fake (I haven't implemented anything yet), but hopefully
works as a demonstration. We'll also need to make sure that inside a
non-init userns these files are read-only or only allow further
restricting the subset of allowed functionality, never extending it.

Such an approach will actually make it simpler to test and experiment
with this delegation locally, will make it trivial to observe what's
allowed from simple shell scripts, etc, etc. With fsmount() and O_PATH
it will be possible to set everything up from privileged processes
before ever exposing a BPF Token FS instance through a file system, if
there are any concerns about racing with user space.
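
To make that last point concrete, a rough userspace sketch of such a
detached setup flow; the "bpftoken" filesystem name and the allowed_cmds
file are the hypothetical pieces sketched above, and error handling is
elided:

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/mount.h>
#include <sys/syscall.h>

#ifndef FSCONFIG_CMD_CREATE
#define FSCONFIG_CMD_CREATE 6
#endif
#ifndef MOVE_MOUNT_F_EMPTY_PATH
#define MOVE_MOUNT_F_EMPTY_PATH 0x00000004
#endif

/* Privileged side: create and configure a token FS instance while it is
 * still detached from any mount namespace, then expose it in the container.
 */
static int setup_token_fs(int container_root_dirfd)
{
	int fd_fs, fd_mnt, fd_file;

	fd_fs = syscall(SYS_fsopen, "bpftoken", 0);	/* hypothetical FS */
	syscall(SYS_fsconfig, fd_fs, FSCONFIG_CMD_CREATE, NULL, NULL, 0);

	/* detached mount, not visible anywhere yet */
	fd_mnt = syscall(SYS_fsmount, fd_fs, 0, 0);

	/* configure allowed functionality via the special files */
	fd_file = openat(fd_mnt, "allowed_cmds", O_WRONLY);
	write(fd_file, "BPF_PROG_LOAD", strlen("BPF_PROG_LOAD"));
	close(fd_file);

	/* only now attach it inside the container's BPF FS */
	return syscall(SYS_move_mount, fd_mnt, "", container_root_dirfd,
		       "sys/fs/bpf/workload-token", MOVE_MOUNT_F_EMPTY_PATH);
}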

That's the high-level approach I'm thinking of right now. Would that
work? How critical is it to reuse BPF FS itself, and how important is it
to you to rely on mount options vs special files as described above?
Hopefully not critical, so I can start working on it and we'll get
what you want with using an FS as the vehicle for delegation, while
allowing some of the intended use cases that we have in mind in a bit
cleaner fashion.

> > >
> > > As mentioned before, bpffs has all the means to be taught delegation:
> > >
> > >         // In container's user namespace
> > >         fd_fs = fsopen("bpffs");
> > >
> > >         // Delegating task in host userns (systemd-bpfd whatever you want)
> > >         ret = fsconfig(fd_fs, FSCONFIG_SET_FLAG, "delegate", ...);
> > >
> > >         // In container's user namespace
> > >         fd_mnt = fsmount(fd_fs, 0);
> > >
> > >         ret = move_mount(fd_fs, "", -EBADF, "/my/fav/location", MOVE_MOUNT_F_EMPTY_PATH)
> > >
> > > Roughly, this would mean:
> > >
> > > (i) raise FS_USERNS_MOUNT on bpffs but guard it behind the "delegate"
> > >     mount option. IOW, it's only possibly to mount bpffs as an
> > >     unprivileged user if a delegating process like systemd-bpfd with
> > >     system-level privileges has marked it as delegatable.

Regarding the FS_USERNS_MOUNT flag and fsopen() happening from inside
the user namespace: am I missing something subtle and important here?
Why does it have to happen inside the container's user namespace?
Can't the container manager both fsopen() and fsconfig() everything in
the host userns, and only then fsmount()+move_mount() inside the container's
userns? Just trying to understand if there is some important
association with the userns happening at these early steps.

Also, in your example above, move_mount() should take fd_mnt, not fd_fs, right?

> > > (ii) add fine-grained delegation options that you want this
> > >      bpffs instance to allow via new mount options. Idk,
> > >
> > >      // allow usage of foo
> > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "foo");
> > >
> > >      // also allow usage of bar
> > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "bar");
> > >
> > >      // reset allowed options
> > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "");
> > >
> > >      // allow usage of schmoo
> > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "schmoo");
> > >
> > > This all seems more intuitive and integrates with user and mount
> > > namespaces of the container. This can also work for restricting
> > > non-userns bpf instances fwiw. You can also share instances via
> > > bind-mount and so on. The userns of the bpffs instance can also be used
> > > for permission checking provided a given functionality has been
> > > delegated by e.g., systemd-bpfd or whatever.
> >
> > I have no arguments against any of the above, and would prefer to see
> > something like this over a token-based mechanism.  However we do want
> > to make sure we have the proper LSM control points for either approach
> > so that admins who rely on LSM-based security policies can manage
> > delegation via their policies.
> >
> > Using the fsconfig() approach described by Christian above, I believe
> > we should have the necessary hooks already in
> > security_fs_context_parse_param() and security_sb_mnt_opts() but I'm
> > basing that on a quick look this morning, some additional checking
> > would need to be done.
>
> I think what I outlined is even unnecessarily complicated. You don't
> need that pointless "delegate" mount option at all actually. Permission
> to delegate shouldn't be checked when the mount option is set. The
> permissions should be checked when the superblock is created. That's the
> right point in time. So sm like:
>

I think this gets even more straightforward with BPF Token FS being a
separate one, right? Given BPF Token FS is all about delegation, it
has to be a privileged operation to even create it.

> diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
> index 4174f76133df..a2eb382f5457 100644
> --- a/kernel/bpf/inode.c
> +++ b/kernel/bpf/inode.c
> @@ -746,6 +746,13 @@ static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
>         struct inode *inode;
>         int ret;
>
> +       /*
> +        * If you want to delegate this instance then you need to be
> +        * privileged and know what you're doing. This isn't trust.
> +        */
> +       if ((fc->user_ns != &init_user_ns) && !capable(CAP_SYS_ADMIN))
> +               return -EPERM;
> +
>         ret = simple_fill_super(sb, BPF_FS_MAGIC, bpf_rfiles);
>         if (ret)
>                 return ret;
> @@ -800,6 +807,7 @@ static struct file_system_type bpf_fs_type = {
>         .init_fs_context = bpf_init_fs_context,
>         .parameters     = bpf_fs_parameters,
>         .kill_sb        = kill_litter_super,
> +       .fs_flags       = FS_USERNS_MOUNT,

Just an aside thought. It doesn't seem like there is any reason why
BPF FS right now is not created with FS_USERNS_MOUNT, so (separately
from all this discussion) I suspect we can just make it
FS_USERNS_MOUNT right now (unless we combine it with BPF-token-FS,
then yeah, we can't do that unconditionally anymore). Given BPF FS is
just a container of pinned BPF objects, just mounting BPF FS doesn't
seem to be dangerous in any way. But that's just an aside thought
here.

>  };
>
>  static int __init bpf_init(void)
>
> In fact this is conceptually generalizable but I'd need to think about
> that.
Toke Høiland-Jørgensen July 6, 2023, 11:32 a.m. UTC | #11
Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:

> Having it as a separate single-purpose FS seems cleaner, because we
> have use cases where we'd have one BPF FS instance created for a
> container by our container manager, and then exposing a few separate
> tokens with different sets of allowed functionality. E.g., one for
> main intended workload, another for some BPF-based observability
> tools, maybe yet another for more heavy-weight tools like bpftrace for
> extra debugging. In the debugging case our container infrastructure
> will be "evacuating" any other workloads on the same host to avoid
> unnecessary consequences. The point is to not disturb
> workload-under-human-debugging as much as possible, so we'd like to
> keep userns intact, which is why mounting extra (more permissive) BPF
> token inside already running containers is an important consideration.

This example (as well as Yafang's in the sibling subthread) makes it
even more apparent to me that it would be better with a model where the
userspace policy daemon can just make decisions on each call directly,
instead of mucking about with different tokens with different embedded
permissions. Why not go that route (see my other reply for details on
what I mean)?

-Toke
Andrii Nakryiko July 6, 2023, 8:37 p.m. UTC | #12
On Thu, Jul 6, 2023 at 4:32 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>
> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
>
> > Having it as a separate single-purpose FS seems cleaner, because we
> > have use cases where we'd have one BPF FS instance created for a
> > container by our container manager, and then exposing a few separate
> > tokens with different sets of allowed functionality. E.g., one for
> > main intended workload, another for some BPF-based observability
> > tools, maybe yet another for more heavy-weight tools like bpftrace for
> > extra debugging. In the debugging case our container infrastructure
> > will be "evacuating" any other workloads on the same host to avoid
> > unnecessary consequences. The point is to not disturb
> > workload-under-human-debugging as much as possible, so we'd like to
> > keep userns intact, which is why mounting extra (more permissive) BPF
> > token inside already running containers is an important consideration.
>
> This example (as well as Yafang's in the sibling subthread) makes it
> even more apparent to me that it would be better with a model where the
> userspace policy daemon can just make decisions on each call directly,
> instead of mucking about with different tokens with different embedded
> permissions. Why not go that route (see my other reply for details on
> what I mean)?

I don't know how you arrived at this conclusion, but we've debated BPF
proxying and a separate service at length; there is no point in going
on another round here. Per-call decisions can be achieved nicely by
employing BPF LSM in a restrictive manner on top of a BPF token (or no
token, if you are ok without user namespaces).
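
To make that concrete, a minimal sketch of such a restrictive policy on
the standard "lsm/bpf" hook could look like the following (the actual
rule, allowing only kprobe program loads, is purely illustrative and
not part of this series):

/* Minimal sketch of a restrictive BPF LSM policy layered on top of
 * (or independently of) a BPF token. Rejecting any BPF_PROG_LOAD that
 * isn't a kprobe program is an arbitrary example policy.
 */
#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("lsm/bpf")
int BPF_PROG(restrict_bpf, int cmd, union bpf_attr *attr, unsigned int size)
{
	if (cmd != BPF_PROG_LOAD)
		return 0;	/* leave other commands to other checks */
	if (attr->prog_type == BPF_PROG_TYPE_KPROBE)
		return 0;	/* allowed */
	return -EPERM;		/* reject everything else */
}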

>
> -Toke
>
Toke Høiland-Jørgensen July 7, 2023, 1:04 p.m. UTC | #13
Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:

> On Thu, Jul 6, 2023 at 4:32 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>>
>> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
>>
>> > Having it as a separate single-purpose FS seems cleaner, because we
>> > have use cases where we'd have one BPF FS instance created for a
>> > container by our container manager, and then exposing a few separate
>> > tokens with different sets of allowed functionality. E.g., one for
>> > main intended workload, another for some BPF-based observability
>> > tools, maybe yet another for more heavy-weight tools like bpftrace for
>> > extra debugging. In the debugging case our container infrastructure
>> > will be "evacuating" any other workloads on the same host to avoid
>> > unnecessary consequences. The point is to not disturb
>> > workload-under-human-debugging as much as possible, so we'd like to
>> > keep userns intact, which is why mounting extra (more permissive) BPF
>> > token inside already running containers is an important consideration.
>>
>> This example (as well as Yafang's in the sibling subthread) makes it
>> even more apparent to me that it would be better with a model where the
>> userspace policy daemon can just make decisions on each call directly,
>> instead of mucking about with different tokens with different embedded
>> permissions. Why not go that route (see my other reply for details on
>> what I mean)?
>
> I don't know how you arrived at this conclusion,

Because it makes it apparent that you're basically building a policy
engine in the kernel with this...

> but we've debated BPF proxying and separate service at length, there
> is no point in going on another round here.

You had some objections to explicit proxying via RPC calls; I suggested
a way of avoiding that by keeping the kernel in the loop, which you have
not responded to. If you're just going to go ahead with your solution
over any objections you could just have stated so from the beginning and
saved us all a lot of time :/

Can we at least put this thing behind a kconfig option, so we can turn
it off in distro kernels?

> Per-call decisions can be achieved nicely by employing BPF LSM in a
> restrictive manner on top of BPF token (or no token, if you are ok
> without user namespaces).

Building a deficient security delegation mechanism and saying "you can
patch things up using an LSM" is a terrible design, though. Also, this
still means you have to implement all the policy checks in the kernel
(just in BPF) which is awkward at best.

-Toke
Andrii Nakryiko July 7, 2023, 5:58 p.m. UTC | #14
On Fri, Jul 7, 2023 at 6:04 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>
> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
>
> > On Thu, Jul 6, 2023 at 4:32 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
> >>
> >> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
> >>
> >> > Having it as a separate single-purpose FS seems cleaner, because we
> >> > have use cases where we'd have one BPF FS instance created for a
> >> > container by our container manager, and then exposing a few separate
> >> > tokens with different sets of allowed functionality. E.g., one for
> >> > main intended workload, another for some BPF-based observability
> >> > tools, maybe yet another for more heavy-weight tools like bpftrace for
> >> > extra debugging. In the debugging case our container infrastructure
> >> > will be "evacuating" any other workloads on the same host to avoid
> >> > unnecessary consequences. The point is to not disturb
> >> > workload-under-human-debugging as much as possible, so we'd like to
> >> > keep userns intact, which is why mounting extra (more permissive) BPF
> >> > token inside already running containers is an important consideration.
> >>
> >> This example (as well as Yafang's in the sibling subthread) makes it
> >> even more apparent to me that it would be better with a model where the
> >> userspace policy daemon can just make decisions on each call directly,
> >> instead of mucking about with different tokens with different embedded
> >> permissions. Why not go that route (see my other reply for details on
> >> what I mean)?
> >
> > I don't know how you arrived at this conclusion,
>
> Because it makes it apparent that you're basically building a policy
> engine in the kernel with this...

I disagree that this is a policy engine in the kernel. It's a building
block for delegation and enforcement. The policy itself is implemented
in user space by a privileged process that decides when to issue BPF
tokens and with which configuration, and, optionally and if necessary,
by further restricting things with BPF LSM in a more fine-grained and
dynamic way.
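
To illustrate the user-space issuing side (the token_create attribute
names below are placeholders, not the actual UAPI of this series), the
flow for a privileged daemon is roughly:

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Hypothetical sketch: create a token that only allows a couple of
 * commands and let the kernel pin it inside the container's BPF FS
 * instance. The attr.token_create fields are made up for illustration.
 */
static int issue_token(int bpffs_dir_fd)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.token_create.pin_fd = bpffs_dir_fd;	/* hypothetical field */
	attr.token_create.allowed_cmds =		/* hypothetical field */
		(1ULL << BPF_TOKEN_CREATE) | (1ULL << BPF_PROG_LOAD);

	return syscall(__NR_bpf, BPF_TOKEN_CREATE, &attr, sizeof(attr));
}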

>
> > but we've debated BPF proxying and separate service at length, there
> > is no point in going on another round here.
>
> You had some objections to explicit proxying via RPC calls; I suggested
> a way of avoiding that by keeping the kernel in the loop, which you have

I thought we settled the seccomp notify proposal?

> not responded to. If you're just going to go ahead with your solution
> over any objections you could just have stated so from the beginning and
> saved us all a lot of time :/

It would also be good to understand that yours is but one of the
opinions. If you read the thread carefully you'll see that other
people have differing opinions. And yours doesn't necessarily have to
be the deciding one.

I appreciate the feedback, but I don't appreciate the expectation that
your feedback is binding in any way.

>
> Can we at least put this thing behind a kconfig option, so we can turn
> it off in distro kernels?

Why can't a distro disable this in some more dynamic way, though? With
the existing LSM mechanism, a sysctl, whatever? I think it would be
useful to let users have control over this and decide for themselves
without having to rebuild a custom kernel.

>
> > Per-call decisions can be achieved nicely by employing BPF LSM in a
> > restrictive manner on top of BPF token (or no token, if you are ok
> > without user namespaces).
>
> Building a deficient security delegation mechanism and saying "you can
> patch things up using an LSM" is a terrible design, though. Also, this

A bunch of people disagree with you.

> still means you have to implement all the policy checks in the kernel
> (just in BPF) which is awkward at best.

"Patch things up using an LSM", if necessary, in a restrictive manner
is what LSM folks prefer. You are also assuming that it's always
necessary, and I'm saying that in lots of practical contexts LSM won't
be even necessary.

>
> -Toke
>
Toke Høiland-Jørgensen July 7, 2023, 10 p.m. UTC | #15
Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:

> On Fri, Jul 7, 2023 at 6:04 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>>
>> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
>>
>> > On Thu, Jul 6, 2023 at 4:32 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>> >>
>> >> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
>> >>
>> >> > Having it as a separate single-purpose FS seems cleaner, because we
>> >> > have use cases where we'd have one BPF FS instance created for a
>> >> > container by our container manager, and then exposing a few separate
>> >> > tokens with different sets of allowed functionality. E.g., one for
>> >> > main intended workload, another for some BPF-based observability
>> >> > tools, maybe yet another for more heavy-weight tools like bpftrace for
>> >> > extra debugging. In the debugging case our container infrastructure
>> >> > will be "evacuating" any other workloads on the same host to avoid
>> >> > unnecessary consequences. The point is to not disturb
>> >> > workload-under-human-debugging as much as possible, so we'd like to
>> >> > keep userns intact, which is why mounting extra (more permissive) BPF
>> >> > token inside already running containers is an important consideration.
>> >>
>> >> This example (as well as Yafang's in the sibling subthread) makes it
>> >> even more apparent to me that it would be better with a model where the
>> >> userspace policy daemon can just make decisions on each call directly,
>> >> instead of mucking about with different tokens with different embedded
>> >> permissions. Why not go that route (see my other reply for details on
>> >> what I mean)?
>> >
>> > I don't know how you arrived at this conclusion,
>>
>> Because it makes it apparent that you're basically building a policy
>> engine in the kernel with this...
>
> I disagree that this is a policy engine in the kernel. It's a building
> block for delegation and enforcement. The policy itself is implemented
> in user-space by a privileged process that decides when to issue BPF
> tokens and of which configuration. And, optionally and if necessary,
> further restricting using BPF LSM in a more fine-grained and dynamic
> way.

Right, and I'm saying that it's too coarse-grained to be a proper
building block in its own right. As evidenced by the need for adding an
LSM on top to do anything fine-grained; a task which is decidedly
non-trivial to get right, BTW. Which means that the path of least
resistance is going to be to just grant a token and not bother with the
LSM, thus ending up with this being a giant foot gun from a security
PoV.

>> > but we've debated BPF proxying and separate service at length, there
>> > is no point in going on another round here.
>>
>> You had some objections to explicit proxying via RPC calls; I suggested
>> a way of avoiding that by keeping the kernel in the loop, which you have
>
> I thought we settled the seccomp notify proposal?

Your objection to that was that it was too much of a hack to read all
the target process memory (etc) from the policy daemon, which I
acknowledged and suggested a way of keeping the kernel in the loop so it
can take responsibility for the gnarly bits while still allowing
userspace to actually make the decision:

https://lore.kernel.org/r/87v8ezb6x5.fsf@toke.dk

(Last two paragraphs). Maybe that message just got lost somewhere on its
way to your inbox?

>> not responded to. If you're just going to go ahead with your solution
>> over any objections you could just have stated so from the beginning and
>> saved us all a lot of time :/
>
> It would also be good to understand that yours is but one of the
> opinions. If you read the thread carefully you'll see that other
> people have differing opinions. And yours doesn't necessarily have to
> be the deciding one.
>
> I appreciate the feedback, but I don't appreciate the expectation that
> your feedback is binding in any way.

I'm not expecting veto rights, I'm objecting to being ignored. The way
this development process is *supposed* to work (as far as I'm concerned)
is that someone proposes a patch series, the community provides
feedback, and discussion proceeds until there's at least rough consensus
that the solution we've arrived at is the right way forward.

If you're going to cut that process short and just pick and choose which
comments are worth addressing and which are not, I can't stop you,
obviously; but at least do me the favour of being up front about it so I
can stop wasting my time trying to be constructive.

Anyhow, I guess this point is moot for this discussion since I'm about
to leave for vacation for four weeks and won't be able to follow up on
this. Apologies for the bad timing :/ I'll ping some RH folks and try to
get them to keep an eye on this while I'm away...

>> Can we at least put this thing behind a kconfig option, so we can turn
>> it off in distro kernels?
>
> Why can't distro disable this in some more dynamic way, though? With
> existing LSM mechanism, sysctl, whatever? I think it would be useful
> to let users have control over this and decide for themselves without
> having to rebuild a custom kernel.

A sysctl similar to the existing one for unprivileged BPF would be fine
as well. If an LSM ends up being the only way to control it, though,
that will carry so much operational overhead for us to get to a working
state that it'll most likely be simpler to just patch it out of the
kernel.

-Toke
Andrii Nakryiko July 7, 2023, 11:58 p.m. UTC | #16
On Fri, Jul 7, 2023 at 3:00 PM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>
> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
>
> > On Fri, Jul 7, 2023 at 6:04 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
> >>
> >> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
> >>
> >> > On Thu, Jul 6, 2023 at 4:32 AM Toke Høiland-Jørgensen <toke@redhat.com> wrote:
> >> >>
> >> >> Andrii Nakryiko <andrii.nakryiko@gmail.com> writes:
> >> >>
> >> >> > Having it as a separate single-purpose FS seems cleaner, because we
> >> >> > have use cases where we'd have one BPF FS instance created for a
> >> >> > container by our container manager, and then exposing a few separate
> >> >> > tokens with different sets of allowed functionality. E.g., one for
> >> >> > main intended workload, another for some BPF-based observability
> >> >> > tools, maybe yet another for more heavy-weight tools like bpftrace for
> >> >> > extra debugging. In the debugging case our container infrastructure
> >> >> > will be "evacuating" any other workloads on the same host to avoid
> >> >> > unnecessary consequences. The point is to not disturb
> >> >> > workload-under-human-debugging as much as possible, so we'd like to
> >> >> > keep userns intact, which is why mounting extra (more permissive) BPF
> >> >> > token inside already running containers is an important consideration.
> >> >>
> >> >> This example (as well as Yafang's in the sibling subthread) makes it
> >> >> even more apparent to me that it would be better with a model where the
> >> >> userspace policy daemon can just make decisions on each call directly,
> >> >> instead of mucking about with different tokens with different embedded
> >> >> permissions. Why not go that route (see my other reply for details on
> >> >> what I mean)?
> >> >
> >> > I don't know how you arrived at this conclusion,
> >>
> >> Because it makes it apparent that you're basically building a policy
> >> engine in the kernel with this...
> >
> > I disagree that this is a policy engine in the kernel. It's a building
> > block for delegation and enforcement. The policy itself is implemented
> > in user-space by a privileged process that decides when to issue BPF
> > tokens and of which configuration. And, optionally and if necessary,
> > further restricting using BPF LSM in a more fine-grained and dynamic
> > way.
>
> Right, and I'm saying that it's too coarse-grained to be a proper

CAP_BPF, CAP_PERFMON, CAP_SYS_ADMIN, and CAP_NET_ADMIN are also very
coarse-grained, and somehow we get by and make do with them outside of
the user namespace use case.

> building block in its own right. As evidenced by the need for adding an
> LSM on top to do anything fine-grained; a task which is decidedly

There is no *need* to add an LSM; for tons of practical use cases you
won't need it. Yes, people will decide whether they even have to
bother with more fine-grained controls, and if they do, LSM is there
to provide them.

> non-trivial to get right, BTW. Which means that the path of least
> resistance is going to be to just grant a token and not bother with the
> LSM, thus ending up with this being a giant foot gun from a security
> PoV.

If there is no need for LSM, yes, and I think it's totally acceptable.
It will be up to users to decide.

>
> >> > but we've debated BPF proxying and separate service at length, there
> >> > is no point in going on another round here.
> >>
> >> You had some objections to explicit proxying via RPC calls; I suggested
> >> a way of avoiding that by keeping the kernel in the loop, which you have
> >
> > I thought we settled the seccomp notify proposal?
>
> Your objection to that was that it was too much of a hack to read all
> the target process memory (etc) from the policy daemon, which I
> acknowledged and suggested a way of keeping the kernel in the loop so it
> can take responsibility for the gnarly bits while still allowing
> userspace to actually make the decision:
>

Your proposal for some new mechanism that blocks the bpf() syscall to
let another user space process make the decision, and that somehow
provides all the necessary data for that decision without the other
process needing to read the original process's memory (so presumably
the kernel would make a copy of BPF program instructions, BTF
contents, all the strings, etc.?), sounded more like a joke and just a
contrarian way to provide *any* alternative, just to disagree with the
much simpler and more straightforward proposal.

I encourage you to spend some time prototyping this new mechanism,
sending an RFC, and gathering community feedback before using this
handwavy idea as an excuse to block a BPF-token-like mechanism. I'll
be curious to read the discussion on how it's different from
authoritative LSM, seccomp notify, etc.

> https://lore.kernel.org/r/87v8ezb6x5.fsf@toke.dk
>
> (Last two paragraphs). Maybe that message just got lost somewhere on its
> way to your inbox?
>
> >> not responded to. If you're just going to go ahead with your solution
> >> over any objections you could just have stated so from the beginning and
> >> saved us all a lot of time :/
> >
> > It would also be good to understand that yours is but one of the
> > opinions. If you read the thread carefully you'll see that other
> > people have differing opinions. And yours doesn't necessarily have to
> > be the deciding one.
> >
> > I appreciate the feedback, but I don't appreciate the expectation that
> > your feedback is binding in any way.
>
> I'm not expecting veto rights, I'm objecting to being ignored. The way

You are not being ignored. We are just disagreeing; there is a
difference. BPF proxying was discussed at length, and people who
manage large sets of BPF applications voiced their concerns. The
security concerns you have about BPF token are just as applicable to
CAP_BPF and other caps. BPF token actually allows dropping those very
coarse-grained capabilities in a bunch of circumstances, improving
security overall. Also note that there were security folks in the
discussion who seem to be fine with the BPF token approach, overall.

You don't like my (and others') answers. That's fine, but please don't
pretend like you are being ignored.

> this development process is *supposed* to work (as far as I'm concerned)
> is that someone proposes a patch series, the community provides
> feedback, and discussion proceeds until there's at least rough consensus
> that the solution we've arrived at is the right way forward.

Rough consensus, not 100% consensus, though?.. There will always be
someone who disagrees.

>
> If you're going to cut that process short and just pick and choose which

Yep, clearly, going into the 3rd month of discussions (starting from
LSF/MM, and I don't even include the authoritative LSM discussions
before that) is cutting this process very short, of course.

> comments are worth addressing and which are not, I can't stop you,
> obviously; but at least do me the favour of being up front about it so I
> can stop wasting my time trying to be constructive.

I wouldn't say that a proposal like "some seccomp-notify-like
mechanism to let another process decide if bpf() syscall should
proceed" with not much effort put into thinking about how it should be
done specifically and whether it's actually a better approach was very
constructive. And it felt self-evident that it's not a good way,
especially after Christian himself said that the seccomp-based
approach is also not a good generic solution. Your proposal was just a
weird bpf()-specific (and not very well specified) twist on the
seccomp notify idea. But as I said above, give it a try, perhaps I'm
mistaken and the BPF community would love the idea and implementation.

>
> Anyhow, I guess this point is moot for this discussion since I'm about
> to leave for vacation for four weeks and won't be able to follow up on
> this. Apologies for the bad timing :/ I'll ping some RH folks and try to
> get them to keep an eye on this while I'm away...

Enjoy your vacation!

>
> >> Can we at least put this thing behind a kconfig option, so we can turn
> >> it off in distro kernels?
> >
> > Why can't distro disable this in some more dynamic way, though? With
> > existing LSM mechanism, sysctl, whatever? I think it would be useful
> > to let users have control over this and decide for themselves without
> > having to rebuild a custom kernel.
>
> A sysctl similar to the existing one for unprivileged BPF would be fine
> as well. If an LSM ends up being the only way to control it, though,
> that will carry so much operational overhead for us to get to a working
> state that it'll most likely be simpler to just patch it out of the
> kernel.

Sounds good, I will add sysctl for the next version.
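
Roughly, a sketch of what I have in mind (the knob name and default
below are placeholders, modeled on the existing
kernel.unprivileged_bpf_disabled handling):

/* Sketch only: a global switch for BPF token creation. Registered via
 * register_sysctl("kernel", bpf_token_table) at init time.
 */
static int sysctl_bpf_token_enable __read_mostly = 1;

static struct ctl_table bpf_token_table[] = {
	{
		.procname	= "bpf_token_enable",
		.data		= &sysctl_bpf_token_enable,
		.maxlen		= sizeof(sysctl_bpf_token_enable),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,
		.extra2		= SYSCTL_ONE,
	},
	{ }
};

/* ...and in the BPF_TOKEN_CREATE handler: */
	if (!sysctl_bpf_token_enable)
		return -EPERM;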

>
> -Toke
>
Djalal Harouni July 10, 2023, 11:42 p.m. UTC | #17
On Sat, Jul 8, 2023 at 1:59 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
...
> > >
> > > Why can't distro disable this in some more dynamic way, though? With
> > > existing LSM mechanism, sysctl, whatever? I think it would be useful
> > > to let users have control over this and decide for themselves without
> > > having to rebuild a custom kernel.
> >
> > A sysctl similar to the existing one for unprivileged BPF would be fine
> > as well. If an LSM ends up being the only way to control it, though,
> > that will carry so much operational overhead for us to get to a working
> > state that it'll most likely be simpler to just patch it out of the
> > kernel.
>
> Sounds good, I will add sysctl for the next version.

What would be the purpose of the sysctl? Or a kconfig? AFAICT the
operation is still privileged, and it's opt-in? Anyway...

It is obvious that this should be part of the BPF core... The other
user space proxy solution tries to solve another use case competing
with LSMs. It won't be able to handle the full context (or today's
nested workload) at bpf() call time... There are obvious reasons why
LSMs do exist...

Thanks for agreeing that it should be attached to the user namespace
at creation time, as it is crucial to get that right... and Christian
(thanks BTW ;-) ), maybe we make it walk the user ns list up to the
parent and allow the token if it's coming from a parent namespace that
is part of the same hierarchy; then, theoretically, the parent ns is
more privileged... I will check again and reply to the corresponding
email.

Thanks!
Christian Brauner July 11, 2023, 1:33 p.m. UTC | #18
On Wed, Jul 05, 2023 at 02:38:43PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 5, 2023 at 7:42 AM Christian Brauner <brauner@kernel.org> wrote:
> >
> > On Wed, Jul 05, 2023 at 10:16:13AM -0400, Paul Moore wrote:
> > > On Tue, Jul 4, 2023 at 8:44 AM Christian Brauner <brauner@kernel.org> wrote:
> > > > On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
> > > > > Add new kind of BPF kernel object, BPF token. BPF token is meant to to
> > > > > allow delegating privileged BPF functionality, like loading a BPF
> > > > > program or creating a BPF map, from privileged process to a *trusted*
> > > > > unprivileged process, all while have a good amount of control over which
> > > > > privileged operations could be performed using provided BPF token.
> > > > >
> > > > > This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
> > > > > allows to create a new BPF token object along with a set of allowed
> > > > > commands that such BPF token allows to unprivileged applications.
> > > > > Currently only BPF_TOKEN_CREATE command itself can be
> > > > > delegated, but other patches gradually add ability to delegate
> > > > > BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
> > > > >
> > > > > The above means that new BPF tokens can be created using existing BPF
> > > > > token, if original privileged creator allowed BPF_TOKEN_CREATE command.
> > > > > New derived BPF token cannot be more powerful than the original BPF
> > > > > token.
> > > > >
> > > > > Importantly, BPF token is automatically pinned at the specified location
> > > > > inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
> > > > > command, unlike BPF prog/map/btf/link. This provides more control over
> > > > > unintended sharing of BPF tokens through pinning it in another BPF FS
> > > > > instances.
> > > > >
> > > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > > > ---
> > > >
> > > > The main issue I have with the token approach is that it is a completely
> > > > separate delegation vector on top of user namespaces. We mentioned this
> > > > duringthe conf and this was brought up on the thread here again as well.
> > > > Imho, that's a problem both security-wise and complexity-wise.
> > > >
> > > > It's not great if each subsystem gets its own custom delegation
> > > > mechanism. This imposes such a taxing complexity on both kernel- and
> > > > userspace that it will quickly become a huge liability. So I would
> > > > really strongly encourage you to explore another direction.
> 
> Alright, thanks a lot for elaborating. I did want to keep everything
> contained to bpf() for various reasons, but it seems like I won't be
> able to get away with this. :)
> 
> > > >
> > > > I do think the spirit of your proposal is workable and that it can
> > > > mostly be kept in tact.
> 
> It's good to know that at least conceptually you support the idea of
> BPF delegation. I have a few more specific questions below and I'd
> appreciate your answers, as I have less familiarity with how exactly
> container managers do stuff at container bootstrapping stage.
> 
> But first, let's try to get some tentative agreement on design before
> I go and implement the BPF-token-as-FS idea. I have basically just two
> gripes with exact details of what you are proposing, so let me explain
> which and why, and see if we can find some common ground.

Just FYI, there will likely be some delays in my replies: first, I
need to think about this, and second, I'm wading through a flood of
mail. I'll also be on vacation starting at the end of this week.

> 
> First, the idea of coupling and bundling this "delegation" option with
> BPF FS doesn't feel right. BPF FS is just a container of BPF objects,
> so adding to it a new property of allowing to use privileged BPF
> functionality seems a bit off.

Fwiw, I have a series that makes it possible to delegate a
filesystem's superblock to a user namespace using the new mount API,
introducing a generic VFS "delegate" mount option. So this won't be a
special bpf thing; it is generally useful.

> 
> Why not just create a new separate FS, let's code-name it "BPF Token
> FS" for now (naming suggestions are welcome). Such BPF Token FS would
> be dedicated to specifying everything about what's allowable through
> BPF, just like my BPF token implementation. It can then be
> mounted/bind-mounted inside BPF FS (or really, anywhere, it's just a
> FS, right?). User application would open it (I'm guessing with
> open_tree(), right?) and pass it as token_fd to bpf() syscall.
> 
> Having it as a separate single-purpose FS seems cleaner, because we
> have use cases where we'd have one BPF FS instance created for a
> container by our container manager, and then exposing a few separate
> tokens with different sets of allowed functionality. E.g., one for
> main intended workload, another for some BPF-based observability
> tools, maybe yet another for more heavy-weight tools like bpftrace for
> extra debugging. In the debugging case our container infrastructure
> will be "evacuating" any other workloads on the same host to avoid
> unnecessary consequences. The point is to not disturb
> workload-under-human-debugging as much as possible, so we'd like to
> keep userns intact, which is why mounting extra (more permissive) BPF
> token inside already running containers is an important consideration.
> 
> With such goals, it seems nicer to have a single BPF FS, and few BPF
> token FSs mounted inside it. Yes, we could bundle token functionality
> with BPF FS, but separating those two seems cleaner to me. WDYT?

It seems that writing a pseudo filesystem for the kernel is some rite
of passage that every kernel developer wants to go through for some
reason. It's not mandatory, though; it's actually discouraged.

Joking aside.
I think the danger lies in adding more and more moving parts and
fragmenting this into so many moving pieces that it's hard to see the
bigger picture and have a clear sense of the API.

> 
> Second, mount options usage. I'm hearing stories from our production
> folks how some new mount options (on some other FS, not BPF FS) were
> breaking tools unintentionally during kernel/tooling
> upgrades/downgrades, so it makes me a bit hesitant to have these
> complicated sets of mount options to specify parameters of
> BPF-token-as-FS. I've been thinking a bit, and I'm starting to lean

I don't see this as a good argument for a new pseudo filesystem. It
implies that any new filesystem would end up with the same problem. The
answer here would be to report and fix such bugs.

> towards the idea of allowing to set up (and modify as well) all these
> allowed maps/progs/attach types through special auto-created files
> within BPF token FS. Something like below:
> 
> # pwd
> /sys/fs/bpf/workload-token
> # ls
> allowed_cmds allowed_map_types allowed_prog_types allowed_attach_types
> # echo "BPF_PROG_LOAD" > allowed_cmds
> # echo "BPF_PROG_TYPE_KPROBE" >> allowed_prog_types
> ...
> # cat allowed_prog_types
> BPF_PROG_TYPE_KPROBE,BPF_PROG_TYPE_TRACEPOINT
> 
> 
> The above is fake (I haven't implemented anything yet), but hopefully
> works as a demonstration. We'll also need to make sure that inside
> non-init userns these files are read-only or allow to just further
> restrict the subset of allowed functionality, never extend it.

This implementation would get you into the business of write-time
permission checks. And this almost always means you should use an
ioctl(), not a write() operation on these files.

> 
> Such an approach will actually make it simpler to test and experiment
> with this delegation locally, will make it trivial to observe what's
> allowed from simple shell scripts, etc, etc. With fsmount() and O_PATH
> it will be possible to set everything up from privileged processes
> before ever exposing a BPF Token FS instance through a file system, if
> there are any concerns about racing with user space.
> 
> That's the high-level approach I'm thinking of right now. Would that
> work? How critical is it to reuse BPF FS itself and how important to
> you is to rely on mount options vs special files as described above?

In the end, it's your API and you need to live with it and support it.
What is important is that we don't end up with security issues. The
special files thing will work, but be aware that write-time permission
checking is nasty:
* https://git.zx2c4.com/CVE-2012-0056/about/ (Thanks to Aleksa for the link.)
* commit e57457641613 ("cgroup: Use open-time cgroup namespace for process migration perm checks")
There's a lot more. It can be done but it needs stringent permission
checking and an ioctl() is probably the way to go in this case.
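
As a sketch of what that could mean in practice, the per-token
settings would then be driven by a fixed-layout ioctl() so that all
permission checking happens at one well-defined point (the struct and
ioctl number below are made up purely for illustration):

#include <linux/ioctl.h>
#include <linux/types.h>

/* Illustrative only: names and numbers are placeholders. */
struct bpf_token_allow {
	__u32 what;	/* e.g. commands, map types, prog types */
	__u32 flags;
	__u64 mask;	/* bitmask of allowed values */
};

#define BPF_TOKEN_IOC_ALLOW	_IOW('b', 0x01, struct bpf_token_allow)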

Another thing, if you split configuration over multiple files you can
end up introducing race windows. This is a common complaint with cgroups
and sysfs whenever configuration of something is split over multiple
files. It gets especially hairy if the options interact with each other
somehow.

> Hopefully not critical, and I can start working on it, and we'll get
> what you want with using FS as a vehicle for delegation, while
> allowing some of the intended use cases that we have in mind in a bit
> cleaner fashion?
> 
> > > >
> > > > As mentioned before, bpffs has all the means to be taught delegation:
> > > >
> > > >         // In container's user namespace
> > > >         fd_fs = fsopen("bpffs");
> > > >
> > > >         // Delegating task in host userns (systemd-bpfd whatever you want)
> > > >         ret = fsconfig(fd_fs, FSCONFIG_SET_FLAG, "delegate", ...);
> > > >
> > > >         // In container's user namespace
> > > >         fd_mnt = fsmount(fd_fs, 0);
> > > >
> > > >         ret = move_mount(fd_fs, "", -EBADF, "/my/fav/location", MOVE_MOUNT_F_EMPTY_PATH)
> > > >
> > > > Roughly, this would mean:
> > > >
> > > > (i) raise FS_USERNS_MOUNT on bpffs but guard it behind the "delegate"
> > > >     mount option. IOW, it's only possibly to mount bpffs as an
> > > >     unprivileged user if a delegating process like systemd-bpfd with
> > > >     system-level privileges has marked it as delegatable.
> 
> Regarding the FS_USERNS_MOUNT flag and fsopen() happening from inside
> the user namespace. Am I missing something subtle and important here,
> why does it have to happen inside the container's user namespace?
> Can't the container manager both fsopen() and fsconfig() everything in
> host userns, and only then fsmount+move_mount inside the container's
> userns? Just trying to understand if there is some important early
> association of userns happening at early steps here?

The mount API _currently_ works very roughly like this: if a
filesystem has FS_USERNS_MOUNT enabled, fsopen() records the user
namespace of the caller. The recorded userns will later become the
owning userns of the filesystem's superblock. (Without going into
detail: the owning userns of a superblock != the owning userns of a
mount; move_mount() on a detached mount is about the latter.)

I have a patchset that adds a generic "delegate" mount option which will
allow a sufficiently privileged process to do the following:

        fd_fs = fsopen("ext4");
        
        /*
	 * Set owning namespace of the filesystem's superblock.
         * Caller must be privileged over @fd_userns.
         *
	 * Note, must be first mount option to ensure that possible
	 * follow-up permission checks for other mount options are done
	 * on the final owning namespace.
         */
        fsconfig(fd_fs, FSCONFIG_SET_FD, "delegate", NULL, fd_userns);
        
        /*
         * * If fs is FS_USERNS_MOUNT then permission is checked in @fd_userns.
         * * If fs is not FS_USERNS_MOUNT then permission is checked in @init_user_ns.
         *   (Privilege in @init_user_ns implies privilege over @fd_userns.)
         */
        fsconfig(fd_fs, FSCONFIG_CMD_CREATE, NULL, 0);

After this, the sb is owned by @fd_userns. Currently my draft restricts
this to such filesystems that raise FS_ALLOW_IDMAP because they almost
can support delegation and don't need to be checked for any potential
issues. But bpffs could easily support this (without caring about
FS_ALLOW_IDMAP).

> 
> Also, in your example above, move_mount() should take fd_mnt, not fd_fs, right?
> 
> > > > (ii) add fine-grained delegation options that you want this
> > > >      bpffs instance to allow via new mount options. Idk,
> > > >
> > > >      // allow usage of foo
> > > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "foo");
> > > >
> > > >      // also allow usage of bar
> > > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "bar");
> > > >
> > > >      // reset allowed options
> > > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "");
> > > >
> > > >      // allow usage of schmoo
> > > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "schmoo");
> > > >
> > > > This all seems more intuitive and integrates with user and mount
> > > > namespaces of the container. This can also work for restricting
> > > > non-userns bpf instances fwiw. You can also share instances via
> > > > bind-mount and so on. The userns of the bpffs instance can also be used
> > > > for permission checking provided a given functionality has been
> > > > delegated by e.g., systemd-bpfd or whatever.
> > >
> > > I have no arguments against any of the above, and would prefer to see
> > > something like this over a token-based mechanism.  However we do want
> > > to make sure we have the proper LSM control points for either approach
> > > so that admins who rely on LSM-based security policies can manage
> > > delegation via their policies.
> > >
> > > Using the fsconfig() approach described by Christian above, I believe
> > > we should have the necessary hooks already in
> > > security_fs_context_parse_param() and security_sb_mnt_opts() but I'm
> > > basing that on a quick look this morning, some additional checking
> > > would need to be done.
> >
> > I think what I outlined is even unnecessarily complicated. You don't
> > need that pointless "delegate" mount option at all actually. Permission
> > to delegate shouldn't be checked when the mount option is set. The
> > permissions should be checked when the superblock is created. That's the
> > right point in time. So sm like:
> >
> 
> I think this gets even more straightforward with BPF Token FS being a
> separate one, right? Given BPF Token FS is all about delegation, it
> has to be a privileged operation to even create it.
> 
> > diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
> > index 4174f76133df..a2eb382f5457 100644
> > --- a/kernel/bpf/inode.c
> > +++ b/kernel/bpf/inode.c
> > @@ -746,6 +746,13 @@ static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
> >         struct inode *inode;
> >         int ret;
> >
> > +       /*
> > +        * If you want to delegate this instance then you need to be
> > +        * privileged and know what you're doing. This isn't trust.
> > +        */
> > +       if ((fc->user_ns != &init_user_ns) && !capable(CAP_SYS_ADMIN))
> > +               return -EPERM;
> > +
> >         ret = simple_fill_super(sb, BPF_FS_MAGIC, bpf_rfiles);
> >         if (ret)
> >                 return ret;
> > @@ -800,6 +807,7 @@ static struct file_system_type bpf_fs_type = {
> >         .init_fs_context = bpf_init_fs_context,
> >         .parameters     = bpf_fs_parameters,
> >         .kill_sb        = kill_litter_super,
> > +       .fs_flags       = FS_USERNS_MOUNT,
> 
> Just an aside thought. It doesn't seem like there is any reason why
> BPF FS right now is not created with FS_USERNS_MOUNT, so (separately
> from all this discussion) I suspect we can just make it
> FS_USERNS_MOUNT right now (unless we combine it with BPF-token-FS,
> then yeah, we can't do that unconditionally anymore). Given BPF FS is
> just a container of pinned BPF objects, just mounting BPF FS doesn't
> seem to be dangerous in any way. But that's just an aside thought
> here.

My two cents: don't ever expose anything under user namespaces unless
it is guaranteed to be safe and has actual non-cosmetic use cases.

The eagerness with which features pop up in user namespaces is probably
bankrolling half the infosec community.
Andrii Nakryiko July 11, 2023, 10:06 p.m. UTC | #19
On Tue, Jul 11, 2023 at 6:33 AM Christian Brauner <brauner@kernel.org> wrote:
>
> On Wed, Jul 05, 2023 at 02:38:43PM -0700, Andrii Nakryiko wrote:
> > On Wed, Jul 5, 2023 at 7:42 AM Christian Brauner <brauner@kernel.org> wrote:
> > >
> > > On Wed, Jul 05, 2023 at 10:16:13AM -0400, Paul Moore wrote:
> > > > On Tue, Jul 4, 2023 at 8:44 AM Christian Brauner <brauner@kernel.org> wrote:
> > > > > On Wed, Jun 28, 2023 at 10:18:19PM -0700, Andrii Nakryiko wrote:
> > > > > > Add new kind of BPF kernel object, BPF token. BPF token is meant to to
> > > > > > allow delegating privileged BPF functionality, like loading a BPF
> > > > > > program or creating a BPF map, from privileged process to a *trusted*
> > > > > > unprivileged process, all while have a good amount of control over which
> > > > > > privileged operations could be performed using provided BPF token.
> > > > > >
> > > > > > This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which
> > > > > > allows to create a new BPF token object along with a set of allowed
> > > > > > commands that such BPF token allows to unprivileged applications.
> > > > > > Currently only BPF_TOKEN_CREATE command itself can be
> > > > > > delegated, but other patches gradually add ability to delegate
> > > > > > BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands.
> > > > > >
> > > > > > The above means that new BPF tokens can be created using existing BPF
> > > > > > token, if original privileged creator allowed BPF_TOKEN_CREATE command.
> > > > > > New derived BPF token cannot be more powerful than the original BPF
> > > > > > token.
> > > > > >
> > > > > > Importantly, BPF token is automatically pinned at the specified location
> > > > > > inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN
> > > > > > command, unlike BPF prog/map/btf/link. This provides more control over
> > > > > > unintended sharing of BPF tokens through pinning it in another BPF FS
> > > > > > instances.
> > > > > >
> > > > > > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > > > > > ---
> > > > >
> > > > > The main issue I have with the token approach is that it is a completely
> > > > > separate delegation vector on top of user namespaces. We mentioned this
> > > > > duringthe conf and this was brought up on the thread here again as well.
> > > > > Imho, that's a problem both security-wise and complexity-wise.
> > > > >
> > > > > It's not great if each subsystem gets its own custom delegation
> > > > > mechanism. This imposes such a taxing complexity on both kernel- and
> > > > > userspace that it will quickly become a huge liability. So I would
> > > > > really strongly encourage you to explore another direction.
> >
> > Alright, thanks a lot for elaborating. I did want to keep everything
> > contained to bpf() for various reasons, but it seems like I won't be
> > able to get away with this. :)
> >
> > > > >
> > > > > I do think the spirit of your proposal is workable and that it can
> > > > > mostly be kept in tact.
> >
> > It's good to know that at least conceptually you support the idea of
> > BPF delegation. I have a few more specific questions below and I'd
> > appreciate your answers, as I have less familiarity with how exactly
> > container managers do stuff at container bootstrapping stage.
> >
> > But first, let's try to get some tentative agreement on design before
> > I go and implement the BPF-token-as-FS idea. I have basically just two
> > gripes with exact details of what you are proposing, so let me explain
> > which and why, and see if we can find some common ground.
>
> Just fyi, there'll likely be some delays in my replies bc first I need
> to think about it and second floods of mails. I'll be on vacation for
> starting end of this week.

I'll be on vacation for the next month or so starting from tomorrow,
so that's no problem :)

>
> >
> > First, the idea of coupling and bundling this "delegation" option with
> > BPF FS doesn't feel right. BPF FS is just a container of BPF objects,
> > so adding to it a new property of allowing to use privileged BPF
> > functionality seems a bit off.
>
> Fwiw, I have a series that makes it possible to delegate a superblock of
> a filesystem to a user namespace using the new mount api introducing a
> vfs generic "delegate" mount option. So this won't be a special bpf
> thing. This is generally useful.
>
> >
> > Why not just create a new separate FS, let's code-name it "BPF Token
> > FS" for now (naming suggestions are welcome). Such BPF Token FS would
> > be dedicated to specifying everything about what's allowable through
> > BPF, just like my BPF token implementation. It can then be
> > mounted/bind-mounted inside BPF FS (or really, anywhere, it's just a
> > FS, right?). User application would open it (I'm guessing with
> > open_tree(), right?) and pass it as token_fd to bpf() syscall.
> >
> > Having it as a separate single-purpose FS seems cleaner, because we
> > have use cases where we'd have one BPF FS instance created for a
> > container by our container manager, and then exposing a few separate
> > tokens with different sets of allowed functionality. E.g., one for
> > main intended workload, another for some BPF-based observability
> > tools, maybe yet another for more heavy-weight tools like bpftrace for
> > extra debugging. In the debugging case our container infrastructure
> > will be "evacuating" any other workloads on the same host to avoid
> > unnecessary consequences. The point is to not disturb
> > workload-under-human-debugging as much as possible, so we'd like to
> > keep userns intact, which is why mounting extra (more permissive) BPF
> > token inside already running containers is an important consideration.
> >
> > With such goals, it seems nicer to have a single BPF FS, and few BPF
> > token FSs mounted inside it. Yes, we could bundle token functionality
> > with BPF FS, but separating those two seems cleaner to me. WDYT?
>
> It seems that writing a pseudo filesystem for the kernel is some right
> of passage that every kernel developer wants to go through for some
> reason. It's not mandatory though, it's actually discouraged.

Believe me, I tried to avoid this as much as possible.

>
> Joking aside.
> I think the danger lies in adding more and more moving parts and
> fragmenting this into so many moving pieces that it's hard to see the
> bigger picture and have a clear sense of the API.

It's probably a difference of perspective as a BPF developer and user.
To me bundling this delegate option onto BPF FS is completely
counter-intuitive. BPF FS has (in my mind) nothing to do with how I
can use the BPF subsystem. So BPF token as a separate object/FS is way
more natural.

Having said that, I can bundle this new functionality onto BPF FS if
you insist, just to make some progress here and move to solving
further problems with BPF usage within userns. If someone else who
prefers a separate FS for the BPF token (and I know there are at least
a few people who think it's cleaner that way as well) would like to
voice their opinion in support, please do so.

>
> >
> > Second, mount options usage. I'm hearing stories from our production
> > folks how some new mount options (on some other FS, not BPF FS) were
> > breaking tools unintentionally during kernel/tooling
> > upgrades/downgrades, so it makes me a bit hesitant to have these
> > complicated sets of mount options to specify parameters of
> > BPF-token-as-FS. I've been thinking a bit, and I'm starting to lean
>
> I don't see this as a good argument for a new pseudo filesystem. It
> implies that any new filesystem would end up with the same problem. The
> answer here would be to report and fix such bugs.

Sure, this wasn't the reason for separate BPF token FS, of course.

>
> > towards the idea of allowing to set up (and modify as well) all these
> > allowed maps/progs/attach types through special auto-created files
> > within BPF token FS. Something like below:
> >
> > # pwd
> > /sys/fs/bpf/workload-token
> > # ls
> > allowed_cmds allowed_map_types allowed_prog_types allowed_attach_types
> > # echo "BPF_PROG_LOAD" > allowed_cmds
> > # echo "BPF_PROG_TYPE_KPROBE" >> allowed_prog_types
> > ...
> > # cat allowed_prog_types
> > BPF_PROG_TYPE_KPROBE,BPF_PROG_TYPE_TRACEPOINT
> >
> >
> > The above is fake (I haven't implemented anything yet), but hopefully
> > works as a demonstration. We'll also need to make sure that inside
> > non-init userns these files are read-only or allow to just further
> > restrict the subset of allowed functionality, never extend it.
>
> This implementation would get you into the business of write-time
> permission checks. And this almost always means you should use an
> ioctl(), not a write() operation on these files.
>

Ok. I think ioctl() kind of kills all the benefits, so there is little point.

> >
> > Such an approach will actually make it simpler to test and experiment
> > with this delegation locally, will make it trivial to observe what's
> > allowed from simple shell scripts, etc, etc. With fsmount() and O_PATH
> > it will be possible to set everything up from privileged processes
> > before ever exposing a BPF Token FS instance through a file system, if
> > there are any concerns about racing with user space.
> >
> > That's the high-level approach I'm thinking of right now. Would that
> > work? How critical is it to reuse BPF FS itself and how important to
> > you is to rely on mount options vs special files as described above?
>
> In the end, it's your api and you need to live with it and support it.
> What is important is that we don't end up with security issues. The
> special files thing will work but be aware that write-time permission
> checking is nasty:
> * https://git.zx2c4.com/CVE-2012-0056/about/ (Thanks to Aleksa for the link.)

entertaining read :)

> * commit e57457641613 ("cgroup: Use open-time cgroup namespace for process migration perm checks")
> There's a lot more. It can be done but it needs stringent permission
> checking and an ioctl() is probably the way to go in this case.
>
> Another thing, if you split configuration over multiple files you can
> end up introducing race windows. This is a common complaint with cgroups
> and sysfs whenever configuration of something is split over multiple
> files. It gets especially hairy if the options interact with each other
> somehow.

I'm not too worried about races, but all of the above makes sense. My
original approach, with the bpf() syscall creating a BPF token object,
went for immutable BPF token construction for the very same reason of
simplicity. Alright, this is all fair enough; I'll give mount options
a try and see how it all works out.
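
For the record, a rough sketch of how the mount-option flavour could
look from the privileged side (the option names below are
placeholders, nothing settled):

        /* Illustrative only: "allowed_cmds"/"allowed_prog_types" are
         * placeholder option names; "bpf" is the bpffs fs type name.
         */
        fd_fs = fsopen("bpf", 0);

        fsconfig(fd_fs, FSCONFIG_SET_STRING, "allowed_cmds",
                 "prog_load,map_create", 0);
        fsconfig(fd_fs, FSCONFIG_SET_STRING, "allowed_prog_types",
                 "kprobe,tracepoint", 0);

        fsconfig(fd_fs, FSCONFIG_CMD_CREATE, NULL, NULL, 0);
        fd_mnt = fsmount(fd_fs, 0, 0);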

>
> > Hopefully not critical, and I can start working on it, and we'll get
> > what you want with using FS as a vehicle for delegation, while
> > allowing some of the intended use cases that we have in mind in a bit
> > cleaner fashion?
> >
> > > > >
> > > > > As mentioned before, bpffs has all the means to be taught delegation:
> > > > >
> > > > >         // In container's user namespace
> > > > >         fd_fs = fsopen("bpffs");
> > > > >
> > > > >         // Delegating task in host userns (systemd-bpfd whatever you want)
> > > > >         ret = fsconfig(fd_fs, FSCONFIG_SET_FLAG, "delegate", ...);
> > > > >
> > > > >         // In container's user namespace
> > > > >         fd_mnt = fsmount(fd_fs, 0);
> > > > >
> > > > >         ret = move_mount(fd_fs, "", -EBADF, "/my/fav/location", MOVE_MOUNT_F_EMPTY_PATH)
> > > > >
> > > > > Roughly, this would mean:
> > > > >
> > > > > (i) raise FS_USERNS_MOUNT on bpffs but guard it behind the "delegate"
> > > > >     mount option. IOW, it's only possibly to mount bpffs as an
> > > > >     unprivileged user if a delegating process like systemd-bpfd with
> > > > >     system-level privileges has marked it as delegatable.
> >
> > Regarding the FS_USERNS_MOUNT flag and fsopen() happening from inside
> > the user namespace. Am I missing something subtle and important here,
> > why does it have to happen inside the container's user namespace?
> > Can't the container manager both fsopen() and fsconfig() everything in
> > host userns, and only then fsmount+move_mount inside the container's
> > userns? Just trying to understand if there is some important early
> > association of userns happening at early steps here?
>
> The mount api _currently_ works very roughly like this: if a filesytem
> is FS_USERNS_MOUNT enabled fsopen() records the user namespace of the
> caller. The recorded userns will later become the owning userns of the
> filesystem's superblock (Without going into detail: owning userns of a
> superblock != owning userns of a mount. move_mount() on a detached mount
> is about the latter.).
>
> I have a patchset that adds a generic "delegate" mount option which will
> allow a sufficiently privileged process to do the following:
>
>         fd_fs = fsopen("ext4");
>
>         /*
>          * Set owning namespace of the filesystem's superblock.
>          * Caller must be privileged over @fd_userns.
>          *
>          * Note, must be first mount option to ensure that possible
>          * follow-up ermission checks for other mount options are done
>          * on the final owning namespace.
>          */
>         fsconfig(fd_fs, FSCONFIG_SET_FD, "delegate", NULL, fd_userns);
>
>         /*
>          * * If fs is FS_USERNS_MOUNT then permission is checked in @fd_userns.
>          * * If fs is not FS_USERNS_MOUNT then permission is check in @init_user_ns.
>          *   (Privilege in @init_user_ns implies privilege over @fd_userns.)
>          */
>         fsconfig(fd_fs, FSCONFIG_CMD_CREATE, NULL, 0);
>
> After this, the sb is owned by @fd_userns. Currently my draft restricts
> this to such filesystems that raise FS_ALLOW_IDMAP because they almost
> can support delegation and don't need to be checked for any potential
> issues. But bpffs could easily support this (without caring about
> FS_ALLOW_IDMAP).

I see. Well, I should definitely not use the "delegate" option name
for anything then ;)

>
> >
> > Also, in your example above, move_mount() should take fd_mnt, not fd_fs, right?
> >
> > > > > (ii) add fine-grained delegation options that you want this
> > > > >      bpffs instance to allow via new mount options. Idk,
> > > > >
> > > > >      // allow usage of foo
> > > > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "foo");
> > > > >
> > > > >      // also allow usage of bar
> > > > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "bar");
> > > > >
> > > > >      // reset allowed options
> > > > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "");
> > > > >
> > > > >      // allow usage of schmoo
> > > > >      fsconfig(fd_fs, FSCONFIG_SET_STRING, "abilities", "schmoo");
> > > > >
> > > > > This all seems more intuitive and integrates with user and mount
> > > > > namespaces of the container. This can also work for restricting
> > > > > non-userns bpf instances fwiw. You can also share instances via
> > > > > bind-mount and so on. The userns of the bpffs instance can also be used
> > > > > for permission checking provided a given functionality has been
> > > > > delegated by e.g., systemd-bpfd or whatever.
> > > >
> > > > I have no arguments against any of the above, and would prefer to see
> > > > something like this over a token-based mechanism.  However we do want
> > > > to make sure we have the proper LSM control points for either approach
> > > > so that admins who rely on LSM-based security policies can manage
> > > > delegation via their policies.
> > > >
> > > > Using the fsconfig() approach described by Christian above, I believe
> > > > we should have the necessary hooks already in
> > > > security_fs_context_parse_param() and security_sb_mnt_opts(), but I'm
> > > > basing that on a quick look this morning; some additional checking
> > > > would need to be done.
> > >
> > > I think what I outlined is even unnecessarily complicated. You don't
> > > need that pointless "delegate" mount option at all actually. Permission
> > > to delegate shouldn't be checked when the mount option is set. The
> > > permissions should be checked when the superblock is created. That's the
> > > right point in time. So something like:
> > >
> >
> > I think this gets even more straightforward with BPF Token FS being a
> > separate filesystem, right? Given BPF Token FS is all about delegation,
> > it has to be a privileged operation to even create an instance of it.
> >
> > > diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
> > > index 4174f76133df..a2eb382f5457 100644
> > > --- a/kernel/bpf/inode.c
> > > +++ b/kernel/bpf/inode.c
> > > @@ -746,6 +746,13 @@ static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
> > >         struct inode *inode;
> > >         int ret;
> > >
> > > +       /*
> > > +        * If you want to delegate this instance then you need to be
> > > +        * privileged and know what you're doing. This isn't trust.
> > > +        */
> > > +       if ((fc->user_ns != &init_user_ns) && !capable(CAP_SYS_ADMIN))
> > > +               return -EPERM;
> > > +
> > >         ret = simple_fill_super(sb, BPF_FS_MAGIC, bpf_rfiles);
> > >         if (ret)
> > >                 return ret;
> > > @@ -800,6 +807,7 @@ static struct file_system_type bpf_fs_type = {
> > >         .init_fs_context = bpf_init_fs_context,
> > >         .parameters     = bpf_fs_parameters,
> > >         .kill_sb        = kill_litter_super,
> > > +       .fs_flags       = FS_USERNS_MOUNT,
> >
> > Just an aside thought: it doesn't seem like there is any reason why
> > BPF FS is not created with FS_USERNS_MOUNT today, so (separately from
> > all this discussion) I suspect we can just mark it FS_USERNS_MOUNT
> > right now (unless we combine it with BPF-token-FS, then yeah, we can't
> > do that unconditionally anymore). Given BPF FS is just a container of
> > pinned BPF objects, merely mounting a BPF FS instance doesn't seem to
> > be dangerous in any way.
>
> My two cents: Don't ever expose anything under user namespaces unless it
> is guaranteed to be safe and has actual non-cosmetic use cases.

It doesn't seem cosmetic to be able to have a private BPF FS instance
created by an application inside the container to persist and/or share
BPF progs/maps between parts of that application. But I'm not going to
do this either, it was just a realization that we seem to be
unnecessarily restrictive with BPF FS (at least until it also becomes a
BPF token itself).
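
What I have in mind is roughly the following (just a sketch, assuming
bpffs did grow FS_USERNS_MOUNT and that the privilege needed for map
creation is solved separately, e.g. via a BPF token; libbpf helpers are
used for brevity and paths are made up):

#include <sys/mount.h>
#include <sys/stat.h>
#include <bpf/bpf.h>	/* libbpf: bpf_map_create(), bpf_obj_pin() */
#include <stdio.h>

int main(void)
{
	int map_fd;

	/* private bpffs instance, visible only inside this mount namespace */
	mkdir("/run/app/bpf", 0700);	/* ignoring EEXIST for brevity */
	if (mount("bpffs", "/run/app/bpf", "bpf", 0, NULL) < 0) {
		perror("mount bpffs");
		return 1;
	}

	map_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "app_state",
				sizeof(__u32), sizeof(__u64), 16, NULL);
	if (map_fd < 0) {
		perror("bpf_map_create");
		return 1;
	}

	/* persist the map and share it between parts of the application */
	if (bpf_obj_pin(map_fd, "/run/app/bpf/app_state") < 0) {
		perror("bpf_obj_pin");
		return 1;
	}

	return 0;
}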

>
> The eagerness with which features pop up in user namespaces is probably
> bankrolling half the infosec community.

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f58895830ada..c4f1684aa138 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -51,6 +51,7 @@  struct module;
 struct bpf_func_state;
 struct ftrace_ops;
 struct cgroup;
+struct bpf_token;
 
 extern struct idr btf_idr;
 extern spinlock_t btf_idr_lock;
@@ -1533,6 +1534,12 @@  struct bpf_link_primer {
 	u32 id;
 };
 
+struct bpf_token {
+	struct work_struct work;
+	atomic64_t refcnt;
+	u64 allowed_cmds;
+};
+
 struct bpf_struct_ops_value;
 struct btf_member;
 
@@ -1916,6 +1923,11 @@  bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
 	return ret;
 }
 
+static inline bool bpf_token_capable(const struct bpf_token *token, int cap)
+{
+	return token || capable(cap) || (cap != CAP_SYS_ADMIN && capable(CAP_SYS_ADMIN));
+}
+
 #ifdef CONFIG_BPF_SYSCALL
 DECLARE_PER_CPU(int, bpf_prog_active);
 extern struct mutex bpf_stats_enabled_mutex;
@@ -2077,8 +2089,25 @@  struct file *bpf_link_new_file(struct bpf_link *link, int *reserved_fd);
 struct bpf_link *bpf_link_get_from_fd(u32 ufd);
 struct bpf_link *bpf_link_get_curr_or_next(u32 *id);
 
+void bpf_token_inc(struct bpf_token *token);
+void bpf_token_put(struct bpf_token *token);
+int bpf_token_create(union bpf_attr *attr);
+int bpf_token_new_fd(struct bpf_token *token);
+struct bpf_token *bpf_token_get_from_fd(u32 ufd);
+
+bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd);
+
+enum bpf_type {
+	BPF_TYPE_UNSPEC	= 0,
+	BPF_TYPE_PROG,
+	BPF_TYPE_MAP,
+	BPF_TYPE_LINK,
+	BPF_TYPE_TOKEN,
+};
+
 int bpf_obj_pin_user(u32 ufd, int path_fd, const char __user *pathname);
 int bpf_obj_get_user(int path_fd, const char __user *pathname, int flags);
+int bpf_obj_pin_any(int path_fd, const char __user *pathname, void *raw, enum bpf_type type);
 
 #define BPF_ITER_FUNC_PREFIX "bpf_iter_"
 #define DEFINE_BPF_ITER_FUNC(target, args...)			\
@@ -2436,6 +2465,24 @@  static inline int bpf_obj_get_user(const char __user *pathname, int flags)
 	return -EOPNOTSUPP;
 }
 
+static inline void bpf_token_inc(struct bpf_token *token)
+{
+}
+
+static inline void bpf_token_put(struct bpf_token *token)
+{
+}
+
+static inline int bpf_token_new_fd(struct bpf_token *token)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline struct bpf_token *bpf_token_get_from_fd(u32 ufd)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
 static inline void __dev_flush(void)
 {
 }
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 60a9d59beeab..3ff91f52745d 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -846,6 +846,24 @@  union bpf_iter_link_info {
  *		Returns zero on success. On error, -1 is returned and *errno*
  *		is set appropriately.
  *
+ * BPF_TOKEN_CREATE
+ *	Description
+ *		Create BPF token with embedded information about what
+ *		BPF-related functionality it allows. This BPF token can be
+ *		passed as an extra parameter to various bpf() syscall commands
+ *		to grant BPF subsystem functionality to unprivileged processes.
+ *		BPF token is automatically pinned at specified location in BPF
+ *		FS. It can be retrieved (to get FD passed to bpf() syscall)
+ *		using BPF_OBJ_GET command. It's not allowed to re-pin BPF
+ *		token using BPF_OBJ_PIN command. Such restrictions ensure BPF
+ *		token stays associated with originally intended BPF FS
+ *		instance and cannot be intentionally or unintentionally pinned
+ *		somewhere else.
+ *
+ *	Return
+ *		Returns zero on success. On error, -1 is returned and *errno*
+ *		is set appropriately.
+ *
  * NOTES
  *	eBPF objects (maps and programs) can be shared between processes.
  *
@@ -900,6 +918,7 @@  enum bpf_cmd {
 	BPF_ITER_CREATE,
 	BPF_LINK_DETACH,
 	BPF_PROG_BIND_MAP,
+	BPF_TOKEN_CREATE,
 };
 
 enum bpf_map_type {
@@ -1622,6 +1641,25 @@  union bpf_attr {
 		__u32		flags;		/* extra flags */
 	} prog_bind_map;
 
+	struct { /* struct used by BPF_TOKEN_CREATE command */
+		/* optional, BPF token FD granting operation */
+		__u32		token_fd;
+		__u32		token_flags;
+		__u32		pin_flags;
+		/* pin_{path_fd,pathname} specify location in BPF FS instance
+		 * to pin BPF token at;
+		 * path_fd + pathname have the same semantics as openat() syscall
+		 */
+		__u32		pin_path_fd;
+		__u64		pin_pathname;
+		/* a bit set of allowed bpf() syscall commands,
+		 * e.g., (1ULL << BPF_TOKEN_CREATE) | (1ULL << BPF_PROG_LOAD)
+		 * will allow creating derived BPF tokens and loading new BPF
+		 * programs
+		 */
+		__u64		allowed_cmds;
+	} token_create;
+
 } __attribute__((aligned(8)));
 
 /* The description below is an attempt at providing documentation to eBPF
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 1d3892168d32..bbc17ea3878f 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -6,7 +6,7 @@  cflags-nogcse-$(CONFIG_X86)$(CONFIG_CC_IS_GCC) := -fno-gcse
 endif
 CFLAGS_core.o += $(call cc-disable-warning, override-init) $(cflags-nogcse-yy)
 
-obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o
+obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o token.o
 obj-$(CONFIG_BPF_SYSCALL) += bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o
 obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o bloom_filter.o
 obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o
diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index 4174f76133df..b9b93b81af9a 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -22,13 +22,6 @@ 
 #include <linux/bpf_trace.h>
 #include "preload/bpf_preload.h"
 
-enum bpf_type {
-	BPF_TYPE_UNSPEC	= 0,
-	BPF_TYPE_PROG,
-	BPF_TYPE_MAP,
-	BPF_TYPE_LINK,
-};
-
 static void *bpf_any_get(void *raw, enum bpf_type type)
 {
 	switch (type) {
@@ -41,6 +34,9 @@  static void *bpf_any_get(void *raw, enum bpf_type type)
 	case BPF_TYPE_LINK:
 		bpf_link_inc(raw);
 		break;
+	case BPF_TYPE_TOKEN:
+		bpf_token_inc(raw);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		break;
@@ -61,6 +57,9 @@  static void bpf_any_put(void *raw, enum bpf_type type)
 	case BPF_TYPE_LINK:
 		bpf_link_put(raw);
 		break;
+	case BPF_TYPE_TOKEN:
+		bpf_token_put(raw);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		break;
@@ -89,6 +88,12 @@  static void *bpf_fd_probe_obj(u32 ufd, enum bpf_type *type)
 		return raw;
 	}
 
+	raw = bpf_token_get_from_fd(ufd);
+	if (!IS_ERR(raw)) {
+		*type = BPF_TYPE_TOKEN;
+		return raw;
+	}
+
 	return ERR_PTR(-EINVAL);
 }
 
@@ -97,6 +102,7 @@  static const struct inode_operations bpf_dir_iops;
 static const struct inode_operations bpf_prog_iops = { };
 static const struct inode_operations bpf_map_iops  = { };
 static const struct inode_operations bpf_link_iops  = { };
+static const struct inode_operations bpf_token_iops  = { };
 
 static struct inode *bpf_get_inode(struct super_block *sb,
 				   const struct inode *dir,
@@ -136,6 +142,8 @@  static int bpf_inode_type(const struct inode *inode, enum bpf_type *type)
 		*type = BPF_TYPE_MAP;
 	else if (inode->i_op == &bpf_link_iops)
 		*type = BPF_TYPE_LINK;
+	else if (inode->i_op == &bpf_token_iops)
+		*type = BPF_TYPE_TOKEN;
 	else
 		return -EACCES;
 
@@ -369,6 +377,11 @@  static int bpf_mklink(struct dentry *dentry, umode_t mode, void *arg)
 			     &bpf_iter_fops : &bpffs_obj_fops);
 }
 
+static int bpf_mktoken(struct dentry *dentry, umode_t mode, void *arg)
+{
+	return bpf_mkobj_ops(dentry, mode, arg, &bpf_token_iops, &bpffs_obj_fops);
+}
+
 static struct dentry *
 bpf_lookup(struct inode *dir, struct dentry *dentry, unsigned flags)
 {
@@ -435,8 +448,8 @@  static int bpf_iter_link_pin_kernel(struct dentry *parent,
 	return ret;
 }
 
-static int bpf_obj_do_pin(int path_fd, const char __user *pathname, void *raw,
-			  enum bpf_type type)
+int bpf_obj_pin_any(int path_fd, const char __user *pathname, void *raw,
+		    enum bpf_type type)
 {
 	struct dentry *dentry;
 	struct inode *dir;
@@ -469,6 +482,9 @@  static int bpf_obj_do_pin(int path_fd, const char __user *pathname, void *raw,
 	case BPF_TYPE_LINK:
 		ret = vfs_mkobj(dentry, mode, bpf_mklink, raw);
 		break;
+	case BPF_TYPE_TOKEN:
+		ret = vfs_mkobj(dentry, mode, bpf_mktoken, raw);
+		break;
 	default:
 		ret = -EPERM;
 	}
@@ -487,7 +503,15 @@  int bpf_obj_pin_user(u32 ufd, int path_fd, const char __user *pathname)
 	if (IS_ERR(raw))
 		return PTR_ERR(raw);
 
-	ret = bpf_obj_do_pin(path_fd, pathname, raw, type);
+	/* disallow BPF_OBJ_PIN command for BPF token; BPF token can only be
+	 * auto-pinned during creation with BPF_TOKEN_CREATE
+	 */
+	if (type == BPF_TYPE_TOKEN) {
+		bpf_any_put(raw, type);
+		return -EOPNOTSUPP;
+	}
+
+	ret = bpf_obj_pin_any(path_fd, pathname, raw, type);
 	if (ret != 0)
 		bpf_any_put(raw, type);
 
@@ -547,6 +571,8 @@  int bpf_obj_get_user(int path_fd, const char __user *pathname, int flags)
 		ret = bpf_map_new_fd(raw, f_flags);
 	else if (type == BPF_TYPE_LINK)
 		ret = (f_flags != O_RDWR) ? -EINVAL : bpf_link_new_fd(raw);
+	else if (type == BPF_TYPE_TOKEN)
+		ret = (f_flags != O_RDWR) ? -EINVAL : bpf_token_new_fd(raw);
 	else
 		return -ENOENT;
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index a2aef900519c..745b605fad8e 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -5095,6 +5095,20 @@  static int bpf_prog_bind_map(union bpf_attr *attr)
 	return ret;
 }
 
+#define BPF_TOKEN_CREATE_LAST_FIELD token_create.allowed_cmds
+
+static int token_create(union bpf_attr *attr)
+{
+	if (CHECK_ATTR(BPF_TOKEN_CREATE))
+		return -EINVAL;
+
+	/* no flags are supported yet */
+	if (attr->token_create.token_flags || attr->token_create.pin_flags)
+		return -EINVAL;
+
+	return bpf_token_create(attr);
+}
+
 static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size)
 {
 	union bpf_attr attr;
@@ -5228,6 +5242,9 @@  static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size)
 	case BPF_PROG_BIND_MAP:
 		err = bpf_prog_bind_map(&attr);
 		break;
+	case BPF_TOKEN_CREATE:
+		err = token_create(&attr);
+		break;
 	default:
 		err = -EINVAL;
 		break;
diff --git a/kernel/bpf/token.c b/kernel/bpf/token.c
new file mode 100644
index 000000000000..1ece52439701
--- /dev/null
+++ b/kernel/bpf/token.c
@@ -0,0 +1,167 @@ 
+#include <linux/bpf.h>
+#include <linux/vmalloc.h>
+#include <linux/anon_inodes.h>
+#include <linux/fdtable.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/idr.h>
+#include <linux/namei.h>
+
+DEFINE_IDR(token_idr);
+DEFINE_SPINLOCK(token_idr_lock);
+
+void bpf_token_inc(struct bpf_token *token)
+{
+	atomic64_inc(&token->refcnt);
+}
+
+static void bpf_token_put_deferred(struct work_struct *work)
+{
+	struct bpf_token *token = container_of(work, struct bpf_token, work);
+
+	kvfree(token);
+}
+
+void bpf_token_put(struct bpf_token *token)
+{
+	if (!token)
+		return;
+
+	if (!atomic64_dec_and_test(&token->refcnt))
+		return;
+
+	INIT_WORK(&token->work, bpf_token_put_deferred);
+	schedule_work(&token->work);
+}
+
+static int bpf_token_release(struct inode *inode, struct file *filp)
+{
+	struct bpf_token *token = filp->private_data;
+
+	bpf_token_put(token);
+	return 0;
+}
+
+static ssize_t bpf_dummy_read(struct file *filp, char __user *buf, size_t siz,
+			      loff_t *ppos)
+{
+	/* We need this handler such that alloc_file() enables
+	 * f_mode with FMODE_CAN_READ.
+	 */
+	return -EINVAL;
+}
+
+static ssize_t bpf_dummy_write(struct file *filp, const char __user *buf,
+			       size_t siz, loff_t *ppos)
+{
+	/* We need this handler such that alloc_file() enables
+	 * f_mode with FMODE_CAN_WRITE.
+	 */
+	return -EINVAL;
+}
+
+static const struct file_operations bpf_token_fops = {
+	.release	= bpf_token_release,
+	.read		= bpf_dummy_read,
+	.write		= bpf_dummy_write,
+};
+
+static struct bpf_token *bpf_token_alloc(void)
+{
+	struct bpf_token *token;
+
+	token = kvzalloc(sizeof(*token), GFP_USER);
+	if (!token)
+		return NULL;
+
+	atomic64_set(&token->refcnt, 1);
+
+	return token;
+}
+
+static bool is_bit_subset_of(u32 subset, u32 superset)
+{
+	return (superset & subset) == subset;
+}
+
+int bpf_token_create(union bpf_attr *attr)
+{
+	struct bpf_token *new_token, *token = NULL;
+	int ret;
+
+	if (attr->token_create.token_fd) {
+		token = bpf_token_get_from_fd(attr->token_create.token_fd);
+		if (IS_ERR(token))
+			return PTR_ERR(token);
+		/* if provided BPF token doesn't allow creating new tokens,
+		 * then use system-wide capability checks only
+		 */
+		if (!bpf_token_allow_cmd(token, BPF_TOKEN_CREATE)) {
+			bpf_token_put(token);
+			token = NULL;
+		}
+	}
+
+	ret = -EPERM;
+	if (!bpf_token_capable(token, CAP_SYS_ADMIN))
+		goto out;
+
+	/* requested cmds should be a subset of associated token's set */
+	if (token && !is_bit_subset_of(attr->token_create.allowed_cmds, token->allowed_cmds))
+		goto out;
+
+	new_token = bpf_token_alloc();
+	if (!new_token) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	new_token->allowed_cmds = attr->token_create.allowed_cmds;
+
+	ret = bpf_obj_pin_any(attr->token_create.pin_path_fd,
+			      u64_to_user_ptr(attr->token_create.pin_pathname),
+			      new_token, BPF_TYPE_TOKEN);
+	if (ret < 0)
+		bpf_token_put(new_token);
+out:
+	bpf_token_put(token);
+	return ret;
+}
+
+#define BPF_TOKEN_INODE_NAME "bpf-token"
+
+/* Alloc anon_inode and FD for prepared token.
+ * Returns fd >= 0 on success; negative error, otherwise.
+ */
+int bpf_token_new_fd(struct bpf_token *token)
+{
+	return anon_inode_getfd(BPF_TOKEN_INODE_NAME, &bpf_token_fops, token, O_CLOEXEC);
+}
+
+struct bpf_token *bpf_token_get_from_fd(u32 ufd)
+{
+	struct fd f = fdget(ufd);
+	struct bpf_token *token;
+
+	if (!f.file)
+		return ERR_PTR(-EBADF);
+	if (f.file->f_op != &bpf_token_fops) {
+		fdput(f);
+		return ERR_PTR(-EINVAL);
+	}
+
+	token = f.file->private_data;
+	bpf_token_inc(token);
+	fdput(f);
+
+	return token;
+}
+
+bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd)
+{
+	if (!token)
+		return false;
+
+	return token->allowed_cmds & (1ULL << cmd);
+}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 60a9d59beeab..3ff91f52745d 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -846,6 +846,24 @@  union bpf_iter_link_info {
  *		Returns zero on success. On error, -1 is returned and *errno*
  *		is set appropriately.
  *
+ * BPF_TOKEN_CREATE
+ *	Description
+ *		Create BPF token with embedded information about what
+ *		BPF-related functionality it allows. This BPF token can be
+ *		passed as an extra parameter to various bpf() syscall commands
+ *		to grant BPF subsystem functionality to unprivileged processes.
+ *		BPF token is automatically pinned at specified location in BPF
+ *		FS. It can be retrieved (to get FD passed to bpf() syscall)
+ *		using BPF_OBJ_GET command. It's not allowed to re-pin BPF
+ *		token using BPF_OBJ_PIN command. Such restrictions ensure BPF
+ *		token stays associated with originally intended BPF FS
+ *		instance and cannot be intentionally or unintentionally pinned
+ *		somewhere else.
+ *
+ *	Return
+ *		Returns zero on success. On error, -1 is returned and *errno*
+ *		is set appropriately.
+ *
  * NOTES
  *	eBPF objects (maps and programs) can be shared between processes.
  *
@@ -900,6 +918,7 @@  enum bpf_cmd {
 	BPF_ITER_CREATE,
 	BPF_LINK_DETACH,
 	BPF_PROG_BIND_MAP,
+	BPF_TOKEN_CREATE,
 };
 
 enum bpf_map_type {
@@ -1622,6 +1641,25 @@  union bpf_attr {
 		__u32		flags;		/* extra flags */
 	} prog_bind_map;
 
+	struct { /* struct used by BPF_TOKEN_CREATE command */
+		/* optional, BPF token FD granting operation */
+		__u32		token_fd;
+		__u32		token_flags;
+		__u32		pin_flags;
+		/* pin_{path_fd,pathname} specify location in BPF FS instance
+		 * to pin BPF token at;
+		 * path_fd + pathname have the same semantics as openat() syscall
+		 */
+		__u32		pin_path_fd;
+		__u64		pin_pathname;
+		/* a bit set of allowed bpf() syscall commands,
+		 * e.g., (1ULL << BPF_TOKEN_CREATE) | (1ULL << BPF_PROG_LOAD)
+		 * will allow creating derived BPF tokens and loading new BPF
+		 * programs
+		 */
+		__u64		allowed_cmds;
+	} token_create;
+
 } __attribute__((aligned(8)));
 
 /* The description below is an attempt at providing documentation to eBPF
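
A minimal userspace sketch of exercising the new command, going purely by
the uapi description above (it assumes kernel headers with this patch
applied; the pin path and the allowed_cmds set are made up):

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>
#include <string.h>
#include <stdio.h>

static int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr, unsigned int size)
{
	return syscall(__NR_bpf, cmd, attr, size);
}

int main(void)
{
	union bpf_attr attr;
	int err, token_fd;

	/* privileged side: create a token that allows derived tokens and
	 * BPF_PROG_LOAD, auto-pinned into an existing bpffs instance;
	 * pin_path_fd is left at 0 since pin_pathname is absolute
	 * (openat()-style semantics per the uapi comment)
	 */
	memset(&attr, 0, sizeof(attr));
	attr.token_create.pin_pathname =
		(__u64)(unsigned long)"/sys/fs/bpf/delegated-token";
	attr.token_create.allowed_cmds =
		(1ULL << BPF_TOKEN_CREATE) | (1ULL << BPF_PROG_LOAD);

	err = sys_bpf(BPF_TOKEN_CREATE, &attr, sizeof(attr));
	if (err) {
		perror("BPF_TOKEN_CREATE");
		return 1;
	}

	/* trusted unprivileged side: re-open the pinned token to get an FD;
	 * file_flags of 0 means O_RDWR, which is what the token requires
	 */
	memset(&attr, 0, sizeof(attr));
	attr.pathname = (__u64)(unsigned long)"/sys/fs/bpf/delegated-token";

	token_fd = sys_bpf(BPF_OBJ_GET, &attr, sizeof(attr));
	if (token_fd < 0) {
		perror("BPF_OBJ_GET");
		return 1;
	}

	/* token_fd would then be passed to later bpf() commands once they
	 * grow a token_fd attribute (later patches in the series)
	 */
	close(token_fd);
	return 0;
}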