diff mbox series

[bpf-next,v2,06/15] bpf: Allow storing user kptr in map

Message ID 20220317115957.3193097-7-memxor@gmail.com (mailing list archive)
State Superseded
Delegated to: BPF
Headers show
Series Introduce typed pointer support in BPF maps | expand

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success
netdev/cover_letter success Series has a cover letter
netdev/patch_count success
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1452 this patch: 1452
netdev/cc_maintainers warning 6 maintainers not CCed: netdev@vger.kernel.org songliubraving@fb.com yhs@fb.com john.fastabend@gmail.com kafai@fb.com kpsingh@kernel.org
netdev/build_clang success Errors and warnings before: 172 this patch: 172
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1469 this patch: 1469
netdev/checkpatch warning WARNING: line length of 100 exceeds 80 columns WARNING: line length of 90 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next success VM_Test
bpf/vmtest-bpf-next-PR success PR summary

Commit Message

Kumar Kartikeya Dwivedi March 17, 2022, 11:59 a.m. UTC
Recently, the verifier gained __user annotation support [0], which
prevents a BPF program from directly dereferencing a user memory pointer
in the kernel and instead requires use of bpf_probe_read_user. We can
also allow the user to store these pointers in BPF maps, with the logic
that whenever one is loaded from a BPF map, it is marked as MEM_USER.
The tag 'kptr_user' is used to mark such pointers.

  [0]: https://lore.kernel.org/bpf/20220127154555.650886-1-yhs@fb.com

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf.h   |  1 +
 kernel/bpf/btf.c      | 13 ++++++++++---
 kernel/bpf/verifier.c | 15 ++++++++++++---
 3 files changed, 23 insertions(+), 6 deletions(-)

Comments

Alexei Starovoitov March 19, 2022, 6:28 p.m. UTC | #1
On Thu, Mar 17, 2022 at 05:29:48PM +0530, Kumar Kartikeya Dwivedi wrote:
> Recently, the verifier gained __user annotation support [0], which
> prevents a BPF program from directly dereferencing a user memory pointer
> in the kernel and instead requires use of bpf_probe_read_user. We can
> also allow the user to store these pointers in BPF maps, with the logic
> that whenever one is loaded from a BPF map, it is marked as MEM_USER.
> The tag 'kptr_user' is used to mark such pointers.
> 
>   [0]: https://lore.kernel.org/bpf/20220127154555.650886-1-yhs@fb.com
> 
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  include/linux/bpf.h   |  1 +
>  kernel/bpf/btf.c      | 13 ++++++++++---
>  kernel/bpf/verifier.c | 15 ++++++++++++---
>  3 files changed, 23 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 433f5cb161cf..989f47334215 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -163,6 +163,7 @@ enum {
>  enum {
>  	BPF_MAP_VALUE_OFF_F_REF    = (1U << 0),
>  	BPF_MAP_VALUE_OFF_F_PERCPU = (1U << 1),
> +	BPF_MAP_VALUE_OFF_F_USER   = (1U << 2),
...
> +		} else if (!strcmp("kptr_user", __btf_name_by_offset(btf, t->name_off))) {

I don't see a use case where a __user pointer would need to be stored into a map.
That pointer is valid only in the user task context.
When a bpf prog has such a pointer it can read user mem through it,
but storing it for later makes little sense. The user context will certainly change.
Reading it later from the map is more or less reading a random number.
Let's drop this patch until a real use case arises.
Kumar Kartikeya Dwivedi March 19, 2022, 7:02 p.m. UTC | #2
On Sat, Mar 19, 2022 at 11:58:13PM IST, Alexei Starovoitov wrote:
> On Thu, Mar 17, 2022 at 05:29:48PM +0530, Kumar Kartikeya Dwivedi wrote:
> > Recently, the verifier gained __user annotation support [0], which
> > prevents a BPF program from directly dereferencing a user memory pointer
> > in the kernel and instead requires use of bpf_probe_read_user. We can
> > also allow the user to store these pointers in BPF maps, with the logic
> > that whenever one is loaded from a BPF map, it is marked as MEM_USER.
> > The tag 'kptr_user' is used to mark such pointers.
> >
> >   [0]: https://lore.kernel.org/bpf/20220127154555.650886-1-yhs@fb.com
> >
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  include/linux/bpf.h   |  1 +
> >  kernel/bpf/btf.c      | 13 ++++++++++---
> >  kernel/bpf/verifier.c | 15 ++++++++++++---
> >  3 files changed, 23 insertions(+), 6 deletions(-)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 433f5cb161cf..989f47334215 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -163,6 +163,7 @@ enum {
> >  enum {
> >  	BPF_MAP_VALUE_OFF_F_REF    = (1U << 0),
> >  	BPF_MAP_VALUE_OFF_F_PERCPU = (1U << 1),
> > +	BPF_MAP_VALUE_OFF_F_USER   = (1U << 2),
> ...
> > +		} else if (!strcmp("kptr_user", __btf_name_by_offset(btf, t->name_off))) {
>
> I don't see a use case where a __user pointer would need to be stored into a map.
> That pointer is valid only in the user task context.
> When a bpf prog has such a pointer it can read user mem through it,
> but storing it for later makes little sense. The user context will certainly change.
> Reading it later from the map is more or less reading a random number.
> Let's drop this patch until a real use case arises.

In some cases the address may be fixed (e.g. a user area registration
similar to rseq, agreed between the task and the BPF program) for as long
as the task is alive.

But the patch itself is trivial, so I'm fine with dropping it for now.

--
Kartikeya

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 433f5cb161cf..989f47334215 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -163,6 +163,7 @@  enum {
 enum {
 	BPF_MAP_VALUE_OFF_F_REF    = (1U << 0),
 	BPF_MAP_VALUE_OFF_F_PERCPU = (1U << 1),
+	BPF_MAP_VALUE_OFF_F_USER   = (1U << 2),
 };
 
 struct bpf_map_value_off_desc {
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 04d604931f59..12a89e55e77b 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3197,7 +3197,7 @@  static int btf_find_field_kptr(const struct btf *btf, const struct btf_type *t,
 			       u32 off, int sz, struct btf_field_info *info,
 			       int info_cnt, int idx)
 {
-	bool kptr_tag = false, kptr_ref_tag = false, kptr_percpu_tag = false;
+	bool kptr_tag = false, kptr_ref_tag = false, kptr_percpu_tag = false, kptr_user_tag = false;
 	int tags;
 
 	/* For PTR, sz is always == 8 */
@@ -3221,12 +3221,17 @@  static int btf_find_field_kptr(const struct btf *btf, const struct btf_type *t,
 			if (kptr_percpu_tag)
 				return -EEXIST;
 			kptr_percpu_tag = true;
+		} else if (!strcmp("kptr_user", __btf_name_by_offset(btf, t->name_off))) {
+			/* repeated tag */
+			if (kptr_user_tag)
+				return -EEXIST;
+			kptr_user_tag = true;
 		}
 		/* Look for next tag */
 		t = btf_type_by_id(btf, t->type);
 	}
 
-	tags = kptr_tag + kptr_ref_tag + kptr_percpu_tag;
+	tags = kptr_tag + kptr_ref_tag + kptr_percpu_tag + kptr_user_tag;
 	if (!tags)
 		return BTF_FIELD_IGNORE;
 	else if (tags > 1)
@@ -3241,7 +3246,9 @@  static int btf_find_field_kptr(const struct btf *btf, const struct btf_type *t,
 
 	if (idx >= info_cnt)
 		return -E2BIG;
-	if (kptr_percpu_tag)
+	if (kptr_user_tag)
+		info[idx].flags = BPF_MAP_VALUE_OFF_F_USER;
+	else if (kptr_percpu_tag)
 		info[idx].flags = BPF_MAP_VALUE_OFF_F_PERCPU;
 	else if (kptr_ref_tag)
 		info[idx].flags = BPF_MAP_VALUE_OFF_F_REF;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cc8f7250e43e..5325cc37797a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3521,7 +3521,11 @@  static int map_kptr_match_type(struct bpf_verifier_env *env,
 	const char *reg_name = "";
 	bool fixed_off_ok = true;
 
-	if (off_desc->flags & BPF_MAP_VALUE_OFF_F_PERCPU) {
+	if (off_desc->flags & BPF_MAP_VALUE_OFF_F_USER) {
+		if (reg->type != (PTR_TO_BTF_ID | MEM_USER) &&
+		    reg->type != (PTR_TO_BTF_ID | PTR_MAYBE_NULL | MEM_USER))
+			goto bad_type;
+	} else if (off_desc->flags & BPF_MAP_VALUE_OFF_F_PERCPU) {
 		if (reg->type != (PTR_TO_BTF_ID | MEM_PERCPU) &&
 		    reg->type != (PTR_TO_BTF_ID | PTR_MAYBE_NULL | MEM_PERCPU))
 			goto bad_type;
@@ -3565,7 +3569,9 @@  static int map_kptr_match_type(struct bpf_verifier_env *env,
 		goto bad_type;
 	return 0;
 bad_type:
-	if (off_desc->flags & BPF_MAP_VALUE_OFF_F_PERCPU)
+	if (off_desc->flags & BPF_MAP_VALUE_OFF_F_USER)
+		reg_type = PTR_TO_BTF_ID | PTR_MAYBE_NULL | MEM_USER;
+	else if (off_desc->flags & BPF_MAP_VALUE_OFF_F_PERCPU)
 		reg_type = PTR_TO_BTF_ID | PTR_MAYBE_NULL | MEM_PERCPU;
 	else
 		reg_type = PTR_TO_BTF_ID | PTR_MAYBE_NULL;
@@ -3583,9 +3589,9 @@  static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
 				 enum bpf_access_type t, int insn_idx)
 {
 	struct bpf_reg_state *reg = reg_state(env, regno), *val_reg;
+	bool ref_ptr = false, percpu_ptr = false, user_ptr = false;
 	struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
 	enum bpf_type_flag reg_flags = PTR_MAYBE_NULL;
-	bool ref_ptr = false, percpu_ptr = false;
 	struct bpf_map_value_off_desc *off_desc;
 	int insn_class = BPF_CLASS(insn->code);
 	struct bpf_map *map = reg->map_ptr;
@@ -3615,8 +3621,11 @@  static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
 
 	ref_ptr = off_desc->flags & BPF_MAP_VALUE_OFF_F_REF;
 	percpu_ptr = off_desc->flags & BPF_MAP_VALUE_OFF_F_PERCPU;
+	user_ptr = off_desc->flags & BPF_MAP_VALUE_OFF_F_USER;
 	if (percpu_ptr)
 		reg_flags |= MEM_PERCPU;
+	else if (user_ptr)
+		reg_flags |= MEM_USER;
 
 	if (insn_class == BPF_LDX) {
 		if (WARN_ON_ONCE(value_regno < 0))