diff mbox series

[bpf-next,v2,2/5] bpf: reject program if a __user tagged memory accessed in kernel way

Message ID 20220112201500.1623985-1-yhs@fb.com (mailing list archive)
State Changes Requested
Delegated to: BPF
Headers show
Series bpf: add __user tagging support in vmlinux BTF | expand

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1443 this patch: 1443
netdev/cc_maintainers warning 9 maintainers not CCed: kuba@kernel.org kpsingh@kernel.org john.fastabend@gmail.com kafai@fb.com songliubraving@fb.com dsahern@kernel.org yoshfuji@linux-ipv6.org netdev@vger.kernel.org davem@davemloft.net
netdev/build_clang success Errors and warnings before: 190 this patch: 190
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1457 this patch: 1457
netdev/checkpatch warning WARNING: line length of 83 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns WARNING: line length of 85 exceeds 80 columns WARNING: line length of 88 exceeds 80 columns WARNING: line length of 93 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next success VM_Test
bpf/vmtest-bpf-next-PR success PR summary

Commit Message

Yonghong Song Jan. 12, 2022, 8:15 p.m. UTC
BPF verifier supports direct memory access for BPF_PROG_TYPE_TRACING type
of bpf programs, e.g., a->b. If "a" is a pointer
pointing to kernel memory, the verifier will allow the user to write
code in C like a->b and will translate it to a kernel
load properly. If "a" is a pointer to user memory, the bpf developer
is expected to use the bpf_probe_read_user() helper to
get the value a->b. Without utilizing BTF __user tagging information,
the current verifier will assume that a->b is a kernel memory access
and this may generate an incorrect result.

Now that BTF contains __user information, the verifier can check whether
the pointer points to user memory or not. If it does, the verifier
can reject the program and force users to use the bpf_probe_read_user()
helper explicitly.

In the future, we can easily extend btf_add_space for other
address space tagging, for example, rcu/percpu etc.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 include/linux/bpf.h            | 11 ++++++++---
 include/linux/btf.h            |  5 +++++
 kernel/bpf/btf.c               | 34 ++++++++++++++++++++++++++++------
 kernel/bpf/verifier.c          | 30 ++++++++++++++++++++++--------
 net/bpf/bpf_dummy_struct_ops.c |  6 ++++--
 net/ipv4/bpf_tcp_ca.c          |  6 ++++--
 6 files changed, 71 insertions(+), 21 deletions(-)

Comments

Alexei Starovoitov Jan. 19, 2022, 5:47 p.m. UTC | #1
On Wed, Jan 12, 2022 at 12:16 PM Yonghong Song <yhs@fb.com> wrote:
> +
> +                       /* check __user tag */
> +                       t = btf_type_by_id(btf, mtype->type);
> +                       if (btf_type_is_type_tag(t)) {
> +                               tag_value = __btf_name_by_offset(btf, t->name_off);
> +                               if (strcmp(tag_value, "user") == 0)
> +                                       tmp_flag = MEM_USER;
> +                       }
> +
>                         stype = btf_type_skip_modifiers(btf, mtype->type, &id);

Does LLVM guarantee that btf_tag will be the first in the modifiers?
Looking at the selftest:
+struct bpf_testmod_btf_type_tag_2 {
+       struct bpf_testmod_btf_type_tag_1 __user *p;
+};

What if there are 'const' or 'volatile' modifiers on that pointer too?
And in different order with btf_tag?
BTF gets normalized or not?
I wonder whether we should introduce something like
btf_type_collect_modifiers() instead of btf_type_skip_modifiers() ?
Yonghong Song Jan. 20, 2022, 4:10 a.m. UTC | #2
On 1/19/22 9:47 AM, Alexei Starovoitov wrote:
> On Wed, Jan 12, 2022 at 12:16 PM Yonghong Song <yhs@fb.com> wrote:
>> +
>> +                       /* check __user tag */
>> +                       t = btf_type_by_id(btf, mtype->type);
>> +                       if (btf_type_is_type_tag(t)) {
>> +                               tag_value = __btf_name_by_offset(btf, t->name_off);
>> +                               if (strcmp(tag_value, "user") == 0)
>> +                                       tmp_flag = MEM_USER;
>> +                       }
>> +
>>                          stype = btf_type_skip_modifiers(btf, mtype->type, &id);
> 
> Does LLVM guarantee that btf_tag will be the first in the modifiers?
> Looking at the selftest:
> +struct bpf_testmod_btf_type_tag_2 {
> +       struct bpf_testmod_btf_type_tag_1 __user *p;
> +};
> 
> What if there are 'const' or 'volatile' modifiers on that pointer too?
> And in different order with btf_tag?
> BTF gets normalized or not?
> I wonder whether we should introduce something like
> btf_type_collect_modifiers() instead of btf_type_skip_modifiers() ?

Yes, LLVM guarantees that btf_tag will be the first in the modifiers.
The type chain format looks like below:
   ptr -> [btf_type_tag ->]* (zero or more btf_type_tag's)
       -> [other modifiers: const and/or volatile and/or restrict]
       -> base_type

I only handled zero/one btf_type_tag case as we don't have use case
in kernel with two btf_type_tags for one pointer yet.
Alexei Starovoitov Jan. 20, 2022, 4:27 a.m. UTC | #3
On Wed, Jan 19, 2022 at 08:10:27PM -0800, Yonghong Song wrote:
> 
> 
> On 1/19/22 9:47 AM, Alexei Starovoitov wrote:
> > On Wed, Jan 12, 2022 at 12:16 PM Yonghong Song <yhs@fb.com> wrote:
> > > +
> > > +                       /* check __user tag */
> > > +                       t = btf_type_by_id(btf, mtype->type);
> > > +                       if (btf_type_is_type_tag(t)) {
> > > +                               tag_value = __btf_name_by_offset(btf, t->name_off);
> > > +                               if (strcmp(tag_value, "user") == 0)
> > > +                                       tmp_flag = MEM_USER;
> > > +                       }
> > > +
> > >                          stype = btf_type_skip_modifiers(btf, mtype->type, &id);
> > 
> > Does LLVM guarantee that btf_tag will be the first in the modifiers?
> > Looking at the selftest:
> > +struct bpf_testmod_btf_type_tag_2 {
> > +       struct bpf_testmod_btf_type_tag_1 __user *p;
> > +};
> > 
> > What if there are 'const' or 'volatile' modifiers on that pointer too?
> > And in different order with btf_tag?
> > BTF gets normalized or not?
> > I wonder whether we should introduce something like
> > btf_type_collect_modifiers() instead of btf_type_skip_modifiers() ?
> 
> Yes, LLVM guarantees that btf_tag will be the first in the modifiers.
> The type chain format looks like below:
>   ptr -> [btf_type_tag ->]* (zero or more btf_type_tag's)
>       -> [other modifiers: const and/or volatile and/or restrict]
>       -> base_type
> 
> I only handled zero/one btf_type_tag case as we don't have use case
> in kernel with two btf_type_tags for one pointer yet.

Makes sense. Would be good to document this LLVM behavior somewhere.
When GCC adds support for btf_tag it would need to do the same.
Or is it more of a pahole guarantee when it converts LLVM dwarf tags to BTF?

Separately... looking at:
FLAG_DONTCARE           = 0
It's not quite right.
bpf_types already have an enum value at zero:
enum bpf_reg_type {
        NOT_INIT = 0,            /* nothing was written into register */
and other bpf_*_types too.
So empty flag should really mean zeros in bits after BPF_BASE_TYPE_BITS.
But there is no good way to express it as enum.
So maybe use 0 directly when you init:
enum bpf_type_flag tmp_flag = 0;
?

Another bit.. this patch will conflict with
commit a672b2e36a64 ("bpf: Fix ringbuf memory type confusion when passing to helpers")
so please resubmit when that patch appears in bpf-next.
Thanks!
Yonghong Song Jan. 20, 2022, 6:51 a.m. UTC | #4
On 1/19/22 8:27 PM, Alexei Starovoitov wrote:
> On Wed, Jan 19, 2022 at 08:10:27PM -0800, Yonghong Song wrote:
>>
>>
>> On 1/19/22 9:47 AM, Alexei Starovoitov wrote:
>>> On Wed, Jan 12, 2022 at 12:16 PM Yonghong Song <yhs@fb.com> wrote:
>>>> +
>>>> +                       /* check __user tag */
>>>> +                       t = btf_type_by_id(btf, mtype->type);
>>>> +                       if (btf_type_is_type_tag(t)) {
>>>> +                               tag_value = __btf_name_by_offset(btf, t->name_off);
>>>> +                               if (strcmp(tag_value, "user") == 0)
>>>> +                                       tmp_flag = MEM_USER;
>>>> +                       }
>>>> +
>>>>                           stype = btf_type_skip_modifiers(btf, mtype->type, &id);
>>>
>>> Does LLVM guarantee that btf_tag will be the first in the modifiers?
>>> Looking at the selftest:
>>> +struct bpf_testmod_btf_type_tag_2 {
>>> +       struct bpf_testmod_btf_type_tag_1 __user *p;
>>> +};
>>>
>>> What if there are 'const' or 'volatile' modifiers on that pointer too?
>>> And in different order with btf_tag?
>>> BTF gets normalized or not?
>>> I wonder whether we should introduce something like
>>> btf_type_collect_modifiers() instead of btf_type_skip_modifiers() ?
>>
>> Yes, LLVM guarantees that btf_tag will be the first in the modifiers.
>> The type chain format looks like below:
>>    ptr -> [btf_type_tag ->]* (zero or more btf_type_tag's)
>>        -> [other modifiers: const and/or volatile and/or restrict]
>>        -> base_type
>>
>> I only handled zero/one btf_type_tag case as we don't have use case
>> in kernel with two btf_type_tags for one pointer yet.
> 
> Makes sense. Would be good to document this LLVM behavior somewhere.
> When GCC adds support for btf_tag it would need to do the same.
> Or is it more of a pahole guarantee when it converts LLVM dwarf tags to BTF?

Yes, this property is guaranteed by both llvm (for bpf target) and
pahole (for non bpf target). I will document this behavior in
btf.rst.


> 
> Separately... looking at:
> FLAG_DONTCARE           = 0
> It's not quite right.
> bpf_types already have an enum value at zero:
> enum bpf_reg_type {
>          NOT_INIT = 0,            /* nothing was written into register */
> and other bpf_*_types too.
> So empty flag should really mean zeros in bits after BPF_BASE_TYPE_BITS.
> But there is no good way to express it as enum.
> So maybe use 0 directly when you init:
> enum bpf_type_flag tmp_flag = 0;
> ?

I thought about this before and that is why I added FLAG_DONTCARE
to match the value to the type. But I agree that is not elegant; I will
use 0 as you suggested.

> 
> Another bit.. this patch will conflict with
> commit a672b2e36a64 ("bpf: Fix ringbuf memory type confusion when passing to helpers")
> so please resubmit when that patch appears in bpf-next.

Thanks for heads-up. Will fix the above two issues and
resubmit once commit a672b2e36a64 arrives in bpf-next.

> Thanks!

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 6e947cd91152..c97ec30f2f12 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -308,6 +308,8 @@  extern const struct bpf_map_ops bpf_map_offload_ops;
 #define BPF_BASE_TYPE_BITS	8
 
 enum bpf_type_flag {
+	FLAG_DONTCARE		= 0,
+
 	/* PTR may be NULL. */
 	PTR_MAYBE_NULL		= BIT(0 + BPF_BASE_TYPE_BITS),
 
@@ -316,7 +318,10 @@  enum bpf_type_flag {
 	 */
 	MEM_RDONLY		= BIT(1 + BPF_BASE_TYPE_BITS),
 
-	__BPF_TYPE_LAST_FLAG	= MEM_RDONLY,
+	/* MEM is in user address space. */
+	MEM_USER		= BIT(2 + BPF_BASE_TYPE_BITS),
+
+	__BPF_TYPE_LAST_FLAG	= MEM_USER,
 };
 
 /* Max number of base types. */
@@ -572,7 +577,7 @@  struct bpf_verifier_ops {
 				 const struct btf *btf,
 				 const struct btf_type *t, int off, int size,
 				 enum bpf_access_type atype,
-				 u32 *next_btf_id);
+				 u32 *next_btf_id, enum bpf_type_flag *flag);
 	bool (*check_kfunc_call)(u32 kfunc_btf_id, struct module *owner);
 };
 
@@ -1749,7 +1754,7 @@  static inline bool bpf_tracing_btf_ctx_access(int off, int size,
 int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
 		      const struct btf_type *t, int off, int size,
 		      enum bpf_access_type atype,
-		      u32 *next_btf_id);
+		      u32 *next_btf_id, enum bpf_type_flag *flag);
 bool btf_struct_ids_match(struct bpf_verifier_log *log,
 			  const struct btf *btf, u32 id, int off,
 			  const struct btf *need_btf, u32 need_type_id);
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 0c74348cbc9d..e4e2f7124fe6 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -216,6 +216,11 @@  static inline bool btf_type_is_var(const struct btf_type *t)
 	return BTF_INFO_KIND(t->info) == BTF_KIND_VAR;
 }
 
+static inline bool btf_type_is_type_tag(const struct btf_type *t)
+{
+	return BTF_INFO_KIND(t->info) == BTF_KIND_TYPE_TAG;
+}
+
 /* union is only a special case of struct:
  * all its offsetof(member) == 0
  */
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 33bb8ae4a804..54bf483d01f3 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -4848,6 +4848,7 @@  bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	const char *tname = prog->aux->attach_func_name;
 	struct bpf_verifier_log *log = info->log;
 	const struct btf_param *args;
+	const char *tag_value;
 	u32 nr_args, arg;
 	int i, ret;
 
@@ -5000,6 +5001,13 @@  bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	info->btf = btf;
 	info->btf_id = t->type;
 	t = btf_type_by_id(btf, t->type);
+
+	if (btf_type_is_type_tag(t)) {
+		tag_value = __btf_name_by_offset(btf, t->name_off);
+		if (strcmp(tag_value, "user") == 0)
+			info->reg_type |= MEM_USER;
+	}
+
 	/* skip modifiers */
 	while (btf_type_is_modifier(t)) {
 		info->btf_id = t->type;
@@ -5026,12 +5034,12 @@  enum bpf_struct_walk_result {
 
 static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
 			   const struct btf_type *t, int off, int size,
-			   u32 *next_btf_id)
+			   u32 *next_btf_id, enum bpf_type_flag *flag)
 {
 	u32 i, moff, mtrue_end, msize = 0, total_nelems = 0;
 	const struct btf_type *mtype, *elem_type = NULL;
 	const struct btf_member *member;
-	const char *tname, *mname;
+	const char *tname, *mname, *tag_value;
 	u32 vlen, elem_id, mid;
 
 again:
@@ -5215,7 +5223,8 @@  static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
 		}
 
 		if (btf_type_is_ptr(mtype)) {
-			const struct btf_type *stype;
+			enum bpf_type_flag tmp_flag = FLAG_DONTCARE;
+			const struct btf_type *stype, *t;
 			u32 id;
 
 			if (msize != size || off != moff) {
@@ -5224,9 +5233,19 @@  static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
 					mname, moff, tname, off, size);
 				return -EACCES;
 			}
+
+			/* check __user tag */
+			t = btf_type_by_id(btf, mtype->type);
+			if (btf_type_is_type_tag(t)) {
+				tag_value = __btf_name_by_offset(btf, t->name_off);
+				if (strcmp(tag_value, "user") == 0)
+					tmp_flag = MEM_USER;
+			}
+
 			stype = btf_type_skip_modifiers(btf, mtype->type, &id);
 			if (btf_type_is_struct(stype)) {
 				*next_btf_id = id;
+				*flag = tmp_flag;
 				return WALK_PTR;
 			}
 		}
@@ -5253,13 +5272,14 @@  static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
 int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
 		      const struct btf_type *t, int off, int size,
 		      enum bpf_access_type atype __maybe_unused,
-		      u32 *next_btf_id)
+		      u32 *next_btf_id, enum bpf_type_flag *flag)
 {
+	enum bpf_type_flag tmp_flag = FLAG_DONTCARE;
 	int err;
 	u32 id;
 
 	do {
-		err = btf_struct_walk(log, btf, t, off, size, &id);
+		err = btf_struct_walk(log, btf, t, off, size, &id, &tmp_flag);
 
 		switch (err) {
 		case WALK_PTR:
@@ -5267,6 +5287,7 @@  int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
 			 * we're done.
 			 */
 			*next_btf_id = id;
+			*flag = tmp_flag;
 			return PTR_TO_BTF_ID;
 		case WALK_SCALAR:
 			return SCALAR_VALUE;
@@ -5311,6 +5332,7 @@  bool btf_struct_ids_match(struct bpf_verifier_log *log,
 			  const struct btf *need_btf, u32 need_type_id)
 {
 	const struct btf_type *type;
+	enum bpf_type_flag flag;
 	int err;
 
 	/* Are we already done? */
@@ -5321,7 +5343,7 @@  bool btf_struct_ids_match(struct bpf_verifier_log *log,
 	type = btf_type_by_id(btf, id);
 	if (!type)
 		return false;
-	err = btf_struct_walk(log, btf, type, off, 1, &id);
+	err = btf_struct_walk(log, btf, type, off, 1, &id, &flag);
 	if (err != WALK_STRUCT)
 		return false;
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index bfb45381fb3f..6b78642ea437 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -568,6 +568,9 @@  static const char *reg_type_str(struct bpf_verifier_env *env,
 			strncpy(postfix, "_or_null", 16);
 	}
 
+	if (type & MEM_USER)
+		strncpy(prefix, "user_", 16);
+
 	if (type & MEM_RDONLY)
 		strncpy(prefix, "rdonly_", 16);
 
@@ -1544,14 +1547,15 @@  static void mark_reg_not_init(struct bpf_verifier_env *env,
 static void mark_btf_ld_reg(struct bpf_verifier_env *env,
 			    struct bpf_reg_state *regs, u32 regno,
 			    enum bpf_reg_type reg_type,
-			    struct btf *btf, u32 btf_id)
+			    struct btf *btf, u32 btf_id,
+			    enum bpf_type_flag flag)
 {
 	if (reg_type == SCALAR_VALUE) {
 		mark_reg_unknown(env, regs, regno);
 		return;
 	}
 	mark_reg_known_zero(env, regs, regno);
-	regs[regno].type = PTR_TO_BTF_ID;
+	regs[regno].type = PTR_TO_BTF_ID | flag;
 	regs[regno].btf = btf;
 	regs[regno].btf_id = btf_id;
 }
@@ -4149,6 +4153,7 @@  static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 	struct bpf_reg_state *reg = regs + regno;
 	const struct btf_type *t = btf_type_by_id(reg->btf, reg->btf_id);
 	const char *tname = btf_name_by_offset(reg->btf, t->name_off);
+	enum bpf_type_flag flag = FLAG_DONTCARE;
 	u32 btf_id;
 	int ret;
 
@@ -4168,9 +4173,16 @@  static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 		return -EACCES;
 	}
 
+	if (reg->type & MEM_USER) {
+		verbose(env,
+			"R%d is ptr_%s access user memory: off=%d\n",
+			regno, tname, off);
+		return -EACCES;
+	}
+
 	if (env->ops->btf_struct_access) {
 		ret = env->ops->btf_struct_access(&env->log, reg->btf, t,
-						  off, size, atype, &btf_id);
+						  off, size, atype, &btf_id, &flag);
 	} else {
 		if (atype != BPF_READ) {
 			verbose(env, "only read is supported\n");
@@ -4178,14 +4190,14 @@  static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 		}
 
 		ret = btf_struct_access(&env->log, reg->btf, t, off, size,
-					atype, &btf_id);
+					atype, &btf_id, &flag);
 	}
 
 	if (ret < 0)
 		return ret;
 
 	if (atype == BPF_READ && value_regno >= 0)
-		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id);
+		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
 
 	return 0;
 }
@@ -4198,6 +4210,7 @@  static int check_ptr_to_map_access(struct bpf_verifier_env *env,
 {
 	struct bpf_reg_state *reg = regs + regno;
 	struct bpf_map *map = reg->map_ptr;
+	enum bpf_type_flag flag = FLAG_DONTCARE;
 	const struct btf_type *t;
 	const char *tname;
 	u32 btf_id;
@@ -4235,12 +4248,12 @@  static int check_ptr_to_map_access(struct bpf_verifier_env *env,
 		return -EACCES;
 	}
 
-	ret = btf_struct_access(&env->log, btf_vmlinux, t, off, size, atype, &btf_id);
+	ret = btf_struct_access(&env->log, btf_vmlinux, t, off, size, atype, &btf_id, &flag);
 	if (ret < 0)
 		return ret;
 
 	if (value_regno >= 0)
-		mark_btf_ld_reg(env, regs, value_regno, ret, btf_vmlinux, btf_id);
+		mark_btf_ld_reg(env, regs, value_regno, ret, btf_vmlinux, btf_id, flag);
 
 	return 0;
 }
@@ -4441,7 +4454,8 @@  static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 		if (err < 0)
 			return err;
 
-		err = check_ctx_access(env, insn_idx, off, size, t, &reg_type, &btf, &btf_id);
+		err = check_ctx_access(env, insn_idx, off, size, t, &reg_type, &btf,
+				       &btf_id);
 		if (err)
 			verbose_linfo(env, insn_idx, "; ");
 		if (!err && t == BPF_READ && value_regno >= 0) {
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index fbc896323bec..d0e54e30658a 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -145,7 +145,8 @@  static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
 					   const struct btf *btf,
 					   const struct btf_type *t, int off,
 					   int size, enum bpf_access_type atype,
-					   u32 *next_btf_id)
+					   u32 *next_btf_id,
+					   enum bpf_type_flag *flag)
 {
 	const struct btf_type *state;
 	s32 type_id;
@@ -162,7 +163,8 @@  static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
 		return -EACCES;
 	}
 
-	err = btf_struct_access(log, btf, t, off, size, atype, next_btf_id);
+	err = btf_struct_access(log, btf, t, off, size, atype, next_btf_id,
+				flag);
 	if (err < 0)
 		return err;
 
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index de610cb83694..6b781eead784 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -95,12 +95,14 @@  static int bpf_tcp_ca_btf_struct_access(struct bpf_verifier_log *log,
 					const struct btf *btf,
 					const struct btf_type *t, int off,
 					int size, enum bpf_access_type atype,
-					u32 *next_btf_id)
+					u32 *next_btf_id,
+					enum bpf_type_flag *flag)
 {
 	size_t end;
 
 	if (atype == BPF_READ)
-		return btf_struct_access(log, btf, t, off, size, atype, next_btf_id);
+		return btf_struct_access(log, btf, t, off, size, atype, next_btf_id,
+					 flag);
 
 	if (t != tcp_sock_type) {
 		bpf_log(log, "only read is supported\n");