
[bpf-next,v2] bpf: Pass map file to .map_update_batch directly

Message ID 20221111080757.2224969-1-houtao@huaweicloud.com (mailing list archive)
State Superseded
Delegated to: BPF
Series [bpf-next,v2] bpf: Pass map file to .map_update_batch directly

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1362 this patch: 1362
netdev/cc_maintainers success CCed 12 of 12 maintainers
netdev/build_clang success Errors and warnings before: 157 this patch: 157
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1352 this patch: 1352
netdev/checkpatch warning CHECK: Macro argument 'fn' may be better as '(fn)' to avoid precedence issues; WARNING: ENOTSUPP is not a SUSV4 error code, prefer EOPNOTSUPP; WARNING: Macros with flow control statements should be avoided
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-37 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 fail Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-38 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-36 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-PR fail PR summary
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-7 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix

Commit Message

Hou Tao Nov. 11, 2022, 8:07 a.m. UTC
From: Hou Tao <houtao1@huawei.com>

Currently bpf_map_do_batch() first invokes fdget(batch.map_fd) to get
the target map file, then it invokes generic_map_update_batch() to do
the batch update. generic_map_update_batch() gets the target map file
again by calling fdget(batch.map_fd) a second time and passes it to
bpf_map_update_value().

The problem is that the map file returned by the second fdget() may be
NULL or a totally different file compared with the map file in
bpf_map_do_batch(). The reason is that the first fdget() only
guarantees the liveness of the struct file rather than of the file
descriptor, and the file descriptor may be released by a concurrent
close() through pick_file().

It doesn't incur any problem for now, because maps with batch update
support don't use the map file in their .map_fd_get_ptr() ops. But it
is better to fix the access of a potentially invalid map file.

Using __bpf_map_get() again in generic_map_update_batch() cannot fix
the problem, because batch.map_fd may be closed and reopened, and the
returned map file may then differ from the map file obtained in
bpf_map_do_batch(). So just pass the map file directly to
.map_update_batch() in bpf_map_do_batch().

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
v2:
 * rewrite the commit message to explain the problem and the reasoning.
v1: https://lore.kernel.org/bpf/20221107075537.1445644-1-houtao@huaweicloud.com

 include/linux/bpf.h  |  5 +++--
 kernel/bpf/syscall.c | 31 ++++++++++++++++++-------------
 2 files changed, 21 insertions(+), 15 deletions(-)
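
To see why the second fdget() cannot be trusted, note that a file
descriptor number can be closed and handed out again at any time; only
the pinned struct file is stable. A minimal userspace sketch (an
illustration of fd-number reuse, not kernel code) makes the point:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Illustration only: once an fd is closed, the kernel may hand the
 * same number to the next open(), so resolving the number twice can
 * yield two different files. A concurrent close()+reopen of
 * batch.map_fd between the two fdget() calls has the same effect. */
int main(void)
{
	int fd1 = open("/dev/null", O_RDONLY);

	if (fd1 < 0)
		return 1;
	close(fd1);
	/* open() allocates the lowest unused fd, typically reusing fd1's number */
	int fd2 = open("/dev/zero", O_RDONLY);

	printf("fd1=%d fd2=%d: same number, different files\n", fd1, fd2);
	close(fd2);
	return 0;
}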

Comments

Stanislav Fomichev Nov. 11, 2022, 5:43 p.m. UTC | #1
On 11/11, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>

> Currently bpf_map_do_batch() first invokes fdget(batch.map_fd) to get
> the target map file, then it invokes generic_map_update_batch() to do
> the batch update. generic_map_update_batch() gets the target map file
> again by calling fdget(batch.map_fd) a second time and passes it to
> bpf_map_update_value().

> The problem is that the map file returned by the second fdget() may be
> NULL or a totally different file compared with the map file in
> bpf_map_do_batch(). The reason is that the first fdget() only
> guarantees the liveness of the struct file rather than of the file
> descriptor, and the file descriptor may be released by a concurrent
> close() through pick_file().

> It doesn't incur any problem for now, because maps with batch update
> support don't use the map file in their .map_fd_get_ptr() ops. But it
> is better to fix the access of a potentially invalid map file.

> Using __bpf_map_get() again in generic_map_update_batch() cannot fix
> the problem, because batch.map_fd may be closed and reopened, and the
> returned map file may then differ from the map file obtained in
> bpf_map_do_batch(). So just pass the map file directly to
> .map_update_batch() in bpf_map_do_batch().

> Signed-off-by: Hou Tao <houtao1@huawei.com>

Acked-by: Stanislav Fomichev <sdf@google.com>

> ---
> v2:
>   * rewrite the commit message to explain the problem and the reasoning.
> v1: https://lore.kernel.org/bpf/20221107075537.1445644-1-houtao@huaweicloud.com

>   include/linux/bpf.h  |  5 +++--
>   kernel/bpf/syscall.c | 31 ++++++++++++++++++-------------
>   2 files changed, 21 insertions(+), 15 deletions(-)

> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 798aec816970..20cfe88ee6df 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -85,7 +85,8 @@ struct bpf_map_ops {
>   	int (*map_lookup_and_delete_batch)(struct bpf_map *map,
>   					   const union bpf_attr *attr,
>   					   union bpf_attr __user *uattr);
> -	int (*map_update_batch)(struct bpf_map *map, const union bpf_attr *attr,
> +	int (*map_update_batch)(struct bpf_map *map, struct file *map_file,
> +				const union bpf_attr *attr,
>   				union bpf_attr __user *uattr);
>   	int (*map_delete_batch)(struct bpf_map *map, const union bpf_attr *attr,
>   				union bpf_attr __user *uattr);
> @@ -1776,7 +1777,7 @@ void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr);
>   int  generic_map_lookup_batch(struct bpf_map *map,
>   			      const union bpf_attr *attr,
>   			      union bpf_attr __user *uattr);
> -int  generic_map_update_batch(struct bpf_map *map,
> +int  generic_map_update_batch(struct bpf_map *map, struct file *map_file,
>   			      const union bpf_attr *attr,
>   			      union bpf_attr __user *uattr);
>   int  generic_map_delete_batch(struct bpf_map *map,
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 85532d301124..cb8a87277bf8 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -175,8 +175,8 @@ static void maybe_wait_bpf_programs(struct bpf_map *map)
>   		synchronize_rcu();
>   }

> -static int bpf_map_update_value(struct bpf_map *map, struct fd f, void *key,
> -				void *value, __u64 flags)
> +static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
> +				void *key, void *value, __u64 flags)
>   {
>   	int err;

> @@ -190,7 +190,7 @@ static int bpf_map_update_value(struct bpf_map *map, struct fd f, void *key,
>   		   map->map_type == BPF_MAP_TYPE_SOCKMAP) {
>   		return sock_map_update_elem_sys(map, key, value, flags);
>   	} else if (IS_FD_PROG_ARRAY(map)) {
> -		return bpf_fd_array_map_update_elem(map, f.file, key, value,
> +		return bpf_fd_array_map_update_elem(map, map_file, key, value,
>   						    flags);
>   	}

> @@ -205,12 +205,12 @@ static int bpf_map_update_value(struct bpf_map *map, struct fd f, void *key,
>   						       flags);
>   	} else if (IS_FD_ARRAY(map)) {
>   		rcu_read_lock();
> -		err = bpf_fd_array_map_update_elem(map, f.file, key, value,
> +		err = bpf_fd_array_map_update_elem(map, map_file, key, value,
>   						   flags);
>   		rcu_read_unlock();
>   	} else if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS) {
>   		rcu_read_lock();
> -		err = bpf_fd_htab_map_update_elem(map, f.file, key, value,
> +		err = bpf_fd_htab_map_update_elem(map, map_file, key, value,
>   						  flags);
>   		rcu_read_unlock();
>   	} else if (map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY) {
> @@ -1390,7 +1390,7 @@ static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
>   		goto free_key;
>   	}

> -	err = bpf_map_update_value(map, f, key, value, attr->flags);
> +	err = bpf_map_update_value(map, f.file, key, value, attr->flags);

>   	kvfree(value);
>   free_key:
> @@ -1576,16 +1576,14 @@ int generic_map_delete_batch(struct bpf_map *map,
>   	return err;
>   }

> -int generic_map_update_batch(struct bpf_map *map,
> +int generic_map_update_batch(struct bpf_map *map, struct file *map_file,
>   			     const union bpf_attr *attr,
>   			     union bpf_attr __user *uattr)
>   {
>   	void __user *values = u64_to_user_ptr(attr->batch.values);
>   	void __user *keys = u64_to_user_ptr(attr->batch.keys);
>   	u32 value_size, cp, max_count;
> -	int ufd = attr->batch.map_fd;
>   	void *key, *value;
> -	struct fd f;
>   	int err = 0;

>   	if (attr->batch.elem_flags & ~BPF_F_LOCK)
> @@ -1612,7 +1610,6 @@ int generic_map_update_batch(struct bpf_map *map,
>   		return -ENOMEM;
>   	}

> -	f = fdget(ufd); /* bpf_map_do_batch() guarantees ufd is valid */
>   	for (cp = 0; cp < max_count; cp++) {
>   		err = -EFAULT;
>   		if (copy_from_user(key, keys + cp * map->key_size,
> @@ -1620,7 +1617,7 @@ int generic_map_update_batch(struct bpf_map *map,
>   		    copy_from_user(value, values + cp * value_size, value_size))
>   			break;

> -		err = bpf_map_update_value(map, f, key, value,
> +		err = bpf_map_update_value(map, map_file, key, value,
>   					   attr->batch.elem_flags);

>   		if (err)
> @@ -1633,7 +1630,6 @@ int generic_map_update_batch(struct bpf_map *map,

>   	kvfree(value);
>   	kvfree(key);
> -	fdput(f);
>   	return err;
>   }

> @@ -4435,6 +4431,15 @@ static int bpf_task_fd_query(const union bpf_attr *attr,
>   		err = fn(map, attr, uattr);	\
>   	} while (0)


[..]

> +#define BPF_DO_BATCH_WITH_FILE(fn)			\
> +	do {						\
> +		if (!fn) {				\
> +			err = -ENOTSUPP;		\
> +			goto err_put;			\
> +		}					\
> +		err = fn(map, f.file, attr, uattr);	\
> +	} while (0)
> +

nit: probably not worth defining this for a single user? but not sure
it matters..

>   static int bpf_map_do_batch(const union bpf_attr *attr,
>   			    union bpf_attr __user *uattr,
>   			    int cmd)
> @@ -4470,7 +4475,7 @@ static int bpf_map_do_batch(const union bpf_attr *attr,
>   	else if (cmd == BPF_MAP_LOOKUP_AND_DELETE_BATCH)
>   		BPF_DO_BATCH(map->ops->map_lookup_and_delete_batch);
>   	else if (cmd == BPF_MAP_UPDATE_BATCH)
> -		BPF_DO_BATCH(map->ops->map_update_batch);
> +		BPF_DO_BATCH_WITH_FILE(map->ops->map_update_batch);
>   	else
>   		BPF_DO_BATCH(map->ops->map_delete_batch);
>   err_put:
> --
> 2.29.2
Yonghong Song Nov. 11, 2022, 11:02 p.m. UTC | #2
On 11/11/22 12:07 AM, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
> 
> Currently bpf_map_do_batch() first invokes fdget(batch.map_fd) to get
> the target map file, then it invokes generic_map_update_batch() to do
> the batch update. generic_map_update_batch() gets the target map file
> again by calling fdget(batch.map_fd) a second time and passes it to
> bpf_map_update_value().
> 
> The problem is that the map file returned by the second fdget() may be
> NULL or a totally different file compared with the map file in
> bpf_map_do_batch(). The reason is that the first fdget() only
> guarantees the liveness of the struct file rather than of the file
> descriptor, and the file descriptor may be released by a concurrent
> close() through pick_file().
> 
> It doesn't incur any problem for now, because maps with batch update
> support don't use the map file in their .map_fd_get_ptr() ops. But it
> is better to fix the access of a potentially invalid map file.
> 
> Using __bpf_map_get() again in generic_map_update_batch() cannot fix
> the problem, because batch.map_fd may be closed and reopened, and the
> returned map file may then differ from the map file obtained in
> bpf_map_do_batch(). So just pass the map file directly to
> .map_update_batch() in bpf_map_do_batch().
> 
> Signed-off-by: Hou Tao <houtao1@huawei.com>

Acked-by: Yonghong Song <yhs@fb.com>
Daniel Borkmann Nov. 14, 2022, 5:49 p.m. UTC | #3
On Fri, Nov 11, 2022 at 09:43:18AM -0800, sdf@google.com wrote:
> On 11/11, Hou Tao wrote:
> > From: Hou Tao <houtao1@huawei.com>
> 
> > Currently bpf_map_do_batch() first invokes fdget(batch.map_fd) to get
> > the target map file, then it invokes generic_map_update_batch() to do
> > the batch update. generic_map_update_batch() gets the target map file
> > again by calling fdget(batch.map_fd) a second time and passes it to
> > bpf_map_update_value().
> 
> > The problem is that the map file returned by the second fdget() may be
> > NULL or a totally different file compared with the map file in
> > bpf_map_do_batch(). The reason is that the first fdget() only
> > guarantees the liveness of the struct file rather than of the file
> > descriptor, and the file descriptor may be released by a concurrent
> > close() through pick_file().
> 
> > It doesn't incur any problem for now, because maps with batch update
> > support don't use the map file in their .map_fd_get_ptr() ops. But it
> > is better to fix the access of a potentially invalid map file.

Right, that's mainly for the perf RB map ...

> > Using __bpf_map_get() again in generic_map_update_batch() cannot fix
> > the problem, because batch.map_fd may be closed and reopened, and the
> > returned map file may then differ from the map file obtained in
> > bpf_map_do_batch(). So just pass the map file directly to
> > .map_update_batch() in bpf_map_do_batch().
> 
> > Signed-off-by: Hou Tao <houtao1@huawei.com>
> 
> Acked-by: Stanislav Fomichev <sdf@google.com>

> [..]
> 
> > +#define BPF_DO_BATCH_WITH_FILE(fn)			\
> > +	do {						\
> > +		if (!fn) {				\
> > +			err = -ENOTSUPP;		\
> > +			goto err_put;			\
> > +		}					\
> > +		err = fn(map, f.file, attr, uattr);	\
> > +	} while (0)
> > +
> 
> nit: probably not worth defining this for a single user? but not sure
> it matters..

Yeah, just the BPF_DO_BATCH could be used but extended via __VA_ARGS__.

Thanks,
Daniel
Hou Tao Nov. 15, 2022, 11:18 a.m. UTC | #4
Hi,

On 11/15/2022 1:49 AM, Daniel Borkmann wrote:
> On Fri, Nov 11, 2022 at 09:43:18AM -0800, sdf@google.com wrote:
>> On 11/11, Hou Tao wrote:
>>> From: Hou Tao <houtao1@huawei.com>
>>> Currently bpf_map_do_batch() first invokes fdget(batch.map_fd) to get
>>> the target map file, then it invokes generic_map_update_batch() to do
>>> the batch update. generic_map_update_batch() gets the target map file
>>> again by calling fdget(batch.map_fd) a second time and passes it to
>>> bpf_map_update_value().
>>> The problem is that the map file returned by the second fdget() may be
>>> NULL or a totally different file compared with the map file in
>>> bpf_map_do_batch(). The reason is that the first fdget() only
>>> guarantees the liveness of the struct file rather than of the file
>>> descriptor, and the file descriptor may be released by a concurrent
>>> close() through pick_file().
>>> It doesn't incur any problem for now, because maps with batch update
>>> support don't use the map file in their .map_fd_get_ptr() ops. But it
>>> is better to fix the access of a potentially invalid map file.
> Right, that's mainly for the perf RB map ...
Yes. BPF_MAP_TYPE_PERF_EVENT_ARRAY will use the passed map file, but it doesn't
support batch update.
>
>>> Using __bpf_map_get() again in generic_map_update_batch() cannot fix
>>> the problem, because batch.map_fd may be closed and reopened, and the
>>> returned map file may then differ from the map file obtained in
>>> bpf_map_do_batch(). So just pass the map file directly to
>>> .map_update_batch() in bpf_map_do_batch().
>>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>> Acked-by: Stanislav Fomichev <sdf@google.com>
>> [..]
>>
>>> +#define BPF_DO_BATCH_WITH_FILE(fn)			\
>>> +	do {						\
>>> +		if (!fn) {				\
>>> +			err = -ENOTSUPP;		\
>>> +			goto err_put;			\
>>> +		}					\
>>> +		err = fn(map, f.file, attr, uattr);	\
>>> +	} while (0)
>>> +
>> nit: probably not worth defining this for a single user? but not sure
>> it matters..
> Yeah, just the BPF_DO_BATCH could be used but extended via __VA_ARGS__.
Good idea. Will do in v3.
>
> Thanks,
> Daniel
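
For reference, a sketch of how Daniel's suggestion might look in v3
(assuming the existing err/err_put handling stays as-is; this is an
illustration, not the final committed macro):

#define BPF_DO_BATCH(fn, ...)			\
	do {					\
		if (!fn) {			\
			err = -ENOTSUPP;	\
			goto err_put;		\
		}				\
		err = fn(__VA_ARGS__);		\
	} while (0)

With the callback arguments passed explicitly at each call site, no
second macro is needed:

	BPF_DO_BATCH(map->ops->map_update_batch, map, f.file, attr, uattr);
	BPF_DO_BATCH(map->ops->map_delete_batch, map, attr, uattr);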

Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 798aec816970..20cfe88ee6df 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -85,7 +85,8 @@  struct bpf_map_ops {
 	int (*map_lookup_and_delete_batch)(struct bpf_map *map,
 					   const union bpf_attr *attr,
 					   union bpf_attr __user *uattr);
-	int (*map_update_batch)(struct bpf_map *map, const union bpf_attr *attr,
+	int (*map_update_batch)(struct bpf_map *map, struct file *map_file,
+				const union bpf_attr *attr,
 				union bpf_attr __user *uattr);
 	int (*map_delete_batch)(struct bpf_map *map, const union bpf_attr *attr,
 				union bpf_attr __user *uattr);
@@ -1776,7 +1777,7 @@  void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr);
 int  generic_map_lookup_batch(struct bpf_map *map,
 			      const union bpf_attr *attr,
 			      union bpf_attr __user *uattr);
-int  generic_map_update_batch(struct bpf_map *map,
+int  generic_map_update_batch(struct bpf_map *map, struct file *map_file,
 			      const union bpf_attr *attr,
 			      union bpf_attr __user *uattr);
 int  generic_map_delete_batch(struct bpf_map *map,
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 85532d301124..cb8a87277bf8 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -175,8 +175,8 @@  static void maybe_wait_bpf_programs(struct bpf_map *map)
 		synchronize_rcu();
 }
 
-static int bpf_map_update_value(struct bpf_map *map, struct fd f, void *key,
-				void *value, __u64 flags)
+static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
+				void *key, void *value, __u64 flags)
 {
 	int err;
 
@@ -190,7 +190,7 @@  static int bpf_map_update_value(struct bpf_map *map, struct fd f, void *key,
 		   map->map_type == BPF_MAP_TYPE_SOCKMAP) {
 		return sock_map_update_elem_sys(map, key, value, flags);
 	} else if (IS_FD_PROG_ARRAY(map)) {
-		return bpf_fd_array_map_update_elem(map, f.file, key, value,
+		return bpf_fd_array_map_update_elem(map, map_file, key, value,
 						    flags);
 	}
 
@@ -205,12 +205,12 @@  static int bpf_map_update_value(struct bpf_map *map, struct fd f, void *key,
 						       flags);
 	} else if (IS_FD_ARRAY(map)) {
 		rcu_read_lock();
-		err = bpf_fd_array_map_update_elem(map, f.file, key, value,
+		err = bpf_fd_array_map_update_elem(map, map_file, key, value,
 						   flags);
 		rcu_read_unlock();
 	} else if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS) {
 		rcu_read_lock();
-		err = bpf_fd_htab_map_update_elem(map, f.file, key, value,
+		err = bpf_fd_htab_map_update_elem(map, map_file, key, value,
 						  flags);
 		rcu_read_unlock();
 	} else if (map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY) {
@@ -1390,7 +1390,7 @@  static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
 		goto free_key;
 	}
 
-	err = bpf_map_update_value(map, f, key, value, attr->flags);
+	err = bpf_map_update_value(map, f.file, key, value, attr->flags);
 
 	kvfree(value);
 free_key:
@@ -1576,16 +1576,14 @@  int generic_map_delete_batch(struct bpf_map *map,
 	return err;
 }
 
-int generic_map_update_batch(struct bpf_map *map,
+int generic_map_update_batch(struct bpf_map *map, struct file *map_file,
 			     const union bpf_attr *attr,
 			     union bpf_attr __user *uattr)
 {
 	void __user *values = u64_to_user_ptr(attr->batch.values);
 	void __user *keys = u64_to_user_ptr(attr->batch.keys);
 	u32 value_size, cp, max_count;
-	int ufd = attr->batch.map_fd;
 	void *key, *value;
-	struct fd f;
 	int err = 0;
 
 	if (attr->batch.elem_flags & ~BPF_F_LOCK)
@@ -1612,7 +1610,6 @@  int generic_map_update_batch(struct bpf_map *map,
 		return -ENOMEM;
 	}
 
-	f = fdget(ufd); /* bpf_map_do_batch() guarantees ufd is valid */
 	for (cp = 0; cp < max_count; cp++) {
 		err = -EFAULT;
 		if (copy_from_user(key, keys + cp * map->key_size,
@@ -1620,7 +1617,7 @@  int generic_map_update_batch(struct bpf_map *map,
 		    copy_from_user(value, values + cp * value_size, value_size))
 			break;
 
-		err = bpf_map_update_value(map, f, key, value,
+		err = bpf_map_update_value(map, map_file, key, value,
 					   attr->batch.elem_flags);
 
 		if (err)
@@ -1633,7 +1630,6 @@  int generic_map_update_batch(struct bpf_map *map,
 
 	kvfree(value);
 	kvfree(key);
-	fdput(f);
 	return err;
 }
 
@@ -4435,6 +4431,15 @@  static int bpf_task_fd_query(const union bpf_attr *attr,
 		err = fn(map, attr, uattr);	\
 	} while (0)
 
+#define BPF_DO_BATCH_WITH_FILE(fn)			\
+	do {						\
+		if (!fn) {				\
+			err = -ENOTSUPP;		\
+			goto err_put;			\
+		}					\
+		err = fn(map, f.file, attr, uattr);	\
+	} while (0)
+
 static int bpf_map_do_batch(const union bpf_attr *attr,
 			    union bpf_attr __user *uattr,
 			    int cmd)
@@ -4470,7 +4475,7 @@  static int bpf_map_do_batch(const union bpf_attr *attr,
 	else if (cmd == BPF_MAP_LOOKUP_AND_DELETE_BATCH)
 		BPF_DO_BATCH(map->ops->map_lookup_and_delete_batch);
 	else if (cmd == BPF_MAP_UPDATE_BATCH)
-		BPF_DO_BATCH(map->ops->map_update_batch);
+		BPF_DO_BATCH_WITH_FILE(map->ops->map_update_batch);
 	else
 		BPF_DO_BATCH(map->ops->map_delete_batch);
 err_put:
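
For context on the path being changed, a minimal userspace exercise of
BPF_MAP_UPDATE_BATCH through libbpf might look as follows (a sketch: it
assumes map_fd refers to an existing BPF_MAP_TYPE_ARRAY with 4-byte keys
and values, and abbreviates error handling):

#include <bpf/bpf.h>
#include <stdio.h>

static int batch_update_example(int map_fd)
{
	__u32 keys[4] = { 0, 1, 2, 3 };
	__u32 values[4] = { 10, 20, 30, 40 };
	__u32 count = 4;
	DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
		.elem_flags = 0,
		.flags = 0,
	);
	int err;

	/* The kernel resolves map_fd once in bpf_map_do_batch() and,
	 * with this patch, hands that same struct file down to
	 * .map_update_batch instead of re-resolving the fd. */
	err = bpf_map_update_batch(map_fd, keys, values, &count, &opts);
	if (err)
		fprintf(stderr, "batch update stopped after %u elems: %d\n",
			count, err);
	return err;
}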