Message ID | 20220827100134.1621137-2-houtao@huaweicloud.com
---|---
State | Changes Requested
Delegated to | BPF
Series | [bpf-next,v2,1/3] bpf: Disable preemption when increasing per-cpu map_locked
On Sat, Aug 27, 2022 at 11:43 AM Hou Tao <houtao@huaweicloud.com> wrote:
>
> From: Hou Tao <houtao1@huawei.com>
>
> In __htab_map_lookup_and_delete_batch() if htab_lock_bucket() returns
> -EBUSY, it will go to next bucket. Going to next bucket may not only
> skip the elements in current bucket silently, but also incur
> out-of-bound memory access or expose kernel memory to userspace if
> current bucket_cnt is greater than bucket_size or zero.
>
> Fixing it by stopping batch operation and returning -EBUSY when
> htab_lock_bucket() fails, and the application can retry or skip the busy
> batch as needed.
>
> Reported-by: Hao Sun <sunhao.th@gmail.com>
> Signed-off-by: Hou Tao <houtao1@huawei.com>

Please add a Fixes tag here

> ---
>  kernel/bpf/hashtab.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 6fb3b7fd1622..eb1263f03e9b 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -1704,8 +1704,11 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
>                 /* do not grab the lock unless need it (bucket_cnt > 0). */
>                 if (locked) {
>                         ret = htab_lock_bucket(htab, b, batch, &flags);
> -                       if (ret)
> -                               goto next_batch;
> +                       if (ret) {
> +                               rcu_read_unlock();
> +                               bpf_enable_instrumentation();
> +                               goto after_loop;
> +                       }
>                 }
>
>                 bucket_cnt = 0;
> --
> 2.29.2
>
Hi,

On 8/28/2022 8:24 AM, KP Singh wrote:
> On Sat, Aug 27, 2022 at 11:43 AM Hou Tao <houtao@huaweicloud.com> wrote:
>> From: Hou Tao <houtao1@huawei.com>
>>
>> In __htab_map_lookup_and_delete_batch() if htab_lock_bucket() returns
>> -EBUSY, it will go to next bucket. Going to next bucket may not only
>> skip the elements in current bucket silently, but also incur
>> out-of-bound memory access or expose kernel memory to userspace if
>> current bucket_cnt is greater than bucket_size or zero.
>>
>> Fixing it by stopping batch operation and returning -EBUSY when
>> htab_lock_bucket() fails, and the application can retry or skip the busy
>> batch as needed.
>>
>> Reported-by: Hao Sun <sunhao.th@gmail.com>
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
> Please add a Fixes tag here
Will add "Fixes: 20b6cc34ea74 ("bpf: Avoid hashtab deadlock with map_locked")" in v3.
>
>> ---
>>  kernel/bpf/hashtab.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>> index 6fb3b7fd1622..eb1263f03e9b 100644
>> --- a/kernel/bpf/hashtab.c
>> +++ b/kernel/bpf/hashtab.c
>> @@ -1704,8 +1704,11 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
>>                 /* do not grab the lock unless need it (bucket_cnt > 0). */
>>                 if (locked) {
>>                         ret = htab_lock_bucket(htab, b, batch, &flags);
>> -                       if (ret)
>> -                               goto next_batch;
>> +                       if (ret) {
>> +                               rcu_read_unlock();
>> +                               bpf_enable_instrumentation();
>> +                               goto after_loop;
>> +                       }
>>                 }
>>
>>                 bucket_cnt = 0;
>> --
>> 2.29.2
>>
> .
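For context on why htab_lock_bucket() can fail at all: since the commit named in the Fixes tag above, the hash table keeps per-CPU map_locked counters so that a re-entrant attempt to lock the same bucket-lock slot from the same CPU (for example, a tracing BPF program firing while the batch operation already holds the lock) bails out with -EBUSY instead of deadlocking. The sketch below paraphrases that idea; the *_sketch names and definitions are hypothetical, not the real kernel declarations, and the real code additionally disables preemption/migration around the counter update (the subject of patch 1/3 in this series), which is omitted here.

```c
/* Illustrative paraphrase of the map_locked scheme from commit
 * 20b6cc34ea74 ("bpf: Avoid hashtab deadlock with map_locked").
 * Types and names here are made up for the sketch.
 */
#define HASHTAB_MAP_LOCK_COUNT_SKETCH	8
#define HASHTAB_MAP_LOCK_MASK_SKETCH	(HASHTAB_MAP_LOCK_COUNT_SKETCH - 1)

struct bucket_sketch {
	raw_spinlock_t raw_lock;
};

struct bpf_htab_sketch {
	int __percpu *map_locked[HASHTAB_MAP_LOCK_COUNT_SKETCH];
	/* ... */
};

static int htab_lock_bucket_sketch(struct bpf_htab_sketch *htab,
				   struct bucket_sketch *b, u32 hash,
				   unsigned long *pflags)
{
	unsigned long flags;

	hash &= HASHTAB_MAP_LOCK_MASK_SKETCH;

	/* A non-zero counter means this CPU is already inside the map for
	 * this lock slot, so taking b->raw_lock could deadlock: report
	 * -EBUSY and let the caller decide how to proceed.
	 */
	if (__this_cpu_inc_return(*(htab->map_locked[hash])) != 1) {
		__this_cpu_dec(*(htab->map_locked[hash]));
		return -EBUSY;
	}

	raw_spin_lock_irqsave(&b->raw_lock, flags);
	*pflags = flags;
	return 0;
}
```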
```diff
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 6fb3b7fd1622..eb1263f03e9b 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1704,8 +1704,11 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
 		/* do not grab the lock unless need it (bucket_cnt > 0). */
 		if (locked) {
 			ret = htab_lock_bucket(htab, b, batch, &flags);
-			if (ret)
-				goto next_batch;
+			if (ret) {
+				rcu_read_unlock();
+				bpf_enable_instrumentation();
+				goto after_loop;
+			}
 		}
 
 		bucket_cnt = 0;
```
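With this change, -EBUSY becomes visible to userspace as a result of BPF_MAP_LOOKUP_AND_DELETE_BATCH instead of the busy bucket being skipped silently. Below is a minimal sketch of the retry behavior the commit message suggests, using libbpf's bpf_map_lookup_and_delete_batch(); the map fd, key/value layout, batch size, and error-handling policy are assumptions for illustration, and it assumes libbpf's convention of returning a negative errno on failure.

```c
#include <errno.h>
#include <stdio.h>
#include <bpf/bpf.h>

#define BATCH_SZ 64	/* arbitrary batch size for this sketch */

/* Drain a BPF hash map with u32 keys and u64 values (a hypothetical layout),
 * treating -EBUSY as "retry this batch" rather than as silent success.
 */
static int drain_map(int map_fd)
{
	__u32 keys[BATCH_SZ];
	__u64 vals[BATCH_SZ];
	__u32 batch, count;
	void *in_batch = NULL;	/* NULL means start from the first bucket */
	int err;

	for (;;) {
		count = BATCH_SZ;
		err = bpf_map_lookup_and_delete_batch(map_fd, in_batch, &batch,
						      keys, vals, &count, NULL);
		if (err && err != -EBUSY && err != -ENOENT) {
			fprintf(stderr, "batch lookup/delete failed: %d\n", err);
			return err;
		}

		/* ... consume the 'count' key/value pairs returned by this call ... */

		if (err == -ENOENT)
			break;		/* map fully drained */
		if (err == -EBUSY)
			continue;	/* bucket was contended; retry the same batch */
		in_batch = &batch;	/* advance to where the kernel stopped */
	}
	return 0;
}
```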