[0/3] fixes for concurrent htab updates

Message ID: 20220821033223.2598791-1-houtao@huaweicloud.com

Message

Hou Tao Aug. 21, 2022, 3:32 a.m. UTC
From: Hou Tao <houtao1@huawei.com>

Hi,

This patchset aims to fix the issues found while investigating the
syzkaller problem reported in [0]. It turns out that normal concurrent
updates on the same hash table are unexpectedly rejected, as shown in
patch 1.

Patch 1 uses preempt_disable() to fix the problem for the !PREEMPT_RT case.
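
To make the idea concrete, below is a minimal sketch of what patch 1
does in htab_lock_bucket() (simplified from kernel/bpf/hashtab.c; the
htab_use_raw_lock() selection and the PREEMPT_RT path are elided, so
the exact code in the patch differs):

/*
 * Sketch only: disable preemption (not just migration) around the
 * per-cpu map_locked counter, so an update on this CPU cannot be
 * preempted by another update while the counter is elevated and the
 * counter only trips for real re-entrancy (e.g. from NMI/tracing).
 */
static inline int htab_lock_bucket(const struct bpf_htab *htab,
				   struct bucket *b, u32 hash,
				   unsigned long *pflags)
{
	unsigned long flags;

	hash = hash & HASHTAB_MAP_LOCK_MASK;

	preempt_disable();
	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
		__this_cpu_dec(*(htab->map_locked[hash]));
		preempt_enable();
		return -EBUSY;
	}

	raw_spin_lock_irqsave(&b->raw_lock, flags);
	*pflags = flags;
	return 0;
}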

Patch 2 introduces an extra bpf_map_busy bit in task_struct to detect
the re-entrancy of htab_lock_bucket() and to allow concurrent map
updates that happen due to preemption in the PREEMPT_RT case. It is
coarse-grained compared with map_locked in the !PREEMPT_RT case,
because re-entrancy is still rejected even when two different maps are
being manipulated. But considering that Alexei is working on a "BPF
specific memory allocator" [1], and the !htab_use_raw_lock() case can
be removed once that patchset lands, I think it may be fine, and I hope
to get some more feedback about the proposed fix in patch 2.
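
Roughly, the patch-2 idea looks like the following. This is a hedged
sketch for the !htab_use_raw_lock() (PREEMPT_RT) path only; the helper
names and the exact placement of the new task_struct member are mine,
not the patch's:

/*
 * Sketch (names and placement are assumptions): struct task_struct
 * gains a one-bit flag, e.g.
 *
 *	unsigned		bpf_map_busy:1;
 *
 * and the PREEMPT_RT locking path becomes roughly:
 */
static inline int htab_lock_bucket_rt(struct bucket *b)
{
	if (current->bpf_map_busy)
		return -EBUSY;	/* re-entry on this task, reject */

	current->bpf_map_busy = 1;
	spin_lock(&b->lock);	/* rt-aware, sleepable bucket lock */
	return 0;
}

static inline void htab_unlock_bucket_rt(struct bucket *b)
{
	spin_unlock(&b->lock);
	current->bpf_map_busy = 0;
}

A task that re-enters htab_lock_bucket() is still rejected, but a task
that merely preempted another updater is not, so normal concurrent
updates are allowed again; the cost is the coarse granularity noted
above, since re-entry on a different map is rejected as well.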

Patch 3 fixes the out-of-bounds memory read problem reported in [0].
Once patch 1 and patch 2 are merged, htab_lock_bucket() will always
succeed for userspace processes, but it is better to handle a possible
failure gracefully.
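
For reference, the patch-3 change is essentially that callers stop
ignoring the return value. A simplified sketch of an update path
(details elided, not the exact diff):

	/*
	 * Sketch: propagate the error (e.g. -EBUSY) back to the caller
	 * and ultimately to the map update syscall, instead of touching
	 * the bucket without the lock, which is what led to the
	 * out-of-bounds read reported in [0].
	 */
	ret = htab_lock_bucket(htab, b, hash, &flags);
	if (ret)
		return ret;

	/* ... perform the update under the bucket lock ... */

	htab_unlock_bucket(htab, b, hash, flags);
	return ret;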

Selftests will be added after I get more feedback on the patchset.
Comments are always welcome.

Regards,
Tao

[0]: https://lore.kernel.org/bpf/CACkBjsbuxaR6cv0kXJoVnBfL9ZJXjjoUcMpw_Ogc313jSrg14A@mail.gmail.com/
[1]: https://lore.kernel.org/bpf/20220819214232.18784-1-alexei.starovoitov@gmail.com/

Hou Tao (3):
  bpf: Disable preemption when increasing per-cpu map_locked
  bpf: Allow normally concurrent map updates for !htab_use_raw_lock()
    case
  bpf: Propagate error from htab_lock_bucket() to userspace

 include/linux/sched.h |  3 +++
 kernel/bpf/hashtab.c  | 59 ++++++++++++++++++++++++++++++-------------
 2 files changed, 44 insertions(+), 18 deletions(-)

Comments

Hou Tao Aug. 22, 2022, 1:21 a.m. UTC | #1
Oops, forgot to add the bpf prefix for the patchset.

On 8/21/2022 11:32 AM, Hou Tao wrote:
> [...]