| Message ID | 20231005032350.1877318-1-song@kernel.org (mailing list archive) |
|---|---|
| State | Superseded |
| Delegated to: | BPF |
| Series | [v3,bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket |
> On Oct 4, 2023, at 8:23 PM, Song Liu <song@kernel.org> wrote:
>
> htab_lock_bucket uses the following logic to avoid recursion:
>
> 1. preempt_disable();
> 2. check percpu counter htab->map_locked[hash] for recursion;
>    2.1. if map_locked[hash] is already taken, return -EBUSY;
> 3. raw_spin_lock_irqsave();
>
> However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
> logic will not be able to access the same hash of the hashtab and get -EBUSY.
> This -EBUSY is not really necessary. Fix it by disabling IRQ before
> checking map_locked:
>
> 1. preempt_disable();
> 2. local_irq_save();
> 3. check percpu counter htab->map_locked[hash] for recursion;
>    3.1. if map_locked[hash] is already taken, return -EBUSY;
> 4. raw_spin_lock().
>
> Similarly, use raw_spin_unlock() and local_irq_restore() in
> htab_unlock_bucket().
>
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Song Liu <song@kernel.org>

Somehow this didn't make it to lore and thus not to patchwork. Let me resend,
sorry for the noise.

Song
```diff
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index a8c7e1c5abfa..74c8d1b41dd5 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 
 	preempt_disable();
+	raw_local_irq_save(flags);
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
 		__this_cpu_dec(*(htab->map_locked[hash]));
+		raw_local_irq_restore(flags);
 		preempt_enable();
 		return -EBUSY;
 	}
 
-	raw_spin_lock_irqsave(&b->raw_lock, flags);
+	raw_spin_lock(&b->raw_lock);
 	*pflags = flags;
 
 	return 0;
@@ -172,8 +174,9 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
 				      unsigned long flags)
 {
 	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 
-	raw_spin_unlock_irqrestore(&b->raw_lock, flags);
+	raw_spin_unlock(&b->raw_lock);
 	__this_cpu_dec(*(htab->map_locked[hash]));
+	raw_local_irq_restore(flags);
 	preempt_enable();
 }
```
htab_lock_bucket uses the following logic to avoid recursion:

1. preempt_disable();
2. check percpu counter htab->map_locked[hash] for recursion;
   2.1. if map_locked[hash] is already taken, return -EBUSY;
3. raw_spin_lock_irqsave();

However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
logic will not be able to access the same hash of the hashtab and get -EBUSY.
This -EBUSY is not really necessary. Fix it by disabling IRQ before
checking map_locked:

1. preempt_disable();
2. local_irq_save();
3. check percpu counter htab->map_locked[hash] for recursion;
   3.1. if map_locked[hash] is already taken, return -EBUSY;
4. raw_spin_lock().

Similarly, use raw_spin_unlock() and local_irq_restore() in
htab_unlock_bucket().

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Song Liu <song@kernel.org>
---
Changes in v3:
1. Use raw_local_irq_* APIs instead.

Changes in v2:
1. Use raw_spin_unlock() and local_irq_restore() in htab_unlock_bucket().
   (Andrii)
---
 kernel/bpf/hashtab.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
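For reference, here is a sketch of how the two helpers read with this patch applied. The function signatures, the local `flags` declaration, and the unchanged context lines are reconstructed from the hunk context in the diff above, so treat this as an illustration of the new lock ordering rather than the exact file contents:

```c
/* Sketch of kernel/bpf/hashtab.c after this patch; surrounding
 * declarations are reconstructed from the hunk context and may not
 * match the file byte-for-byte.
 */
static inline int htab_lock_bucket(const struct bpf_htab *htab,
				   struct bucket *b, u32 hash,
				   unsigned long *pflags)
{
	unsigned long flags;

	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

	preempt_disable();
	/* IRQs are now masked before the recursion check, so a BPF program
	 * triggered from IRQ context can no longer land between the
	 * map_locked increment and taking the bucket lock and get a
	 * spurious -EBUSY.
	 */
	raw_local_irq_save(flags);
	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
		__this_cpu_dec(*(htab->map_locked[hash]));
		raw_local_irq_restore(flags);
		preempt_enable();
		return -EBUSY;
	}

	/* IRQs are already off, so the plain raw_spin_lock() is enough. */
	raw_spin_lock(&b->raw_lock);
	*pflags = flags;

	return 0;
}

static inline void htab_unlock_bucket(const struct bpf_htab *htab,
				      struct bucket *b, u32 hash,
				      unsigned long flags)
{
	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);

	/* Unwind in reverse order: drop the bucket lock, clear the
	 * per-CPU recursion counter, then restore IRQs and preemption.
	 */
	raw_spin_unlock(&b->raw_lock);
	__this_cpu_dec(*(htab->map_locked[hash]));
	raw_local_irq_restore(flags);
	preempt_enable();
}
```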