From patchwork Sun Aug 21 03:32:21 2022
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 12949855
From: Hou Tao
To: bpf@vger.kernel.org, Song Liu
Cc: Hao Sun, Sebastian Andrzej Siewior, Andrii Nakryiko, Yonghong Song,
    Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, KP Singh,
    "David S . Miller", Jakub Kicinski, Stanislav Fomichev, Hao Luo,
    Jiri Olsa, John Fastabend, Lorenz Bauer, houtao1@huawei.com
Subject: [PATCH 1/3] bpf: Disable preemption when increasing per-cpu map_locked
Date: Sun, 21 Aug 2022 11:32:21 +0800
Message-Id: <20220821033223.2598791-2-houtao@huaweicloud.com>
In-Reply-To: <20220821033223.2598791-1-houtao@huaweicloud.com>
References: <20220821033223.2598791-1-houtao@huaweicloud.com>
List-Id: bpf@vger.kernel.org

From: Hou Tao

Per-cpu htab->map_locked is used to prohibit concurrent accesses from
both NMI and non-NMI contexts.
But since commit 74d862b682f51 ("sched: Make migrate_disable/enable()
independent of RT"), migrate_disable() is also preemptible under the
!PREEMPT_RT case, so map_locked now also unexpectedly rejects concurrent
updates from normal contexts (e.g. userspace processes), as shown below:

process A                      process B

htab_map_update_elem()
  htab_lock_bucket()
    migrate_disable()
    /* return 1 */
    __this_cpu_inc_return()
    /* preempted by B */

                               htab_map_update_elem()
                                 /* the same bucket as A */
                                 htab_lock_bucket()
                                   migrate_disable()
                                   /* return 2, so lock fails */
                                   __this_cpu_inc_return()
                                   return -EBUSY

A fix that seems feasible is to use in_nmi() in htab_lock_bucket() and
only check the value of map_locked for NMI context. But that would
re-introduce a deadlock on the bucket lock if htab_lock_bucket() is
re-entered through a non-tracing program (e.g. an fentry program).

So fix it by using preempt_disable() instead of migrate_disable() when
increasing htab->map_locked. However, when htab_use_raw_lock() is false,
the bucket lock is a sleepable spin-lock and preempt_disable() cannot be
used there, so keep using migrate_disable() for the spin-lock case.

Signed-off-by: Hou Tao
Reviewed-by: Hao Luo
---
 kernel/bpf/hashtab.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 6c530a5e560a..ad09da139589 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -162,17 +162,25 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 				   unsigned long *pflags)
 {
 	unsigned long flags;
+	bool use_raw_lock;
 
 	hash = hash & HASHTAB_MAP_LOCK_MASK;
 
-	migrate_disable();
+	use_raw_lock = htab_use_raw_lock(htab);
+	if (use_raw_lock)
+		preempt_disable();
+	else
+		migrate_disable();
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
 		__this_cpu_dec(*(htab->map_locked[hash]));
-		migrate_enable();
+		if (use_raw_lock)
+			preempt_enable();
+		else
+			migrate_enable();
 		return -EBUSY;
 	}
 
-	if (htab_use_raw_lock(htab))
+	if (use_raw_lock)
 		raw_spin_lock_irqsave(&b->raw_lock, flags);
 	else
 		spin_lock_irqsave(&b->lock, flags);
@@ -185,13 +193,18 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
 				      struct bucket *b, u32 hash,
 				      unsigned long flags)
 {
+	bool use_raw_lock = htab_use_raw_lock(htab);
+
 	hash = hash & HASHTAB_MAP_LOCK_MASK;
-	if (htab_use_raw_lock(htab))
+	if (use_raw_lock)
 		raw_spin_unlock_irqrestore(&b->raw_lock, flags);
 	else
 		spin_unlock_irqrestore(&b->lock, flags);
 	__this_cpu_dec(*(htab->map_locked[hash]));
-	migrate_enable();
+	if (use_raw_lock)
+		preempt_enable();
+	else
+		migrate_enable();
 }
 
 static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node);
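For illustration, a minimal user-space sketch (not part of the patch) of
the kind of workload that can hit the spurious -EBUSY described above:
several ordinary threads updating the same hash map, with no NMI or
tracing program involved. It assumes libbpf >= 0.7 for bpf_map_create();
the map name, sizes and thread count are arbitrary, and the failure only
shows up when one updater is preempted inside htab_lock_bucket() while
another lands on the same bucket on the same CPU.

#include <bpf/bpf.h>
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static int map_fd;

static void *updater(void *arg)
{
	unsigned int key, value = 1;

	(void)arg;
	for (key = 0; key < 1024; key++) {
		/* On a kernel without this fix, the update may sporadically
		 * fail with EBUSY even though only process context touches
		 * the map.
		 */
		if (bpf_map_update_elem(map_fd, &key, &value, BPF_ANY) &&
		    errno == EBUSY)
			fprintf(stderr, "unexpected EBUSY for key %u\n", key);
	}
	return NULL;
}

int main(void)
{
	pthread_t tids[4];
	int i;

	map_fd = bpf_map_create(BPF_MAP_TYPE_HASH, "map_locked_demo",
				sizeof(unsigned int), sizeof(unsigned int),
				1024, NULL);
	if (map_fd < 0)
		return 1;

	for (i = 0; i < 4; i++)
		pthread_create(&tids[i], NULL, updater, NULL);
	for (i = 0; i < 4; i++)
		pthread_join(tids[i], NULL);
	return 0;
}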
From patchwork Sun Aug 21 03:32:22 2022
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 12949854
From: Hou Tao
Subject: [PATCH 2/3] bpf: Allow normally concurrent map updates for !htab_use_raw_lock() case
Date: Sun, 21 Aug 2022 11:32:22 +0800
Message-Id: <20220821033223.2598791-3-houtao@huaweicloud.com>
In-Reply-To: <20220821033223.2598791-1-houtao@huaweicloud.com>
References: <20220821033223.2598791-1-houtao@huaweicloud.com>

From: Hou Tao

For the htab_use_raw_lock()==true case, normal concurrent map updates
are now allowed by using preempt_disable() instead of migrate_disable()
before increasing htab->map_locked. However, the false case cannot use
preempt_disable(), because a sleepable spin-lock is acquired afterwards.

So introduce a bpf_map_busy bit in task_struct: set it before acquiring
the bucket lock and clear it after releasing the lock. If
htab_lock_bucket() is re-entered by the same task, the re-entrancy is
rejected; if there is merely preemption by another process, both
processes can run concurrently.
Signed-off-by: Hou Tao
---
 include/linux/sched.h |  3 +++
 kernel/bpf/hashtab.c  | 61 ++++++++++++++++++++++++-------------------
 2 files changed, 37 insertions(+), 27 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 51dc1e89d43f..55667f46e459 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -944,6 +944,9 @@ struct task_struct {
 #ifdef CONFIG_CPU_SUP_INTEL
 	unsigned			reported_split_lock:1;
 #endif
+#if defined(CONFIG_PREEMPT_RT) && defined(CONFIG_BPF_SYSCALL)
+	unsigned			bpf_map_busy:1;
+#endif
 
 	unsigned long			atomic_flags; /* Flags requiring atomic access. */
 
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index ad09da139589..3ef7a853c737 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -138,6 +138,23 @@ static inline bool htab_use_raw_lock(const struct bpf_htab *htab)
 	return (!IS_ENABLED(CONFIG_PREEMPT_RT) || htab_is_prealloc(htab));
 }
 
+static inline void bpf_clear_map_busy(void)
+{
+#ifdef CONFIG_PREEMPT_RT
+	current->bpf_map_busy = 0;
+#endif
+}
+
+static inline int bpf_test_and_set_map_busy(void)
+{
+#ifdef CONFIG_PREEMPT_RT
+	if (current->bpf_map_busy)
+		return 1;
+	current->bpf_map_busy = 1;
+#endif
+	return 0;
+}
+
 static void htab_init_buckets(struct bpf_htab *htab)
 {
 	unsigned int i;
@@ -162,28 +179,21 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 				   unsigned long *pflags)
 {
 	unsigned long flags;
-	bool use_raw_lock;
 
-	hash = hash & HASHTAB_MAP_LOCK_MASK;
-
-	use_raw_lock = htab_use_raw_lock(htab);
-	if (use_raw_lock)
+	if (htab_use_raw_lock(htab)) {
+		hash = hash & HASHTAB_MAP_LOCK_MASK;
 		preempt_disable();
-	else
-		migrate_disable();
-	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
-		__this_cpu_dec(*(htab->map_locked[hash]));
-		if (use_raw_lock)
+		if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
+			__this_cpu_dec(*(htab->map_locked[hash]));
 			preempt_enable();
-		else
-			migrate_enable();
-		return -EBUSY;
-	}
-
-	if (use_raw_lock)
+			return -EBUSY;
+		}
 		raw_spin_lock_irqsave(&b->raw_lock, flags);
-	else
+	} else {
+		if (bpf_test_and_set_map_busy())
+			return -EBUSY;
 		spin_lock_irqsave(&b->lock, flags);
+	}
 	*pflags = flags;
 
 	return 0;
@@ -193,18 +203,15 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
 				      struct bucket *b, u32 hash,
 				      unsigned long flags)
 {
-	bool use_raw_lock = htab_use_raw_lock(htab);
-
-	hash = hash & HASHTAB_MAP_LOCK_MASK;
-	if (use_raw_lock)
+	if (htab_use_raw_lock(htab)) {
+		hash = hash & HASHTAB_MAP_LOCK_MASK;
 		raw_spin_unlock_irqrestore(&b->raw_lock, flags);
-	else
-		spin_unlock_irqrestore(&b->lock, flags);
-	__this_cpu_dec(*(htab->map_locked[hash]));
-	if (use_raw_lock)
+		__this_cpu_dec(*(htab->map_locked[hash]));
 		preempt_enable();
-	else
-		migrate_enable();
+	} else {
+		spin_unlock_irqrestore(&b->lock, flags);
+		bpf_clear_map_busy();
+	}
 }
 
 static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node);
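To show the design of the bpf_map_busy bit in isolation, here is a small
stand-alone sketch that transplants the same pattern to user space, with
a thread-local flag standing in for current->bpf_map_busy and a mutex
standing in for the sleepable bucket lock (purely illustrative; the
names and the mutex are not part of the patch). Re-entry from the same
task is refused with -EBUSY instead of self-deadlocking, while other
tasks simply sleep on the lock and proceed one after another.

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static __thread int map_busy;	/* analogue of current->bpf_map_busy */
static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;

static int lock_bucket(void)
{
	/* Same task already inside the locked section: reject instead of
	 * deadlocking on bucket_lock.
	 */
	if (map_busy)
		return -EBUSY;
	map_busy = 1;
	pthread_mutex_lock(&bucket_lock);	/* other tasks may sleep here */
	return 0;
}

static void unlock_bucket(void)
{
	pthread_mutex_unlock(&bucket_lock);
	map_busy = 0;
}

int main(void)
{
	if (!lock_bucket()) {
		/* a nested attempt from the same thread is rejected */
		printf("re-entrant lock_bucket() -> %d\n", lock_bucket());
		unlock_bucket();
	}
	return 0;
}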
From patchwork Sun Aug 21 03:32:23 2022
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 12949856
From: Hou Tao
Subject: [PATCH 3/3] bpf: Propagate error from htab_lock_bucket() to userspace
Date: Sun, 21 Aug 2022 11:32:23 +0800
Message-Id: <20220821033223.2598791-4-houtao@huaweicloud.com>
In-Reply-To: <20220821033223.2598791-1-houtao@huaweicloud.com>
References: <20220821033223.2598791-1-houtao@huaweicloud.com>

From: Hou Tao

In __htab_map_lookup_and_delete_batch(), if htab_lock_bucket() returns
-EBUSY the function moves on to the next bucket. Doing so not only
silently skips the elements in the current bucket, but may also incur an
out-of-bounds memory access or expose kernel memory to userspace if the
current bucket_cnt is greater than bucket_size or is zero.

Fix it by stopping the batch operation and returning -EBUSY when
htab_lock_bucket() fails; the application can then retry or skip the
busy batch as needed.
Reported-by: Hao Sun
Signed-off-by: Hou Tao
---
 kernel/bpf/hashtab.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 3ef7a853c737..ffd39048e6da 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1711,8 +1711,11 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
 	/* do not grab the lock unless need it (bucket_cnt > 0). */
 	if (locked) {
 		ret = htab_lock_bucket(htab, b, batch, &flags);
-		if (ret)
-			goto next_batch;
+		if (ret) {
+			rcu_read_unlock();
+			bpf_enable_instrumentation();
+			goto after_loop;
+		}
 	}
 
 	bucket_cnt = 0;
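For completeness, a sketch of how a user-space consumer of
BPF_MAP_LOOKUP_AND_DELETE_BATCH might handle the -EBUSY that this patch
now propagates. It assumes libbpf's bpf_map_lookup_and_delete_batch()
and a map with u32 keys and u64 values; the batch size and the
retry-from-the-returned-cursor policy are illustrative choices, not
something mandated by the patch.

#include <bpf/bpf.h>
#include <errno.h>
#include <stdio.h>

#define BATCH_SZ 64

static int drain_map(int map_fd)
{
	__u32 keys[BATCH_SZ];
	__u64 vals[BATCH_SZ];
	__u32 in_batch, out_batch, count;
	void *in = NULL;	/* NULL: start from the first bucket */
	int err;

	for (;;) {
		count = BATCH_SZ;
		err = bpf_map_lookup_and_delete_batch(map_fd, in, &out_batch,
						      keys, vals, &count, NULL);
		if (err && errno != EBUSY && errno != ENOENT)
			return -errno;

		/* consume the 'count' key/value pairs that were returned */
		printf("drained %u elements\n", count);

		if (err && errno == ENOENT)
			return 0;	/* map is fully drained */

		/* On EBUSY the bucket lock was contended: resume from the
		 * returned cursor (or back off/skip, as the application
		 * prefers) instead of having elements skipped silently.
		 */
		in_batch = out_batch;
		in = &in_batch;
	}
}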