From patchwork Fri Dec 8 10:23:48 2023
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 13485199
From: Hou Tao
To: bpf@vger.kernel.org
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
    Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh,
    Stanislav Fomichev, Jiri Olsa, John Fastabend, houtao1@huawei.com
Subject: [PATCH bpf-next 0/7] bpf: Fixes for maybe_wait_bpf_programs()
Date: Fri, 8 Dec 2023 18:23:48 +0800
Message-Id: <20231208102355.2628918-1-houtao@huaweicloud.com>
X-Patchwork-Delegate: bpf@iogearbox.net

From: Hou Tao

Hi,

This patch set aims to fix problems found while inspecting the code
related to maybe_wait_bpf_programs().

Patch #1 removes an unnecessary invocation of maybe_wait_bpf_programs().
Patch #2 calls maybe_wait_bpf_programs() only once for a batched update.
Patch #3 adds the missing wait when doing a batched lookup-and-delete on
an htab of maps. Patch #4 waits only if the update or deletion operation
succeeds. Patch #5 fixes the value of batch.count when memory allocation
fails. Patch #6 does the same thing as patch #4, except it fixes the
problem for batched map operations. Patch #7 handles sleepable BPF
programs in maybe_wait_bpf_programs(), but it doesn't handle bpf
syscalls issued from a syscall program.

Please see the individual patches for more details. Comments are always
welcome.
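For readers who haven't looked at the helper recently, below is a rough
sketch of what maybe_wait_bpf_programs() does today and of the calling
pattern the batched-update patches aim for. It is paraphrased from
kernel/bpf/syscall.c and simplified; do_one_update() is a hypothetical
stand-in for the real copy-from-user plus bpf_map_update_value() steps,
so the exact code in the tree differs:

  /* Simplified: wait for non-sleepable BPF programs to finish so that,
   * once the syscall returns to userspace, no program can still be
   * using the old inner map of a map-in-map.
   */
  static void maybe_wait_bpf_programs(struct bpf_map *map)
  {
          if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS ||
              map->map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS)
                  synchronize_rcu();
  }

  /* Intended shape of the batched update loop: wait once after the
   * loop (patch #2) and only when at least one element was actually
   * updated (patch #6), instead of waiting for every element.
   */
          for (cp = 0; cp < max_count; cp++) {
                  err = do_one_update(map, cp);   /* hypothetical helper */
                  if (err)
                          break;
                  cond_resched();
          }
          if (cp)
                  maybe_wait_bpf_programs(map);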
Hou Tao (7):
  bpf: Remove unnecessary wait from bpf_map_copy_value()
  bpf: Call maybe_wait_bpf_programs() only once for generic_map_update_batch()
  bpf: Add missed maybe_wait_bpf_programs() for htab of maps
  bpf: Only call maybe_wait_bpf_programs() when map operation succeeds
  bpf: Set uattr->batch.count as zero before batched update or deletion
  bpf: Only call maybe_wait_bpf_programs() when at least one map operation succeeds
  bpf: Wait for sleepable BPF program in maybe_wait_bpf_programs()

 include/linux/bpf.h  | 14 +++++------
 kernel/bpf/hashtab.c | 20 ++++++++-------
 kernel/bpf/syscall.c | 60 +++++++++++++++++++++++++++++++-------------
 3 files changed, 61 insertions(+), 33 deletions(-)