From patchwork Sat Dec 16 13:10:51 2023
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 13495561
X-Patchwork-Delegate: bpf@iogearbox.net
From: Hou Tao
To: bpf@vger.kernel.org
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
    Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh, Stanislav Fomichev,
    Jiri Olsa, John Fastabend, kernel test robot, houtao1@huawei.com
Subject: [PATCH bpf-next 1/2] bpf: Use c->unit_size to select target cache during free
Date: Sat, 16 Dec 2023 21:10:51 +0800
Message-Id: <20231216131052.27621-2-houtao@huaweicloud.com>
In-Reply-To: <20231216131052.27621-1-houtao@huaweicloud.com>
References: <20231216131052.27621-1-houtao@huaweicloud.com>

From: Hou Tao

At present, the bpf memory allocator uses check_obj_size() to ensure that
the ksize() of an allocated pointer equals the unit_size of the
bpf_mem_cache it came from. Its purpose is to prevent bpf_mem_free() from
selecting a bpf_mem_cache whose unit_size differs from that of the
bpf_mem_cache used for the allocation.
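As a user-space analogy of the size rounding involved here (not kernel code
and not part of this patch), glibc's malloc_usable_size() reports the size
class that actually backs an allocation, much like ksize() does in the
kernel; the check described above compared the kernel-side counterpart of
that value against c->unit_size:

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	void *p = malloc(96);

	if (!p)
		return 1;
	/* Like ksize() in the kernel, malloc_usable_size() reports the
	 * size class backing the allocation, which may exceed the 96
	 * bytes that were requested.
	 */
	printf("requested 96, usable %zu\n", malloc_usable_size(p));
	free(p);
	return 0;
}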
But as reported by lkp, the return value of ksize() or
kmalloc_size_roundup() may change due to slab merging, which leads to the
warning in check_obj_size(). The reported warning happened as follows:

(1) in bpf_mem_cache_adjust_size(), kmalloc_size_roundup(96) returns the
    object_size of kmalloc-96 instead of kmalloc-cg-96. The object_size of
    kmalloc-96 is 96, so size_index for 96 is not adjusted accordingly.
(2) the object_size of kmalloc-cg-96 is adjusted from 96 to 128 due to
    slab merging in __kmem_cache_alias(). For SLAB, SLAB_HWCACHE_ALIGN is
    enabled by default for kmalloc slabs, so the align is 64 and the size
    becomes 128 for kmalloc-cg-96. SLUB has similar merge logic, but its
    object_size is not changed, because its align is 8 under x86-64.
(3) when unit_alloc() does kmalloc_node(96, __GFP_ACCOUNT, node), ksize()
    returns 128 instead of 96 for the returned pointer.
(4) the warning in check_obj_size() is triggered.

Considering that slab merging can happen at any time (e.g., when a slab is
created by a newly-loaded module), the following case is also possible:
during the initialization of bpf_global_ma there is no slab merging and
ksize() for a 96-byte object returns 96, but afterwards a new slab created
by a kernel module is merged into kmalloc-cg-96 and the object_size of
kmalloc-cg-96 is adjusted from 96 to 128 (which is possible for x86-64 +
CONFIG_SLAB, because the alignment requirement is 64 for a 96-byte slab).
Sooner or later, when bpf_global_ma frees a 96-byte pointer that was
allocated from a bpf_mem_cache with unit_size=96, bpf_mem_free() will free
the pointer through a bpf_mem_cache whose unit_size is 128, because the
return value of ksize() has changed. The warning for the mismatch will be
triggered again.

A feasible fix would be to introduce APIs similar to ksize() and
kmalloc_size_roundup() that return a size which cannot change due to slab
merging, but that would introduce an unnecessary dependency on the
implementation details of the mm subsystem. Since the pointer of the
bpf_mem_cache is already saved in the 8-byte area (4-byte on 32-bit hosts)
just above the returned pointer, use the unit_size of the saved
bpf_mem_cache to select the target cache during free instead of inferring
the size from the pointer itself. Besides removing the extra dependency on
the mm subsystem, this also improves the performance of bpf_mem_free_rcu(),
as shown below.

Before applying the patch, the performance of bpf_mem_alloc() and
bpf_mem_free_rcu() on an 8-CPU VM with one producer is as follows:

kmalloc : alloc 11.69 ± 0.28M/s free 29.58 ± 0.93M/s
percpu  : alloc 14.11 ± 0.52M/s free 14.29 ± 0.99M/s

After applying the patch, the performance of bpf_mem_free_rcu() increases
by 9% for kmalloc memory and by 146% for per-cpu memory:

kmalloc : alloc 11.01 ± 0.03M/s free 32.42 ± 0.48M/s
percpu  : alloc 12.84 ± 0.12M/s free 35.24 ± 0.23M/s

After this fix there is no need to adjust size_index to handle the mismatch
between allocation and free, so remove that adjustment as well. Also return
NULL instead of ZERO_SIZE_PTR for zero-sized allocations in bpf_mem_alloc(),
because there is no bpf_mem_cache pointer saved above ZERO_SIZE_PTR.
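The core idea can be sketched in plain user-space C; toy_cache, toy_alloc()
and toy_free() below are illustrative stand-ins rather than the kernel
implementation, and HDR_SZ merely plays the role of LLIST_NODE_SZ:

#include <stdio.h>
#include <stdlib.h>

#define HDR_SZ sizeof(void *)	/* stands in for LLIST_NODE_SZ */

struct toy_cache {
	unsigned int unit_size;
};

static void *toy_alloc(struct toy_cache *c)
{
	void *obj = malloc(HDR_SZ + c->unit_size);

	if (!obj)
		return NULL;
	/* remember the owning cache in the hidden header */
	*(struct toy_cache **)obj = c;
	return (char *)obj + HDR_SZ;
}

static void toy_free(void *ptr)
{
	/* recover the owning cache from the hidden header instead of
	 * asking the underlying allocator for the object size
	 */
	void *obj = (char *)ptr - HDR_SZ;
	struct toy_cache *c = *(struct toy_cache **)obj;

	printf("freeing through cache with unit_size=%u\n", c->unit_size);
	free(obj);
}

int main(void)
{
	struct toy_cache c96 = { .unit_size = 96 };
	void *p = toy_alloc(&c96);

	if (p)
		toy_free(p);
	return 0;
}

Because the owning cache is recorded at allocation time, the free path no
longer depends on a size query whose answer can change when slabs are
merged.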
Fixes: 9077fc228f09 ("bpf: Use kmalloc_size_roundup() to adjust size_index")
Reported-by: kernel test robot
Closes: https://lore.kernel.org/bpf/202310302113.9f8fe705-oliver.sang@intel.com
Signed-off-by: Hou Tao
---
 kernel/bpf/memalloc.c | 105 +++++------------------------------
 1 file changed, 11 insertions(+), 94 deletions(-)

diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 6a51cfe4c2d6..aa0fbf000a12 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -490,27 +490,6 @@ static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
 	alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu), false);
 }
 
-static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
-{
-	struct llist_node *first;
-	unsigned int obj_size;
-
-	first = c->free_llist.first;
-	if (!first)
-		return 0;
-
-	if (c->percpu_size)
-		obj_size = pcpu_alloc_size(((void **)first)[1]);
-	else
-		obj_size = ksize(first);
-	if (obj_size != c->unit_size) {
-		WARN_ONCE(1, "bpf_mem_cache[%u]: percpu %d, unexpected object size %u, expect %u\n",
-			  idx, c->percpu_size, obj_size, c->unit_size);
-		return -EINVAL;
-	}
-	return 0;
-}
-
 /* When size != 0 bpf_mem_cache for each cpu.
  * This is typical bpf hash map use case when all elements have equal size.
  *
@@ -521,10 +500,10 @@ static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
 int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
 {
 	static u16 sizes[NUM_CACHES] = {96, 192, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096};
-	int cpu, i, err, unit_size, percpu_size = 0;
 	struct bpf_mem_caches *cc, __percpu *pcc;
 	struct bpf_mem_cache *c, __percpu *pc;
 	struct obj_cgroup *objcg = NULL;
+	int cpu, i, unit_size, percpu_size = 0;
 
 	/* room for llist_node and per-cpu pointer */
 	if (percpu)
@@ -560,7 +539,6 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
 	pcc = __alloc_percpu_gfp(sizeof(*cc), 8, GFP_KERNEL);
 	if (!pcc)
 		return -ENOMEM;
-	err = 0;
 #ifdef CONFIG_MEMCG_KMEM
 	objcg = get_obj_cgroup_from_current();
 #endif
@@ -574,28 +552,12 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
 			c->tgt = c;
 
 			init_refill_work(c);
-			/* Another bpf_mem_cache will be used when allocating
-			 * c->unit_size in bpf_mem_alloc(), so doesn't prefill
-			 * for the bpf_mem_cache because these free objects will
-			 * never be used.
-			 */
-			if (i != bpf_mem_cache_idx(c->unit_size))
-				continue;
 			prefill_mem_cache(c, cpu);
-			err = check_obj_size(c, i);
-			if (err)
-				goto out;
 		}
 	}
 
-out:
 	ma->caches = pcc;
-	/* refill_work is either zeroed or initialized, so it is safe to
-	 * call irq_work_sync().
-	 */
-	if (err)
-		bpf_mem_alloc_destroy(ma);
-	return err;
+	return 0;
 }
 
 static void drain_mem_cache(struct bpf_mem_cache *c)
@@ -869,7 +831,7 @@ void notrace *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size)
 	void *ret;
 
 	if (!size)
-		return ZERO_SIZE_PTR;
+		return NULL;
 
 	idx = bpf_mem_cache_idx(size + LLIST_NODE_SZ);
 	if (idx < 0)
@@ -879,26 +841,17 @@ void notrace *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size)
 	return !ret ? NULL : ret + LLIST_NODE_SZ;
 }
 
-static notrace int bpf_mem_free_idx(void *ptr, bool percpu)
-{
-	size_t size;
-
-	if (percpu)
-		size = pcpu_alloc_size(*((void **)ptr));
-	else
-		size = ksize(ptr - LLIST_NODE_SZ);
-	return bpf_mem_cache_idx(size);
-}
-
 void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
 {
+	struct bpf_mem_cache *c;
 	int idx;
 
 	if (!ptr)
 		return;
 
-	idx = bpf_mem_free_idx(ptr, ma->percpu);
-	if (idx < 0)
+	c = *(void **)(ptr - LLIST_NODE_SZ);
+	idx = bpf_mem_cache_idx(c->unit_size);
+	if (WARN_ON_ONCE(idx < 0))
 		return;
 
 	unit_free(this_cpu_ptr(ma->caches)->cache + idx, ptr);
@@ -906,13 +859,15 @@ void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
 
 void notrace bpf_mem_free_rcu(struct bpf_mem_alloc *ma, void *ptr)
 {
+	struct bpf_mem_cache *c;
 	int idx;
 
 	if (!ptr)
 		return;
 
-	idx = bpf_mem_free_idx(ptr, ma->percpu);
-	if (idx < 0)
+	c = *(void **)(ptr - LLIST_NODE_SZ);
+	idx = bpf_mem_cache_idx(c->unit_size);
+	if (WARN_ON_ONCE(idx < 0))
 		return;
 
 	unit_free_rcu(this_cpu_ptr(ma->caches)->cache + idx, ptr);
@@ -986,41 +941,3 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
 
 	return !ret ? NULL : ret + LLIST_NODE_SZ;
 }
-
-/* The alignment of dynamic per-cpu area is 8, so c->unit_size and the
- * actual size of dynamic per-cpu area will always be matched and there is
- * no need to adjust size_index for per-cpu allocation. However for the
- * simplicity of the implementation, use an unified size_index for both
- * kmalloc and per-cpu allocation.
- */
-static __init int bpf_mem_cache_adjust_size(void)
-{
-	unsigned int size;
-
-	/* Adjusting the indexes in size_index() according to the object_size
-	 * of underlying slab cache, so bpf_mem_alloc() will select a
-	 * bpf_mem_cache with unit_size equal to the object_size of
-	 * the underlying slab cache.
-	 *
-	 * The maximal value of KMALLOC_MIN_SIZE and __kmalloc_minalign() is
-	 * 256-bytes, so only do adjustment for [8-bytes, 192-bytes].
-	 */
-	for (size = 192; size >= 8; size -= 8) {
-		unsigned int kmalloc_size, index;
-
-		kmalloc_size = kmalloc_size_roundup(size);
-		if (kmalloc_size == size)
-			continue;
-
-		if (kmalloc_size <= 192)
-			index = size_index[(kmalloc_size - 1) / 8];
-		else
-			index = fls(kmalloc_size - 1) - 1;
-		/* Only overwrite if necessary */
-		if (size_index[(size - 1) / 8] != index)
-			size_index[(size - 1) / 8] = index;
-	}
-
-	return 0;
-}
-subsys_initcall(bpf_mem_cache_adjust_size);

From patchwork Sat Dec 16 13:10:52 2023
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 13495562
X-Patchwork-Delegate: bpf@iogearbox.net
From: Hou Tao
To: bpf@vger.kernel.org
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko, Song Liu,
    Hao Luo, Yonghong Song, Daniel Borkmann, KP Singh, Stanislav Fomichev,
    Jiri Olsa, John Fastabend, kernel test robot, houtao1@huawei.com
Subject: [PATCH bpf-next 2/2] selftests/bpf: Remove tests for zeroed-array kptr
Date: Sat, 16 Dec 2023 21:10:52 +0800
Message-Id: <20231216131052.27621-3-houtao@huaweicloud.com>
In-Reply-To: <20231216131052.27621-1-houtao@huaweicloud.com>
References: <20231216131052.27621-1-houtao@huaweicloud.com>

From: Hou Tao

bpf_mem_alloc() doesn't support zero-sized allocations, so remove these
tests from the test_bpf_ma test. After the removal there will be no
definition left for bin_data_8, so also remove 8 from the data_sizes array
and adjust the indices into the data_btf_ids array in all test cases
accordingly.

Signed-off-by: Hou Tao
---
 .../testing/selftests/bpf/progs/test_bpf_ma.c | 100 +++++++++---------
 1 file changed, 49 insertions(+), 51 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/test_bpf_ma.c b/tools/testing/selftests/bpf/progs/test_bpf_ma.c
index b685a4aba6bd..069db9085e78 100644
--- a/tools/testing/selftests/bpf/progs/test_bpf_ma.c
+++ b/tools/testing/selftests/bpf/progs/test_bpf_ma.c
@@ -17,7 +17,7 @@ struct generic_map_value {
 
 char _license[] SEC("license") = "GPL";
 
-const unsigned int data_sizes[] = {8, 16, 32, 64, 96, 128, 192, 256, 512, 1024, 2048, 4096};
+const unsigned int data_sizes[] = {16, 32, 64, 96, 128, 192, 256, 512, 1024, 2048, 4096};
 const volatile unsigned int data_btf_ids[ARRAY_SIZE(data_sizes)] = {};
 
 int err = 0;
@@ -166,7 +166,7 @@ static __always_inline void batch_percpu_free(struct bpf_map *map, unsigned int
 		batch_percpu_free((struct bpf_map *)(&array_percpu_##size), batch, idx); \
 	} while (0)
 
-DEFINE_ARRAY_WITH_KPTR(8);
+/* kptr doesn't support bin_data_8 which is a zero-sized array */
 DEFINE_ARRAY_WITH_KPTR(16);
 DEFINE_ARRAY_WITH_KPTR(32);
 DEFINE_ARRAY_WITH_KPTR(64);
@@ -198,21 +198,20 @@ int test_batch_alloc_free(void *ctx)
 	if ((u32)bpf_get_current_pid_tgid() != pid)
 		return 0;
 
-	/* Alloc 128 8-bytes objects in batch to trigger refilling,
-	 * then free 128 8-bytes objects in batch to trigger freeing.
+	/* Alloc 128 16-bytes objects in batch to trigger refilling,
+	 * then free 128 16-bytes objects in batch to trigger freeing.
 	 */
-	CALL_BATCH_ALLOC_FREE(8, 128, 0);
-	CALL_BATCH_ALLOC_FREE(16, 128, 1);
-	CALL_BATCH_ALLOC_FREE(32, 128, 2);
-	CALL_BATCH_ALLOC_FREE(64, 128, 3);
-	CALL_BATCH_ALLOC_FREE(96, 128, 4);
-	CALL_BATCH_ALLOC_FREE(128, 128, 5);
-	CALL_BATCH_ALLOC_FREE(192, 128, 6);
-	CALL_BATCH_ALLOC_FREE(256, 128, 7);
-	CALL_BATCH_ALLOC_FREE(512, 64, 8);
-	CALL_BATCH_ALLOC_FREE(1024, 32, 9);
-	CALL_BATCH_ALLOC_FREE(2048, 16, 10);
-	CALL_BATCH_ALLOC_FREE(4096, 8, 11);
+	CALL_BATCH_ALLOC_FREE(16, 128, 0);
+	CALL_BATCH_ALLOC_FREE(32, 128, 1);
+	CALL_BATCH_ALLOC_FREE(64, 128, 2);
+	CALL_BATCH_ALLOC_FREE(96, 128, 3);
+	CALL_BATCH_ALLOC_FREE(128, 128, 4);
+	CALL_BATCH_ALLOC_FREE(192, 128, 5);
+	CALL_BATCH_ALLOC_FREE(256, 128, 6);
+	CALL_BATCH_ALLOC_FREE(512, 64, 7);
+	CALL_BATCH_ALLOC_FREE(1024, 32, 8);
+	CALL_BATCH_ALLOC_FREE(2048, 16, 9);
+	CALL_BATCH_ALLOC_FREE(4096, 8, 10);
 
 	return 0;
 }
@@ -223,21 +222,20 @@ int test_free_through_map_free(void *ctx)
 	if ((u32)bpf_get_current_pid_tgid() != pid)
 		return 0;
 
-	/* Alloc 128 8-bytes objects in batch to trigger refilling,
+	/* Alloc 128 16-bytes objects in batch to trigger refilling,
 	 * then free these objects through map free.
 	 */
-	CALL_BATCH_ALLOC(8, 128, 0);
-	CALL_BATCH_ALLOC(16, 128, 1);
-	CALL_BATCH_ALLOC(32, 128, 2);
-	CALL_BATCH_ALLOC(64, 128, 3);
-	CALL_BATCH_ALLOC(96, 128, 4);
-	CALL_BATCH_ALLOC(128, 128, 5);
-	CALL_BATCH_ALLOC(192, 128, 6);
-	CALL_BATCH_ALLOC(256, 128, 7);
-	CALL_BATCH_ALLOC(512, 64, 8);
-	CALL_BATCH_ALLOC(1024, 32, 9);
-	CALL_BATCH_ALLOC(2048, 16, 10);
-	CALL_BATCH_ALLOC(4096, 8, 11);
+	CALL_BATCH_ALLOC(16, 128, 0);
+	CALL_BATCH_ALLOC(32, 128, 1);
+	CALL_BATCH_ALLOC(64, 128, 2);
+	CALL_BATCH_ALLOC(96, 128, 3);
+	CALL_BATCH_ALLOC(128, 128, 4);
+	CALL_BATCH_ALLOC(192, 128, 5);
+	CALL_BATCH_ALLOC(256, 128, 6);
+	CALL_BATCH_ALLOC(512, 64, 7);
+	CALL_BATCH_ALLOC(1024, 32, 8);
+	CALL_BATCH_ALLOC(2048, 16, 9);
+	CALL_BATCH_ALLOC(4096, 8, 10);
 
 	return 0;
 }
@@ -251,17 +249,17 @@ int test_batch_percpu_alloc_free(void *ctx)
 	/* Alloc 128 16-bytes per-cpu objects in batch to trigger refilling,
 	 * then free 128 16-bytes per-cpu objects in batch to trigger freeing.
 	 */
-	CALL_BATCH_PERCPU_ALLOC_FREE(16, 128, 1);
-	CALL_BATCH_PERCPU_ALLOC_FREE(32, 128, 2);
-	CALL_BATCH_PERCPU_ALLOC_FREE(64, 128, 3);
-	CALL_BATCH_PERCPU_ALLOC_FREE(96, 128, 4);
-	CALL_BATCH_PERCPU_ALLOC_FREE(128, 128, 5);
-	CALL_BATCH_PERCPU_ALLOC_FREE(192, 128, 6);
-	CALL_BATCH_PERCPU_ALLOC_FREE(256, 128, 7);
-	CALL_BATCH_PERCPU_ALLOC_FREE(512, 64, 8);
-	CALL_BATCH_PERCPU_ALLOC_FREE(1024, 32, 9);
-	CALL_BATCH_PERCPU_ALLOC_FREE(2048, 16, 10);
-	CALL_BATCH_PERCPU_ALLOC_FREE(4096, 8, 11);
+	CALL_BATCH_PERCPU_ALLOC_FREE(16, 128, 0);
+	CALL_BATCH_PERCPU_ALLOC_FREE(32, 128, 1);
+	CALL_BATCH_PERCPU_ALLOC_FREE(64, 128, 2);
+	CALL_BATCH_PERCPU_ALLOC_FREE(96, 128, 3);
+	CALL_BATCH_PERCPU_ALLOC_FREE(128, 128, 4);
+	CALL_BATCH_PERCPU_ALLOC_FREE(192, 128, 5);
+	CALL_BATCH_PERCPU_ALLOC_FREE(256, 128, 6);
+	CALL_BATCH_PERCPU_ALLOC_FREE(512, 64, 7);
+	CALL_BATCH_PERCPU_ALLOC_FREE(1024, 32, 8);
+	CALL_BATCH_PERCPU_ALLOC_FREE(2048, 16, 9);
+	CALL_BATCH_PERCPU_ALLOC_FREE(4096, 8, 10);
 
 	return 0;
 }
@@ -275,17 +273,17 @@ int test_percpu_free_through_map_free(void *ctx)
 	/* Alloc 128 16-bytes per-cpu objects in batch to trigger refilling,
 	 * then free these object through map free.
 	 */
-	CALL_BATCH_PERCPU_ALLOC(16, 128, 1);
-	CALL_BATCH_PERCPU_ALLOC(32, 128, 2);
-	CALL_BATCH_PERCPU_ALLOC(64, 128, 3);
-	CALL_BATCH_PERCPU_ALLOC(96, 128, 4);
-	CALL_BATCH_PERCPU_ALLOC(128, 128, 5);
-	CALL_BATCH_PERCPU_ALLOC(192, 128, 6);
-	CALL_BATCH_PERCPU_ALLOC(256, 128, 7);
-	CALL_BATCH_PERCPU_ALLOC(512, 64, 8);
-	CALL_BATCH_PERCPU_ALLOC(1024, 32, 9);
-	CALL_BATCH_PERCPU_ALLOC(2048, 16, 10);
-	CALL_BATCH_PERCPU_ALLOC(4096, 8, 11);
+	CALL_BATCH_PERCPU_ALLOC(16, 128, 0);
+	CALL_BATCH_PERCPU_ALLOC(32, 128, 1);
+	CALL_BATCH_PERCPU_ALLOC(64, 128, 2);
+	CALL_BATCH_PERCPU_ALLOC(96, 128, 3);
+	CALL_BATCH_PERCPU_ALLOC(128, 128, 4);
+	CALL_BATCH_PERCPU_ALLOC(192, 128, 5);
+	CALL_BATCH_PERCPU_ALLOC(256, 128, 6);
+	CALL_BATCH_PERCPU_ALLOC(512, 64, 7);
+	CALL_BATCH_PERCPU_ALLOC(1024, 32, 8);
+	CALL_BATCH_PERCPU_ALLOC(2048, 16, 9);
+	CALL_BATCH_PERCPU_ALLOC(4096, 8, 10);
 
 	return 0;
 }
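For reference, a small user-space sketch of the behaviour that motivated
dropping the zero-sized tests; toy_mem_alloc() is a hypothetical stand-in,
not the kernel's bpf_mem_alloc(), but it mirrors the convention introduced
by patch 1 of returning NULL for a zero-sized request:

#include <stdio.h>
#include <stdlib.h>

/* toy_mem_alloc() refuses zero-sized requests the same way the
 * allocator now does, so a zero-sized kptr has nothing left to test.
 */
static void *toy_mem_alloc(size_t size)
{
	if (!size)
		return NULL;
	return malloc(size);
}

int main(void)
{
	void *p = toy_mem_alloc(0);

	printf("zero-sized alloc -> %p (expected NULL)\n", p);
	free(p);
	return 0;
}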