Message ID | 20230928101558.2594068-1-houtao@huaweicloud.com (mailing list archive)
---|---
State | Accepted
Commit | 9077fc228f09c9f975c498c55f5d2e882cd0da59
Delegated to | BPF
Series | [bpf] bpf: Use kmalloc_size_roundup() to adjust size_index
Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> Commit d52b59315bf5 ("bpf: Adjust size_index according to the value of
> KMALLOC_MIN_SIZE") uses KMALLOC_MIN_SIZE to adjust size_index, but as
> reported by Nathan, the adjustment is not enough, because
> __kmalloc_minalign() also decides the minimal alignment of slab object
> as shown in new_kmalloc_cache() and its value may be greater than
> KMALLOC_MIN_SIZE (e.g., 64 bytes vs 8 bytes under a riscv QEMU VM).
>
> Instead of invoking __kmalloc_minalign() in bpf subsystem to find the
> maximal alignment, just using kmalloc_size_roundup() directly to get the
> corresponding slab object size for each allocation size. If these two
> sizes are unmatched, adjust size_index to select a bpf_mem_cache with
> unit_size equal to the object_size of the underlying slab cache for the
> allocation size.

I applied this to 6.6-rc3 and it fixes the warning on my Nezha board
(Allwinner D1), and it also boots fine on my VisionFive 2 (JH7110), which
didn't show the error before. I didn't do any other testing beyond that,
but for basic boot testing:

Tested-by: Emil Renner Berthing <emil.renner.berthing@canonical.com>

> Fixes: 822fb26bdb55 ("bpf: Add a hint to allocated objects.")
> Reported-by: Nathan Chancellor <nathan@kernel.org>
> Closes: https://lore.kernel.org/bpf/20230914181407.GA1000274@dev-arch.thelio-3990X/
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
>  kernel/bpf/memalloc.c | 44 +++++++++++++++++++------------------------
>  1 file changed, 19 insertions(+), 25 deletions(-)
>
> diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
> index 1c22b90e754a..06fbb5168482 100644
> --- a/kernel/bpf/memalloc.c
> +++ b/kernel/bpf/memalloc.c
> @@ -958,37 +958,31 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
>          return !ret ? NULL : ret + LLIST_NODE_SZ;
>  }
>
> -/* Most of the logic is taken from setup_kmalloc_cache_index_table() */
>  static __init int bpf_mem_cache_adjust_size(void)
>  {
> -        unsigned int size, index;
> +        unsigned int size;
>
> -        /* Normally KMALLOC_MIN_SIZE is 8-bytes, but it can be
> -         * up-to 256-bytes.
> +        /* Adjusting the indexes in size_index() according to the object_size
> +         * of underlying slab cache, so bpf_mem_alloc() will select a
> +         * bpf_mem_cache with unit_size equal to the object_size of
> +         * the underlying slab cache.
> +         *
> +         * The maximal value of KMALLOC_MIN_SIZE and __kmalloc_minalign() is
> +         * 256-bytes, so only do adjustment for [8-bytes, 192-bytes].
>           */
> -        size = KMALLOC_MIN_SIZE;
> -        if (size <= 192)
> -                index = size_index[(size - 1) / 8];
> -        else
> -                index = fls(size - 1) - 1;
> -        for (size = 8; size < KMALLOC_MIN_SIZE && size <= 192; size += 8)
> -                size_index[(size - 1) / 8] = index;
> +        for (size = 192; size >= 8; size -= 8) {
> +                unsigned int kmalloc_size, index;
>
> -        /* The minimal alignment is 64-bytes, so disable 96-bytes cache and
> -         * use 128-bytes cache instead.
> -         */
> -        if (KMALLOC_MIN_SIZE >= 64) {
> -                index = size_index[(128 - 1) / 8];
> -                for (size = 64 + 8; size <= 96; size += 8)
> -                        size_index[(size - 1) / 8] = index;
> -        }
> +                kmalloc_size = kmalloc_size_roundup(size);
> +                if (kmalloc_size == size)
> +                        continue;
>
> -        /* The minimal alignment is 128-bytes, so disable 192-bytes cache and
> -         * use 256-bytes cache instead.
> -         */
> -        if (KMALLOC_MIN_SIZE >= 128) {
> -                index = fls(256 - 1) - 1;
> -                for (size = 128 + 8; size <= 192; size += 8)
> +                if (kmalloc_size <= 192)
> +                        index = size_index[(kmalloc_size - 1) / 8];
> +                else
> +                        index = fls(kmalloc_size - 1) - 1;
> +                /* Only overwrite if necessary */
> +                if (size_index[(size - 1) / 8] != index)
>                          size_index[(size - 1) / 8] = index;
>          }
>
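To make the rewritten loop concrete, here is a minimal standalone userspace sketch (not kernel code) that replays the patch's adjustment logic against a copy of the kernel's size_index table. The kmalloc_size_roundup() and fls() helpers are stubbed for illustration; the roundup stub assumes the 64-byte minimum slab alignment from the riscv QEMU report, so every object size rounds up to the next multiple of 64.

```c
/* Userspace sketch of the adjustment loop above -- not kernel code.
 * kmalloc_size_roundup() is stubbed under the assumption of a 64-byte
 * minimum slab alignment (the riscv QEMU case from the report).
 */
#include <stdio.h>

/* Copy of the kernel's size_index table: entry (size - 1) / 8 selects
 * the cache for an allocation of up to 192 bytes.
 */
static unsigned char size_index[24] = {
        3, 4, 5, 5, 6, 6, 6, 6,         /*   8 ..  64 */
        1, 1, 1, 1,                     /*  72 ..  96 */
        7, 7, 7, 7,                     /* 104 .. 128 */
        2, 2, 2, 2, 2, 2, 2, 2,         /* 136 .. 192 */
};

/* fls(x): 1-based position of the most significant set bit (0 for x == 0) */
static unsigned int fls(unsigned int x)
{
        unsigned int r = 0;

        while (x) {
                x >>= 1;
                r++;
        }
        return r;
}

/* Stub: round up to the next multiple of 64, modelling the object_size
 * the slab allocator would hand back when the minimum alignment is 64.
 */
static unsigned int kmalloc_size_roundup(unsigned int size)
{
        return (size + 63) & ~63u;
}

int main(void)
{
        unsigned int size;

        for (size = 192; size >= 8; size -= 8) {
                unsigned int kmalloc_size = kmalloc_size_roundup(size);
                unsigned int index;

                if (kmalloc_size == size)
                        continue;

                /* Reuse the entry of the rounded-up size so that `size` and
                 * kmalloc_size select the same cache; the fls() branch covers
                 * round-ups past 192 (e.g. a 256-byte minimum alignment).
                 */
                if (kmalloc_size <= 192)
                        index = size_index[(kmalloc_size - 1) / 8];
                else
                        index = fls(kmalloc_size - 1) - 1;
                if (size_index[(size - 1) / 8] != index) {
                        size_index[(size - 1) / 8] = index;
                        printf("size %3u now shares the cache of size %3u\n",
                               size, kmalloc_size);
                }
        }
        return 0;
}
```

Under that assumed alignment, the loop remaps the 8-32 byte entries onto the 64-byte class and the 72-96 byte entries onto the 128-byte class; these are exactly the entries the previous KMALLOC_MIN_SIZE-only adjustment left untouched when __kmalloc_minalign() returned 64 while KMALLOC_MIN_SIZE was 8.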
Hello:

This patch was applied to bpf/bpf.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Thu, 28 Sep 2023 18:15:58 +0800 you wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> Commit d52b59315bf5 ("bpf: Adjust size_index according to the value of
> KMALLOC_MIN_SIZE") uses KMALLOC_MIN_SIZE to adjust size_index, but as
> reported by Nathan, the adjustment is not enough, because
> __kmalloc_minalign() also decides the minimal alignment of slab object
> as shown in new_kmalloc_cache() and its value may be greater than
> KMALLOC_MIN_SIZE (e.g., 64 bytes vs 8 bytes under a riscv QEMU VM).
>
> [...]

Here is the summary with links:
  - [bpf] bpf: Use kmalloc_size_roundup() to adjust size_index
    https://git.kernel.org/bpf/bpf/c/9077fc228f09

You are awesome, thank you!
```diff
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 1c22b90e754a..06fbb5168482 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -958,37 +958,31 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
         return !ret ? NULL : ret + LLIST_NODE_SZ;
 }
 
-/* Most of the logic is taken from setup_kmalloc_cache_index_table() */
 static __init int bpf_mem_cache_adjust_size(void)
 {
-        unsigned int size, index;
+        unsigned int size;
 
-        /* Normally KMALLOC_MIN_SIZE is 8-bytes, but it can be
-         * up-to 256-bytes.
+        /* Adjusting the indexes in size_index() according to the object_size
+         * of underlying slab cache, so bpf_mem_alloc() will select a
+         * bpf_mem_cache with unit_size equal to the object_size of
+         * the underlying slab cache.
+         *
+         * The maximal value of KMALLOC_MIN_SIZE and __kmalloc_minalign() is
+         * 256-bytes, so only do adjustment for [8-bytes, 192-bytes].
          */
-        size = KMALLOC_MIN_SIZE;
-        if (size <= 192)
-                index = size_index[(size - 1) / 8];
-        else
-                index = fls(size - 1) - 1;
-        for (size = 8; size < KMALLOC_MIN_SIZE && size <= 192; size += 8)
-                size_index[(size - 1) / 8] = index;
+        for (size = 192; size >= 8; size -= 8) {
+                unsigned int kmalloc_size, index;
 
-        /* The minimal alignment is 64-bytes, so disable 96-bytes cache and
-         * use 128-bytes cache instead.
-         */
-        if (KMALLOC_MIN_SIZE >= 64) {
-                index = size_index[(128 - 1) / 8];
-                for (size = 64 + 8; size <= 96; size += 8)
-                        size_index[(size - 1) / 8] = index;
-        }
+                kmalloc_size = kmalloc_size_roundup(size);
+                if (kmalloc_size == size)
+                        continue;
 
-        /* The minimal alignment is 128-bytes, so disable 192-bytes cache and
-         * use 256-bytes cache instead.
-         */
-        if (KMALLOC_MIN_SIZE >= 128) {
-                index = fls(256 - 1) - 1;
-                for (size = 128 + 8; size <= 192; size += 8)
+                if (kmalloc_size <= 192)
+                        index = size_index[(kmalloc_size - 1) / 8];
+                else
+                        index = fls(kmalloc_size - 1) - 1;
+                /* Only overwrite if necessary */
+                if (size_index[(size - 1) / 8] != index)
                         size_index[(size - 1) / 8] = index;
         }
```
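One detail worth spelling out in the fallback branch: for round-ups past the 192-byte end of the size_index table, the patch derives the index arithmetically. Since fls(n - 1) equals ceil(log2(n)) for n >= 2, fls(kmalloc_size - 1) - 1 yields one strictly increasing index per power-of-two cache size. A quick standalone check, with fls() stubbed as in the earlier sketch:

```c
/* Check of the fallback index arithmetic from the patch:
 * fls(n - 1) - 1 gives one index per power-of-two size past 192.
 */
#include <stdio.h>

/* fls(x): 1-based position of the most significant set bit (0 for x == 0) */
static unsigned int fls(unsigned int x)
{
        unsigned int r = 0;

        while (x) {
                x >>= 1;
                r++;
        }
        return r;
}

int main(void)
{
        unsigned int n;

        /* 256, 512, 1024, 2048, 4096 -> indexes 7, 8, 9, 10, 11 */
        for (n = 256; n <= 4096; n <<= 1)
                printf("fls(%4u - 1) - 1 = %u\n", n, fls(n - 1) - 1);
        return 0;
}
```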