
[bpf-next,v3] bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage

Message ID 20221118183409.give.387-kees@kernel.org (mailing list archive)
State Accepted
Commit ceb35b666d42c2e91b1f94aeca95bb5eb0943268
Delegated to: BPF
Series [bpf-next,v3] bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 12 this patch: 12
netdev/cc_maintainers success CCed 12 of 12 maintainers
netdev/build_clang success Errors and warnings before: 8 this patch: 8
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 12 this patch: 12
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 38 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-7 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 fail Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 fail Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-37 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-38 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-36 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on s390x with gcc

Commit Message

Kees Cook Nov. 18, 2022, 6:34 p.m. UTC
Most allocation sites in the kernel want an explicitly sized allocation
(and not "more"), and dynamic runtime analysis tools (e.g. KASAN,
UBSAN_BOUNDS, FORTIFY_SOURCE, etc.) are looking for precise bounds
checking (i.e. not something that is rounded up). A tiny handful of
allocations were doing an implicit alloc/realloc loop that actually
depended on ksize() and didn't always call realloc. This has created a
long series of bugs and problems over many years related to runtime
bounds checking, so these callers are finally being adjusted to _not_
depend on the ksize() side-effect, by doing one of several things:

- tracking the allocation size precisely and just never calling ksize()
  at all[1].

- always calling realloc and not using ksize() at all. (This solution
  ends up actually being a subset of the next one.)

- using kmalloc_size_roundup() to explicitly round up the desired
  allocation size immediately[2].

The bpf/verifier case is another instance of this latter approach, and
is the last outstanding case to be fixed in the kernel.

Because some of the dynamic bounds checking depends on the size being an
_argument_ to an allocator function (i.e. see the __alloc_size attribute),
because the ksize() users are rare, and because it could waste local
variables, it was deemed better to explicitly separate the rounding up
from the allocation itself[3].
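
For context, a simplified sketch of what the __alloc_size annotation looks
like on the allocator side (a rough approximation, not the exact declaration
from include/linux/slab.h):

    /* Simplified: the real declaration carries further attributes. */
    void *kmalloc(size_t size, gfp_t flags) __alloc_size(1);

    /*
     * With this annotation the compiler can resolve the object size of
     * the returned pointer to 'size', so FORTIFY_SOURCE/UBSAN_BOUNDS can
     * flag accesses past the *requested* size, even when ksize() would
     * report a larger bucket.
     */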

Round up allocations with kmalloc_size_roundup() so that the verifier's
use of ksize() is always accurate.

[1] e.g.:
    https://git.kernel.org/linus/712f210a457d
    https://git.kernel.org/linus/72c08d9f4c72

[2] e.g.:
    https://git.kernel.org/netdev/net-next/c/12d6c1d3a2ad
    https://git.kernel.org/netdev/net-next/c/ab3f7828c979
    https://git.kernel.org/netdev/net-next/c/d6dd508080a3

[3] https://lore.kernel.org/lkml/0ea1fc165a6c6117f982f4f135093e69cb884930.camel@redhat.com/
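
As an illustration of the pattern this patch adopts (a minimal sketch, not
part of the patch; the helper name and its caller are hypothetical):

    #include <linux/slab.h>

    /* Grow a buffer while keeping ksize() in sync with the request. */
    static void *grow_buf(void *buf, size_t want_bytes, gfp_t flags)
    {
            /*
             * Round the request up to the size kmalloc() would actually
             * provide, so a later ksize(buf) returns exactly what was
             * requested and bounds checkers see no hidden slack.
             */
            size_t alloc_size = kmalloc_size_roundup(want_bytes);

            return krealloc(buf, alloc_size, flags);
    }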

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Cc: Song Liu <song@kernel.org>
Cc: Yonghong Song <yhs@fb.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Stanislav Fomichev <sdf@google.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: bpf@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
v3:
- memory leak already taken into -next (daniel)
- improve commit log (daniel)
- drop optimization patch for now (sdf)
v2: https://lore.kernel.org/lkml/20221029024444.gonna.633-kees@kernel.org/
v1: https://lore.kernel.org/lkml/20221018090550.never.834-kees@kernel.org/
---
 kernel/bpf/verifier.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

Comments

Stanislav Fomichev Nov. 18, 2022, 9:46 p.m. UTC | #1
On Fri, Nov 18, 2022 at 10:34 AM Kees Cook <keescook@chromium.org> wrote:
>
> Most allocation sites in the kernel want an explicitly sized allocation
> (and not "more"), and dynamic runtime analysis tools (e.g. KASAN,
> UBSAN_BOUNDS, FORTIFY_SOURCE, etc.) are looking for precise bounds
> checking (i.e. not something that is rounded up). A tiny handful of
> allocations were doing an implicit alloc/realloc loop that actually
> depended on ksize() and didn't always call realloc. This has created a
> long series of bugs and problems over many years related to runtime
> bounds checking, so these callers are finally being adjusted to _not_
> depend on the ksize() side-effect, by doing one of several things:
>
> - tracking the allocation size precisely and just never calling ksize()
>   at all[1].
>
> - always calling realloc and not using ksize() at all. (This solution
>   ends up actually being a subset of the next one.)
>
> - using kmalloc_size_roundup() to explicitly round up the desired
>   allocation size immediately[2].
>
> The bpf/verifier case is another instance of this latter approach, and
> is the last outstanding case to be fixed in the kernel.
>
> Because some of the dynamic bounds checking depends on the size being an
> _argument_ to an allocator function (i.e. see the __alloc_size attribute),
> because the ksize() users are rare, and because it could waste local
> variables, it was deemed better to explicitly separate the rounding up
> from the allocation itself[3].
>
> Round up allocations with kmalloc_size_roundup() so that the verifier's
> use of ksize() is always accurate.
>
> [1] e.g.:
>     https://git.kernel.org/linus/712f210a457d
>     https://git.kernel.org/linus/72c08d9f4c72
>
> [2] e.g.:
>     https://git.kernel.org/netdev/net-next/c/12d6c1d3a2ad
>     https://git.kernel.org/netdev/net-next/c/ab3f7828c979
>     https://git.kernel.org/netdev/net-next/c/d6dd508080a3
>
> [3] https://lore.kernel.org/lkml/0ea1fc165a6c6117f982f4f135093e69cb884930.camel@redhat.com/
>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: Andrii Nakryiko <andrii@kernel.org>
> Cc: Martin KaFai Lau <martin.lau@linux.dev>
> Cc: Song Liu <song@kernel.org>
> Cc: Yonghong Song <yhs@fb.com>
> Cc: KP Singh <kpsingh@kernel.org>
> Cc: Stanislav Fomichev <sdf@google.com>

Acked-by: Stanislav Fomichev <sdf@google.com>

> Cc: Hao Luo <haoluo@google.com>
> Cc: Jiri Olsa <jolsa@kernel.org>
> Cc: bpf@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> v3:
> - memory leak already taken into -next (daniel)
> - improve commit log (daniel)
> - drop optimization patch for now (sdf)
> v2: https://lore.kernel.org/lkml/20221029024444.gonna.633-kees@kernel.org/
> v1: https://lore.kernel.org/lkml/20221018090550.never.834-kees@kernel.org/
> ---
>  kernel/bpf/verifier.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index beed7e03addc..c596c7c75d25 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1010,9 +1010,9 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
>         if (unlikely(check_mul_overflow(n, size, &bytes)))
>                 return NULL;
>
> -       if (ksize(dst) < bytes) {
> +       if (ksize(dst) < ksize(src)) {
>                 kfree(dst);
> -               dst = kmalloc_track_caller(bytes, flags);
> +               dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
>                 if (!dst)
>                         return NULL;
>         }
> @@ -1029,12 +1029,14 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
>   */
>  static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
>  {
> +       size_t alloc_size;
>         void *new_arr;
>
>         if (!new_n || old_n == new_n)
>                 goto out;
>
> -       new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
> +       alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
> +       new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
>         if (!new_arr) {
>                 kfree(arr);
>                 return NULL;
> @@ -2506,9 +2508,11 @@ static int push_jmp_history(struct bpf_verifier_env *env,
>  {
>         u32 cnt = cur->jmp_history_cnt;
>         struct bpf_idx_pair *p;
> +       size_t alloc_size;
>
>         cnt++;
> -       p = krealloc(cur->jmp_history, cnt * sizeof(*p), GFP_USER);
> +       alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
> +       p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
>         if (!p)
>                 return -ENOMEM;
>         p[cnt - 1].idx = env->insn_idx;
> --
> 2.34.1
>
patchwork-bot+netdevbpf@kernel.org Nov. 21, 2022, 2:20 p.m. UTC | #2
Hello:

This patch was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Fri, 18 Nov 2022 10:34:14 -0800 you wrote:
> Most allocation sites in the kernel want an explicitly sized allocation
> (and not "more"), and dynamic runtime analysis tools (e.g. KASAN,
> UBSAN_BOUNDS, FORTIFY_SOURCE, etc.) are looking for precise bounds
> checking (i.e. not something that is rounded up). A tiny handful of
> allocations were doing an implicit alloc/realloc loop that actually
> depended on ksize() and didn't always call realloc. This has created a
> long series of bugs and problems over many years related to runtime
> bounds checking, so these callers are finally being adjusted to _not_
> depend on the ksize() side-effect, by doing one of several things:
> 
> [...]

Here is the summary with links:
  - [bpf-next,v3] bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage
    https://git.kernel.org/bpf/bpf-next/c/ceb35b666d42

You are awesome, thank you!

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index beed7e03addc..c596c7c75d25 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1010,9 +1010,9 @@  static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 
-	if (ksize(dst) < bytes) {
+	if (ksize(dst) < ksize(src)) {
 		kfree(dst);
-		dst = kmalloc_track_caller(bytes, flags);
+		dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
 		if (!dst)
 			return NULL;
 	}
@@ -1029,12 +1029,14 @@  static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
  */
 static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
 {
+	size_t alloc_size;
 	void *new_arr;
 
 	if (!new_n || old_n == new_n)
 		goto out;
 
-	new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
+	alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
+	new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
 	if (!new_arr) {
 		kfree(arr);
 		return NULL;
@@ -2506,9 +2508,11 @@  static int push_jmp_history(struct bpf_verifier_env *env,
 {
 	u32 cnt = cur->jmp_history_cnt;
 	struct bpf_idx_pair *p;
+	size_t alloc_size;
 
 	cnt++;
-	p = krealloc(cur->jmp_history, cnt * sizeof(*p), GFP_USER);
+	alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
+	p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
 	if (!p)
 		return -ENOMEM;
 	p[cnt - 1].idx = env->insn_idx;