
[bpf-next,v2,2/3] bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage

Message ID: 20221029025433.2533810-2-keescook@chromium.org (mailing list archive)
State: Changes Requested
Delegated to: BPF
Series: bpf/verifier: Use kmalloc_size_roundup() to match ksize() usage

Checks

Context Check Description
bpf/vmtest-bpf-next-PR pending PR summary
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 10 this patch: 10
netdev/cc_maintainers success CCed 12 of 12 maintainers
netdev/build_clang success Errors and warnings before: 5 this patch: 5
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 10 this patch: 10
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 38 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-1 pending Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-5 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-6 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-7 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-10 fail Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_verifier on x86_64 with llvm-16

Commit Message

Kees Cook Oct. 29, 2022, 2:54 a.m. UTC
Round up allocations with kmalloc_size_roundup() so that the verifier's
use of ksize() is always accurate and no special handling of the memory
is needed by KASAN, UBSAN_BOUNDS, or FORTIFY_SOURCE. Pass the new size
information back up to callers so they can use the space immediately,
so that array resizing happens less frequently as well.

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Cc: Song Liu <song@kernel.org>
Cc: Yonghong Song <yhs@fb.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Stanislav Fomichev <sdf@google.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: bpf@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 kernel/bpf/verifier.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
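
For readers without the rest of the series at hand, the core pattern the
patch applies is roughly the following (a minimal sketch, not code from
the patch itself; the variable names are illustrative):

	/* Before: grow on demand, then use ksize() to discover how much
	 * slack the slab allocator quietly granted beyond the request.
	 */
	buf = krealloc(buf, count * sizeof(*buf), GFP_KERNEL);
	capacity = ksize(buf) / sizeof(*buf);

	/* After: round the request up front, so the size passed to the
	 * allocator is the same size the runtime bounds checkers will
	 * enforce, and the spare capacity is known without ksize().
	 */
	alloc_size = kmalloc_size_roundup(size_mul(count, sizeof(*buf)));
	buf = krealloc(buf, alloc_size, GFP_KERNEL);
	capacity = alloc_size / sizeof(*buf);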

Comments

Daniel Borkmann Nov. 1, 2022, 1:52 p.m. UTC | #1
On 10/29/22 4:54 AM, Kees Cook wrote:
> Round up allocations with kmalloc_size_roundup() so that the verifier's
> use of ksize() is always accurate and no special handling of the memory
> is needed by KASAN, UBSAN_BOUNDS, or FORTIFY_SOURCE. Pass the new size
> information back up to callers so they can use the space immediately,
> so that array resizing happens less frequently as well.
> 
[...]

The commit message is a bit cryptic here without further context. Is this
a bug fix or an improvement? I read the latter, but it would be good to have
more context here for reviewers (maybe a Link tag pointing to some discussion
or the like). Also, why is kmalloc_size_roundup() not hidden from kmalloc()
callers -- isn't this a tree-wide issue?

Thanks,
Daniel

>   kernel/bpf/verifier.c | 12 ++++++++----
>   1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index eb8c34db74c7..1c040d27b8f6 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1008,9 +1008,9 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
>   	if (unlikely(check_mul_overflow(n, size, &bytes)))
>   		return NULL;
>   
> -	if (ksize(dst) < bytes) {
> +	if (ksize(dst) < ksize(src)) {
>   		kfree(dst);
> -		dst = kmalloc_track_caller(bytes, flags);
> +		dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
>   		if (!dst)
>   			return NULL;
>   	}
> @@ -1027,12 +1027,14 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
>    */
>   static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
>   {
> +	size_t alloc_size;
>   	void *new_arr;
>   
>   	if (!new_n || old_n == new_n)
>   		goto out;
>   
> -	new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
> +	alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
> +	new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
>   	if (!new_arr) {
>   		kfree(arr);
>   		return NULL;
> @@ -2504,9 +2506,11 @@ static int push_jmp_history(struct bpf_verifier_env *env,
>   {
>   	u32 cnt = cur->jmp_history_cnt;
>   	struct bpf_idx_pair *p;
> +	size_t alloc_size;
>   
>   	cnt++;
> -	p = krealloc(cur->jmp_history, cnt * sizeof(*p), GFP_USER);
> +	alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
> +	p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
>   	if (!p)
>   		return -ENOMEM;
>   	p[cnt - 1].idx = env->insn_idx;
>
Kees Cook Nov. 1, 2022, 5:01 p.m. UTC | #2
On Tue, Nov 01, 2022 at 02:52:16PM +0100, Daniel Borkmann wrote:
> On 10/29/22 4:54 AM, Kees Cook wrote:
> > Round up allocations with kmalloc_size_roundup() so that the verifier's
> > use of ksize() is always accurate and no special handling of the memory
> > is needed by KASAN, UBSAN_BOUNDS, or FORTIFY_SOURCE. Pass the new size
> > information back up to callers so they can use the space immediately,
> > so that array resizing happens less frequently as well.
> > 
> [...]
> 
> The commit message is a bit cryptic here without further context. Is this
> a bug fix or an improvement? I read the latter, but it would be good to have

It's an improvement -- for one, it depends on the recently added
kmalloc_size_roundup() helper.
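
For anyone unfamiliar with the new helper: it reports the size that a
kmalloc() request of a given size would be rounded up to internally. A
quick hypothetical sketch (actual bucket sizes vary by configuration):

	size_t want = 17;
	size_t got = kmalloc_size_roundup(want);	/* e.g. 32 */
	void *p = kmalloc(got, GFP_KERNEL);
	/* All 'got' bytes are usable, and precise bounds checking
	 * agrees, because 'got' is the size that was requested.
	 */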

> more context here for reviewers (maybe a Link tag pointing to some discussion
> or the like). Also, why is kmalloc_size_roundup() not hidden from kmalloc()
> callers -- isn't this a tree-wide issue?

The main issue is that _most_ allocation callers want an explicitly sized
allocation (and not "more"), and that dynamic runtime analysis tools
(e.g. KASAN, UBSAN_BOUNDS, FORTIFY_SOURCE) expect precise bounds
checking (i.e. not something that is rounded up). A tiny handful of
allocations were doing an implicit alloc/realloc loop that depended on
ksize() and didn't always call realloc (see the sketch below). This has
created a long series of bugs and problems over many years related to the
runtime bounds checking, so these callers are finally being adjusted to
_not_ depend on the ksize() side-effect, by doing one of several things:

- tracking the allocation size precisely and just never calling ksize()
  at all[1].

- always calling realloc and not using ksize() at all. (This solution
  ends up being a subset of the next one.)

- using kmalloc_size_roundup() to explicitly round up the desired
  allocation size immediately[2].

The bpf/verifier case is an instance of this latter approach.
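
To make the problem concrete, here is an illustrative version of the
implicit pattern being removed (hypothetical code, not taken from any
real caller):

	p = kmalloc(17, GFP_KERNEL);	/* 17 bytes requested... */
	avail = ksize(p);		/* ...but the slab bucket is,
					 * say, 32 bytes */
	memset(p, 0, avail);		/* relies on the hidden slack:
					 * under precise bounds checking
					 * this looks like an overflow
					 * of a 17-byte object, even
					 * though the memory exists */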

Because some of the dynamic bounds checking depends on the size being an
_argument_ to an allocator function (i.e. see the __alloc_size attribute),
because the ksize() users are rare, and because an implicit roundup could
waste local variables, it was deemed better to explicitly separate the
rounding up from the allocation itself[3].
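
For reference, the annotation in question looks roughly like this
(simplified from the kernel's actual kmalloc() declaration):

	/* The returned pointer addresses exactly 'size' bytes; the
	 * compiler and FORTIFY_SOURCE derive object bounds from this
	 * argument.
	 */
	void *kmalloc(size_t size, gfp_t flags)
		__attribute__((alloc_size(1)));

This is also why a roundup hidden inside the allocator would not help:
the checkers take their bounds from the argument, so the rounding has to
be visible in the argument itself.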

Hopefully that helps clarify! :)

-Kees

[1] e.g.:
    https://git.kernel.org/linus/712f210a457d
    https://git.kernel.org/linus/72c08d9f4c72

[2] e.g.:
    https://git.kernel.org/netdev/net-next/c/12d6c1d3a2ad
    https://git.kernel.org/netdev/net-next/c/ab3f7828c979
    https://git.kernel.org/netdev/net-next/c/d6dd508080a3

[3] https://lore.kernel.org/lkml/0ea1fc165a6c6117f982f4f135093e69cb884930.camel@redhat.com/

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index eb8c34db74c7..1c040d27b8f6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1008,9 +1008,9 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 
-	if (ksize(dst) < bytes) {
+	if (ksize(dst) < ksize(src)) {
 		kfree(dst);
-		dst = kmalloc_track_caller(bytes, flags);
+		dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
 		if (!dst)
 			return NULL;
 	}
@@ -1027,12 +1027,14 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
  */
 static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
 {
+	size_t alloc_size;
 	void *new_arr;
 
 	if (!new_n || old_n == new_n)
 		goto out;
 
-	new_arr = krealloc_array(arr, new_n, size, GFP_KERNEL);
+	alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
+	new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
 	if (!new_arr) {
 		kfree(arr);
 		return NULL;
@@ -2504,9 +2506,11 @@ static int push_jmp_history(struct bpf_verifier_env *env,
 {
 	u32 cnt = cur->jmp_history_cnt;
 	struct bpf_idx_pair *p;
+	size_t alloc_size;
 
 	cnt++;
-	p = krealloc(cur->jmp_history, cnt * sizeof(*p), GFP_USER);
+	alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
+	p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
 	if (!p)
 		return -ENOMEM;
 	p[cnt - 1].idx = env->insn_idx;