[bpf-next,3/4] bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs

Message ID 20221217021711.172247-4-eddyz87@gmail.com (mailing list archive)
State Changes Requested
Delegated to: BPF
Series reduce BPF_ID_MAP_SIZE to fit only valid programs

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 66 this patch: 66
netdev/cc_maintainers warning 7 maintainers not CCed: kpsingh@kernel.org haoluo@google.com song@kernel.org martin.lau@linux.dev sdf@google.com john.fastabend@gmail.com jolsa@kernel.org
netdev/build_clang success Errors and warnings before: 13 this patch: 13
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 66 this patch: 66
netdev/checkpatch warning WARNING: line length of 81 exceeds 80 columns WARNING: line length of 86 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-7 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 fail Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 fail Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-36 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-37 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-38 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-PR fail PR summary

Commit Message

Eduard Zingerman Dec. 17, 2022, 2:17 a.m. UTC
BPF limits stack usage to MAX_BPF_STACK bytes across all call frames
combined; however, this is enforced by check_max_stack_depth(), which
runs after do_check_{subprogs,main}().

This means that when check_ids() is executed the maximal stack depth has
not yet been verified, so in theory the number of stack spills might be
as large as MAX_CALL_FRAMES * MAX_BPF_STACK / BPF_REG_SIZE.

However, any program whose combined stack usage exceeds MAX_BPF_STACK
bytes (i.e. more than MAX_BPF_STACK / BPF_REG_SIZE spill slots in total)
would be rejected by the verifier.

Hence save some memory by reducing BPF_ID_MAP_SIZE.

This is a follow-up to
https://lore.kernel.org/bpf/CAEf4BzYN1JmY9t03pnCHc4actob80wkBz2vk90ihJCBzi8CT9w@mail.gmail.com/

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf_verifier.h | 4 ++--
 kernel/bpf/verifier.c        | 6 ++++--
 2 files changed, 6 insertions(+), 4 deletions(-)

Comments

Yonghong Song Dec. 17, 2022, 6:59 p.m. UTC | #1
On 12/16/22 6:17 PM, Eduard Zingerman wrote:
> BPF limits stack usage to MAX_BPF_STACK bytes across all call frames
> combined; however, this is enforced by check_max_stack_depth(), which
> runs after do_check_{subprogs,main}().
>
> This means that when check_ids() is executed the maximal stack depth has
> not yet been verified, so in theory the number of stack spills might be
> as large as MAX_CALL_FRAMES * MAX_BPF_STACK / BPF_REG_SIZE.
>
> However, any program whose combined stack usage exceeds MAX_BPF_STACK
> bytes (i.e. more than MAX_BPF_STACK / BPF_REG_SIZE spill slots in total)
> would be rejected by the verifier.
>
> Hence save some memory by reducing BPF_ID_MAP_SIZE.
> 
> This is a follow up for
> https://lore.kernel.org/bpf/CAEf4BzYN1JmY9t03pnCHc4actob80wkBz2vk90ihJCBzi8CT9w@mail.gmail.com/
> 
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Yonghong Song <yhs@fb.com>

Andrii Nakryiko Dec. 20, 2022, 9:06 p.m. UTC | #2
On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> BPF limits stack usage to MAX_BPF_STACK bytes across all call frames
> combined; however, this is enforced by check_max_stack_depth(), which
> runs after do_check_{subprogs,main}().
>
> This means that when check_ids() is executed the maximal stack depth has
> not yet been verified, so in theory the number of stack spills might be
> as large as MAX_CALL_FRAMES * MAX_BPF_STACK / BPF_REG_SIZE.
>
> However, any program whose combined stack usage exceeds MAX_BPF_STACK
> bytes (i.e. more than MAX_BPF_STACK / BPF_REG_SIZE spill slots in total)
> would be rejected by the verifier.
>
> Hence save some memory by reducing BPF_ID_MAP_SIZE.
>
> This is a follow up for
> https://lore.kernel.org/bpf/CAEf4BzYN1JmY9t03pnCHc4actob80wkBz2vk90ihJCBzi8CT9w@mail.gmail.com/
>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> ---

LGTM, thanks.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

> [...]
Patch

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 53d175cbaa02..da72e16f1dee 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -274,8 +274,8 @@  struct bpf_id_pair {
 };
 
 #define MAX_CALL_FRAMES 8
-/* Maximum number of register states that can exist at once */
-#define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
+/* Maximum number of register states that can exist at once in a valid program */
+#define BPF_ID_MAP_SIZE (MAX_BPF_REG * MAX_CALL_FRAMES + MAX_BPF_STACK / BPF_REG_SIZE)
 struct bpf_verifier_state {
 	/* call stack tracking */
 	struct bpf_func_state *frame[MAX_CALL_FRAMES];
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a5255a0dcbb6..fb040516a946 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12951,8 +12951,10 @@  static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
 		if (idmap[i].old == old_id)
 			return idmap[i].cur == cur_id;
 	}
-	/* We ran out of idmap slots, which should be impossible */
-	WARN_ON_ONCE(1);
+	/* Ran out of slots in idmap; conservatively return false so the
+	 * cached state is not reused. BPF_ID_MAP_SIZE is sufficiently
+	 * large to fit all valid programs.
+	 */
 	return false;
 }