[bpf-next,1/3] bpf: Sanitize STX/ST in jited BPF progs with KASAN

Message ID 20221123141546.238297-2-sunhao.th@gmail.com (mailing list archive)
State Superseded
Delegated to: BPF
Series bpf: Add LDX/STX/ST sanitize in jited BPF progs

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-21 fail Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 fail Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_progs_parallel on s390x with gcc
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1400 this patch: 1400
netdev/cc_maintainers warning 9 maintainers not CCed: dave.hansen@linux.intel.com netdev@vger.kernel.org hpa@zytor.com mingo@redhat.com tglx@linutronix.de bp@alien8.de dsahern@kernel.org yoshfuji@linux-ipv6.org x86@kernel.org
netdev/build_clang success Errors and warnings before: 168 this patch: 168
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1390 this patch: 1390
netdev/checkpatch warning CHECK: Alignment should match open parenthesis; CHECK: multiple assignments should be avoided; CHECK: spaces preferred around that '*' (ctx:WxV); WARNING: Do not crash the kernel unless it is absolutely unavoidable--use WARN_ON_ONCE() plus recovery code (if feasible) instead of BUG() or variants; WARNING: line length of 83 exceeds 80 columns; WARNING: line length of 84 exceeds 80 columns; WARNING: line length of 86 exceeds 80 columns; WARNING: line length of 88 exceeds 80 columns; WARNING: line length of 89 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-7 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-37 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-38 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 fail Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-36 success Logs for test_verifier on s390x with gcc

Commit Message

Hao Sun Nov. 23, 2022, 2:15 p.m. UTC
Make the verifier sanitize STX/ST insns in jited BPF programs
by dispatching the target address to kernel functions that are
instrumented by KASAN.

Only STX/ST insns that are not in patches added by other passes
using REG_AX, and whose dst_reg is not R10, are sanitized. The
former conflict with us; the latter are trivial for the verifier
to check. Skipping both reduces the footprint.

The instrumentation is conducted in two places: fixup and jit.
During fixup, R0 and R1 are backed up or exchanged with dst_reg,
the address to check is stored into R1, and a call to the
corresponding bpf_asan_storeN() is inserted. In jit, R1~R5 are
pushed on the stack before calling the sanitize function. The
sanitize functions are instrumented with KASAN and simply write
to the target address for the given width; KASAN conducts the
actual checking. An extra Kconfig option enables this.
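
For illustration (a sketch, not part of the patch): for a store
like *(u32 *)(r3 + 8) = r2, the fixup would emit roughly the
sequence below into the patch buffer; the register numbers are
only for this example.

	/* Hypothetical fixup output for "*(u32 *)(r3 + 8) = r2",
	 * where dst_reg is neither R0 nor R1.
	 */
	BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1),        /* back up R1 in REG_AX */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_3),         /* R1 = dst_reg (addr base) */
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),         /* park R0 in dst_reg */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),        /* apply insn->off */
	BPF_EMIT_CALL(bpf_asan_store32),             /* KASAN-checked write */
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),       /* undo the offset */
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_3),         /* restore R0 */
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_1),         /* restore dst_reg */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX),        /* restore R1 */
	BPF_STX_MEM(BPF_W, BPF_REG_3, BPF_REG_2, 8), /* the original store */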

Signed-off-by: Hao Sun <sunhao.th@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c |  32 +++++++++++
 include/linux/bpf.h         |   9 ++++
 kernel/bpf/Kconfig          |  14 +++++
 kernel/bpf/verifier.c       | 102 ++++++++++++++++++++++++++++++++++++
 4 files changed, 157 insertions(+)

Patch

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index cec5195602bc..ceaef69adc49 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -338,7 +338,39 @@  static int emit_patch(u8 **pprog, void *func, void *ip, u8 opcode)
 
 static int emit_call(u8 **pprog, void *func, void *ip)
 {
+#ifdef CONFIG_BPF_PROG_KASAN
+	s64 offset;
+	u8 *prog = *pprog;
+	bool is_sanitize =
+		func == bpf_asan_store8 || func == bpf_asan_store16 ||
+		func == bpf_asan_store32 || func == bpf_asan_store64;
+
+	if (!is_sanitize)
+		return emit_patch(pprog, func, ip, 0xE8);
+
+	/* Six extra bytes from push insns */
+	offset = func - (ip + X86_PATCH_SIZE + 6);
+	BUG_ON(!is_simm32(offset));
+
+	/* R1 holds the addr to check; back up R1~R5 here, since
+	 * we don't have free regs during the fixup.
+	 */
+	EMIT1(0x57); /* push rdi */
+	EMIT1(0x56); /* push rsi */
+	EMIT1(0x52); /* push rdx */
+	EMIT1(0x51); /* push rcx */
+	EMIT2(0x41, 0x50); /* push r8 */
+	EMIT1_off32(0xE8, offset);
+	EMIT2(0x41, 0x58); /* pop r8 */
+	EMIT1(0x59); /* pop rcx */
+	EMIT1(0x5a); /* pop rdx */
+	EMIT1(0x5e); /* pop rsi */
+	EMIT1(0x5f); /* pop rdi */
+	*pprog = prog;
+	return 0;
+#else
 	return emit_patch(pprog, func, ip, 0xE8);
+#endif
 }
 
 static int emit_jump(u8 **pprog, void *func, void *ip)
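
For reference, a sketch (an illustration, not part of the patch) of
why the displacement above subtracts six extra bytes:

	/*
	 * Emitted layout for a sanitize call; push r8 is two bytes,
	 * the other pushes one byte each:
	 *   ip:      push rdi; push rsi; push rdx; push rcx; push r8
	 *   ip + 6:  call rel32          (X86_PATCH_SIZE bytes)
	 *   ip + 11: pop r8; pop rcx; pop rdx; pop rsi; pop rdi
	 * rel32 is relative to the first byte after the call insn,
	 * i.e. ip + 6 + X86_PATCH_SIZE, hence:
	 *   offset = func - (ip + X86_PATCH_SIZE + 6);
	 */
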
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index c9eafa67f2a2..a7eb99928fee 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2835,4 +2835,13 @@  static inline bool type_is_alloc(u32 type)
 	return type & MEM_ALLOC;
 }
 
+#ifdef CONFIG_BPF_PROG_KASAN
+
+u64 bpf_asan_store8(u8 *addr);
+u64 bpf_asan_store16(u16 *addr);
+u64 bpf_asan_store32(u32 *addr);
+u64 bpf_asan_store64(u64 *addr);
+
+#endif /* CONFIG_BPF_PROG_KASAN */
+
 #endif /* _LINUX_BPF_H */
diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index 2dfe1079f772..aeba6059b9e2 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -99,4 +99,18 @@  config BPF_LSM
 
 	  If you are unsure how to answer this question, answer N.
 
+config BPF_PROG_KASAN
+	bool "Enable BPF Program Address Sanitize"
+	depends on BPF_JIT
+	depends on KASAN
+	depends on X86_64
+    help
+	  Enables instrumentation on LDX/STX/ST insn to capture memory
+	  access errors in BPF programs missed by the verifier.
+
+	  The actual check is conducted by KASAN, this feature presents
+	  certain overhead, and should be used mainly by testing purpose.
+
+	  If you are unsure how to answer this question, answer N.
+
 endmenu # "BPF subsystem"
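
As a usage sketch (assuming this series is applied), the checks are
enabled in a KASAN build via:

	CONFIG_KASAN=y
	CONFIG_BPF_JIT=y
	CONFIG_BPF_PROG_KASAN=y
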
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9528a066cfa5..af214f0191e0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15221,6 +15221,25 @@  static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
+#ifdef CONFIG_BPF_PROG_KASAN
+
+/* These functions are instrumented with KASAN for the actual sanitizing. */
+
+#define BPF_ASAN_STORE(n)                         \
+	notrace u64 bpf_asan_store##n(u##n *addr) \
+	{                                         \
+		u##n ret = *addr;                 \
+		*addr = ret;                      \
+		return ret;                       \
+	}
+
+BPF_ASAN_STORE(8);
+BPF_ASAN_STORE(16);
+BPF_ASAN_STORE(32);
+BPF_ASAN_STORE(64);
+
+#endif
+
 /* Do various post-verification rewrites in a single program pass.
  * These rewrites simplify JIT and interpreter implementations.
  */
@@ -15238,6 +15257,9 @@  static int do_misc_fixups(struct bpf_verifier_env *env)
 	struct bpf_prog *new_prog;
 	struct bpf_map *map_ptr;
 	int i, ret, cnt, delta = 0;
+#ifdef CONFIG_BPF_PROG_KASAN
+	bool in_patch_use_ax = false;
+#endif
 
 	for (i = 0; i < insn_cnt; i++, insn++) {
 		/* Make divide-by-zero exceptions impossible. */
@@ -15354,6 +15376,86 @@  static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
+#ifdef CONFIG_BPF_PROG_KASAN
+		/* Patches that use REG_AX conflict with us; skip them.
+		 * Skipping starts at the first use of REG_AX and stops
+		 * only at the next ldx/stx/st insn with valid aux info.
+		 */
+		aux = &env->insn_aux_data[i + delta];
+		if (in_patch_use_ax && (int)aux->ptr_type != 0)
+			in_patch_use_ax = false;
+		if (insn->dst_reg == BPF_REG_AX || insn->src_reg == BPF_REG_AX)
+			in_patch_use_ax = true;
+
+		/* Sanitize ST/STX operation. */
+		if (BPF_CLASS(insn->code) == BPF_ST ||
+		    BPF_CLASS(insn->code) == BPF_STX) {
+			struct bpf_insn sanitize_fn;
+			struct bpf_insn *patch = &insn_buf[0];
+
+			/* Skip st/stx to R10; they're trivial to check. */
+			if (in_patch_use_ax || insn->dst_reg == BPF_REG_10 ||
+			    BPF_MODE(insn->code) == BPF_NOSPEC)
+				continue;
+
+			switch (BPF_SIZE(insn->code)) {
+			case BPF_B:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store8);
+				break;
+			case BPF_H:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store16);
+				break;
+			case BPF_W:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store32);
+				break;
+			case BPF_DW:
+				sanitize_fn = BPF_EMIT_CALL(bpf_asan_store64);
+				break;
+			}
+
+			/* Back up R0 and R1, store `dst + off` into R1, invoke
+			 * the sanitize fn, and then restore each reg.
+			 */
+			if (insn->dst_reg == BPF_REG_1) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_0);
+			} else if (insn->dst_reg == BPF_REG_0) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_0);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, insn->dst_reg);
+				*patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_0);
+			}
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, insn->off);
+			/* Call sanitize fn, R1~R5 are saved to stack during jit. */
+			*patch++ = sanitize_fn;
+			if (insn->off != 0)
+				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -insn->off);
+			if (insn->dst_reg == BPF_REG_1) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_AX);
+			} else if (insn->dst_reg == BPF_REG_0) {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			} else {
+				*patch++ = BPF_MOV64_REG(BPF_REG_0, insn->dst_reg);
+				*patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_1);
+				*patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+			}
+			*patch++ = *insn;
+			cnt = patch - insn_buf;
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			continue;
+		}
+#endif
+
 		if (insn->code != (BPF_JMP | BPF_CALL))
 			continue;
 		if (insn->src_reg == BPF_PSEUDO_CALL)