Message ID | 20240209040608.98927-9-alexei.starovoitov@gmail.com (mailing list archive) |
---|---|
State | New |
Series | bpf: Introduce BPF arena. |
On Thu, 2024-02-08 at 20:05 -0800, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> LLVM generates bpf_cast_kern and bpf_cast_user instructions while translating
> pointers with __attribute__((address_space(1))).
>
> rX = cast_kern(rY) is processed by the verifier and converted to
> normal 32-bit move: wX = wY
>
> bpf_cast_user has to be converted by JIT.
>
> rX = cast_user(rY) is
>
>   aux_reg = upper_32_bits of arena->user_vm_start
>   aux_reg <<= 32
>   wX = wY                   // clear upper 32 bits of dst register
>   if (wX)                   // if not zero add upper bits of user_vm_start
>     wX |= aux_reg
>
> JIT can do it more efficiently:
>
>   mov dst_reg32, src_reg32  // 32-bit move
>   shl dst_reg, 32
>   or dst_reg, user_vm_start
>   rol dst_reg, 32
>   xor r11, r11
>   test dst_reg32, dst_reg32 // check if lower 32-bit are zero
>   cmove r11, dst_reg        // if so, set dst_reg to zero
>                             // Intel swapped src/dst register encoding in CMOVcc
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Checked generated x86 code for all reg combinations, works as expected.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
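For readers skimming the thread, the rX = cast_user(rY) conversion quoted above can be modeled in plain C roughly as below. This is only an illustrative sketch, not code from the patch; cast_user_model is a made-up name, and user_vm_start stands for the arena->user_vm_start value the JIT embeds as an immediate.

/* Illustrative model of rX = cast_user(rY); not code from the patch. */
static inline unsigned long long cast_user_model(unsigned long long src,
                                                 unsigned long long user_vm_start)
{
        unsigned long long lo32 = src & 0xffffffffULL;  /* wX = wY */

        if (!lo32)              /* NULL stays NULL */
                return 0;
        /* splice the upper 32 bits of user_vm_start onto the low 32 bits */
        return (user_vm_start & 0xffffffff00000000ULL) | lo32;
}

The NULL check is also where the 4GB-boundary corner case discussed below comes from: a non-NULL arena pointer whose low 32 bits happen to be zero collapses to NULL.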
On Sat, Feb 10, 2024 at 12:40 AM Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
>
> On Fri, 9 Feb 2024 at 05:06, Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > From: Alexei Starovoitov <ast@kernel.org>
> >
> > LLVM generates bpf_cast_kern and bpf_cast_user instructions while translating
> > pointers with __attribute__((address_space(1))).
> >
> > rX = cast_kern(rY) is processed by the verifier and converted to
> > normal 32-bit move: wX = wY
> >
> > bpf_cast_user has to be converted by JIT.
> >
> > rX = cast_user(rY) is
> >
> >   aux_reg = upper_32_bits of arena->user_vm_start
> >   aux_reg <<= 32
> >   wX = wY                   // clear upper 32 bits of dst register
> >   if (wX)                   // if not zero add upper bits of user_vm_start
> >     wX |= aux_reg
> >
>
> Would this be ok if rY is somehow aligned at the 4GB boundary and the
> lower 32 bits end up being zero? Then this transformation would confuse
> it with the NULL case, right?

Yes, it will. I tried to fix it by reserving a zero page, but the end result
was bad. See the discussion with Barret. So we decided to drop this idea;
we might come back to it eventually.

I was also thinking of doing if (rX) instead of if (wX) to mitigate it a bit,
but that is probably wrong too.

The best option is to mitigate it inside the bpf program by never returning
a pointer whose lower 32 bits are zero from the bpf_alloc() function.

In general, with the latest llvm we see close to zero cast_user instructions
when a bpf prog is not mixing (void *) with (void __arena *) casts, so it
shouldn't be an issue in practice with the patches as-is.
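As a purely hypothetical illustration of the bpf-program-side mitigation mentioned above (never handing out a pointer whose low 32 bits are zero), a wrapper along these lines could sit in front of the program's arena allocator. arena_raw_alloc()/arena_raw_free() and the retry policy are placeholders invented for this sketch, not an existing API; __arena is assumed to be the usual address_space(1) attribute for arena pointers.

#define __arena __attribute__((address_space(1)))       /* assumption for this sketch */

void __arena *arena_raw_alloc(unsigned long size);      /* placeholder allocator */
void arena_raw_free(void __arena *p);                   /* placeholder */

static void __arena *arena_alloc_nonzero_lo32(unsigned long size)
{
        void __arena *p = arena_raw_alloc(size);

        if (p && !((unsigned long)p & 0xffffffffUL)) {
                /* low 32 bits are zero: park this chunk and try once more */
                void __arena *pad = p;

                p = arena_raw_alloc(size);
                arena_raw_free(pad);
        }
        return p;
}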
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 883b7f604b9a..a042ed57af7b 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1272,13 +1272,14 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
         bool tail_call_seen = false;
         bool seen_exit = false;
         u8 temp[BPF_MAX_INSN_SIZE + BPF_INSN_SAFETY];
-        u64 arena_vm_start;
+        u64 arena_vm_start, user_vm_start;
         int i, excnt = 0;
         int ilen, proglen = 0;
         u8 *prog = temp;
         int err;
 
         arena_vm_start = bpf_arena_get_kern_vm_start(bpf_prog->aux->arena);
+        user_vm_start = bpf_arena_get_user_vm_start(bpf_prog->aux->arena);
 
         detect_reg_usage(insn, insn_cnt, callee_regs_used, &tail_call_seen);
 
@@ -1346,6 +1347,39 @@
                         break;
 
                 case BPF_ALU64 | BPF_MOV | BPF_X:
+                        if (insn->off == BPF_ARENA_CAST_USER) {
+                                if (dst_reg != src_reg)
+                                        /* 32-bit mov */
+                                        emit_mov_reg(&prog, false, dst_reg, src_reg);
+                                /* shl dst_reg, 32 */
+                                maybe_emit_1mod(&prog, dst_reg, true);
+                                EMIT3(0xC1, add_1reg(0xE0, dst_reg), 32);
+
+                                /* or dst_reg, user_vm_start */
+                                maybe_emit_1mod(&prog, dst_reg, true);
+                                if (is_axreg(dst_reg))
+                                        EMIT1_off32(0x0D, user_vm_start >> 32);
+                                else
+                                        EMIT2_off32(0x81, add_1reg(0xC8, dst_reg), user_vm_start >> 32);
+
+                                /* rol dst_reg, 32 */
+                                maybe_emit_1mod(&prog, dst_reg, true);
+                                EMIT3(0xC1, add_1reg(0xC0, dst_reg), 32);
+
+                                /* xor r11, r11 */
+                                EMIT3(0x4D, 0x31, 0xDB);
+
+                                /* test dst_reg32, dst_reg32; check if lower 32-bit are zero */
+                                maybe_emit_mod(&prog, dst_reg, dst_reg, false);
+                                EMIT2(0x85, add_2reg(0xC0, dst_reg, dst_reg));
+
+                                /* cmove r11, dst_reg; if so, set dst_reg to zero */
+                                /* WARNING: Intel swapped src/dst register encoding in CMOVcc !!! */
+                                maybe_emit_mod(&prog, AUX_REG, dst_reg, true);
+                                EMIT3(0x0F, 0x44, add_2reg(0xC0, AUX_REG, dst_reg));
+                                break;
+                        }
+                        fallthrough;
                 case BPF_ALU | BPF_MOV | BPF_X:
                         if (insn->off == 0)
                                 emit_mov_reg(&prog,
@@ -3424,6 +3458,11 @@ void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
         }
 }
 
+bool bpf_jit_supports_arena(void)
+{
+        return true;
+}
+
 bool bpf_jit_supports_ptr_xchg(void)
 {
         return true;
diff --git a/include/linux/filter.h b/include/linux/filter.h
index cd76d43412d0..78ea63002531 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -959,6 +959,7 @@ bool bpf_jit_supports_kfunc_call(void);
 bool bpf_jit_supports_far_kfunc_call(void);
 bool bpf_jit_supports_exceptions(void);
 bool bpf_jit_supports_ptr_xchg(void);
+bool bpf_jit_supports_arena(void);
 void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp),
                          void *cookie);
 bool bpf_helper_changes_pkt_data(void *func);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 2539d9bfe369..2829077f0461 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2926,6 +2926,11 @@ bool __weak bpf_jit_supports_far_kfunc_call(void)
         return false;
 }
 
+bool __weak bpf_jit_supports_arena(void)
+{
+        return false;
+}
+
 /* Return TRUE if the JIT backend satisfies the following two conditions:
  * 1) JIT backend supports atomic_xchg() on pointer-sized words.
  * 2) Under the specific arch, the implementation of xchg() is the same