Message ID | 20231005145814.83122-2-hffilwlqm@gmail.com (mailing list archive)
---|---
State | Superseded
Delegated to: | BPF
Series | bpf, x64: Fix tailcall hierarchy
On 10/05, Leon Hwang wrote:

> From commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and tailcall
> handling in JIT"), the tailcall on x64 works better than before.

[... full patch description and most of the diff snipped; see the patch below ...]

> +static void emit_nops(u8 **pprog, int len)
> +{
> +	u8 *prog = *pprog;
> +	int i, noplen;
> +
> +	while (len > 0) {
> +		noplen = len;
> +
> +		if (noplen > ASM_NOP_MAX)
> +			noplen = ASM_NOP_MAX;
> +
> +		for (i = 0; i < noplen; i++)
> +			EMIT1(x86_nops[noplen][i]);
> +		len -= noplen;
> +	}
> +
> +	*pprog = prog;
> +}

From high level - makes sense to me. I'll leave a thorough review to the
people who understand more :-) I see Maciej commenting on your original
"Fix tailcall infinite loop" series.
One suggestion I have is: the changes to 'memcpy(prog, x86_nops[5], X86_PATCH_SIZE);' and this emit_nops move here don't seem like they actually belong to this patch. Maybe do them separately?
On 6/10/23 02:05, Stanislav Fomichev wrote:
> On 10/05, Leon Hwang wrote:

[... quoted patch description and diff snipped ...]

> From high level - makes sense to me.
> I'll leave a thorough review to the people who understand more :-)
> I see Maciej commenting on your original "Fix tailcall infinite loop"
> series.

Welcome for your review.

> One suggestion I have is: the changes to 'memcpy(prog, x86_nops[5],
> X86_PATCH_SIZE);' and this emit_nops move here don't seem like
> they actually belong to this patch. Maybe do them separately?

Moving emit_nops here is for these:

+	/* Keep the same instruction layout. */
+	emit_nops(&prog, 3);
+	emit_nops(&prog, 6);
+	emit_nops(&prog, 6);

and the change to 'memcpy(prog, x86_nops[5], X86_PATCH_SIZE);' happens
along the way.

Thanks,
Leon
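For context, the three nop groups mirror the byte lengths of the three new
entry-prologue instructions, so subprog prologues keep the same layout as the
entry prologue (byte counts below are inferred from the opcodes in the diff,
not stated in the thread):

	/* entry prog (3 + 6 + 6 = 15 bytes):                   subprog emits instead: */
	EMIT3(0x48, 0x89, 0xE8);          /* mov rax, rbp        -> emit_nops(&prog, 3) */
	EMIT2_off32(0x48, 0x2D, tcc_off); /* sub rax, tcc_off    -> emit_nops(&prog, 6) */
	EMIT2_off32(0xC7, 0x00, 0);       /* mov dword [rax], 0  -> emit_nops(&prog, 6) */

Together with the 5 patch-target nops, push rbp (1 byte) and mov rbp, rsp
(3 bytes), this gives 5 + 1 + 3 + 15 = 24 bytes, matching the
X86_TAIL_CALL_OFFSET change from (11 + ENDBR_INSN_SIZE) to
(24 + ENDBR_INSN_SIZE).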
On Thu, Oct 5, 2023 at 6:43 PM Leon Hwang <hffilwlqm@gmail.com> wrote:
>
> On 6/10/23 02:05, Stanislav Fomichev wrote:

[... quoted discussion snipped ...]

> > One suggestion I have is: the changes to 'memcpy(prog, x86_nops[5],
> > X86_PATCH_SIZE);' and this emit_nops move here don't seem like
> > they actually belong to this patch. Maybe do them separately?
>
> Moving emit_nops here is for these:
>
> +	/* Keep the same instruction layout. */
> +	emit_nops(&prog, 3);
> +	emit_nops(&prog, 6);
> +	emit_nops(&prog, 6);

Right, I'm saying that you can do the move + replace memcpy in a
separate (first) patch to make the patch with the actual changes a bit
smaller. But that's not strictly required, up to you.
On 2023/10/7 00:44, Stanislav Fomichev wrote:
> On Thu, Oct 5, 2023 at 6:43 PM Leon Hwang <hffilwlqm@gmail.com> wrote:

[... quoted discussion snipped ...]

> Right, I'm saying that you can do the move + replace memcpy in a
> separate (first) patch to make the patch with the actual changes a bit
> smaller. But that's not strictly required, up to you.

LGTM

Thanks,
Leon
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 8c10d9abc2394..8ad6368353c2b 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -256,7 +256,7 @@ struct jit_context {
 /* Number of bytes emit_patch() needs to generate instructions */
 #define X86_PATCH_SIZE		5
 /* Number of bytes that will be skipped on tailcall */
-#define X86_TAIL_CALL_OFFSET	(11 + ENDBR_INSN_SIZE)
+#define X86_TAIL_CALL_OFFSET	(24 + ENDBR_INSN_SIZE)
 
 static void push_r12(u8 **pprog)
 {
@@ -304,6 +304,25 @@ static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
 	*pprog = prog;
 }
 
+static void emit_nops(u8 **pprog, int len)
+{
+	u8 *prog = *pprog;
+	int i, noplen;
+
+	while (len > 0) {
+		noplen = len;
+
+		if (noplen > ASM_NOP_MAX)
+			noplen = ASM_NOP_MAX;
+
+		for (i = 0; i < noplen; i++)
+			EMIT1(x86_nops[noplen][i]);
+		len -= noplen;
+	}
+
+	*pprog = prog;
+}
+
 /*
  * Emit x86-64 prologue code for BPF program.
  * bpf_tail_call helper will skip the first X86_TAIL_CALL_OFFSET bytes
@@ -313,24 +332,15 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
 			  bool tail_call_reachable, bool is_subprog,
 			  bool is_exception_cb)
 {
+	int tcc_ptr_off = round_up(stack_depth, 8) + 8;
+	int tcc_off = tcc_ptr_off + 8;
 	u8 *prog = *pprog;
 
 	/* BPF trampoline can be made to work without these nops,
 	 * but let's waste 5 bytes for now and optimize later
 	 */
 	EMIT_ENDBR();
-	memcpy(prog, x86_nops[5], X86_PATCH_SIZE);
-	prog += X86_PATCH_SIZE;
-	if (!ebpf_from_cbpf) {
-		if (tail_call_reachable && !is_subprog)
-			/* When it's the entry of the whole tailcall context,
-			 * zeroing rax means initialising tail_call_cnt.
-			 */
-			EMIT2(0x31, 0xC0); /* xor eax, eax */
-		else
-			/* Keep the same instruction layout. */
-			EMIT2(0x66, 0x90); /* nop2 */
-	}
+	emit_nops(&prog, X86_PATCH_SIZE);
 	/* Exception callback receives FP as third parameter */
 	if (is_exception_cb) {
 		EMIT3(0x48, 0x89, 0xF4); /* mov rsp, rsi */
@@ -347,15 +357,52 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
 		EMIT1(0x55);             /* push rbp */
 		EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
 	}
 
+	if (!ebpf_from_cbpf) {
+		if (tail_call_reachable && !is_subprog) {
+			/* Make rax as ptr that points to tail_call_cnt. */
+			EMIT3(0x48, 0x89, 0xE8);          /* mov rax, rbp */
+			EMIT2_off32(0x48, 0x2D, tcc_off); /* sub rax, tcc_off */
+			/* When it's the entry of the whole tail call context,
+			 * storing 0 means initialising tail_call_cnt.
+			 */
+			EMIT2_off32(0xC7, 0x00, 0);       /* mov dword ptr [rax], 0 */
+		} else {
+			/* Keep the same instruction layout. */
+			emit_nops(&prog, 3);
+			emit_nops(&prog, 6);
+			emit_nops(&prog, 6);
+		}
+	}
 	/* X86_TAIL_CALL_OFFSET is here */
 	EMIT_ENDBR();
 
+	if (tail_call_reachable) {
+		/* Here, rax is tail_call_cnt_ptr. */
+		if (!is_subprog) {
+			/* Because pushing tail_call_cnt_ptr may cover tail_call_cnt,
+			 * it's required to store tail_call_cnt before storing
+			 * tail_call_cnt_ptr.
+			 */
+			EMIT1(0x50);                       /* push rax */
+			EMIT2(0x8B, 0x00);                 /* mov eax, dword ptr [rax] */
+			EMIT2_off32(0x89, 0x85, -tcc_off); /* mov dword ptr [rbp - tcc_off], eax */
+			EMIT1(0x58);                       /* pop rax */
+			/* mov qword ptr [rbp - tcc_ptr_off], rax */
+			EMIT3_off32(0x48, 0x89, 0x85, -tcc_ptr_off);
+		} else {
+			/* As for subprog, tail_call_cnt is meaningless. Storing
+			 * tail_call_cnt_ptr is enough.
+			 */
+			/* mov qword ptr [rbp - tcc_ptr_off], rax */
+			EMIT3_off32(0x48, 0x89, 0x85, -tcc_ptr_off);
+		}
+		/* Reserve 16 bytes for tail_call_cnt_ptr and tail_call_cnt. */
+		stack_depth += 16;
+	}
 	/* sub rsp, rounded_stack_depth */
 	if (stack_depth)
 		EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8));
-	if (tail_call_reachable)
-		EMIT1(0x50); /* push rax */
 	*pprog = prog;
 }
 
@@ -510,7 +557,7 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
 					u32 stack_depth, u8 *ip,
 					struct jit_context *ctx)
 {
-	int tcc_off = -4 - round_up(stack_depth, 8);
+	int tcc_ptr_off = -8 - round_up(stack_depth, 8);
 	u8 *prog = *pprog, *start = *pprog;
 	int offset;
 
@@ -535,13 +582,12 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
 	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
 	 *	goto out;
 	 */
-	EMIT2_off32(0x8B, 0x85, tcc_off);           /* mov eax, dword ptr [rbp - tcc_off] */
-	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);       /* cmp eax, MAX_TAIL_CALL_CNT */
+	EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off); /* mov rax, qword ptr [rbp - tcc_ptr_off] */
+	EMIT3(0x83, 0x38, MAX_TAIL_CALL_CNT);       /* cmp dword ptr [rax], MAX_TAIL_CALL_CNT */
 	offset = ctx->tail_call_indirect_label - (prog + 2 - start);
 	EMIT2(X86_JAE, offset);                     /* jae out */
-	EMIT3(0x83, 0xC0, 0x01);                    /* add eax, 1 */
-	EMIT2_off32(0x89, 0x85, tcc_off);           /* mov dword ptr [rbp - tcc_off], eax */
+	EMIT3(0x83, 0x00, 0x01);                    /* add dword ptr [rax], 1 */
 
 	/* prog = array->ptrs[index]; */
 	EMIT4_off32(0x48, 0x8B, 0x8C, 0xD6,         /* mov rcx, [rsi + rdx * 8 + offsetof(...)] */
@@ -563,6 +609,9 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
 		pop_callee_regs(&prog, callee_regs_used);
 	}
 
+	/* pop tail_call_cnt */
+	EMIT1(0x58);                                /* pop rax */
+	/* pop tail_call_cnt_ptr */
 	EMIT1(0x58);                                /* pop rax */
 	if (stack_depth)
 		EMIT3_off32(0x48, 0x81, 0xC4,       /* add rsp, sd */
@@ -591,7 +640,7 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
 				      bool *callee_regs_used, u32 stack_depth,
 				      struct jit_context *ctx)
 {
-	int tcc_off = -4 - round_up(stack_depth, 8);
+	int tcc_ptr_off = -8 - round_up(stack_depth, 8);
 	u8 *prog = *pprog, *start = *pprog;
 	int offset;
 
@@ -599,13 +648,12 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
 	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
 	 *	goto out;
 	 */
-	EMIT2_off32(0x8B, 0x85, tcc_off);           /* mov eax, dword ptr [rbp - tcc_off] */
-	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);       /* cmp eax, MAX_TAIL_CALL_CNT */
+	EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off); /* mov rax, qword ptr [rbp - tcc_ptr_off] */
+	EMIT3(0x83, 0x38, MAX_TAIL_CALL_CNT);       /* cmp dword ptr [rax], MAX_TAIL_CALL_CNT */
 	offset = ctx->tail_call_direct_label - (prog + 2 - start);
 	EMIT2(X86_JAE, offset);                     /* jae out */
-	EMIT3(0x83, 0xC0, 0x01);                    /* add eax, 1 */
-	EMIT2_off32(0x89, 0x85, tcc_off);           /* mov dword ptr [rbp - tcc_off], eax */
+	EMIT3(0x83, 0x00, 0x01);                    /* add dword ptr [rax], 1 */
 
 	poke->tailcall_bypass = ip + (prog - start);
 	poke->adj_off = X86_TAIL_CALL_OFFSET;
@@ -622,6 +670,9 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
 		pop_callee_regs(&prog, callee_regs_used);
 	}
 
+	/* pop tail_call_cnt */
+	EMIT1(0x58);                                /* pop rax */
+	/* pop tail_call_cnt_ptr */
 	EMIT1(0x58);                                /* pop rax */
 	if (stack_depth)
 		EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));
@@ -989,25 +1040,6 @@ static void detect_reg_usage(struct bpf_insn *insn, int insn_cnt,
 	}
 }
 
-static void emit_nops(u8 **pprog, int len)
-{
-	u8 *prog = *pprog;
-	int i, noplen;
-
-	while (len > 0) {
-		noplen = len;
-
-		if (noplen > ASM_NOP_MAX)
-			noplen = ASM_NOP_MAX;
-
-		for (i = 0; i < noplen; i++)
-			EMIT1(x86_nops[noplen][i]);
-		len -= noplen;
-	}
-
-	*pprog = prog;
-}
-
 /* emit the 3-byte VEX prefix
  *
  * r: same as rex.r, extra bit for ModRM reg field
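In C terms, the bound check at every tailcall site behaves roughly like the
sketch below after this patch (a readability sketch only, not the emitted
machine code; tcc/tcc_ptr follow the names used in the diff):

	/* Prologue of the entry prog: the counter lives on its stack. */
	u32 tcc = 0;		/* tail_call_cnt, at rbp - tcc_off */
	u32 *tcc_ptr = &tcc;	/* saved at rbp - tcc_ptr_off, passed via rax */

	/* At every tailcall site, in the entry prog and subprogs alike: */
	if (*tcc_ptr >= MAX_TAIL_CALL_CNT)	/* cmp dword ptr [rax], ...; jae out */
		goto out;
	(*tcc_ptr)++;				/* add dword ptr [rax], 1 */

Because tcc_ptr always points at the single counter on the entry prog's
stack, every tailcall in the hierarchy increments the same counter, which is
what restores the MAX_TAIL_CALL_CNT bound.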
Since commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and tailcall
handling in JIT"), tailcalls on x64 work better than before.

Since commit e411901c0b775a3a ("bpf: allow for tailcalls in BPF subprograms
for x64 JIT"), tailcalls are able to run in BPF subprograms on x64.

But what about the following case?

1. More than one subprogram is called in a bpf program.
2. The tailcalls in those subprograms call the bpf program itself.

Because tail_call_cnt is not propagated back from callee to caller, a
tailcall hierarchy comes up, and the MAX_TAIL_CALL_CNT limit does not work
for this case.

In tail call context, tail_call_cnt is propagated between BPF subprograms
via the stack and the rax register. So, propagating a tail_call_cnt
pointer via the stack and rax instead makes tail_call_cnt behave like a
global variable, which makes the MAX_TAIL_CALL_CNT limit work for tailcall
hierarchy cases.

Before jumping to another bpf prog, load tail_call_cnt through the pointer
and compare it with MAX_TAIL_CALL_CNT; finally, increment tail_call_cnt
through the pointer.

But where is tail_call_cnt stored?

It is stored on the stack of the topmost bpf prog in the hierarchy, like:

|  STACK  |
+---------+ RBP
|         |
|         |
|         |
| tcc_ptr |
|   tcc   |
|   rbx   |
+---------+ RSP

Why not back-propagate tail_call_cnt?

Because back-propagating it is fragile: it cannot work with the following
case.

int prog1();
int prog2();

prog1 is a tail caller, and prog2 is a tail callee. If we back-propagate
tail_call_cnt in the epilogue of prog2, can prog2 also run standalone?
The answer is NO: a register would be clobbered, which would crash the
kernel.

Can tail_call_cnt be stored somewhere other than the stack of the bpf prog?

I am not able to find a better place to store tail_call_cnt. Storing it in
ctx or on the stack of the bpf prog's caller does not work either.

Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Fixes: e411901c0b77 ("bpf: allow for tailcalls in BPF subprograms for x64 JIT")
Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c | 120 +++++++++++++++++++++++-------------
 1 file changed, 76 insertions(+), 44 deletions(-)
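To make the hierarchy case concrete, here is a minimal sketch of the
problematic program shape (illustrative only: the map, prog and variable
names are hypothetical, and the userspace loader is assumed to install the
entry prog into slot 0 of jmp_table; the actual selftest may differ):

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

int count = 0;

static __noinline int subprog_tail(struct __sk_buff *skb)
{
	/* Tail-call back into the entry prog (slot 0 is assumed to hold
	 * "entry"). Before this fix, the entry prologue re-zeroed
	 * tail_call_cnt on every re-entry, so the MAX_TAIL_CALL_CNT limit
	 * never triggered for this shape.
	 */
	bpf_tail_call(skb, &jmp_table, 0);
	return 0;
}

SEC("tc")
int entry(struct __sk_buff *skb)
{
	count++;
	subprog_tail(skb);	/* first tailcall site */
	subprog_tail(skb);	/* second tailcall site: the hierarchy fans out */
	return 0;
}

char __license[] SEC("license") = "GPL";

Without the fix, each re-entry of "entry" resets the counter and runs two
more tailcall sites, so the number of runs explodes (roughly doubling per
level). With the shared counter reached through tcc_ptr, the whole hierarchy
is bounded: "entry" runs at most MAX_TAIL_CALL_CNT + 1 times.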