| Message ID | 20210728170502.351010-9-johan.almbladh@anyfinetworks.com (mailing list archive) |
|---|---|
| State | Superseded |
| Delegated to: | BPF |
| Series | bpf/tests: Extend the eBPF test suite |
| Context | Check | Description |
|---|---|---|
| netdev/tree_selection | success | Not a local patch |
On 7/28/21 10:04 AM, Johan Almbladh wrote:
> 32-bit JITs may implement complex ALU64 instructions using function calls.
> The new tests check aspects related to this, such as register clobbering
> and register argument re-ordering.
>
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> ---
>  lib/test_bpf.c | 138 +++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 138 insertions(+)
>
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index eb61088a674f..1115e39630ce 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -1916,6 +1916,144 @@ static struct bpf_test tests[] = {
> 		{ },
> 		{ { 0, -1 } }
> 	},
> +	{
> +		/*
> +		 * Register (non-)clobbering test, in the case where a 32-bit
> +		 * JIT implements complex ALU64 operations via function calls.
> +		 */
> +		"INT: Register clobbering, R1 updated",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 123456789),
[...]
> +			BPF_ALU64_IMM(BPF_DIV, R1, 123456789),
[...]
> +		INTERNAL,
> +		{ },
> +		{ { 0, 1 } }
> +	},
> +	{
> +		"INT: Register clobbering, R2 updated",
> +		.u.insns_int = {
> +			BPF_ALU32_IMM(BPF_MOV, R0, 0),
> +			BPF_ALU32_IMM(BPF_MOV, R1, 1),
> +			BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789),
[...]
> +			BPF_ALU64_IMM(BPF_DIV, R2, 123456789),
[...]
> +		INTERNAL,
> +		{ },
> +		{ { 0, 1 } }
> +	},

It looks like the above two tests, "R1 updated" and "R2 updated", should
be very similar, and the only difference is that one immediate is 123456789
and the other is 2 * 123456789. But the generated code just has the final
immediate in both cases. Could you explain what the difference is, in terms
of the JIT, for the above two tests?

> +	{
> +		/*
> +		 * Test 32-bit JITs that implement complex ALU64 operations as
> +		 * function calls R0 = f(R1, R2), and must re-arrange operands.
> +		 */
> +#define NUMER 0xfedcba9876543210ULL
> +#define DENOM 0x0123456789abcdefULL
> +		"ALU64_DIV X: Operand register permutations",
> +		.u.insns_int = {
> +			/* R0 / R2 */
> +			BPF_LD_IMM64(R0, NUMER),
> +			BPF_LD_IMM64(R2, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R0, R2),
> +			BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
> +			BPF_EXIT_INSN(),
[...]
> +			/* R1 / R2 */
> +			BPF_LD_IMM64(R1, NUMER),
> +			BPF_LD_IMM64(R2, DENOM),
> +			BPF_ALU64_REG(BPF_DIV, R1, R2),
> +			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
> +			BPF_EXIT_INSN(),
> +			BPF_LD_IMM64(R0, 1),

Do we need this BPF_LD_IMM64(R0, 1)?
First, if we have it and the next "BPF_ALU64_REG(BPF_DIV, R1, R1)"
generates an incorrect value and exits, then you will get
exit value 1, which will signal test success.

Second, if you don't have this R0 = 1, R0 will be DENOM
and you will be fine.

> +			/* R1 / R1 */
> +			BPF_LD_IMM64(R1, NUMER),
> +			BPF_ALU64_REG(BPF_DIV, R1, R1),
> +			BPF_JMP_IMM(BPF_JEQ, R1, 1, 1),
> +			BPF_EXIT_INSN(),
[...]
> +			/* Successful return */
> +			BPF_LD_IMM64(R0, 1),
> +			BPF_EXIT_INSN(),
> +		},
> +		INTERNAL,
> +		{ },
> +		{ { 0, 1 } },
> +#undef NUMER
> +#undef DENOM
> +	},
> 	{
> 		"check: missing ret",
> 		.u.insns = {
>
On Thu, Jul 29, 2021 at 1:52 AM Yonghong Song <yhs@fb.com> wrote:
> > +	{
> > +		/*
> > +		 * Register (non-)clobbering test, in the case where a 32-bit
> > +		 * JIT implements complex ALU64 operations via function calls.
> > +		 */
> > +		"INT: Register clobbering, R1 updated",
[...]
> > +	{
> > +		"INT: Register clobbering, R2 updated",
[...]
>
> It looks like the above two tests, "R1 updated" and "R2 updated", should
> be very similar, and the only difference is that one immediate is 123456789
> and the other is 2 * 123456789. But the generated code just has the final
> immediate in both cases. Could you explain what the difference is, in terms
> of the JIT, for the above two tests?

When a BPF_CALL instruction is executed, the eBPF assembler has
already saved any caller-saved registers that must be preserved, put
the arguments in R1-R5, and expects a return value in R0. It is just
up to the JIT to emit the call.

Not so when an eBPF instruction is implemented by a function call,
like ALU64 DIV in a 32-bit JIT. In this case, the function call is
unexpected by the eBPF assembler, and must be invisible to it. Now the
JIT must take care of saving all caller-saved registers on the stack,
put the operands in the right argument registers, put the return value
in the destination register, and finally restore all caller-saved
registers without overwriting the computed result.

The test checks that all other registers retain their value after such
a hidden function call. However, one register will contain the result.
In order to verify that all registers are saved and restored properly,
we must vary the destination register and run the test twice. It is not
the result of the operation that is tested, it is the absence of
possible side effects.

I can put a more elaborate description in the comment to explain this.

> > +	{
> > +		/*
> > +		 * Test 32-bit JITs that implement complex ALU64 operations as
> > +		 * function calls R0 = f(R1, R2), and must re-arrange operands.
> > +		 */
[...]
> > +			/* R1 / R2 */
> > +			BPF_LD_IMM64(R1, NUMER),
> > +			BPF_LD_IMM64(R2, DENOM),
> > +			BPF_ALU64_REG(BPF_DIV, R1, R2),
> > +			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
> > +			BPF_EXIT_INSN(),
> > +			BPF_LD_IMM64(R0, 1),
>
> Do we need this BPF_LD_IMM64(R0, 1)?
> First, if we have it and the next "BPF_ALU64_REG(BPF_DIV, R1, R1)"
> generates an incorrect value and exits, then you will get
> exit value 1, which will signal test success.
>
> Second, if you don't have this R0 = 1, R0 will be DENOM
> and you will be fine.

Good catch! No, it should not be there. Maybe left over from previous
debugging, or a copy-and-paste error. I'll remove it.
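To make the hidden-call bookkeeping described above concrete, here is a minimal userspace sketch. It is purely illustrative and not taken from the patch or from any real JIT: the `regs[]` array, `div64_helper()` and `emit_div64()` are hypothetical names, and a real JIT emits machine code rather than calling C functions; only the save, argument-shuffle, call, write-back and restore ordering is the point.

```c
/*
 * Illustrative sketch only: models the bookkeeping a 32-bit JIT must do when
 * it lowers BPF_ALU64 | BPF_DIV to a hidden helper call R0 = f(R1, R2).
 * The register file, helper and "emit" function are hypothetical, not kernel APIs.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t regs[10];	/* emulated eBPF registers R0..R9 */

/* The helper the JIT would call instead of emitting a native 64-bit divide. */
static uint64_t div64_helper(uint64_t dividend, uint64_t divisor)
{
	return divisor ? dividend / divisor : 0;
}

/* Emulates "BPF_ALU64_REG(BPF_DIV, dst, src)" via the helper call. */
static void emit_div64(int dst, int src)
{
	/* Save the caller-saved registers the call would clobber. */
	uint64_t saved_r0 = regs[0], saved_r1 = regs[1], saved_r2 = regs[2];

	/* Read both operands before touching the argument registers, so an
	 * overlap such as dst == R2 or src == R1 cannot corrupt an operand. */
	uint64_t dividend = regs[dst], divisor = regs[src];

	regs[1] = dividend;			/* first argument  */
	regs[2] = divisor;			/* second argument */
	regs[0] = div64_helper(regs[1], regs[2]);
	uint64_t result = regs[0];

	/* Restore everything, then write only the destination register. */
	regs[0] = saved_r0;
	regs[1] = saved_r1;
	regs[2] = saved_r2;
	regs[dst] = result;
}

int main(void)
{
	regs[2] = 0xfedcba9876543210ULL;	/* NUMER */
	regs[1] = 0x0123456789abcdefULL;	/* DENOM */
	emit_div64(2, 1);			/* R2 = R2 / R1 */
	printf("R2 = %llu, R1 = %#llx\n",
	       (unsigned long long)regs[2], (unsigned long long)regs[1]);
	return 0;
}
```

The clobbering tests exercise exactly this invariant: after the hidden call, every register except the destination must be unchanged, which is why the same program is run once with R1 and once with R2 as the destination, and the permutation test checks that operand ordering survives whatever register shuffling the JIT performs.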
On 7/29/21 2:17 PM, Johan Almbladh wrote:
> On Thu, Jul 29, 2021 at 1:52 AM Yonghong Song <yhs@fb.com> wrote:
[...]
>> It looks like the above two tests, "R1 updated" and "R2 updated", should
>> be very similar, and the only difference is that one immediate is 123456789
>> and the other is 2 * 123456789. But the generated code just has the final
>> immediate in both cases. Could you explain what the difference is, in terms
>> of the JIT, for the above two tests?
>
> When a BPF_CALL instruction is executed, the eBPF assembler has
> already saved any caller-saved registers that must be preserved, put
> the arguments in R1-R5, and expects a return value in R0. It is just
> up to the JIT to emit the call.
>
> Not so when an eBPF instruction is implemented by a function call,
> like ALU64 DIV in a 32-bit JIT. In this case, the function call is
> unexpected by the eBPF assembler, and must be invisible to it. Now the
> JIT must take care of saving all caller-saved registers on the stack,
> put the operands in the right argument registers, put the return value
> in the destination register, and finally restore all caller-saved
> registers without overwriting the computed result.
>
> The test checks that all other registers retain their value after such
> a hidden function call. However, one register will contain the result.
> In order to verify that all registers are saved and restored properly,
> we must vary the destination register and run the test twice. It is not
> the result of the operation that is tested, it is the absence of
> possible side effects.
>
> I can put a more elaborate description in the comment to explain this.

Indeed, an elaborate description in the comments would be great.

[...]
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index eb61088a674f..1115e39630ce 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -1916,6 +1916,144 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, -1 } }
 	},
+	{
+		/*
+		 * Register (non-)clobbering test, in the case where a 32-bit
+		 * JIT implements complex ALU64 operations via function calls.
+		 */
+		"INT: Register clobbering, R1 updated",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_ALU32_IMM(BPF_MOV, R1, 123456789),
+			BPF_ALU32_IMM(BPF_MOV, R2, 2),
+			BPF_ALU32_IMM(BPF_MOV, R3, 3),
+			BPF_ALU32_IMM(BPF_MOV, R4, 4),
+			BPF_ALU32_IMM(BPF_MOV, R5, 5),
+			BPF_ALU32_IMM(BPF_MOV, R6, 6),
+			BPF_ALU32_IMM(BPF_MOV, R7, 7),
+			BPF_ALU32_IMM(BPF_MOV, R8, 8),
+			BPF_ALU32_IMM(BPF_MOV, R9, 9),
+			BPF_ALU64_IMM(BPF_DIV, R1, 123456789),
+			BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
+			BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
+			BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
+			BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
+			BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
+			BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
+			BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
+			BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
+			BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
+			BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		"INT: Register clobbering, R2 updated",
+		.u.insns_int = {
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_ALU32_IMM(BPF_MOV, R1, 1),
+			BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789),
+			BPF_ALU32_IMM(BPF_MOV, R3, 3),
+			BPF_ALU32_IMM(BPF_MOV, R4, 4),
+			BPF_ALU32_IMM(BPF_MOV, R5, 5),
+			BPF_ALU32_IMM(BPF_MOV, R6, 6),
+			BPF_ALU32_IMM(BPF_MOV, R7, 7),
+			BPF_ALU32_IMM(BPF_MOV, R8, 8),
+			BPF_ALU32_IMM(BPF_MOV, R9, 9),
+			BPF_ALU64_IMM(BPF_DIV, R2, 123456789),
+			BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
+			BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
+			BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
+			BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
+			BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
+			BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
+			BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
+			BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
+			BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
+			BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
+			BPF_ALU32_IMM(BPF_MOV, R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } }
+	},
+	{
+		/*
+		 * Test 32-bit JITs that implement complex ALU64 operations as
+		 * function calls R0 = f(R1, R2), and must re-arrange operands.
+		 */
+#define NUMER 0xfedcba9876543210ULL
+#define DENOM 0x0123456789abcdefULL
+		"ALU64_DIV X: Operand register permutations",
+		.u.insns_int = {
+			/* R0 / R2 */
+			BPF_LD_IMM64(R0, NUMER),
+			BPF_LD_IMM64(R2, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R0, R2),
+			BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R1 / R0 */
+			BPF_LD_IMM64(R1, NUMER),
+			BPF_LD_IMM64(R0, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R1, R0),
+			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R0 / R1 */
+			BPF_LD_IMM64(R0, NUMER),
+			BPF_LD_IMM64(R1, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R0, R1),
+			BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R2 / R0 */
+			BPF_LD_IMM64(R2, NUMER),
+			BPF_LD_IMM64(R0, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R2, R0),
+			BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R2 / R1 */
+			BPF_LD_IMM64(R2, NUMER),
+			BPF_LD_IMM64(R1, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R2, R1),
+			BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* R1 / R2 */
+			BPF_LD_IMM64(R1, NUMER),
+			BPF_LD_IMM64(R2, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R1, R2),
+			BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			BPF_LD_IMM64(R0, 1),
+			/* R1 / R1 */
+			BPF_LD_IMM64(R1, NUMER),
+			BPF_ALU64_REG(BPF_DIV, R1, R1),
+			BPF_JMP_IMM(BPF_JEQ, R1, 1, 1),
+			BPF_EXIT_INSN(),
+			/* R2 / R2 */
+			BPF_LD_IMM64(R2, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R2, R2),
+			BPF_JMP_IMM(BPF_JEQ, R2, 1, 1),
+			BPF_EXIT_INSN(),
+			/* R3 / R4 */
+			BPF_LD_IMM64(R3, NUMER),
+			BPF_LD_IMM64(R4, DENOM),
+			BPF_ALU64_REG(BPF_DIV, R3, R4),
+			BPF_JMP_IMM(BPF_JEQ, R3, NUMER / DENOM, 1),
+			BPF_EXIT_INSN(),
+			/* Successful return */
+			BPF_LD_IMM64(R0, 1),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 1 } },
+#undef NUMER
+#undef DENOM
+	},
 	{
 		"check: missing ret",
 		.u.insns = {
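For readers checking the expected values, the quotient compared against in the BPF_JEQ instructions can be verified with a trivial userspace computation. This is a hedged sketch, not part of the test suite; it simply reproduces the NUMER / DENOM arithmetic from the constants above.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t numer = 0xfedcba9876543210ULL;	/* NUMER */
	uint64_t denom = 0x0123456789abcdefULL;	/* DENOM */

	/* The immediate used in BPF_JMP_IMM(BPF_JEQ, ..., NUMER / DENOM, 1);
	 * prints "NUMER / DENOM = 224", which fits in a 32-bit immediate. */
	printf("NUMER / DENOM = %llu\n", (unsigned long long)(numer / denom));
	return 0;
}
```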
32-bit JITs may implement complex ALU64 instructions using function calls.
The new tests check aspects related to this, such as register clobbering
and register argument re-ordering.

Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
---
 lib/test_bpf.c | 138 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 138 insertions(+)