Message ID | 85536-177443-curtm@phaethon (mailing list archive) |
---|---|
State | Changes Requested |
Delegated to: | BPF |
Series | [v5] bpf: core: fix shift-out-of-bounds in ___bpf_prog_run |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Not a local patch |
On 15/06/2021 17:42, Kurt Manucredo wrote:
> Syzbot detects a shift-out-of-bounds in ___bpf_prog_run()
> kernel/bpf/core.c:1414:2.
>
> The shift-out-of-bounds happens when we have BPF_X. This means we have
> to go the same way we go when we want to avoid a divide-by-zero. We do
> it in do_misc_fixups().

Shifts by more than insn_bitness are legal in the eBPF ISA; they are
implementation-defined behaviour, rather than UB, and have been made legal
for performance reasons. Each of the JIT backends compiles the eBPF shift
operations to machine instructions which produce implementation-defined
results in such a case; the resulting contents of the register may be
arbitrary, but program behaviour as a whole remains defined. Guard checks
in the fast path (i.e. affecting JITted code) will thus not be accepted.

The case of division by zero is not truly analogous, as division
instructions on many of the JIT-targeted architectures will raise a machine
exception / fault on division by zero, whereas (to the best of my knowledge)
none will do so on an out-of-bounds shift. (That said, it would be possible
to record from the verifier which division instructions in the program are
known never to be passed zero as divisor, and elide the fixup patch in those
cases. However, the extra complexity may not be worthwhile.)

As I understand it, the UBSAN report is coming from the eBPF interpreter,
which is the *slow path* and indeed on many production systems is
compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
Perhaps a better approach to the fix would be to change the interpreter
to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
bitnesses), thus matching the behaviour of most chips' shift opcodes.
This would shut up UBSAN, without affecting JIT code generation.

-ed
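For illustration, the masked-shift interpreter semantics proposed above amount to something like the following (a standalone C sketch with made-up helper names, not the kernel's actual interpreter code):

#include <stdint.h>

/* Sketch: mask the shift amount to the operand width so the C-level shift
 * is always in range. Most targets (e.g. x86-64 SHL/SHR, arm64 LSLV/LSRV)
 * already use only the low bits of the count, so the AND is usually free.
 */
static uint64_t alu64_lsh(uint64_t dst, uint64_t src)
{
	return dst << (src & 63);	/* 64-bit: amount taken modulo 64 */
}

static uint32_t alu32_lsh(uint32_t dst, uint32_t src)
{
	return dst << (src & 31);	/* 32-bit: amount taken modulo 32 */
}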
On Tue, Jun 15, 2021 at 07:51:07PM +0100, Edward Cree wrote:
>
> As I understand it, the UBSAN report is coming from the eBPF interpreter,
> which is the *slow path* and indeed on many production systems is
> compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
> Perhaps a better approach to the fix would be to change the interpreter
> to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
> bitnesses), thus matching the behaviour of most chips' shift opcodes.
> This would shut up UBSAN, without affecting JIT code generation.
>

Yes, I suggested that last week
(https://lkml.kernel.org/netdev/YMJvbGEz0xu9JU9D@gmail.com). The AND will even
get optimized out when compiling for most CPUs.

- Eric
On 6/15/21 9:33 PM, Eric Biggers wrote:
> On Tue, Jun 15, 2021 at 07:51:07PM +0100, Edward Cree wrote:
>>
>> As I understand it, the UBSAN report is coming from the eBPF interpreter,
>> which is the *slow path* and indeed on many production systems is
>> compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
>> Perhaps a better approach to the fix would be to change the interpreter
>> to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
>> bitnesses), thus matching the behaviour of most chips' shift opcodes.
>> This would shut up UBSAN, without affecting JIT code generation.
>
> Yes, I suggested that last week
> (https://lkml.kernel.org/netdev/YMJvbGEz0xu9JU9D@gmail.com). The AND will even
> get optimized out when compiling for most CPUs.

Did you check if the generated interpreter code for e.g. x86 is the same
before/after with that?

How does UBSAN detect this in general? I would assume generated code for
interpreter wrt DST = DST << SRC would not really change as otherwise all
valid cases would be broken as well, given compiler has not really room
to optimize or make any assumptions here, in other words, it's only
propagating potential quirks under such cases from underlying arch.

Thanks,
Daniel
On Tue, Jun 15, 2021 at 11:08:18PM +0200, Daniel Borkmann wrote:
> On 6/15/21 9:33 PM, Eric Biggers wrote:
> > On Tue, Jun 15, 2021 at 07:51:07PM +0100, Edward Cree wrote:
> > >
> > > As I understand it, the UBSAN report is coming from the eBPF interpreter,
> > > which is the *slow path* and indeed on many production systems is
> > > compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
> > > Perhaps a better approach to the fix would be to change the interpreter
> > > to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
> > > bitnesses), thus matching the behaviour of most chips' shift opcodes.
> > > This would shut up UBSAN, without affecting JIT code generation.
> >
> > Yes, I suggested that last week
> > (https://lkml.kernel.org/netdev/YMJvbGEz0xu9JU9D@gmail.com). The AND will even
> > get optimized out when compiling for most CPUs.
>
> Did you check if the generated interpreter code for e.g. x86 is the same
> before/after with that?

Yes, on x86_64 with gcc 10.2.1, the disassembly of ___bpf_prog_run() is the same
both before and after (with UBSAN disabled). Here is the patch I used:

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 5e31ee9f7512..996db8a1bbfb 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1407,12 +1407,30 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		DST = (u32) DST OP (u32) IMM;	\
 		CONT;
 
+	/*
+	 * Explicitly mask the shift amounts with 63 or 31 to avoid undefined
+	 * behavior. Normally this won't affect the generated code.
+	 */
+#define ALU_SHIFT(OPCODE, OP)		\
+	ALU64_##OPCODE##_X:		\
+		DST = DST OP (SRC & 63);\
+		CONT;			\
+	ALU_##OPCODE##_X:		\
+		DST = (u32) DST OP ((u32)SRC & 31);	\
+		CONT;			\
+	ALU64_##OPCODE##_K:		\
+		DST = DST OP (IMM & 63); \
+		CONT;			\
+	ALU_##OPCODE##_K:		\
+		DST = (u32) DST OP ((u32)IMM & 31);	\
+		CONT;
+
 	ALU(ADD, +)
 	ALU(SUB, -)
 	ALU(AND, &)
 	ALU(OR, |)
-	ALU(LSH, <<)
-	ALU(RSH, >>)
+	ALU_SHIFT(LSH, <<)
+	ALU_SHIFT(RSH, >>)
 	ALU(XOR, ^)
 	ALU(MUL, *)
 #undef ALU

>
> How does UBSAN detect this in general? I would assume generated code for
> interpreter wrt DST = DST << SRC would not really change as otherwise all
> valid cases would be broken as well, given compiler has not really room
> to optimize or make any assumptions here, in other words, it's only
> propagating potential quirks under such cases from underlying arch.

UBSAN inserts code that checks that shift amounts are in range. In theory
there are cases where the undefined behavior of out-of-range shift amounts
could cause problems. For example, a compiler could make the following
function always return true, as it can assume that 'b' is in the range
[0, 31].

bool foo(int a, int b, int *c)
{
	*c = a << b;
	return b < 32;
}

- Eric
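For readers unfamiliar with the sanitizer, the instrumentation UBSAN adds for shifts behaves roughly like the hand-written check below (a sketch of the idea only; the real instrumentation calls the __ubsan_handle_shift_out_of_bounds() runtime handler rather than printing directly):

#include <stdint.h>
#include <stdio.h>

/* Sketch: a 64-bit left shift with a UBSAN-style range check spelled out
 * by hand. The shift itself is left unchanged; only a report is emitted
 * when the amount is out of range (which is undefined behaviour in C).
 */
static uint64_t checked_lsh64(uint64_t dst, uint64_t src)
{
	if (src >= 64)
		fprintf(stderr, "UBSAN: shift exponent %llu is too large\n",
			(unsigned long long)src);
	return dst << src;
}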
On Tue, Jun 15, 2021 at 02:32:18PM -0700, Eric Biggers wrote:
> On Tue, Jun 15, 2021 at 11:08:18PM +0200, Daniel Borkmann wrote:
> > On 6/15/21 9:33 PM, Eric Biggers wrote:
> > > On Tue, Jun 15, 2021 at 07:51:07PM +0100, Edward Cree wrote:
> > > >
> > > > As I understand it, the UBSAN report is coming from the eBPF interpreter,
> > > > which is the *slow path* and indeed on many production systems is
> > > > compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
> > > > Perhaps a better approach to the fix would be to change the interpreter
> > > > to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
> > > > bitnesses), thus matching the behaviour of most chips' shift opcodes.
> > > > This would shut up UBSAN, without affecting JIT code generation.
> > >
> > > Yes, I suggested that last week
> > > (https://lkml.kernel.org/netdev/YMJvbGEz0xu9JU9D@gmail.com). The AND will even
> > > get optimized out when compiling for most CPUs.
> >
> > Did you check if the generated interpreter code for e.g. x86 is the same
> > before/after with that?
>
> Yes, on x86_64 with gcc 10.2.1, the disassembly of ___bpf_prog_run() is the same
> both before and after (with UBSAN disabled). Here is the patch I used:
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 5e31ee9f7512..996db8a1bbfb 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1407,12 +1407,30 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
>  		DST = (u32) DST OP (u32) IMM;	\
>  		CONT;
>
> +	/*
> +	 * Explicitly mask the shift amounts with 63 or 31 to avoid undefined
> +	 * behavior. Normally this won't affect the generated code.
> +	 */
> +#define ALU_SHIFT(OPCODE, OP)		\
> +	ALU64_##OPCODE##_X:		\
> +		DST = DST OP (SRC & 63);\
> +		CONT;			\
> +	ALU_##OPCODE##_X:		\
> +		DST = (u32) DST OP ((u32)SRC & 31);	\
> +		CONT;			\
> +	ALU64_##OPCODE##_K:		\
> +		DST = DST OP (IMM & 63); \
> +		CONT;			\
> +	ALU_##OPCODE##_K:		\
> +		DST = (u32) DST OP ((u32)IMM & 31);	\
> +		CONT;
> +
>  	ALU(ADD, +)
>  	ALU(SUB, -)
>  	ALU(AND, &)
>  	ALU(OR, |)
> -	ALU(LSH, <<)
> -	ALU(RSH, >>)
> +	ALU_SHIFT(LSH, <<)
> +	ALU_SHIFT(RSH, >>)
>  	ALU(XOR, ^)
>  	ALU(MUL, *)
>  #undef ALU
>

Note, I missed the arithmetic right shifts later on in the function. Same
result there, though.

- Eric
On 6/15/21 11:38 PM, Eric Biggers wrote:
> On Tue, Jun 15, 2021 at 02:32:18PM -0700, Eric Biggers wrote:
>> On Tue, Jun 15, 2021 at 11:08:18PM +0200, Daniel Borkmann wrote:
>>> On 6/15/21 9:33 PM, Eric Biggers wrote:
>>>> On Tue, Jun 15, 2021 at 07:51:07PM +0100, Edward Cree wrote:
>>>>>
>>>>> As I understand it, the UBSAN report is coming from the eBPF interpreter,
>>>>> which is the *slow path* and indeed on many production systems is
>>>>> compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
>>>>> Perhaps a better approach to the fix would be to change the interpreter
>>>>> to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
>>>>> bitnesses), thus matching the behaviour of most chips' shift opcodes.
>>>>> This would shut up UBSAN, without affecting JIT code generation.
>>>>
>>>> Yes, I suggested that last week
>>>> (https://lkml.kernel.org/netdev/YMJvbGEz0xu9JU9D@gmail.com). The AND will even
>>>> get optimized out when compiling for most CPUs.
>>>
>>> Did you check if the generated interpreter code for e.g. x86 is the same
>>> before/after with that?
>>
>> Yes, on x86_64 with gcc 10.2.1, the disassembly of ___bpf_prog_run() is the same
>> both before and after (with UBSAN disabled). Here is the patch I used:
>>
>> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
>> index 5e31ee9f7512..996db8a1bbfb 100644
>> --- a/kernel/bpf/core.c
>> +++ b/kernel/bpf/core.c
>> @@ -1407,12 +1407,30 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
>>  		DST = (u32) DST OP (u32) IMM;	\
>>  		CONT;
>>
>> +	/*
>> +	 * Explicitly mask the shift amounts with 63 or 31 to avoid undefined
>> +	 * behavior. Normally this won't affect the generated code.

The last one should probably be more specific in terms of 'normally', e.g. that
it is expected that the compiler is optimizing this away for archs like x86. Is
arm64 also covered by this ... do you happen to know on which archs this won't
be the case?

Additionally, I think such comment should probably be more clear in that it also
needs to give proper guidance to JIT authors that look at the interpreter code to
see what they need to implement, in other words, that they don't end up copying
an explicit AND instruction emission if not needed there.

>> +	 */
>> +#define ALU_SHIFT(OPCODE, OP)		\
>> +	ALU64_##OPCODE##_X:		\
>> +		DST = DST OP (SRC & 63);\
>> +		CONT;			\
>> +	ALU_##OPCODE##_X:		\
>> +		DST = (u32) DST OP ((u32)SRC & 31);	\
>> +		CONT;			\
>> +	ALU64_##OPCODE##_K:		\
>> +		DST = DST OP (IMM & 63); \
>> +		CONT;			\
>> +	ALU_##OPCODE##_K:		\
>> +		DST = (u32) DST OP ((u32)IMM & 31);	\
>> +		CONT;

For the *_K cases these are explicitly rejected by the verifier already. Is this
required here nevertheless to suppress UBSAN false positive?

>>  	ALU(ADD, +)
>>  	ALU(SUB, -)
>>  	ALU(AND, &)
>>  	ALU(OR, |)
>> -	ALU(LSH, <<)
>> -	ALU(RSH, >>)
>> +	ALU_SHIFT(LSH, <<)
>> +	ALU_SHIFT(RSH, >>)
>>  	ALU(XOR, ^)
>>  	ALU(MUL, *)
>>  #undef ALU
>
> Note, I missed the arithmetic right shifts later on in the function. Same
> result there, though.
>
> - Eric
>
On Tue, Jun 15, 2021 at 11:54:41PM +0200, Daniel Borkmann wrote:
> On 6/15/21 11:38 PM, Eric Biggers wrote:
> > On Tue, Jun 15, 2021 at 02:32:18PM -0700, Eric Biggers wrote:
> > > On Tue, Jun 15, 2021 at 11:08:18PM +0200, Daniel Borkmann wrote:
> > > > On 6/15/21 9:33 PM, Eric Biggers wrote:
> > > > > On Tue, Jun 15, 2021 at 07:51:07PM +0100, Edward Cree wrote:
> > > > > >
> > > > > > As I understand it, the UBSAN report is coming from the eBPF interpreter,
> > > > > > which is the *slow path* and indeed on many production systems is
> > > > > > compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
> > > > > > Perhaps a better approach to the fix would be to change the interpreter
> > > > > > to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
> > > > > > bitnesses), thus matching the behaviour of most chips' shift opcodes.
> > > > > > This would shut up UBSAN, without affecting JIT code generation.
> > > > >
> > > > > Yes, I suggested that last week
> > > > > (https://lkml.kernel.org/netdev/YMJvbGEz0xu9JU9D@gmail.com). The AND will even
> > > > > get optimized out when compiling for most CPUs.
> > > >
> > > > Did you check if the generated interpreter code for e.g. x86 is the same
> > > > before/after with that?
> > >
> > > Yes, on x86_64 with gcc 10.2.1, the disassembly of ___bpf_prog_run() is the same
> > > both before and after (with UBSAN disabled). Here is the patch I used:
> > >
> > > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > > index 5e31ee9f7512..996db8a1bbfb 100644
> > > --- a/kernel/bpf/core.c
> > > +++ b/kernel/bpf/core.c
> > > @@ -1407,12 +1407,30 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
> > >  		DST = (u32) DST OP (u32) IMM;	\
> > >  		CONT;
> > >
> > > +	/*
> > > +	 * Explicitly mask the shift amounts with 63 or 31 to avoid undefined
> > > +	 * behavior. Normally this won't affect the generated code.
>
> The last one should probably be more specific in terms of 'normally', e.g. that
> it is expected that the compiler is optimizing this away for archs like x86. Is
> arm64 also covered by this ... do you happen to know on which archs this won't
> be the case?
>
> Additionally, I think such comment should probably be more clear in that it also
> needs to give proper guidance to JIT authors that look at the interpreter code to
> see what they need to implement, in other words, that they don't end up copying
> an explicit AND instruction emission if not needed there.

Same result on arm64 with gcc 10.2.0.

On arm32 it is different, probably because the 64-bit shifts aren't native in
that case. I don't know about other architectures. But there aren't many ways
to implement shifts, and using just the low bits of the shift amount is the most
logical way.

Please feel free to send out a patch with whatever comment you want. The diff I
gave was just an example and I am not an expert in BPF.

>
> > > +	 */
> > > +#define ALU_SHIFT(OPCODE, OP)		\
> > > +	ALU64_##OPCODE##_X:		\
> > > +		DST = DST OP (SRC & 63);\
> > > +		CONT;			\
> > > +	ALU_##OPCODE##_X:		\
> > > +		DST = (u32) DST OP ((u32)SRC & 31);	\
> > > +		CONT;			\
> > > +	ALU64_##OPCODE##_K:		\
> > > +		DST = DST OP (IMM & 63); \
> > > +		CONT;			\
> > > +	ALU_##OPCODE##_K:		\
> > > +		DST = (u32) DST OP ((u32)IMM & 31);	\
> > > +		CONT;
>
> For the *_K cases these are explicitly rejected by the verifier already. Is this
> required here nevertheless to suppress UBSAN false positive?
>

No, I just didn't know that these constants are never out of range. Please feel
free to send out a patch that does this properly.

- Eric
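For reference, the BPF_K rejection mentioned above happens at load time in the verifier's check_alu_op(); its effect is roughly the predicate below (an illustrative userspace-style sketch with made-up names, not the kernel's exact code):

#include <stdbool.h>

/* Sketch: a constant (BPF_K) shift amount that is negative or >= the
 * operand width makes the program fail verification, so the *_K
 * interpreter paths never see an out-of-range amount at run time.
 */
static bool shift_imm_is_valid(bool alu64, int imm)
{
	int size = alu64 ? 64 : 32;

	return imm >= 0 && imm < size;	/* otherwise: "invalid shift" error */
}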
On Tue, 15 Jun 2021 15:07:43 -0700, Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Tue, Jun 15, 2021 at 11:54:41PM +0200, Daniel Borkmann wrote:
> > On 6/15/21 11:38 PM, Eric Biggers wrote:
> > > On Tue, Jun 15, 2021 at 02:32:18PM -0700, Eric Biggers wrote:
> > > > On Tue, Jun 15, 2021 at 11:08:18PM +0200, Daniel Borkmann wrote:
> > > > > On 6/15/21 9:33 PM, Eric Biggers wrote:
> > > > > > On Tue, Jun 15, 2021 at 07:51:07PM +0100, Edward Cree wrote:
> > > > > > >
> > > > > > > As I understand it, the UBSAN report is coming from the eBPF interpreter,
> > > > > > > which is the *slow path* and indeed on many production systems is
> > > > > > > compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
> > > > > > > Perhaps a better approach to the fix would be to change the interpreter
> > > > > > > to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
> > > > > > > bitnesses), thus matching the behaviour of most chips' shift opcodes.
> > > > > > > This would shut up UBSAN, without affecting JIT code generation.
> > > > > >
> > > > > > Yes, I suggested that last week
> > > > > > (https://lkml.kernel.org/netdev/YMJvbGEz0xu9JU9D@gmail.com). The AND will even
> > > > > > get optimized out when compiling for most CPUs.
> > > > >
> > > > > Did you check if the generated interpreter code for e.g. x86 is the same
> > > > > before/after with that?
> > > >
> > > > Yes, on x86_64 with gcc 10.2.1, the disassembly of ___bpf_prog_run() is the same
> > > > both before and after (with UBSAN disabled). Here is the patch I used:
> > > >
> > > > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > > > index 5e31ee9f7512..996db8a1bbfb 100644
> > > > --- a/kernel/bpf/core.c
> > > > +++ b/kernel/bpf/core.c
> > > > @@ -1407,12 +1407,30 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
> > > >  		DST = (u32) DST OP (u32) IMM;	\
> > > >  		CONT;
> > > >
> > > > +	/*
> > > > +	 * Explicitly mask the shift amounts with 63 or 31 to avoid undefined
> > > > +	 * behavior. Normally this won't affect the generated code.
> >
> > The last one should probably be more specific in terms of 'normally', e.g. that
> > it is expected that the compiler is optimizing this away for archs like x86. Is
> > arm64 also covered by this ... do you happen to know on which archs this won't
> > be the case?
> >
> > Additionally, I think such comment should probably be more clear in that it also
> > needs to give proper guidance to JIT authors that look at the interpreter code to
> > see what they need to implement, in other words, that they don't end up copying
> > an explicit AND instruction emission if not needed there.
>
> Same result on arm64 with gcc 10.2.0.
>
> On arm32 it is different, probably because the 64-bit shifts aren't native in
> that case. I don't know about other architectures. But there aren't many ways
> to implement shifts, and using just the low bits of the shift amount is the most
> logical way.
>
> Please feel free to send out a patch with whatever comment you want. The diff I
> gave was just an example and I am not an expert in BPF.
>
> >
> > > > +	 */
> > > > +#define ALU_SHIFT(OPCODE, OP)		\
> > > > +	ALU64_##OPCODE##_X:		\
> > > > +		DST = DST OP (SRC & 63);\
> > > > +		CONT;			\
> > > > +	ALU_##OPCODE##_X:		\
> > > > +		DST = (u32) DST OP ((u32)SRC & 31);	\
> > > > +		CONT;			\
> > > > +	ALU64_##OPCODE##_K:		\
> > > > +		DST = DST OP (IMM & 63); \
> > > > +		CONT;			\
> > > > +	ALU_##OPCODE##_K:		\
> > > > +		DST = (u32) DST OP ((u32)IMM & 31);	\
> > > > +		CONT;
> >
> > For the *_K cases these are explicitly rejected by the verifier already. Is this
> > required here nevertheless to suppress UBSAN false positive?
> >
>
> No, I just didn't know that these constants are never out of range. Please feel
> free to send out a patch that does this properly.
>

The shift-out-of-bounds on syzbot happens in ALU_##OPCODE##_X only. To pass
the syzbot test, only ALU_##OPCODE##_X needs to be guarded. This old patch I
tested on syzbot puts a check in all four.

https://syzkaller.appspot.com/text?tag=Patch&x=11f8cacbd00000
https://syzkaller.appspot.com/bug?id=edb51be4c9a320186328893287bb30d5eed09231

thanks, kind regards

Kurt Manucredo
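To make the reported case concrete: a register-based (BPF_X) shift with an out-of-range amount was accepted by the verifier (per the diff further below, the destination register was merely marked unknown) and then executed by the interpreter. A hypothetical instruction sequence of that shape, written with the kernel's insn macros, might look like the sketch below (illustrative only, not the actual syzkaller reproducer):

#include <linux/filter.h>

/* Illustrative sketch: r2 holds 250, well above the 63 limit for a 64-bit
 * shift, and is then used as a register shift amount. Before the fix, the
 * interpreter evaluated "DST << 250" in C, which is what UBSAN flagged.
 */
static const struct bpf_insn oob_shift_prog[] = {
	BPF_MOV64_IMM(BPF_REG_1, 1),			/* r1 = 1 */
	BPF_MOV64_IMM(BPF_REG_2, 250),			/* r2 = 250 (>= 64) */
	BPF_ALU64_REG(BPF_LSH, BPF_REG_1, BPF_REG_2),	/* r1 <<= r2 */
	BPF_MOV64_IMM(BPF_REG_0, 0),			/* r0 = 0 */
	BPF_EXIT_INSN(),				/* return r0 */
};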
On 6/16/21 12:07 AM, Eric Biggers wrote:
> On Tue, Jun 15, 2021 at 11:54:41PM +0200, Daniel Borkmann wrote:
>> On 6/15/21 11:38 PM, Eric Biggers wrote:
>>> On Tue, Jun 15, 2021 at 02:32:18PM -0700, Eric Biggers wrote:
>>>> On Tue, Jun 15, 2021 at 11:08:18PM +0200, Daniel Borkmann wrote:
>>>>> On 6/15/21 9:33 PM, Eric Biggers wrote:
>>>>>> On Tue, Jun 15, 2021 at 07:51:07PM +0100, Edward Cree wrote:
>>>>>>>
>>>>>>> As I understand it, the UBSAN report is coming from the eBPF interpreter,
>>>>>>> which is the *slow path* and indeed on many production systems is
>>>>>>> compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
>>>>>>> Perhaps a better approach to the fix would be to change the interpreter
>>>>>>> to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
>>>>>>> bitnesses), thus matching the behaviour of most chips' shift opcodes.
>>>>>>> This would shut up UBSAN, without affecting JIT code generation.
>>>>>>
>>>>>> Yes, I suggested that last week
>>>>>> (https://lkml.kernel.org/netdev/YMJvbGEz0xu9JU9D@gmail.com). The AND will even
>>>>>> get optimized out when compiling for most CPUs.
>>>>>
>>>>> Did you check if the generated interpreter code for e.g. x86 is the same
>>>>> before/after with that?
>>>>
>>>> Yes, on x86_64 with gcc 10.2.1, the disassembly of ___bpf_prog_run() is the same
>>>> both before and after (with UBSAN disabled). Here is the patch I used:
>>>>
>>>> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
>>>> index 5e31ee9f7512..996db8a1bbfb 100644
>>>> --- a/kernel/bpf/core.c
>>>> +++ b/kernel/bpf/core.c
>>>> @@ -1407,12 +1407,30 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
>>>>  		DST = (u32) DST OP (u32) IMM;	\
>>>>  		CONT;
>>>>
>>>> +	/*
>>>> +	 * Explicitly mask the shift amounts with 63 or 31 to avoid undefined
>>>> +	 * behavior. Normally this won't affect the generated code.
>>
>> The last one should probably be more specific in terms of 'normally', e.g. that
>> it is expected that the compiler is optimizing this away for archs like x86. Is
>> arm64 also covered by this ... do you happen to know on which archs this won't
>> be the case?
>>
>> Additionally, I think such comment should probably be more clear in that it also
>> needs to give proper guidance to JIT authors that look at the interpreter code to
>> see what they need to implement, in other words, that they don't end up copying
>> an explicit AND instruction emission if not needed there.
>
> Same result on arm64 with gcc 10.2.0.
>
> On arm32 it is different, probably because the 64-bit shifts aren't native in
> that case. I don't know about other architectures. But there aren't many ways
> to implement shifts, and using just the low bits of the shift amount is the most
> logical way.
>
> Please feel free to send out a patch with whatever comment you want. The diff I
> gave was just an example and I am not an expert in BPF.
>
>>
>>>> +	 */
>>>> +#define ALU_SHIFT(OPCODE, OP)		\
>>>> +	ALU64_##OPCODE##_X:		\
>>>> +		DST = DST OP (SRC & 63);\
>>>> +		CONT;			\
>>>> +	ALU_##OPCODE##_X:		\
>>>> +		DST = (u32) DST OP ((u32)SRC & 31);	\
>>>> +		CONT;			\
>>>> +	ALU64_##OPCODE##_K:		\
>>>> +		DST = DST OP (IMM & 63); \
>>>> +		CONT;			\
>>>> +	ALU_##OPCODE##_K:		\
>>>> +		DST = (u32) DST OP ((u32)IMM & 31);	\
>>>> +		CONT;
>>
>> For the *_K cases these are explicitly rejected by the verifier already. Is this
>> required here nevertheless to suppress UBSAN false positive?
>
> No, I just didn't know that these constants are never out of range. Please feel
> free to send out a patch that does this properly.

Summarized and fixed via:

https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=28131e9d933339a92f78e7ab6429f4aaaa07061c

Thanks everyone,
Daniel
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 94ba5163d4c5..83c7c1ccaf26 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7496,7 +7496,6 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 	u64 umin_val, umax_val;
 	s32 s32_min_val, s32_max_val;
 	u32 u32_min_val, u32_max_val;
-	u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
 	bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64);
 	int ret;
 
@@ -7592,39 +7591,18 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 		scalar_min_max_xor(dst_reg, &src_reg);
 		break;
 	case BPF_LSH:
-		if (umax_val >= insn_bitness) {
-			/* Shifts greater than 31 or 63 are undefined.
-			 * This includes shifts by a negative number.
-			 */
-			mark_reg_unknown(env, regs, insn->dst_reg);
-			break;
-		}
 		if (alu32)
 			scalar32_min_max_lsh(dst_reg, &src_reg);
 		else
 			scalar_min_max_lsh(dst_reg, &src_reg);
 		break;
 	case BPF_RSH:
-		if (umax_val >= insn_bitness) {
-			/* Shifts greater than 31 or 63 are undefined.
-			 * This includes shifts by a negative number.
-			 */
-			mark_reg_unknown(env, regs, insn->dst_reg);
-			break;
-		}
 		if (alu32)
 			scalar32_min_max_rsh(dst_reg, &src_reg);
 		else
 			scalar_min_max_rsh(dst_reg, &src_reg);
 		break;
 	case BPF_ARSH:
-		if (umax_val >= insn_bitness) {
-			/* Shifts greater than 31 or 63 are undefined.
-			 * This includes shifts by a negative number.
-			 */
-			mark_reg_unknown(env, regs, insn->dst_reg);
-			break;
-		}
 		if (alu32)
 			scalar32_min_max_arsh(dst_reg, &src_reg);
 		else
@@ -12353,6 +12331,37 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
+		/* Make shift-out-of-bounds exceptions impossible. */
+		if (insn->code == (BPF_ALU64 | BPF_LSH | BPF_X) ||
+		    insn->code == (BPF_ALU64 | BPF_RSH | BPF_X) ||
+		    insn->code == (BPF_ALU64 | BPF_ARSH | BPF_X) ||
+		    insn->code == (BPF_ALU | BPF_LSH | BPF_X) ||
+		    insn->code == (BPF_ALU | BPF_RSH | BPF_X) ||
+		    insn->code == (BPF_ALU | BPF_ARSH | BPF_X)) {
+			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
+			u8 insn_bitness = is64 ? 64 : 32;
+			struct bpf_insn chk_and_shift[] = {
+				/* [R,W]x shift >= 32||64 -> 0 */
+				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
+					     BPF_JLT | BPF_K, insn->src_reg,
+					     insn_bitness, 2, 0),
+				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
+				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+				*insn,
+			};
+
+			cnt = ARRAY_SIZE(chk_and_shift);
+
+			new_prog = bpf_patch_insn_data(env, i + delta, chk_and_shift, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			continue;
+		}
+
 		/* Implement LD_ABS and LD_IND with a rewrite, if supported by the program type. */
 		if (BPF_CLASS(insn->code) == BPF_LD &&
 		    (BPF_MODE(insn->code) == BPF_ABS ||
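In C terms, the chk_and_shift sequence patched in above gives a guarded register shift the following semantics (a sketch of the intended behaviour of the rewritten BPF for the 64-bit BPF_LSH case; the function name is made up):

#include <stdint.h>

/* Sketch: if the JLT check fails (shift amount >= 64), the 32-bit XOR
 * zeroes the destination and the JA skips the shift; otherwise the
 * original shift instruction runs unchanged.
 */
static uint64_t lsh64_with_guard(uint64_t dst, uint64_t src)
{
	if (src >= 64)
		return 0;	/* XOR dst, dst */
	return dst << src;	/* original shift instruction */
}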
Syzbot detects a shift-out-of-bounds in ___bpf_prog_run()
kernel/bpf/core.c:1414:2.

The shift-out-of-bounds happens when we have BPF_X. This means we have
to go the same way we go when we want to avoid a divide-by-zero. We do
it in do_misc_fixups().

When we have BPF_K, the divide-by-zero and shift-out-of-bounds guards sit
next to each other in check_alu_op(). It seems only logical to me that the
same should be true for BPF_X in do_misc_fixups(), since that is where I
found the divide-by-zero guard. Or is there a reason I'm not aware of that
dictates the checks should be in adjust_scalar_min_max_vals(), as they are
now?

This patch was tested by syzbot.

Reported-and-tested-by: syzbot+bed360704c521841c85d@syzkaller.appspotmail.com
Signed-off-by: Kurt Manucredo <fuzzybritches0@gmail.com>
---

https://syzkaller.appspot.com/bug?id=edb51be4c9a320186328893287bb30d5eed09231

Changelog:
----------
v5 - Fix shift-out-of-bounds in do_misc_fixups().
v4 - Fix shift-out-of-bounds in adjust_scalar_min_max_vals.
     Fix commit message.
v3 - Make it clearer what the fix is for.
v2 - Fix shift-out-of-bounds in ___bpf_prog_run() by adding boundary
     check in check_alu_op() in verifier.c.
v1 - Fix shift-out-of-bounds in ___bpf_prog_run() by adding boundary
     check in ___bpf_prog_run().

thanks

kind regards

Kurt

 kernel/bpf/verifier.c | 53 +++++++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 22 deletions(-)