From patchwork Fri Feb 24 03:40:16 2023
X-Patchwork-Id: 13150905
From: Edward Liaw <edliaw@google.com>
Date: Fri, 24 Feb 2023 03:40:16 +0000
Subject: [PATCH 4.14 v3 1/4] bpf: Do not use ax register in interpreter on div/mod
To: stable@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann,
 "David S. Miller"
Cc: bpf@vger.kernel.org, kernel-team@android.com, Edward Liaw,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org, John Fastabend,
 Thadeu Lima de Souza Cascardo
Message-ID: <20230224034020.2080637-2-edliaw@google.com>
In-Reply-To: <20230224034020.2080637-1-edliaw@google.com>
References: <20230224034020.2080637-1-edliaw@google.com>

From: Daniel Borkmann

Commit c348d806ed1d3075af52345344243824d72c4945 upstream.

Partially undo old commit 144cd91c4c2b ("bpf: move tmp variable into ax
register in interpreter"). The reason we need this here is because the ax
register will be used for holding temporary state for div/mod instructions,
which the interpreter would otherwise corrupt. This causes a small +8 byte
stack increase for the interpreter, but with the gain that we can use it
from verifier rewrites as a scratch register.

Signed-off-by: Daniel Borkmann
Reviewed-by: John Fastabend
[cascardo: This partial revert is needed in order to support using AX for
the following two commits, as there is no JMP32 on 4.19.y]
Signed-off-by: Thadeu Lima de Souza Cascardo
[edliaw: Removed redeclaration of tmp introduced by patch differences
between 4.14 and 4.19]
Signed-off-by: Edward Liaw
---
 kernel/bpf/core.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 5d649983de07..4ddb846693bb 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -663,9 +663,6 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
	 * below.
	 *
	 * Constant blinding is only used by JITs, not in the interpreter.
-	 * The interpreter uses AX in some occasions as a local temporary
-	 * register e.g. in DIV or MOD instructions.
-	 *
	 * In restricted circumstances, the verifier can also use the AX
	 * register for rewrites as long as they do not interfere with
	 * the above cases!
@@ -1060,22 +1057,22 @@ static unsigned int ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn,
	ALU64_MOD_X:
		if (unlikely(SRC == 0))
			return 0;
-		div64_u64_rem(DST, SRC, &AX);
-		DST = AX;
+		div64_u64_rem(DST, SRC, &tmp);
+		DST = tmp;
		CONT;
	ALU_MOD_X:
		if (unlikely((u32)SRC == 0))
			return 0;
-		AX = (u32) DST;
-		DST = do_div(AX, (u32) SRC);
+		tmp = (u32) DST;
+		DST = do_div(tmp, (u32) SRC);
		CONT;
	ALU64_MOD_K:
-		div64_u64_rem(DST, IMM, &AX);
-		DST = AX;
+		div64_u64_rem(DST, IMM, &tmp);
+		DST = tmp;
		CONT;
	ALU_MOD_K:
-		AX = (u32) DST;
-		DST = do_div(AX, (u32) IMM);
+		tmp = (u32) DST;
+		DST = do_div(tmp, (u32) IMM);
		CONT;
	ALU64_DIV_X:
		if (unlikely(SRC == 0))
@@ -1085,17 +1082,17 @@ static unsigned int ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn,
	ALU_DIV_X:
		if (unlikely((u32)SRC == 0))
			return 0;
-		AX = (u32) DST;
-		do_div(AX, (u32) SRC);
-		DST = (u32) AX;
+		tmp = (u32) DST;
+		do_div(tmp, (u32) SRC);
+		DST = (u32) tmp;
		CONT;
	ALU64_DIV_K:
		DST = div64_u64(DST, IMM);
		CONT;
	ALU_DIV_K:
-		AX = (u32) DST;
-		do_div(AX, (u32) IMM);
-		DST = (u32) AX;
+		tmp = (u32) DST;
+		do_div(tmp, (u32) IMM);
+		DST = (u32) tmp;
		CONT;
	ALU_END_TO_BE:
		switch (IMM) {
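A note on the do_div() idiom used in the hunks above, since it is easy to
misread: do_div() divides its first operand in place and returns the
remainder, so DST = do_div(tmp, ...) implements MOD, while the
do_div(tmp, ...); DST = (u32) tmp form implements DIV. A minimal
user-space sketch of that contract (illustrative only; the real do_div()
is an arch-optimized macro that takes its operand by name, not by
pointer):

  #include <stdint.h>
  #include <stdio.h>

  /* Sketch of do_div() semantics: *n becomes the quotient,
   * the 32-bit remainder is returned.
   */
  static uint32_t do_div_sketch(uint64_t *n, uint32_t base)
  {
          uint32_t rem = (uint32_t)(*n % base);

          *n /= base;
          return rem;
  }

  int main(void)
  {
          uint64_t tmp = 10;
          uint32_t rem = do_div_sketch(&tmp, 3);

          /* MOD result is the return value, DIV result is left in tmp */
          printf("div=%llu mod=%u\n", (unsigned long long)tmp, rem); /* div=3 mod=1 */
          return 0;
  }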
From patchwork Fri Feb 24 03:40:17 2023
X-Patchwork-Id: 13150906
From: Edward Liaw <edliaw@google.com>
Date: Fri, 24 Feb 2023 03:40:17 +0000
Subject: [PATCH 4.14 v3 2/4] bpf: fix subprog verifier bypass by div/mod by 0 exception
To: stable@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann,
 "David S. Miller"
Cc: bpf@vger.kernel.org, kernel-team@android.com, Edward Liaw,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 Thadeu Lima de Souza Cascardo
Message-ID: <20230224034020.2080637-3-edliaw@google.com>
In-Reply-To: <20230224034020.2080637-1-edliaw@google.com>
References: <20230224034020.2080637-1-edliaw@google.com>

From: Daniel Borkmann

Commit f6b1b3bf0d5f681631a293cfe1ca934b81716f1e upstream.

One of the ugly leftovers from the early eBPF days is that div/mod
operations based on registers have a hard-coded src_reg == 0 test in the
interpreter as well as in JIT code generators that would return from the
BPF program with exit code 0. This was basically adopted from the cBPF
interpreter for historical reasons.

There are multiple reasons why this is very suboptimal and prone to
bugs. To name one: the return code mapping for such abnormal program
exit of 0 does not always match with a suitable program type's exit
code mapping. For example, '0' in tc means action 'ok' where the packet
gets passed further up the stack, which is just undesirable for such
cases (e.g. when implementing policy) and also does not match with
other program types.

While trying to work out an exception handling scheme, I also noticed
that programs crafted like the following will currently pass the
verifier:

  0: (bf) r6 = r1
  1: (85) call pc+8
  caller:
   R6=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
  callee:
   frame1: R1=ctx(id=0,off=0,imm=0) R10=fp0,call_1
  10: (b4) (u32) r2 = (u32) 0
  11: (b4) (u32) r3 = (u32) 1
  12: (3c) (u32) r3 /= (u32) r2
  13: (61) r0 = *(u32 *)(r1 +76)
  14: (95) exit
  returning from callee:
   frame1: R0_w=pkt(id=0,off=0,r=0,imm=0)
           R1=ctx(id=0,off=0,imm=0) R2_w=inv0
           R3_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
           R10=fp0,call_1
  to caller at 2:
   R0_w=pkt(id=0,off=0,r=0,imm=0) R6=ctx(id=0,off=0,imm=0)
   R10=fp0,call_-1

  from 14 to 2: R0=pkt(id=0,off=0,r=0,imm=0)
                R6=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
  2: (bf) r1 = r6
  3: (61) r1 = *(u32 *)(r1 +80)
  4: (bf) r2 = r0
  5: (07) r2 += 8
  6: (2d) if r2 > r1 goto pc+1
   R0=pkt(id=0,off=0,r=8,imm=0) R1=pkt_end(id=0,off=0,imm=0)
   R2=pkt(id=0,off=8,r=8,imm=0) R6=ctx(id=0,off=0,imm=0)
   R10=fp0,call_-1
  7: (71) r0 = *(u8 *)(r0 +0)
  8: (b7) r0 = 1
  9: (95) exit
  from 6 to 8: safe
  processed 16 insns (limit 131072), stack depth 0+0

Basically what happens is that in the subprog we make use of a div/mod
by 0 exception and in the 'normal' subprog's exit path we just return
skb->data back to the main prog. This has the implication that the
verifier thinks we always get a pkt pointer in R0 while we still have
the implicit 'return 0' from the div as an alternative unconditional
return path earlier. Thus, R0 then contains 0, meaning back in the
parent prog we get the address range of [0x0, skb->data_end] as read
and writeable. Similar can be crafted with other pointer register types.

Since i) BPF_ABS/IND is not allowed in programs that contain BPF to BPF
calls (and generally it's also disadvised to use in native eBPF context),
ii) unknown opcodes don't return zero anymore, iii) we don't return an
exception code in dead branches, the only last missing case affected and
to fix is the div/mod handling.

What we would really need is some infrastructure to propagate exceptions
all the way to the original prog, unwinding the current stack and
returning that code to the caller of the BPF program. In user space such
exception handling for similar runtimes is typically implemented with
setjmp(3) and longjmp(3) as one possibility, which is not available in
the kernel, though (kgdb used to implement it in kernel long time ago).
I implemented a PoC exception handling mechanism into the BPF
interpreter by porting setjmp()/longjmp() into x86_64 and adding a new
internal BPF_ABRT opcode that can use a program specific exception code
for all exception cases we have (e.g. div/mod by 0, unknown opcodes,
etc). While this seems to work in the constrained BPF environment
(meaning, here, we don't need to deal with state e.g. from memory
allocations that we would need to undo before going into exception
state), it still has various drawbacks: i) we would need to implement
the setjmp()/longjmp() for every arch supported in the kernel and for
x86_64, arm64, sparc64 JITs currently supporting calls, ii) it has
unconditional additional cost on main program entry to store CPU
register state in the initial setjmp() call, and we would need some way
to pass the jmp_buf down into ___bpf_prog_run() for the main prog and
all subprogs, but also storing on stack is not really nice (other
option would be per-cpu storage for this, but it also has the drawback
that we need to disable preemption for every BPF program type). All in
all this approach would add a lot of complexity.

Another poor-man's solution would be to have some sort of additional
shared register or scratch buffer to hold state for exceptions, and
test that after every call return to chain returns and pass R0 all the
way down to the BPF prog caller. This is also problematic in various
ways: i) an additional register doesn't map well into JITs, and some
other scratch space could only be on per-cpu storage, which, again,
has the side-effect that this only works when we disable preemption,
or somewhere in the input context which is not available everywhere
either, and ii) this adds significant runtime overhead by putting
conditionals after each and every call, as well as implementation
complexity.

Yet another option is to teach the verifier that div/mod can return an
integer, which however is also complex to implement as the verifier
would need to walk such fake 'mov r0,<code>; exit;' sequence and there
would still be no guarantee for having propagation of this further down
to the BPF caller as proper exception code. For the parent prog, it is
also not distinguishable from a normal return of a constant scalar
value.
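A user-space sketch of the setjmp()/longjmp() scheme dismissed above may
help picture it (purely illustrative; the BPF_ABRT opcode and the
jmp_buf plumbing were only ever a PoC and never merged):

  #include <setjmp.h>
  #include <stdio.h>

  static jmp_buf bpf_abort_env;

  static int run_subprog(int divisor)
  {
          /* hypothetical BPF_ABRT: unwind straight to the program caller */
          if (divisor == 0)
                  longjmp(bpf_abort_env, 1);
          return 100 / divisor;
  }

  int main(void)
  {
          /* the unconditional setjmp() on entry is one of the costs cited */
          if (setjmp(bpf_abort_env)) {
                  puts("exception: div by 0");
                  return 0;
          }
          printf("%d\n", run_subprog(0));
          return 0;
  }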
The approach taken here is a completely different one with little
complexity and no additional overhead involved in that we make use of
the fact that a div/mod by 0 is undefined behavior. Instead of bailing
out, we adapt the same behavior as on some major archs like ARMv8 [0]
into eBPF as well: X div 0 results in 0, and X mod 0 results in X. The
aarch64 and aarch32 ISAs do not generate any traps or otherwise abort
program execution for unsigned divides. I verified this also with a
test program compiled by gcc and clang, and the behavior matches with
the spec. Going forward we adapt the eBPF verifier to emit such
rewrites once div/mod by register was seen. cBPF is not touched and
will keep existing 'return 0' semantics. Given the options, it seems
the most suitable from all of them, also since major archs have similar
schemes in place. Given this is all in the realm of undefined behavior,
we still have the option to adapt if deemed necessary and this way we
would also have the option of more flexibility from the LLVM code
generation side (which is then fully visible to the verifier). Thus,
this patch i) fixes the panic seen in the above program and ii) doesn't
bypass the verifier observations.

  [0] ARM Architecture Reference Manual, ARMv8 [ARM DDI 0487B.b]
      http://infocenter.arm.com/help/topic/com.arm.doc.ddi0487b.b/DDI0487B_b_armv8_arm.pdf
      1) aarch64 instruction set: section C3.4.7 and C6.2.279 (UDIV)
         "A division by zero results in a zero being written to
          the destination register, without any indication that
          the division by zero occurred."
      2) aarch32 instruction set: section F1.4.8 and F5.1.263 (UDIV)
         "For the SDIV and UDIV instructions, division by zero
          always returns a zero result."

Fixes: f4d7e40a5b71 ("bpf: introduce function calls (verification)")
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
Signed-off-by: Alexei Starovoitov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Edward Liaw
---
 kernel/bpf/core.c     |  8 --------
 kernel/bpf/verifier.c | 38 ++++++++++++++++++++++++++++++--------
 net/core/filter.c     |  9 ++++++++-
 3 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 4ddb846693bb..2ca36bb440de 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1055,14 +1055,10 @@ static unsigned int ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn,
		(*(s64 *) &DST) >>= IMM;
		CONT;
	ALU64_MOD_X:
-		if (unlikely(SRC == 0))
-			return 0;
		div64_u64_rem(DST, SRC, &tmp);
		DST = tmp;
		CONT;
	ALU_MOD_X:
-		if (unlikely((u32)SRC == 0))
-			return 0;
		tmp = (u32) DST;
		DST = do_div(tmp, (u32) SRC);
		CONT;
@@ -1075,13 +1071,9 @@ static unsigned int ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn,
		DST = do_div(tmp, (u32) IMM);
		CONT;
	ALU64_DIV_X:
-		if (unlikely(SRC == 0))
-			return 0;
		DST = div64_u64(DST, SRC);
		CONT;
	ALU_DIV_X:
-		if (unlikely((u32)SRC == 0))
-			return 0;
		tmp = (u32) DST;
		do_div(tmp, (u32) SRC);
		DST = (u32) tmp;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e8d9ddd5cb18..836791403129 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4839,15 +4839,37 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
	struct bpf_insn_aux_data *aux;

	for (i = 0; i < insn_cnt; i++, insn++) {
-		if (insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
+		if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
+		    insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
+		    insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
-			/* due to JIT bugs clear upper 32-bits of src register
-			 * before div/mod operation
-			 */
-			insn_buf[0] = BPF_MOV32_REG(insn->src_reg, insn->src_reg);
-			insn_buf[1] = *insn;
-			cnt = 2;
-			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
+			struct bpf_insn mask_and_div[] = {
+				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
+				/* Rx div 0 -> 0 */
+				BPF_JMP_IMM(BPF_JNE, insn->src_reg, 0, 2),
+				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
+				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+				*insn,
+			};
+			struct bpf_insn mask_and_mod[] = {
+				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
+				/* Rx mod 0 -> Rx */
+				BPF_JMP_IMM(BPF_JEQ, insn->src_reg, 0, 1),
+				*insn,
+			};
+			struct bpf_insn *patchlet;
+
+			if (insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
+			    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+				patchlet = mask_and_div + (is64 ? 1 : 0);
+				cnt = ARRAY_SIZE(mask_and_div) - (is64 ? 1 : 0);
+			} else {
+				patchlet = mask_and_mod + (is64 ? 1 : 0);
+				cnt = ARRAY_SIZE(mask_and_mod) - (is64 ? 1 : 0);
+			}
+
+			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
			if (!new_prog)
				return -ENOMEM;
diff --git a/net/core/filter.c b/net/core/filter.c
index 29d85a20f4fc..fc665885d57c 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -458,8 +458,15 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
			break;

		if (fp->code == (BPF_ALU | BPF_DIV | BPF_X) ||
-		    fp->code == (BPF_ALU | BPF_MOD | BPF_X))
+		    fp->code == (BPF_ALU | BPF_MOD | BPF_X)) {
			*insn++ = BPF_MOV32_REG(BPF_REG_X, BPF_REG_X);
+			/* Error with exception code on div/mod by 0.
+			 * For cBPF programs, this was always return 0.
+			 */
+			*insn++ = BPF_JMP_IMM(BPF_JNE, BPF_REG_X, 0, 2);
+			*insn++ = BPF_ALU32_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
+			*insn++ = BPF_EXIT_INSN();
+		}

		*insn = BPF_RAW_INSN(fp->code, BPF_REG_A, BPF_REG_X, 0, fp->k);
		break;
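To restate the semantics the patchlets above encode, here is a plain-C
sketch of the runtime behavior they guarantee (an illustration of the
semantics only, not the emitted BPF instructions):

  #include <stdint.h>
  #include <stdio.h>

  static uint32_t bpf_div32(uint32_t dst, uint32_t src)
  {
          return src ? dst / src : 0;     /* Rx div 0 -> 0 */
  }

  static uint32_t bpf_mod32(uint32_t dst, uint32_t src)
  {
          return src ? dst % src : dst;   /* Rx mod 0 -> Rx */
  }

  int main(void)
  {
          printf("%u %u\n", bpf_div32(7, 0), bpf_mod32(7, 0)); /* 0 7 */
          return 0;
  }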
From patchwork Fri Feb 24 03:40:18 2023
X-Patchwork-Id: 13150907
From: Edward Liaw <edliaw@google.com>
Date: Fri, 24 Feb 2023 03:40:18 +0000
Subject: [PATCH 4.14 v3 3/4] bpf: Fix 32 bit src register truncation on div/mod
To: stable@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann,
 "David S. Miller"
Cc: bpf@vger.kernel.org, kernel-team@android.com, Edward Liaw,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org, John Fastabend,
 Salvatore Bonaccorso, Thadeu Lima de Souza Cascardo
Message-ID: <20230224034020.2080637-4-edliaw@google.com>
In-Reply-To: <20230224034020.2080637-1-edliaw@google.com>
References: <20230224034020.2080637-1-edliaw@google.com>

From: Daniel Borkmann

Commit e88b2c6e5a4d9ce30d75391e4d950da74bb2bd90 upstream.

While reviewing a different fix, John and I noticed an oddity in one of
the BPF program dumps that stood out, for example:

  # bpftool p d x i 13
   0: (b7) r0 = 808464450
   1: (b4) w4 = 808464432
   2: (bc) w0 = w0
   3: (15) if r0 == 0x0 goto pc+1
   4: (9c) w4 %= w0
  [...]

In line 2 we noticed that the mov32 would 32 bit truncate the original
src register for the div/mod operation. While for the two operations
the dst register is typically marked unknown, e.g. from
adjust_scalar_min_max_vals(), the src register is not, and thus the
verifier keeps tracking the original bounds, simplified:

  0: R1=ctx(id=0,off=0,imm=0) R10=fp0
  0: (b7) r0 = -1
  1: R0_w=invP-1 R1=ctx(id=0,off=0,imm=0) R10=fp0
  1: (b7) r1 = -1
  2: R0_w=invP-1 R1_w=invP-1 R10=fp0
  2: (3c) w0 /= w1
  3: R0_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R1_w=invP-1 R10=fp0
  3: (77) r1 >>= 32
  4: R0_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R1_w=invP4294967295 R10=fp0
  4: (bf) r0 = r1
  5: R0_w=invP4294967295 R1_w=invP4294967295 R10=fp0
  5: (95) exit
  processed 6 insns (limit 1000000) max_states_per_insn 0 total_states 0
  peak_states 0 mark_read 0

The runtime result of r0 at exit is 0 instead of the expected -1. Remove
the verifier mov32 src rewrite in div/mod and replace it with a jmp32
test instead.

After the fix, we end up with the following code generation for
dividend r1 and divisor r6:

  div, 64 bit:                        div, 32 bit:

  0: (b7) r6 = 8                      0: (b7) r6 = 8
  1: (b7) r1 = 8                      1: (b7) r1 = 8
  2: (55) if r6 != 0x0 goto pc+2      2: (56) if w6 != 0x0 goto pc+2
  3: (ac) w1 ^= w1                    3: (ac) w1 ^= w1
  4: (05) goto pc+1                   4: (05) goto pc+1
  5: (3f) r1 /= r6                    5: (3c) w1 /= w6
  6: (b7) r0 = 0                      6: (b7) r0 = 0
  7: (95) exit                        7: (95) exit

  mod, 64 bit:                        mod, 32 bit:

  0: (b7) r6 = 8                      0: (b7) r6 = 8
  1: (b7) r1 = 8                      1: (b7) r1 = 8
  2: (15) if r6 == 0x0 goto pc+1      2: (16) if w6 == 0x0 goto pc+1
  3: (9f) r1 %= r6                    3: (9c) w1 %= w6
  4: (b7) r0 = 0                      4: (b7) r0 = 0
  5: (95) exit                        5: (95) exit

x86 in particular can throw a 'divide error' exception for the div
instruction not only for the divisor being zero, but also for the case
when the quotient is too large for the designated register. For the
edx:eax and rdx:rax dividend pairs it is not an issue in the x86 BPF
JIT since we always zero edx (rdx). Hence really the only protection
needed is against the divisor being zero.

[Salvatore Bonaccorso: This is an earlier version of the patch provided
by Daniel Borkmann which does not rely on availability of the BPF_JMP32
instruction class. This means it is not even strictly a backport of the
upstream commit mentioned but based on Daniel's and John's work to
address the issue.]

Fixes: 68fda450a7df ("bpf: fix 32-bit divide by zero")
Co-developed-by: John Fastabend
Signed-off-by: John Fastabend
Signed-off-by: Daniel Borkmann
Tested-by: Salvatore Bonaccorso
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Edward Liaw
---
 include/linux/filter.h | 24 ++++++++++++++++++++++++
 kernel/bpf/verifier.c  | 22 +++++++++++-----------
 2 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 7c0e616362f0..55a639609070 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -67,6 +67,14 @@ struct bpf_prog_aux;

 /* ALU ops on registers, bpf_add|sub|...: dst_reg += src_reg */

+#define BPF_ALU_REG(CLASS, OP, DST, SRC)			\
+	((struct bpf_insn) {					\
+		.code  = CLASS | BPF_OP(OP) | BPF_X,		\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = 0,					\
+		.imm   = 0 })
+
 #define BPF_ALU64_REG(OP, DST, SRC)				\
	((struct bpf_insn) {					\
		.code  = BPF_ALU64 | BPF_OP(OP) | BPF_X,	\
@@ -113,6 +121,14 @@ struct bpf_prog_aux;

 /* Short form of mov, dst_reg = src_reg */

+#define BPF_MOV_REG(CLASS, DST, SRC)				\
+	((struct bpf_insn) {					\
+		.code  = CLASS | BPF_MOV | BPF_X,		\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = 0,					\
+		.imm   = 0 })
+
 #define BPF_MOV64_REG(DST, SRC)					\
	((struct bpf_insn) {					\
		.code  = BPF_ALU64 | BPF_MOV | BPF_X,		\
@@ -147,6 +163,14 @@ struct bpf_prog_aux;
		.off   = 0,					\
		.imm   = IMM })

+#define BPF_RAW_REG(insn, DST, SRC)				\
+	((struct bpf_insn) {					\
+		.code  = (insn).code,				\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = (insn).off,				\
+		.imm   = (insn).imm })
+
 /* BPF_LD_IMM64 macro encodes single 'load 64-bit immediate' insn */
 #define BPF_LD_IMM64(DST, IMM)					\
	BPF_LD_IMM64_RAW(DST, 0, IMM)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 836791403129..9f04d413df92 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4845,28 +4845,28 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
			struct bpf_insn mask_and_div[] = {
-				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
+				BPF_MOV_REG(BPF_CLASS(insn->code), BPF_REG_AX, insn->src_reg),
				/* Rx div 0 -> 0 */
-				BPF_JMP_IMM(BPF_JNE, insn->src_reg, 0, 2),
-				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
+				BPF_JMP_IMM(BPF_JEQ, BPF_REG_AX, 0, 2),
+				BPF_RAW_REG(*insn, insn->dst_reg, BPF_REG_AX),
				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-				*insn,
+				BPF_ALU_REG(BPF_CLASS(insn->code), BPF_XOR, insn->dst_reg, insn->dst_reg),
			};
			struct bpf_insn mask_and_mod[] = {
-				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
+				BPF_MOV_REG(BPF_CLASS(insn->code), BPF_REG_AX, insn->src_reg),
				/* Rx mod 0 -> Rx */
-				BPF_JMP_IMM(BPF_JEQ, insn->src_reg, 0, 1),
-				*insn,
+				BPF_JMP_IMM(BPF_JEQ, BPF_REG_AX, 0, 1),
+				BPF_RAW_REG(*insn, insn->dst_reg, BPF_REG_AX),
			};
			struct bpf_insn *patchlet;

			if (insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
			    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
-				patchlet = mask_and_div + (is64 ? 1 : 0);
-				cnt = ARRAY_SIZE(mask_and_div) - (is64 ? 1 : 0);
+				patchlet = mask_and_div;
+				cnt = ARRAY_SIZE(mask_and_div);
			} else {
-				patchlet = mask_and_mod + (is64 ? 1 : 0);
-				cnt = ARRAY_SIZE(mask_and_mod) - (is64 ? 1 : 0);
+				patchlet = mask_and_mod;
+				cnt = ARRAY_SIZE(mask_and_mod);
			}

			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
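To see the divergence this patch removes, consider a plain-C model of
what the old and new rewrites do to the divisor register at runtime (an
illustration only, not verifier code; register names follow the commit
message above):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t r1 = (uint64_t)-1;     /* verifier tracks r1 = -1 (64 bit) */

          /* old rewrite, runtime effect: "w1 = w1" zero-extends r1 itself */
          uint64_t r1_old = (uint32_t)r1;

          /* new rewrite, runtime effect: divisor copied into AX, r1 intact */
          uint64_t ax = (uint32_t)r1;
          uint64_t r1_new = r1;
          (void)ax;

          /* a later "r1 >>= 32" (logical shift in BPF) now matches the
           * bounds the verifier derived from r1 = -1
           */
          printf("old: %llx, new: %llx\n",
                 (unsigned long long)(r1_old >> 32),   /* 0: diverges */
                 (unsigned long long)(r1_new >> 32));  /* ffffffff: matches */
          return 0;
  }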
From patchwork Fri Feb 24 03:40:19 2023
X-Patchwork-Id: 13150908
From: Edward Liaw <edliaw@google.com>
Date: Fri, 24 Feb 2023 03:40:19 +0000
Subject: [PATCH 4.14 v3 4/4] bpf: Fix truncation handling for mod32 dst reg wrt zero
To: stable@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann,
 "David S. Miller"
Cc: bpf@vger.kernel.org, kernel-team@android.com, Edward Liaw,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org, John Fastabend,
 Salvatore Bonaccorso, Thadeu Lima de Souza Cascardo
Message-ID: <20230224034020.2080637-5-edliaw@google.com>
In-Reply-To: <20230224034020.2080637-1-edliaw@google.com>
References: <20230224034020.2080637-1-edliaw@google.com>

From: Daniel Borkmann

Commit 9b00f1b78809309163dda2d044d9e94a3c0248a3 upstream.

Recently noticed that when mod32 with a known src reg of 0 is performed,
then the dst register is 32-bit truncated in verifier:

  0: R1=ctx(id=0,off=0,imm=0) R10=fp0
  0: (b7) r0 = 0
  1: R0_w=inv0 R1=ctx(id=0,off=0,imm=0) R10=fp0
  1: (b7) r1 = -1
  2: R0_w=inv0 R1_w=inv-1 R10=fp0
  2: (b4) w2 = -1
  3: R0_w=inv0 R1_w=inv-1 R2_w=inv4294967295 R10=fp0
  3: (9c) w1 %= w0
  4: R0_w=inv0 R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R2_w=inv4294967295 R10=fp0
  4: (b7) r0 = 1
  5: R0_w=inv1 R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R2_w=inv4294967295 R10=fp0
  5: (1d) if r1 == r2 goto pc+1
   R0_w=inv1 R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R2_w=inv4294967295 R10=fp0
  6: R0_w=inv1 R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R2_w=inv4294967295 R10=fp0
  6: (b7) r0 = 2
  7: R0_w=inv2 R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R2_w=inv4294967295 R10=fp0
  7: (95) exit
  7: R0=inv1 R1=inv(id=0,umin_value=4294967295,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R2=inv4294967295 R10=fp0
  7: (95) exit

However, as a runtime result, we get 2 instead of 1, meaning the dst
register does not contain (u32)-1 in this case. The reason is fairly
straightforward given the 0 test leaves the dst register as-is:

  # ./bpftool p d x i 23
   0: (b7) r0 = 0
   1: (b7) r1 = -1
   2: (b4) w2 = -1
   3: (16) if w0 == 0x0 goto pc+1
   4: (9c) w1 %= w0
   5: (b7) r0 = 1
   6: (1d) if r1 == r2 goto pc+1
   7: (b7) r0 = 2
   8: (95) exit

This was originally not an issue given the dst register was marked as
completely unknown (aka 64 bit unknown). However, after 468f6eafa6c4
("bpf: fix 32-bit ALU op verification") the verifier casts the register
output to 32 bit, and hence it becomes 32 bit unknown. Note that for
the case where the src register is unknown, the dst register is marked
64 bit unknown. After the fix, the register is truncated by the runtime
and the test passes:

  # ./bpftool p d x i 23
   0: (b7) r0 = 0
   1: (b7) r1 = -1
   2: (b4) w2 = -1
   3: (16) if w0 == 0x0 goto pc+2
   4: (9c) w1 %= w0
   5: (05) goto pc+1
   6: (bc) w1 = w1
   7: (b7) r0 = 1
   8: (1d) if r1 == r2 goto pc+1
   9: (b7) r0 = 2
  10: (95) exit

Semantics also match with {R,W}x mod{64,32} 0 -> {R,W}x. Invalid div
has always been {R,W}x div{64,32} 0 -> 0.
Rewrites are as follows:

  mod32:                            mod64:
  (16) if w0 == 0x0 goto pc+2       (15) if r0 == 0x0 goto pc+1
  (9c) w1 %= w0                     (9f) r1 %= r0
  (05) goto pc+1
  (bc) w1 = w1

[Salvatore Bonaccorso: This is an earlier version based on work by
Daniel and John which does not rely on availability of the BPF_JMP32
instruction class. This means it is not even strictly a backport of the
upstream commit mentioned but based on Daniel's and John's work to
address the issue and was finalized by Thadeu Lima de Souza Cascardo.]

Fixes: 468f6eafa6c4 ("bpf: fix 32-bit ALU op verification")
Signed-off-by: Daniel Borkmann
Reviewed-by: John Fastabend
Tested-by: Salvatore Bonaccorso
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Edward Liaw
---
 kernel/bpf/verifier.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9f04d413df92..a55e264cdb54 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4846,7 +4846,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
			struct bpf_insn mask_and_div[] = {
				BPF_MOV_REG(BPF_CLASS(insn->code), BPF_REG_AX, insn->src_reg),
-				/* Rx div 0 -> 0 */
+				/* [R,W]x div 0 -> 0 */
				BPF_JMP_IMM(BPF_JEQ, BPF_REG_AX, 0, 2),
				BPF_RAW_REG(*insn, insn->dst_reg, BPF_REG_AX),
				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
@@ -4854,9 +4854,10 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
			};
			struct bpf_insn mask_and_mod[] = {
				BPF_MOV_REG(BPF_CLASS(insn->code), BPF_REG_AX, insn->src_reg),
-				/* Rx mod 0 -> Rx */
-				BPF_JMP_IMM(BPF_JEQ, BPF_REG_AX, 0, 1),
+				BPF_JMP_IMM(BPF_JEQ, BPF_REG_AX, 0, 1 + (is64 ? 0 : 1)),
				BPF_RAW_REG(*insn, insn->dst_reg, BPF_REG_AX),
+				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+				BPF_MOV32_REG(insn->dst_reg, insn->dst_reg),
			};
			struct bpf_insn *patchlet;

@@ -4866,7 +4867,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
				cnt = ARRAY_SIZE(mask_and_div);
			} else {
				patchlet = mask_and_mod;
-				cnt = ARRAY_SIZE(mask_and_mod);
+				cnt = ARRAY_SIZE(mask_and_mod) - (is64 ? 2 : 0);
			}

			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
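Summing up the mod32 corner case this patch closes, a plain-C sketch of
the intended semantics (illustrative only, not the emitted BPF):

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t mod32(uint64_t dst, uint64_t src)
  {
          uint32_t w_src = (uint32_t)src;

          if (w_src == 0)
                  return (uint32_t)dst;   /* the extra "w1 = w1" truncation */
          return (uint32_t)dst % w_src;   /* 32-bit result, zero-extended */
  }

  int main(void)
  {
          /* r1 = -1, w1 %= w0 with w0 == 0: result must be (u32)-1, not -1 */
          printf("%llx\n", (unsigned long long)mod32((uint64_t)-1, 0)); /* ffffffff */
          return 0;
  }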