From patchwork Fri Jul 12 08:01:24 2024
X-Patchwork-Submitter: Shung-Hsi Yu
X-Patchwork-Id: 13731368
X-Patchwork-Delegate: bpf@iogearbox.net
From: Shung-Hsi Yu
To: bpf@vger.kernel.org, Alexei Starovoitov, Jiri Olsa
Cc: Daniel Borkmann, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Shung-Hsi Yu
Subject: [PATCH bpf-next v3 1/3] bpf: fix overflow check in adjust_jmp_off()
Date: Fri, 12 Jul 2024 16:01:24 +0800
Message-ID: <20240712080127.136608-2-shung-hsi.yu@suse.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240712080127.136608-1-shung-hsi.yu@suse.com>
References: <20240712080127.136608-1-shung-hsi.yu@suse.com>

adjust_jmp_off() incorrectly used the insn->imm field for all overflow
checks, but that should only be done for the BPF_JMP32 | BPF_JA case,
not the general jump-instruction case. Fix it by using insn->off for
the overflow check in the general case.

Fixes: 5337ac4c9b80 ("bpf: Fix the corner case with may_goto and jump to the 1st insn.")
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c0263fb5ca4b..cf2eb07ddf28 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18852,7 +18852,7 @@ static int adjust_jmp_off(struct bpf_prog *prog, u32 tgt_idx, u32 delta)
 	} else {
 		if (i + 1 + insn->off != tgt_idx)
 			continue;
-		if (signed_add16_overflows(insn->imm, delta))
+		if (signed_add16_overflows(insn->off, delta))
 			return -ERANGE;
 		insn->off += delta;
 	}
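
To make the bug concrete, here is a minimal stand-alone user-space
sketch (not kernel code; the variables merely mirror the imm and off
fields of struct bpf_insn). signed_add16_overflows() is copied from the
verifier as it stood before patch 2 removes it. A clean overflow check
on the 32-bit imm field says nothing about the 16-bit off field that is
actually incremented:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Same helper the verifier used: do the add in u16, where overflow
 * is well-defined, then compare against the first operand. */
static bool signed_add16_overflows(int16_t a, int16_t b)
{
	int16_t res = (int16_t)((uint16_t)a + (uint16_t)b);

	if (b < 0)
		return res > a;
	return res < a;
}

int main(void)
{
	int32_t imm = 0;           /* unused by non-JMP32 jump insns */
	int16_t off = INT16_MAX;   /* jump offset already at its limit */
	int16_t delta = 1;

	/* Buggy check: imm cannot overflow here, so this reports 0... */
	printf("check on imm: %d\n", signed_add16_overflows(imm, delta));
	/* ...yet off + delta would wrap around to INT16_MIN. */
	printf("check on off: %d\n", signed_add16_overflows(off, delta));
	return 0;
}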
From patchwork Fri Jul 12 08:01:25 2024
X-Patchwork-Submitter: Shung-Hsi Yu
X-Patchwork-Id: 13731369
X-Patchwork-Delegate: bpf@iogearbox.net
From: Shung-Hsi Yu
To: bpf@vger.kernel.org, Alexei Starovoitov, Jiri Olsa
Cc: Daniel Borkmann, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Shung-Hsi Yu
Subject: [PATCH bpf-next v3 2/3] bpf: use check_add_overflow() to check for addition overflows
Date: Fri, 12 Jul 2024 16:01:25 +0800
Message-ID: <20240712080127.136608-3-shung-hsi.yu@suse.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240712080127.136608-1-shung-hsi.yu@suse.com>
References: <20240712080127.136608-1-shung-hsi.yu@suse.com>

signed_add*_overflows() was added back when there was no overflow-check
helper.
With the introduction of such helpers in commit f0907827a8a91
("compiler.h: enable builtin overflow checkers and add fallback code"),
we can drop signed_add*_overflows() in kernel/bpf/verifier.c and use the
generic check_add_overflow() instead. This makes future refactoring
easier and takes advantage of compiler-emitted hardware instructions
that efficiently implement these checks.

After the change GCC 13.3.0 generates cleaner assembly on x86_64:

	err = adjust_scalar_min_max_vals(env, insn, dst_reg, *src_reg);
   13625:	mov    0x28(%rbx),%r9	/* r9 = src_reg->smin_value */
   13629:	mov    0x30(%rbx),%rcx	/* rcx = src_reg->smax_value */
	...
	if (check_add_overflow(*dst_smin, src_reg->smin_value, dst_smin) ||
   141c1:	mov    %r9,%rax
   141c4:	add    0x28(%r12),%rax
   141c9:	mov    %rax,0x28(%r12)
   141ce:	jo     146e4
	    check_add_overflow(*dst_smax, src_reg->smax_value, dst_smax)) {
   141d4:	add    0x30(%r12),%rcx
   141d9:	mov    %rcx,0x30(%r12)
	if (check_add_overflow(*dst_smin, src_reg->smin_value, dst_smin) ||
   141de:	jo     146e4
	...
		*dst_smin = S64_MIN;
   146e4:	movabs $0x8000000000000000,%rax
   146ee:	mov    %rax,0x28(%r12)
		*dst_smax = S64_MAX;
   146f3:	sub    $0x1,%rax
   146f7:	mov    %rax,0x30(%r12)

Before the change it gives:

	s64 smin_val = src_reg->smin_value;
     675:	mov    0x28(%rsi),%r8
	s64 smax_val = src_reg->smax_value;
	u64 umin_val = src_reg->umin_value;
	u64 umax_val = src_reg->umax_value;
     679:	mov    %rdi,%rax	/* rax = dst_reg */
	if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
     67c:	mov    0x28(%rdi),%rdi	/* rdi = dst_reg->smin_value */
	u64 umin_val = src_reg->umin_value;
     680:	mov    0x38(%rsi),%rdx
	u64 umax_val = src_reg->umax_value;
     684:	mov    0x40(%rsi),%rcx
	s64 res = (s64)((u64)a + (u64)b);
     688:	lea    (%r8,%rdi,1),%r9	/* r9 = dst_reg->smin_value + src_reg->smin_value */
	return res < a;
     68c:	cmp    %r9,%rdi
     68f:	setg   %r10b	/* r10b = (dst_reg->smin_value + src_reg->smin_value) > dst_reg->smin_value */
	if (b < 0)
     693:	test   %r8,%r8
     696:	js     72b
	    signed_add_overflows(dst_reg->smax_value, smax_val)) {
		dst_reg->smin_value = S64_MIN;
		dst_reg->smax_value = S64_MAX;
     69c:	movabs $0x7fffffffffffffff,%rdi
	s64 smax_val = src_reg->smax_value;
     6a6:	mov    0x30(%rsi),%r8
	dst_reg->smin_value = S64_MIN;
     6aa:	movabs $0x8000000000000000,%rsi
	if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
     6b4:	test   %r10b,%r10b	/* (dst_reg->smin_value + src_reg->smin_value) > dst_reg->smin_value ? goto 6cb */
     6b7:	jne    6cb
	    signed_add_overflows(dst_reg->smax_value, smax_val)) {
     6b9:	mov    0x30(%rax),%r10	/* r10 = dst_reg->smax_value */
	s64 res = (s64)((u64)a + (u64)b);
     6bd:	lea    (%r10,%r8,1),%r11	/* r11 = dst_reg->smax_value + src_reg->smax_value */
	if (b < 0)
     6c1:	test   %r8,%r8
     6c4:	js     71e
	if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
     6c6:	cmp    %r11,%r10	/* (dst_reg->smax_value + src_reg->smax_value) <= dst_reg->smax_value ? goto 723 */
     6c9:	jle    723
	} else {
		dst_reg->smin_value += smin_val;
		dst_reg->smax_value += smax_val;
	}
     6cb:	mov    %rsi,0x28(%rax)
	...
     6d5:	mov    %rdi,0x30(%rax)
	...
	if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
     71e:	cmp    %r11,%r10
     721:	jl     6cb
	dst_reg->smin_value += smin_val;
     723:	mov    %r9,%rsi
	dst_reg->smax_value += smax_val;
     726:	mov    %r11,%rdi
     729:	jmp    6cb
	return res > a;
     72b:	cmp    %r9,%rdi
     72e:	setl   %r10b
     732:	jmp    69c
     737:	nopw   0x0(%rax,%rax,1)

Note: unlike adjust_ptr_min_max_vals() and scalar*_min_max_add(), it is
necessary to introduce an intermediate variable in adjust_jmp_off() to
keep the functional behavior unchanged. Without an intermediate
variable, imm/off would be altered even on overflow.
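
The note above can be demonstrated in isolation. A user-space sketch,
assuming only the documented GCC/Clang __builtin_add_overflow()
semantics that the kernel's check_add_overflow() wraps: the builtin
always stores the wrapped sum through its third argument, even when it
reports overflow, so writing straight into insn->imm/insn->off would
alter them on the error path:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int16_t off = INT16_MAX;
	int16_t tmp;

	/* Writing straight into 'off' clobbers it on overflow... */
	bool ovf = __builtin_add_overflow(off, (int16_t)1, &off);
	printf("direct: ovf=%d off=%d\n", ovf, off);	/* off has wrapped */

	/* ...so add into a temporary and only commit on success. */
	off = INT16_MAX;
	if (!__builtin_add_overflow(off, (int16_t)1, &tmp))
		off = tmp;
	printf("staged: off=%d (unchanged)\n", off);
	return 0;
}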
Suggested-by: Jiri Olsa
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
---
Changes since v2:
- Refactor the use of signed_add*_overflows() in adjust_jmp_off() and
  remove signed_add16_overflows(); both were added in commit
  5337ac4c9b80 ("bpf: Fix the corner case with may_goto and jump to the
  1st insn.")
---
 kernel/bpf/verifier.c | 114 +++++++++++++-----------------------------
 1 file changed, 34 insertions(+), 80 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cf2eb07ddf28..0126c9c80c58 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12726,36 +12726,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
-static bool signed_add_overflows(s64 a, s64 b)
-{
-	/* Do the add in u64, where overflow is well-defined */
-	s64 res = (s64)((u64)a + (u64)b);
-
-	if (b < 0)
-		return res > a;
-	return res < a;
-}
-
-static bool signed_add32_overflows(s32 a, s32 b)
-{
-	/* Do the add in u32, where overflow is well-defined */
-	s32 res = (s32)((u32)a + (u32)b);
-
-	if (b < 0)
-		return res > a;
-	return res < a;
-}
-
-static bool signed_add16_overflows(s16 a, s16 b)
-{
-	/* Do the add in u16, where overflow is well-defined */
-	s16 res = (s16)((u16)a + (u16)b);
-
-	if (b < 0)
-		return res > a;
-	return res < a;
-}
-
 static bool signed_sub_overflows(s64 a, s64 b)
 {
 	/* Do the sub in u64, where overflow is well-defined */
@@ -13257,21 +13227,15 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		 * added into the variable offset, and we copy the fixed offset
 		 * from ptr_reg.
 		 */
-		if (signed_add_overflows(smin_ptr, smin_val) ||
-		    signed_add_overflows(smax_ptr, smax_val)) {
+		if (check_add_overflow(smin_ptr, smin_val, &dst_reg->smin_value) ||
+		    check_add_overflow(smax_ptr, smax_val, &dst_reg->smax_value)) {
 			dst_reg->smin_value = S64_MIN;
 			dst_reg->smax_value = S64_MAX;
-		} else {
-			dst_reg->smin_value = smin_ptr + smin_val;
-			dst_reg->smax_value = smax_ptr + smax_val;
 		}
-		if (umin_ptr + umin_val < umin_ptr ||
-		    umax_ptr + umax_val < umax_ptr) {
+		if (check_add_overflow(umin_ptr, umin_val, &dst_reg->umin_value) ||
+		    check_add_overflow(umax_ptr, umax_val, &dst_reg->umax_value)) {
 			dst_reg->umin_value = 0;
 			dst_reg->umax_value = U64_MAX;
-		} else {
-			dst_reg->umin_value = umin_ptr + umin_val;
-			dst_reg->umax_value = umax_ptr + umax_val;
 		}
 		dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
 		dst_reg->off = ptr_reg->off;
@@ -13374,52 +13338,40 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 static void scalar32_min_max_add(struct bpf_reg_state *dst_reg,
 				 struct bpf_reg_state *src_reg)
 {
-	s32 smin_val = src_reg->s32_min_value;
-	s32 smax_val = src_reg->s32_max_value;
-	u32 umin_val = src_reg->u32_min_value;
-	u32 umax_val = src_reg->u32_max_value;
+	s32 *dst_smin = &dst_reg->s32_min_value;
+	s32 *dst_smax = &dst_reg->s32_max_value;
+	u32 *dst_umin = &dst_reg->u32_min_value;
+	u32 *dst_umax = &dst_reg->u32_max_value;
 
-	if (signed_add32_overflows(dst_reg->s32_min_value, smin_val) ||
-	    signed_add32_overflows(dst_reg->s32_max_value, smax_val)) {
-		dst_reg->s32_min_value = S32_MIN;
-		dst_reg->s32_max_value = S32_MAX;
-	} else {
-		dst_reg->s32_min_value += smin_val;
-		dst_reg->s32_max_value += smax_val;
+	if (check_add_overflow(*dst_smin, src_reg->s32_min_value, dst_smin) ||
+	    check_add_overflow(*dst_smax, src_reg->s32_max_value, dst_smax)) {
+		*dst_smin = S32_MIN;
+		*dst_smax = S32_MAX;
 	}
-	if (dst_reg->u32_min_value + umin_val < umin_val ||
-	    dst_reg->u32_max_value + umax_val < umax_val) {
-		dst_reg->u32_min_value = 0;
-		dst_reg->u32_max_value = U32_MAX;
-	} else {
-		dst_reg->u32_min_value += umin_val;
-		dst_reg->u32_max_value += umax_val;
+	if (check_add_overflow(*dst_umin, src_reg->u32_min_value, dst_umin) ||
+	    check_add_overflow(*dst_umax, src_reg->u32_max_value, dst_umax)) {
+		*dst_umin = 0;
+		*dst_umax = U32_MAX;
 	}
 }
 
 static void scalar_min_max_add(struct bpf_reg_state *dst_reg,
 			       struct bpf_reg_state *src_reg)
 {
-	s64 smin_val = src_reg->smin_value;
-	s64 smax_val = src_reg->smax_value;
-	u64 umin_val = src_reg->umin_value;
-	u64 umax_val = src_reg->umax_value;
+	s64 *dst_smin = &dst_reg->smin_value;
+	s64 *dst_smax = &dst_reg->smax_value;
+	u64 *dst_umin = &dst_reg->umin_value;
+	u64 *dst_umax = &dst_reg->umax_value;
 
-	if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
-	    signed_add_overflows(dst_reg->smax_value, smax_val)) {
-		dst_reg->smin_value = S64_MIN;
-		dst_reg->smax_value = S64_MAX;
-	} else {
-		dst_reg->smin_value += smin_val;
-		dst_reg->smax_value += smax_val;
+	if (check_add_overflow(*dst_smin, src_reg->smin_value, dst_smin) ||
+	    check_add_overflow(*dst_smax, src_reg->smax_value, dst_smax)) {
+		*dst_smin = S64_MIN;
+		*dst_smax = S64_MAX;
 	}
-	if (dst_reg->umin_value + umin_val < umin_val ||
-	    dst_reg->umax_value + umax_val < umax_val) {
-		dst_reg->umin_value = 0;
-		dst_reg->umax_value = U64_MAX;
-	} else {
-		dst_reg->umin_value += umin_val;
-		dst_reg->umax_value += umax_val;
+	if (check_add_overflow(*dst_umin, src_reg->umin_value, dst_umin) ||
+	    check_add_overflow(*dst_umax, src_reg->umax_value, dst_umax)) {
+		*dst_umin = 0;
+		*dst_umax = U64_MAX;
 	}
 }
 
@@ -18835,6 +18787,8 @@ static int adjust_jmp_off(struct bpf_prog *prog, u32 tgt_idx, u32 delta)
 {
 	struct bpf_insn *insn = prog->insnsi;
 	u32 insn_cnt = prog->len, i;
+	s32 imm;
+	s16 off;
 
 	for (i = 0; i < insn_cnt; i++, insn++) {
 		u8 code = insn->code;
@@ -18846,15 +18800,15 @@ static int adjust_jmp_off(struct bpf_prog *prog, u32 tgt_idx, u32 delta)
 		if (insn->code == (BPF_JMP32 | BPF_JA)) {
 			if (i + 1 + insn->imm != tgt_idx)
 				continue;
-			if (signed_add32_overflows(insn->imm, delta))
+			if (check_add_overflow(insn->imm, delta, &imm))
 				return -ERANGE;
-			insn->imm += delta;
+			insn->imm = imm;
 		} else {
 			if (i + 1 + insn->off != tgt_idx)
 				continue;
-			if (signed_add16_overflows(insn->off, delta))
+			if (check_add_overflow(insn->off, delta, &off))
 				return -ERANGE;
-			insn->off += delta;
+			insn->off = off;
 		}
 	}
 	return 0;
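
As a usage illustration of the pattern this patch converts to, here is
a self-contained user-space sketch. struct bounds and bounds_add() are
hypothetical stand-ins for bpf_reg_state and scalar_min_max_add(), and
__builtin_add_overflow() stands in for the kernel's
check_add_overflow():

#include <stdint.h>
#include <stdio.h>

struct bounds {
	int64_t smin;
	int64_t smax;
};

static void bounds_add(struct bounds *dst, const struct bounds *src)
{
	/* Try the adds in place; the builtins store the wrapped result
	 * on overflow, but the widened assignments below override it. */
	if (__builtin_add_overflow(dst->smin, src->smin, &dst->smin) ||
	    __builtin_add_overflow(dst->smax, src->smax, &dst->smax)) {
		/* Overflow possible, we know nothing */
		dst->smin = INT64_MIN;
		dst->smax = INT64_MAX;
	}
}

int main(void)
{
	struct bounds d = { .smin = 0, .smax = INT64_MAX };
	struct bounds s = { .smin = 0, .smax = 1 };

	bounds_add(&d, &s);	/* smax overflows -> full 64-bit range */
	printf("[%lld, %lld]\n", (long long)d.smin, (long long)d.smax);
	return 0;
}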
From patchwork Fri Jul 12 08:01:26 2024
X-Patchwork-Submitter: Shung-Hsi Yu
X-Patchwork-Id: 13731370
X-Patchwork-Delegate: bpf@iogearbox.net
From: Shung-Hsi Yu
To: bpf@vger.kernel.org, Alexei Starovoitov, Jiri Olsa
Cc: Daniel Borkmann, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Shung-Hsi Yu
Subject: [PATCH bpf-next v3 3/3] bpf: use check_sub_overflow() to check for subtraction overflows
Date: Fri, 12 Jul 2024 16:01:26 +0800
Message-ID: <20240712080127.136608-4-shung-hsi.yu@suse.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240712080127.136608-1-shung-hsi.yu@suse.com>
References: <20240712080127.136608-1-shung-hsi.yu@suse.com>
Similar to the previous patch, which drops signed_add*_overflows() in
favor of the compiler-builtin-based check_add_overflow(), do the same
for signed_sub*_overflows(): replace them with the generic
check_sub_overflow() to make future refactoring easier and have the
checks implemented more efficiently. The unsigned overflow checks for
subtraction do not use helpers and are simple enough already, so they
are left untouched.

After the change GCC 13.3.0 generates cleaner assembly on x86_64:

	if (check_sub_overflow(*dst_smin, src_reg->smax_value, dst_smin) ||
   139bf:	mov    0x28(%r12),%rax
   139c4:	mov    %edx,0x54(%r12)
   139c9:	sub    %r11,%rax
   139cc:	mov    %rax,0x28(%r12)
   139d1:	jo     14627
	    check_sub_overflow(*dst_smax, src_reg->smin_value, dst_smax)) {
   139d7:	mov    0x30(%r12),%rax
   139dc:	sub    %r9,%rax
   139df:	mov    %rax,0x30(%r12)
	if (check_sub_overflow(*dst_smin, src_reg->smax_value, dst_smin) ||
   139e4:	jo     14627
	...
		*dst_smin = S64_MIN;
   14627:	movabs $0x8000000000000000,%rax
   14631:	mov    %rax,0x28(%r12)
		*dst_smax = S64_MAX;
   14636:	sub    $0x1,%rax
   1463a:	mov    %rax,0x30(%r12)

Before the change it gives:

	if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
   13a50:	mov    0x28(%r12),%rdi
   13a55:	mov    %edx,0x54(%r12)
		dst_reg->smax_value = S64_MAX;
   13a5a:	movabs $0x7fffffffffffffff,%rdx
   13a64:	mov    %eax,0x50(%r12)
		dst_reg->smin_value = S64_MIN;
   13a69:	movabs $0x8000000000000000,%rax
	s64 res = (s64)((u64)a - (u64)b);
   13a73:	mov    %rdi,%rsi
   13a76:	sub    %rcx,%rsi
	if (b < 0)
   13a79:	test   %rcx,%rcx
   13a7c:	js     145ea
	if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
   13a82:	cmp    %rsi,%rdi
   13a85:	jl     13ac7
	    signed_sub_overflows(dst_reg->smax_value, smin_val)) {
   13a87:	mov    0x30(%r12),%r8
	s64 res = (s64)((u64)a - (u64)b);
   13a8c:	mov    %r8,%rax
   13a8f:	sub    %r9,%rax
	return res > a;
   13a92:	cmp    %rax,%r8
   13a95:	setl   %sil
	if (b < 0)
   13a99:	test   %r9,%r9
   13a9c:	js     147d1
		dst_reg->smax_value = S64_MAX;
   13aa2:	movabs $0x7fffffffffffffff,%rdx
		dst_reg->smin_value = S64_MIN;
   13aac:	movabs $0x8000000000000000,%rax
	if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
   13ab6:	test   %sil,%sil
   13ab9:	jne    13ac7
		dst_reg->smin_value -= smax_val;
   13abb:	mov    %rdi,%rax
		dst_reg->smax_value -= smin_val;
   13abe:	mov    %r8,%rdx
		dst_reg->smin_value -= smax_val;
   13ac1:	sub    %rcx,%rax
		dst_reg->smax_value -= smin_val;
   13ac4:	sub    %r9,%rdx
   13ac7:	mov    %rax,0x28(%r12)
	...
   13ad1:	mov    %rdx,0x30(%r12)
	...
	if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
   145ea:	cmp    %rsi,%rdi
   145ed:	jg     13ac7
   145f3:	jmp    13a87

Suggested-by: Jiri Olsa
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Acked-by: Jiri Olsa
---
 kernel/bpf/verifier.c | 57 +++++++++++--------------------------------
 1 file changed, 14 insertions(+), 43 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0126c9c80c58..8da132a1ef28 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12726,26 +12726,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	return 0;
 }
 
-static bool signed_sub_overflows(s64 a, s64 b)
-{
-	/* Do the sub in u64, where overflow is well-defined */
-	s64 res = (s64)((u64)a - (u64)b);
-
-	if (b < 0)
-		return res < a;
-	return res > a;
-}
-
-static bool signed_sub32_overflows(s32 a, s32 b)
-{
-	/* Do the sub in u32, where overflow is well-defined */
-	s32 res = (s32)((u32)a - (u32)b);
-
-	if (b < 0)
-		return res < a;
-	return res > a;
-}
-
 static bool check_reg_sane_offset(struct bpf_verifier_env *env,
 				  const struct bpf_reg_state *reg,
 				  enum bpf_reg_type type)
@@ -13278,14 +13258,11 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		/* A new variable offset is created. If the subtrahend is known
 		 * nonnegative, then any reg->range we had before is still good.
 		 */
-		if (signed_sub_overflows(smin_ptr, smax_val) ||
-		    signed_sub_overflows(smax_ptr, smin_val)) {
+		if (check_sub_overflow(smin_ptr, smax_val, &dst_reg->smin_value) ||
+		    check_sub_overflow(smax_ptr, smin_val, &dst_reg->smax_value)) {
 			/* Overflow possible, we know nothing */
 			dst_reg->smin_value = S64_MIN;
 			dst_reg->smax_value = S64_MAX;
-		} else {
-			dst_reg->smin_value = smin_ptr - smax_val;
-			dst_reg->smax_value = smax_ptr - smin_val;
 		}
 		if (umin_ptr < umax_val) {
 			/* Overflow possible, we know nothing */
@@ -13378,19 +13355,16 @@ static void scalar_min_max_add(struct bpf_reg_state *dst_reg,
 static void scalar32_min_max_sub(struct bpf_reg_state *dst_reg,
 				 struct bpf_reg_state *src_reg)
 {
-	s32 smin_val = src_reg->s32_min_value;
-	s32 smax_val = src_reg->s32_max_value;
+	s32 *dst_smin = &dst_reg->s32_min_value;
+	s32 *dst_smax = &dst_reg->s32_max_value;
 	u32 umin_val = src_reg->u32_min_value;
 	u32 umax_val = src_reg->u32_max_value;
 
-	if (signed_sub32_overflows(dst_reg->s32_min_value, smax_val) ||
-	    signed_sub32_overflows(dst_reg->s32_max_value, smin_val)) {
+	if (check_sub_overflow(*dst_smin, src_reg->s32_max_value, dst_smin) ||
+	    check_sub_overflow(*dst_smax, src_reg->s32_min_value, dst_smax)) {
 		/* Overflow possible, we know nothing */
-		dst_reg->s32_min_value = S32_MIN;
-		dst_reg->s32_max_value = S32_MAX;
-	} else {
-		dst_reg->s32_min_value -= smax_val;
-		dst_reg->s32_max_value -= smin_val;
+		*dst_smin = S32_MIN;
+		*dst_smax = S32_MAX;
 	}
 	if (dst_reg->u32_min_value < umax_val) {
 		/* Overflow possible, we know nothing */
@@ -13406,19 +13380,16 @@ static void scalar32_min_max_sub(struct bpf_reg_state *dst_reg,
 static void scalar_min_max_sub(struct bpf_reg_state *dst_reg,
 			       struct bpf_reg_state *src_reg)
 {
-	s64 smin_val = src_reg->smin_value;
-	s64 smax_val = src_reg->smax_value;
+	s64 *dst_smin = &dst_reg->smin_value;
+	s64 *dst_smax = &dst_reg->smax_value;
 	u64 umin_val = src_reg->umin_value;
 	u64 umax_val = src_reg->umax_value;
 
-	if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
-	    signed_sub_overflows(dst_reg->smax_value, smin_val)) {
+	if (check_sub_overflow(*dst_smin, src_reg->smax_value, dst_smin) ||
+	    check_sub_overflow(*dst_smax, src_reg->smin_value, dst_smax)) {
 		/* Overflow possible, we know nothing */
-		dst_reg->smin_value = S64_MIN;
-		dst_reg->smax_value = S64_MAX;
-	} else {
-		dst_reg->smin_value -= smax_val;
-		dst_reg->smax_value -= smin_val;
+		*dst_smin = S64_MIN;
+		*dst_smax = S64_MAX;
 	}
 	if (dst_reg->umin_value < umax_val) {
 		/* Overflow possible, we know nothing */
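
For completeness, the subtraction counterpart of the earlier addition
sketch, again with hypothetical stand-ins (bounds_sub() for
scalar_min_max_sub(), __builtin_sub_overflow() for the kernel's
check_sub_overflow()). Note the swapped pairing: the new minimum
subtracts the source maximum, and the new maximum subtracts the source
minimum:

#include <stdint.h>
#include <stdio.h>

struct bounds {
	int64_t smin;
	int64_t smax;
};

static void bounds_sub(struct bounds *dst, const struct bounds *src)
{
	/* min - src_max and max - src_min: subtracting a larger value
	 * lowers the minimum, a smaller value raises the maximum. */
	if (__builtin_sub_overflow(dst->smin, src->smax, &dst->smin) ||
	    __builtin_sub_overflow(dst->smax, src->smin, &dst->smax)) {
		/* Overflow possible, we know nothing */
		dst->smin = INT64_MIN;
		dst->smax = INT64_MAX;
	}
}

int main(void)
{
	struct bounds d = { .smin = -2, .smax = 3 };
	struct bounds s = { .smin = 1, .smax = 4 };

	bounds_sub(&d, &s);	/* [-2-4, 3-1] = [-6, 2] */
	printf("[%lld, %lld]\n", (long long)d.smin, (long long)d.smax);
	return 0;
}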