Message ID | 20240402212039.51815-1-harishankar.vishwanathan@gmail.com |
---|---|
State | Changes Requested |
Delegated to: | BPF |
Series | [v2,bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking |
On 4/2/24 11:20 PM, Harishankar Vishwanathan wrote:
[...]
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index fcb62300f407..a7404a7d690f 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -13326,23 +13326,21 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
>                 return;
>         }
>
> -       /* We get our minimum from the var_off, since that's inherently
> +       /* We get our minimum from the var32_off, since that's inherently
>          * bitwise. Our maximum is the minimum of the operands' maxima.
>          */
>         dst_reg->u32_min_value = var32_off.value;
>         dst_reg->u32_max_value = min(dst_reg->u32_max_value, umax_val);
> -       if (dst_reg->s32_min_value < 0 || smin_val < 0) {

The smin_val is now unused, triggering the following warnings:

../kernel/bpf/verifier.c:13321:6: warning: unused variable 'smin_val' [-Wunused-variable]
13321 |         s32 smin_val = src_reg->s32_min_value;
      |             ^~~~~~~~
../kernel/bpf/verifier.c:13352:6: warning: unused variable 'smin_val' [-Wunused-variable]
13352 |         s64 smin_val = src_reg->smin_value;
      |             ^~~~~~~~
../kernel/bpf/verifier.c:13386:6: warning: unused variable 'smin_val' [-Wunused-variable]
13386 |         s32 smin_val = src_reg->s32_min_value;
      |             ^~~~~~~~
../kernel/bpf/verifier.c:13417:6: warning: unused variable 'smin_val' [-Wunused-variable]
13417 |         s64 smin_val = src_reg->smin_value;
      |             ^~~~~~~~
../kernel/bpf/verifier.c:13451:6: warning: unused variable 'smin_val' [-Wunused-variable]
13451 |         s32 smin_val = src_reg->s32_min_value;
      |             ^~~~~~~~
../kernel/bpf/verifier.c:13479:6: warning: unused variable 'smin_val' [-Wunused-variable]
13479 |         s64 smin_val = src_reg->smin_value;
      |             ^~~~~~~~

Removing these builds fine then, please follow up with a v3.

Thanks,
Daniel
On 4/2/24 22:20, Harishankar Vishwanathan wrote:
> Previous works [1, 2] have discovered and reported this issue. Our tool
> Agni [2, 3] considers it a false positive. This is because, during the
> verification of the abstract operator scalar_min_max_and(), Agni restricts
> its inputs to those passing through reg_bounds_sync(). This mimics
> real-world verifier behavior, as reg_bounds_sync() is invariably executed
> at the tail of every abstract operator. Therefore, such behavior is
> unlikely in an actual verifier execution.
>
> However, it is still unsound for an abstract operator to set signed bounds
> such that smin_value > smax_value. This patch fixes it, making the abstract
> operator sound for all (well-formed) inputs.

Just to check I'm understanding correctly: you're saying that the existing
code has an undocumented precondition, that's currently maintained by the
callers, and your patch removes the precondition in case a future patch
(or cosmic rays?) makes a call without satisfying it?
Or is it in principle possible (just "unlikely") for a program to induce
the current verifier to call scalar_min_max_foo() on a register that
hasn't been through reg_bounds_sync()?
If the former, I think Fixes: is inappropriate here as there is no need to
backport this change to stable kernels, although I agree the change is
worth making in -next.

> It is worth noting that we can update the signed bounds using the unsigned
> bounds whenever the unsigned bounds do not cross the sign boundary (not
> just when the input signed bounds are positive, as was the case
> previously). This patch does exactly that.

Commit message could also make clearer that the new code considers whether
the *output* ubounds cross sign, rather than looking at the input bounds
as the previous code did. At first I was confused as to why XOR didn't
need special handling (since -ve xor -ve is +ve).

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index fcb62300f407..a7404a7d690f 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -13326,23 +13326,21 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
>                 return;
>         }
>
> -       /* We get our minimum from the var_off, since that's inherently
> +       /* We get our minimum from the var32_off, since that's inherently
>          * bitwise. Our maximum is the minimum of the operands' maxima.
>          */

This change, adjusting a comment to match the existing code, should probably
be in a separate patch.

> @@ -13395,23 +13391,21 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
>                 return;
>         }
>
> -       /* We get our maximum from the var_off, and our minimum is the
> -        * maximum of the operands' minima
> +       /* We get our maximum from the var32_off, and our minimum is the
> +        * maximum of the operands' minima.
>          */

Same here.

Apart from that,
Acked-by: Edward Cree <ecree.xilinx@gmail.com>
On Wed, Apr 3, 2024 at 9:25 AM Edward Cree <ecree@amd.com> wrote:
>
> On 4/2/24 22:20, Harishankar Vishwanathan wrote:
> > Previous works [1, 2] have discovered and reported this issue. Our tool
> > Agni [2, 3] considers it a false positive. This is because, during the
> > verification of the abstract operator scalar_min_max_and(), Agni restricts
> > its inputs to those passing through reg_bounds_sync(). This mimics
> > real-world verifier behavior, as reg_bounds_sync() is invariably executed
> > at the tail of every abstract operator. Therefore, such behavior is
> > unlikely in an actual verifier execution.
> >
> > However, it is still unsound for an abstract operator to set signed bounds
> > such that smin_value > smax_value. This patch fixes it, making the abstract
> > operator sound for all (well-formed) inputs.
>
> Just to check I'm understanding correctly: you're saying that the existing
> code has an undocumented precondition, that's currently maintained by the
> callers, and your patch removes the precondition in case a future patch
> (or cosmic rays?) makes a call without satisfying it?
> Or is it in principle possible (just "unlikely") for a program to induce
> the current verifier to call scalar_min_max_foo() on a register that
> hasn't been through reg_bounds_sync()?
> If the former, I think Fixes: is inappropriate here as there is no need to
> backport this change to stable kernels, although I agree the change is
> worth making in -next.

You are kind of right on both counts.

The existing code contains an undocumented precondition. When violated,
scalar_min_max_and() can produce unsound s64 bounds (where smin > smax).
Certain well-formed register state inputs can violate this precondition,
resulting in eventual unsoundness. However, register states that have
passed through reg_bounds_sync() -- or those that are completely known or
completely unknown -- satisfy the precondition, preventing unsoundness.

Since we haven't examined all possible paths through the verifier, and we
cannot guarantee that every instruction preceding a BPF_AND in an eBPF
program will maintain the precondition, we cannot definitively say that
register state inputs to scalar_min_max_and() will always meet the
precondition. There is a potential for an invocation of
scalar_min_max_and() on a register state that hasn't undergone
reg_bounds_sync(). The patch indeed removes the precondition.

Given the above, please advise if we should backport this patch to older
kernels (and whether I should use the fixes tag).

> > It is worth noting that we can update the signed bounds using the unsigned
> > bounds whenever the unsigned bounds do not cross the sign boundary (not
> > just when the input signed bounds are positive, as was the case
> > previously). This patch does exactly that.
>
> Commit message could also make clearer that the new code considers whether
> the *output* ubounds cross sign, rather than looking at the input bounds
> as the previous code did. At first I was confused as to why XOR didn't
> need special handling (since -ve xor -ve is +ve).

Sounds good regarding making it clearer within the context of what the
existing code does. However, I wanted to clarify that XOR does indeed use
the same handling as all the other operations. Could you elaborate on what
you mean?

> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index fcb62300f407..a7404a7d690f 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -13326,23 +13326,21 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
> >                 return;
> >         }
> >
> > -       /* We get our minimum from the var_off, since that's inherently
> > +       /* We get our minimum from the var32_off, since that's inherently
> >          * bitwise. Our maximum is the minimum of the operands' maxima.
> >          */
>
> This change, adjusting a comment to match the existing code, should probably
> be in a separate patch.

Sounds good.

> > @@ -13395,23 +13391,21 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
> >                 return;
> >         }
> >
> > -       /* We get our maximum from the var_off, and our minimum is the
> > -        * maximum of the operands' minima
> > +       /* We get our maximum from the var32_off, and our minimum is the
> > +        * maximum of the operands' minima.
> >          */
>
> Same here.
>
> Apart from that,
> Acked-by: Edward Cree <ecree.xilinx@gmail.com>
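The sign-boundary rule discussed above can be illustrated outside the verifier. The following is a standalone C sketch, not kernel code; the helper name derive_s64_bounds() and the example values are invented purely for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* If the unsigned range [umin, umax] does not cross the u64 sign boundary,
 * reinterpreting its endpoints as s64 yields a valid signed range; if it
 * does cross, nothing tighter than [S64_MIN, S64_MAX] can be concluded.
 * Relies on the usual two's-complement conversion behavior.
 */
static void derive_s64_bounds(uint64_t umin, uint64_t umax,
                              int64_t *smin, int64_t *smax)
{
        if ((int64_t)umin <= (int64_t)umax) {
                *smin = (int64_t)umin;
                *smax = (int64_t)umax;
        } else {
                *smin = INT64_MIN;
                *smax = INT64_MAX;
        }
}

int main(void)
{
        int64_t smin, smax;

        /* Entirely below the sign boundary: the cast is exact. */
        derive_s64_bounds(3, 100, &smin, &smax);
        printf("[3, 100] -> [%lld, %lld]\n", (long long)smin, (long long)smax);

        /* Straddles the boundary (0x7fff... to 0x8000...): give up. */
        derive_s64_bounds(INT64_MAX, (uint64_t)INT64_MAX + 2, &smin, &smax);
        printf("crossing -> [%lld, %lld]\n", (long long)smin, (long long)smax);
        return 0;
}
```

The first call prints a tight range; the second falls back to the full signed range because the two endpoints compare inverted once reinterpreted as s64.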
On Wed, Apr 3, 2024 at 5:09 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 4/2/24 11:20 PM, Harishankar Vishwanathan wrote:
> [...]
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index fcb62300f407..a7404a7d690f 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -13326,23 +13326,21 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
> >                 return;
> >         }
> >
> > -       /* We get our minimum from the var_off, since that's inherently
> > +       /* We get our minimum from the var32_off, since that's inherently
> >          * bitwise. Our maximum is the minimum of the operands' maxima.
> >          */
> >         dst_reg->u32_min_value = var32_off.value;
> >         dst_reg->u32_max_value = min(dst_reg->u32_max_value, umax_val);
> > -       if (dst_reg->s32_min_value < 0 || smin_val < 0) {
>
> The smin_val is now unused, triggering the following warnings:
>
> ../kernel/bpf/verifier.c:13321:6: warning: unused variable 'smin_val' [-Wunused-variable]
>  13321 |         s32 smin_val = src_reg->s32_min_value;
>        |             ^~~~~~~~
> ../kernel/bpf/verifier.c:13352:6: warning: unused variable 'smin_val' [-Wunused-variable]
>  13352 |         s64 smin_val = src_reg->smin_value;
>        |             ^~~~~~~~
> ../kernel/bpf/verifier.c:13386:6: warning: unused variable 'smin_val' [-Wunused-variable]
>  13386 |         s32 smin_val = src_reg->s32_min_value;
>        |             ^~~~~~~~
> ../kernel/bpf/verifier.c:13417:6: warning: unused variable 'smin_val' [-Wunused-variable]
>  13417 |         s64 smin_val = src_reg->smin_value;
>        |             ^~~~~~~~
> ../kernel/bpf/verifier.c:13451:6: warning: unused variable 'smin_val' [-Wunused-variable]
>  13451 |         s32 smin_val = src_reg->s32_min_value;
>        |             ^~~~~~~~
> ../kernel/bpf/verifier.c:13479:6: warning: unused variable 'smin_val' [-Wunused-variable]
>  13479 |         s64 smin_val = src_reg->smin_value;
>        |             ^~~~~~~~
>
> Removing these builds fine then, please follow up with a v3.

Apologies. Yes, these smin_vals are not required anymore. I'll remove them
when sending the v3.

> Thanks,
> Daniel
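Based only on the warning locations quoted above, the v3 cleanup would presumably just drop the now-unused declarations, along these lines (a sketch inferred from the warnings, not the actual v3 diff; the function attributions and exact hunk positions may differ):

```c
/* @@ scalar32_min_max_and() */
-	s32 smin_val = src_reg->s32_min_value;
/* @@ scalar_min_max_and() */
-	s64 smin_val = src_reg->smin_value;
/* @@ scalar32_min_max_or() */
-	s32 smin_val = src_reg->s32_min_value;
/* @@ scalar_min_max_or() */
-	s64 smin_val = src_reg->smin_value;
/* @@ scalar32_min_max_xor() */
-	s32 smin_val = src_reg->s32_min_value;
/* @@ scalar_min_max_xor() */
-	s64 smin_val = src_reg->smin_value;
```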
On Wed, Apr 03, 2024 at 10:40:23PM -0400, Harishankar Vishwanathan wrote:
> On Wed, Apr 3, 2024 at 9:25 AM Edward Cree <ecree@amd.com> wrote:
> > On 4/2/24 22:20, Harishankar Vishwanathan wrote:
> > > Previous works [1, 2] have discovered and reported this issue. Our tool
> > > Agni [2, 3] considers it a false positive. This is because, during the
> > > verification of the abstract operator scalar_min_max_and(), Agni restricts
> > > its inputs to those passing through reg_bounds_sync(). This mimics
> > > real-world verifier behavior, as reg_bounds_sync() is invariably executed
> > > at the tail of every abstract operator. Therefore, such behavior is
> > > unlikely in an actual verifier execution.
> > >
> > > However, it is still unsound for an abstract operator to set signed bounds
> > > such that smin_value > smax_value. This patch fixes it, making the abstract
> > > operator sound for all (well-formed) inputs.
> >
> > Just to check I'm understanding correctly: you're saying that the existing
> > code has an undocumented precondition, that's currently maintained by the
> > callers, and your patch removes the precondition in case a future patch
> > (or cosmic rays?) makes a call without satisfying it?
> > Or is it in principle possible (just "unlikely") for a program to induce
> > the current verifier to call scalar_min_max_foo() on a register that
> > hasn't been through reg_bounds_sync()?
> > If the former, I think Fixes: is inappropriate here as there is no need to
> > backport this change to stable kernels, although I agree the change is
> > worth making in -next.
>
> You are kind of right on both counts.
>
> The existing code contains an undocumented precondition. When violated,
> scalar_min_max_and() can produce unsound s64 bounds (where smin > smax).
> Certain well-formed register state inputs can violate this precondition,
> resulting in eventual unsoundness. However, register states that have
> passed through reg_bounds_sync() -- or those that are completely known or
> completely unknown -- satisfy the precondition, preventing unsoundness.
>
> Since we haven't examined all possible paths through the verifier, and we
> cannot guarantee that every instruction preceding a BPF_AND in an eBPF
> program will maintain the precondition, we cannot definitively say that
> register state inputs to scalar_min_max_and() will always meet the
> precondition. There is a potential for an invocation of
> scalar_min_max_and() on a register state that hasn't undergone
> reg_bounds_sync(). The patch indeed removes the precondition.
>
> Given the above, please advise if we should backport this patch to older
> kernels (and whether I should use the fixes tag).

I suggested the fixes tag to Harishankar in the v1 patchset, admittedly
without a thorough understanding at the same level as the above. However,
given smin_value > smax_value is something we check in
reg_bounds_sanity_check(), I would still vote to have the patch backported
to stable (with "Cc: stable@vger.kernel.org"?) even if the fixes tag is
dropped. The overall change should be rather well contained and isolated
for relative ease of backport, and would probably save some head scratching
over the difference in behavior between mainline and stable.

> [...]
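As context for the reg_bounds_sanity_check() mention: the invariant at stake is simply that each min/max pair stays ordered. Below is a much-simplified standalone sketch of that style of check; the struct and helper are illustrative only, not the kernel's definitions, and the real function does more than this:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct bounds {
        uint64_t umin, umax;
        int64_t  smin, smax;
        uint32_t u32_min, u32_max;
        int32_t  s32_min, s32_max;
};

/* Each pair must stay ordered; smin > smax is exactly the broken state
 * being discussed in this thread.
 */
static bool bounds_are_sane(const struct bounds *b)
{
        return b->umin <= b->umax && b->smin <= b->smax &&
               b->u32_min <= b->u32_max && b->s32_min <= b->s32_max;
}

int main(void)
{
        struct bounds ok  = { 0, 10, 0, 10, 0, 10, 0, 10 };
        struct bounds bad = { 0, 10, 5, -1, 0, 10, 0, 10 };  /* smin > smax */

        printf("ok:  %d\nbad: %d\n", bounds_are_sane(&ok), bounds_are_sane(&bad));
        return 0;
}
```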
On 04/04/2024 03:40, Harishankar Vishwanathan wrote:
> [...]
> Given the above, please advise if we should backport this patch to older
> kernels (and whether I should use the fixes tag).

I don't feel too strongly about it, and if you or Shung-Hsi still
think, on reflection, that backporting is desirable, then go ahead
and keep the Fixes: tag.
But maybe tweak the description so someone doesn't see "latent
unsoundness" and think they need to CVE and rush this patch out as
a security thing; it's more like hardening. *shrug*

>> Commit message could also make clearer that the new code considers whether
>> the *output* ubounds cross sign, rather than looking at the input bounds
>> as the previous code did. At first I was confused as to why XOR didn't
>> need special handling (since -ve xor -ve is +ve).
>
> Sounds good regarding making it clearer within the context of what the
> existing code does. However, I wanted to clarify that XOR does indeed use
> the same handling as all the other operations. Could you elaborate on what
> you mean?

Just that if you XOR two negative numbers you get a positive number, which
isn't true for AND or OR; and my confused little brain thought that fact
was relevant, which it isn't.

-e
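To spell out the arithmetic behind that remark (plain C, with values picked arbitrarily): XORing two negative numbers clears the sign bit and so gives a non-negative result, while AND and OR of two negatives stay negative.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        int64_t a = -5, b = -3;

        printf("a & b = %lld\n", (long long)(a & b)); /* -7, still negative */
        printf("a | b = %lld\n", (long long)(a | b)); /* -1, still negative */
        printf("a ^ b = %lld\n", (long long)(a ^ b)); /*  6, sign bit cleared */
        return 0;
}
```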
On Tue, Apr 09, 2024 at 06:17:05PM +0100, Edward Cree wrote:
> I don't feel too strongly about it, and if you or Shung-Hsi still
> think, on reflection, that backporting is desirable, then go ahead
> and keep the Fixes: tag.
> But maybe tweak the description so someone doesn't see "latent
> unsoundness" and think they need to CVE and rush this patch out as
> a security thing; it's more like hardening. *shrug*

Unfortunately with the Linux kernel's current approach as a CVE Numbering
Authority I don't think this can be avoided. Patches with a fixes tag will
almost certainly get a CVE number assigned (e.g. CVE-2024-26624[1][2]),
and we can only dispute[3] after such an assignment has happened for the
CVE to be rejected.

Shung-Hsi

1: https://lore.kernel.org/linux-cve-announce/2024030648-CVE-2024-26624-3032@gregkh/
2: https://lore.kernel.org/linux-cve-announce/2024032747-REJECTED-f2cf@gregkh/
3: https://docs.kernel.org/process/cve.html#disputes-of-assigned-cves
> On Apr 10, 2024, at 7:43 AM, Shung-Hsi Yu <shung-hsi.yu@suse.com> wrote:
>
> On Tue, Apr 09, 2024 at 06:17:05PM +0100, Edward Cree wrote:
>> I don't feel too strongly about it, and if you or Shung-Hsi still
>> think, on reflection, that backporting is desirable, then go ahead
>> and keep the Fixes: tag.
>> But maybe tweak the description so someone doesn't see "latent
>> unsoundness" and think they need to CVE and rush this patch out as
>> a security thing; it's more like hardening. *shrug*
>
> Unfortunately with the Linux kernel's current approach as a CVE Numbering
> Authority I don't think this can be avoided. Patches with a fixes tag will
> almost certainly get a CVE number assigned (e.g. CVE-2024-26624[1][2]),
> and we can only dispute[3] after such an assignment has happened for the
> CVE to be rejected.

It seems the best option is to CC the patch to stable@vger.kernel.org (so
that it will be backported), and not add the fixes tag (so that no CVE will
be assigned). Does this seem reasonable? If yes, I'll proceed with v3.
I'll also mention in the commit message that this is hardening.

Hari

>
> Shung-Hsi
>
> 1: https://lore.kernel.org/linux-cve-announce/2024030648-CVE-2024-26624-3032@gregkh/
> 2: https://lore.kernel.org/linux-cve-announce/2024032747-REJECTED-f2cf@gregkh/
> 3: https://docs.kernel.org/process/cve.html#disputes-of-assigned-cves
On Sat, Apr 13, 2024 at 12:05:18AM +0000, Harishankar Vishwanathan wrote:
> > On Apr 10, 2024, at 7:43 AM, Shung-Hsi Yu <shung-hsi.yu@suse.com> wrote:
> > On Tue, Apr 09, 2024 at 06:17:05PM +0100, Edward Cree wrote:
> >> I don't feel too strongly about it, and if you or Shung-Hsi still
> >> think, on reflection, that backporting is desirable, then go ahead
> >> and keep the Fixes: tag.
> >> But maybe tweak the description so someone doesn't see "latent
> >> unsoundness" and think they need to CVE and rush this patch out as
> >> a security thing; it's more like hardening. *shrug*
> >
> > Unfortunately with the Linux kernel's current approach as a CVE Numbering
> > Authority I don't think this can be avoided. Patches with a fixes tag will
> > almost certainly get a CVE number assigned (e.g. CVE-2024-26624[1][2]),
> > and we can only dispute[3] after such an assignment has happened for the
> > CVE to be rejected.
>
> It seems the best option is to CC the patch to stable@vger.kernel.org (so
> that it will be backported), and not add the fixes tag (so that no CVE will
> be assigned). Does this seem reasonable? If yes, I'll proceed with v3.
> I'll also mention in the commit message that this is hardening.

Sounds good to me. Not 100% certain that this will avoid CVE assignment, but
it does seem like the best option.

Shung-Hsi
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fcb62300f407..a7404a7d690f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13326,23 +13326,21 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
                return;
        }
 
-       /* We get our minimum from the var_off, since that's inherently
+       /* We get our minimum from the var32_off, since that's inherently
         * bitwise. Our maximum is the minimum of the operands' maxima.
         */
        dst_reg->u32_min_value = var32_off.value;
        dst_reg->u32_max_value = min(dst_reg->u32_max_value, umax_val);
-       if (dst_reg->s32_min_value < 0 || smin_val < 0) {
-               /* Lose signed bounds when ANDing negative numbers,
-                * ain't nobody got time for that.
-                */
-               dst_reg->s32_min_value = S32_MIN;
-               dst_reg->s32_max_value = S32_MAX;
-       } else {
-               /* ANDing two positives gives a positive, so safe to
-                * cast result into s64.
-                */
+
+       /* Safe to set s32 bounds by casting u32 result into s32 when u32
+        * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
+        */
+       if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
                dst_reg->s32_min_value = dst_reg->u32_min_value;
                dst_reg->s32_max_value = dst_reg->u32_max_value;
+       } else {
+               dst_reg->s32_min_value = S32_MIN;
+               dst_reg->s32_max_value = S32_MAX;
        }
 }
 
@@ -13364,18 +13362,16 @@ static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
         */
        dst_reg->umin_value = dst_reg->var_off.value;
        dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
-       if (dst_reg->smin_value < 0 || smin_val < 0) {
-               /* Lose signed bounds when ANDing negative numbers,
-                * ain't nobody got time for that.
-                */
-               dst_reg->smin_value = S64_MIN;
-               dst_reg->smax_value = S64_MAX;
-       } else {
-               /* ANDing two positives gives a positive, so safe to
-                * cast result into s64.
-                */
+
+       /* Safe to set s64 bounds by casting u64 result into s64 when u64
+        * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
+        */
+       if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
                dst_reg->smin_value = dst_reg->umin_value;
                dst_reg->smax_value = dst_reg->umax_value;
+       } else {
+               dst_reg->smin_value = S64_MIN;
+               dst_reg->smax_value = S64_MAX;
        }
        /* We may learn something more from the var_off */
        __update_reg_bounds(dst_reg);
@@ -13395,23 +13391,21 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
                return;
        }
 
-       /* We get our maximum from the var_off, and our minimum is the
-        * maximum of the operands' minima
+       /* We get our maximum from the var32_off, and our minimum is the
+        * maximum of the operands' minima.
         */
        dst_reg->u32_min_value = max(dst_reg->u32_min_value, umin_val);
        dst_reg->u32_max_value = var32_off.value | var32_off.mask;
-       if (dst_reg->s32_min_value < 0 || smin_val < 0) {
-               /* Lose signed bounds when ORing negative numbers,
-                * ain't nobody got time for that.
-                */
-               dst_reg->s32_min_value = S32_MIN;
-               dst_reg->s32_max_value = S32_MAX;
-       } else {
-               /* ORing two positives gives a positive, so safe to
-                * cast result into s64.
-                */
+
+       /* Safe to set s32 bounds by casting u32 result into s32 when u32
+        * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
+        */
+       if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
                dst_reg->s32_min_value = dst_reg->u32_min_value;
                dst_reg->s32_max_value = dst_reg->u32_max_value;
+       } else {
+               dst_reg->s32_min_value = S32_MIN;
+               dst_reg->s32_max_value = S32_MAX;
        }
 }
 
@@ -13433,18 +13427,16 @@ static void scalar_min_max_or(struct bpf_reg_state *dst_reg,
         */
        dst_reg->umin_value = max(dst_reg->umin_value, umin_val);
        dst_reg->umax_value = dst_reg->var_off.value | dst_reg->var_off.mask;
-       if (dst_reg->smin_value < 0 || smin_val < 0) {
-               /* Lose signed bounds when ORing negative numbers,
-                * ain't nobody got time for that.
-                */
-               dst_reg->smin_value = S64_MIN;
-               dst_reg->smax_value = S64_MAX;
-       } else {
-               /* ORing two positives gives a positive, so safe to
-                * cast result into s64.
-                */
+
+       /* Safe to set s64 bounds by casting u64 result into s64 when u64
+        * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
+        */
+       if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
                dst_reg->smin_value = dst_reg->umin_value;
                dst_reg->smax_value = dst_reg->umax_value;
+       } else {
+               dst_reg->smin_value = S64_MIN;
+               dst_reg->smax_value = S64_MAX;
        }
        /* We may learn something more from the var_off */
        __update_reg_bounds(dst_reg);
@@ -13467,10 +13459,10 @@ static void scalar32_min_max_xor(struct bpf_reg_state *dst_reg,
        dst_reg->u32_min_value = var32_off.value;
        dst_reg->u32_max_value = var32_off.value | var32_off.mask;
 
-       if (dst_reg->s32_min_value >= 0 && smin_val >= 0) {
-               /* XORing two positive sign numbers gives a positive,
-                * so safe to cast u32 result into s32.
-                */
+       /* Safe to set s32 bounds by casting u32 result into s32 when u32
+        * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
+        */
+       if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
                dst_reg->s32_min_value = dst_reg->u32_min_value;
                dst_reg->s32_max_value = dst_reg->u32_max_value;
        } else {
@@ -13496,10 +13488,10 @@ static void scalar_min_max_xor(struct bpf_reg_state *dst_reg,
        dst_reg->umin_value = dst_reg->var_off.value;
        dst_reg->umax_value = dst_reg->var_off.value | dst_reg->var_off.mask;
 
-       if (dst_reg->smin_value >= 0 && smin_val >= 0) {
-               /* XORing two positive sign numbers gives a positive,
-                * so safe to cast u64 result into s64.
-                */
+       /* Safe to set s64 bounds by casting u64 result into s64 when u64
+        * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
+        */
+       if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
                dst_reg->smin_value = dst_reg->umin_value;
                dst_reg->smax_value = dst_reg->umax_value;
        } else {
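As an aside for readers of the diff above: the following standalone program shows, on one hypothetical unsanitized 32-bit input (the values are invented for illustration, not taken from a real verifier run), how the removed `dst_reg->s32_min_value < 0 || smin_val < 0` branch could yield s32_min > s32_max, and how the new sign-boundary check avoids it.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Hypothetical unsanitized dst state after ANDing two unknown values:
         * var32_off.value == 0, so u32_min == 0 and u32_max == 0xffffffff,
         * yet both operands still report s32_min >= 0 because the register
         * never went through reg_bounds_sync().
         */
        uint32_t u32_min = 0x0, u32_max = 0xffffffff;
        int32_t dst_s32_min = 0, src_s32_min = 0;
        int32_t s32_min, s32_max;

        /* Old logic: both signed minima are >= 0, so blindly cast. */
        if (dst_s32_min < 0 || src_s32_min < 0) {
                s32_min = INT32_MIN;
                s32_max = INT32_MAX;
        } else {
                s32_min = (int32_t)u32_min;   /* 0 */
                s32_max = (int32_t)u32_max;   /* -1: bounds are now inverted */
        }
        printf("old: [%d, %d]\n", s32_min, s32_max);

        /* New logic: check whether the u32 result crosses the sign boundary. */
        if ((int32_t)u32_min <= (int32_t)u32_max) {
                s32_min = (int32_t)u32_min;
                s32_max = (int32_t)u32_max;
        } else {
                s32_min = INT32_MIN;          /* fall back to unbounded */
                s32_max = INT32_MAX;
        }
        printf("new: [%d, %d]\n", s32_min, s32_max);
        return 0;
}
```

On the usual two's-complement targets this prints `old: [0, -1]` (an inverted, unsound range) versus `new: [-2147483648, 2147483647]`.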