Message ID | 7c4cb0ee-cddf-434a-95d9-58cec0efd976@suse.com (mailing list archive)
---|---
State | New
Series | x86emul: fix UB multiplications in S/G handling
On 13/08/2024 1:43 pm, Jan Beulich wrote:
> The conversion of the shifts to multiplications by the commits tagged
> below still wasn't quite right: The multiplications (of signed values)
> can overflow, too. As of 298556c7b5f8 ("x86emul: correct 32-bit address
> handling for AVX2 gathers") signed multiplication wasn't necessary
> anymore, though: The necessary sign-extension (if any) will happen as
> well when using intermediate variables of unsigned long types, and
> excess address bits are chopped off by truncate_ea().
>
> Fixes: b6a907f8c83d ("x86emul: replace UB shifts")
> Fixes: 21de9680eb59 ("x86emul: replace further UB shifts")
> Oss-fuzz: 71138
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
On 13/08/2024 2:19 pm, Andrew Cooper wrote:
> On 13/08/2024 1:43 pm, Jan Beulich wrote:
>> The conversion of the shifts to multiplications by the commits tagged
>> below still wasn't quite right: The multiplications (of signed values)
>> can overflow, too. As of 298556c7b5f8 ("x86emul: correct 32-bit address
>> handling for AVX2 gathers") signed multiplication wasn't necessary
>> anymore, though: The necessary sign-extension (if any) will happen as
>> well when using intermediate variables of unsigned long types, and
>> excess address bits are chopped off by truncate_ea().
>>
>> Fixes: b6a907f8c83d ("x86emul: replace UB shifts")
>> Fixes: 21de9680eb59 ("x86emul: replace further UB shifts")
>> Oss-fuzz: 71138

It's too late on this one, but it occurs to me that we probably want

Link: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=71138

rather than an abstract Oss-fuzz number. The bugtracker entry becomes
public after 90d or when ClusterFuzz thinks we've fixed the bug, and the
full link will be more useful to anyone interested.

~Andrew
On 13.08.2024 17:16, Andrew Cooper wrote:
> On 13/08/2024 2:19 pm, Andrew Cooper wrote:
>> On 13/08/2024 1:43 pm, Jan Beulich wrote:
>>> The conversion of the shifts to multiplications by the commits tagged
>>> below still wasn't quite right: The multiplications (of signed values)
>>> can overflow, too. As of 298556c7b5f8 ("x86emul: correct 32-bit address
>>> handling for AVX2 gathers") signed multiplication wasn't necessary
>>> anymore, though: The necessary sign-extension (if any) will happen as
>>> well when using intermediate variables of unsigned long types, and
>>> excess address bits are chopped off by truncate_ea().
>>>
>>> Fixes: b6a907f8c83d ("x86emul: replace UB shifts")
>>> Fixes: 21de9680eb59 ("x86emul: replace further UB shifts")
>>> Oss-fuzz: 71138
>
> It's too late on this one, but it occurs to me that we probably want
>
> Link: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=71138
>
> rather than an abstract Oss-fuzz number. The bugtracker entry becomes
> public after 90d or when ClusterFuzz thinks we've fixed the bug, and the
> full link will be more useful to anyone interested.

I can try to remember doing so going forward. Let me adjust the one
that's still pending right away.

Jan
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -6369,11 +6369,11 @@ x86_emulate(
         {
             if ( (vex.w ? mask.qw[i] : mask.dw[i]) < 0 )
             {
-                signed long idx = b & 1 ? index.qw[i] : index.dw[i];
+                unsigned long idx = b & 1 ? index.qw[i] : index.dw[i];

                 rc = ops->read(ea.mem.seg,
                                truncate_ea(ea.mem.off +
-                                           idx * (1 << state->sib_scale)),
+                                           (idx << state->sib_scale)),
                                (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
                 if ( rc != X86EMUL_OKAY )
                 {
@@ -6489,14 +6489,14 @@ x86_emulate(

         for ( i = 0; op_mask; ++i )
         {
-            long idx = b & 1 ? index.qw[i] : index.dw[i];
+            unsigned long idx = b & 1 ? index.qw[i] : index.dw[i];

             if ( !(op_mask & (1 << i)) )
                 continue;

             rc = ops->read(ea.mem.seg,
                            truncate_ea(ea.mem.off +
-                                       idx * (1 << state->sib_scale)),
+                                       (idx << state->sib_scale)),
                            (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
             if ( rc != X86EMUL_OKAY )
             {
@@ -6643,9 +6643,9 @@ x86_emulate(

         for ( i = 0; op_mask; ++i )
         {
-            long idx = (b & 1 ? index.qw[i]
-                               : index.dw[i]) * (1 << state->sib_scale);
-            unsigned long offs = truncate_ea(ea.mem.off + idx);
+            unsigned long idx = b & 1 ? index.qw[i] : index.dw[i];
+            unsigned long offs = truncate_ea(ea.mem.off +
+                                             (idx << state->sib_scale));
             unsigned int j, slot;

             if ( !(op_mask & (1 << i)) )
@@ -6663,11 +6663,10 @@ x86_emulate(
              */
             for ( j = (slot = i) + 1; j < n; ++j )
             {
-                long idx2 = (b & 1 ? index.qw[j]
-                                    : index.dw[j]) * (1 << state->sib_scale);
-
+                idx = b & 1 ? index.qw[j] : index.dw[j];
                 if ( (op_mask & (1 << j)) &&
-                     truncate_ea(ea.mem.off + idx2) == offs )
+                     truncate_ea(ea.mem.off +
+                                 (idx << state->sib_scale)) == offs )
                     slot = j;
             }
The conversion of the shifts to multiplications by the commits tagged
below still wasn't quite right: The multiplications (of signed values)
can overflow, too. As of 298556c7b5f8 ("x86emul: correct 32-bit address
handling for AVX2 gathers") signed multiplication wasn't necessary
anymore, though: The necessary sign-extension (if any) will happen as
well when using intermediate variables of unsigned long types, and
excess address bits are chopped off by truncate_ea().

Fixes: b6a907f8c83d ("x86emul: replace UB shifts")
Fixes: 21de9680eb59 ("x86emul: replace further UB shifts")
Oss-fuzz: 71138
Signed-off-by: Jan Beulich <jbeulich@suse.com>