Message ID | f455e5ba-e786-e8d5-4a1c-b42552e17377@suse.com |
---|---|
State | Superseded |
Series | assorted replacement of x[mz]alloc_bytes() |
```diff
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1924,7 +1924,7 @@ static int hvmemul_rep_movs(
         dgpa -= bytes - bytes_per_rep;
 
     /* Allocate temporary buffer. Fall back to slow emulation if this fails. */
-    buf = xmalloc_bytes(bytes);
+    buf = xmalloc_array(char, bytes);
     if ( buf == NULL )
         return X86EMUL_UNHANDLEABLE;
 
@@ -2037,7 +2037,7 @@ static int hvmemul_rep_stos(
         for ( ; ; )
         {
             bytes = *reps * bytes_per_rep;
-            buf = xmalloc_bytes(bytes);
+            buf = xmalloc_array(char, bytes);
             if ( buf || *reps <= 1 )
                 break;
             *reps >>= 1;
```
There is a difference in generated code: xmalloc_bytes() forces SMP_CACHE_BYTES alignment. I think we not only don't need this here, but actually don't want it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
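For illustration only, here is a minimal, self-contained C sketch (not Xen code) of the distinction the commit message relies on: a "bytes"-style allocator that hard-codes cache-line alignment versus a typed array allocator that only needs the element type's natural alignment. Every name in the sketch (demo_alloc_bytes, demo_alloc_array, DEMO_CACHE_BYTES) and the use of standard aligned_alloc() are assumptions for the demo; they merely stand in for the behaviour of xmalloc_bytes() and xmalloc_array() in Xen's xmalloc header.

```c
/*
 * Minimal sketch (NOT Xen code) of the alignment difference described above.
 * A "bytes" allocator always requests cache-line alignment; a typed array
 * allocator requests no more than the element type's own alignment.
 */
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DEMO_CACHE_BYTES 64  /* stand-in for SMP_CACHE_BYTES (assumed value) */

/* xmalloc_bytes()-style: unconditionally request cache-line alignment. */
static void *demo_alloc_bytes(size_t bytes)
{
    /* aligned_alloc() requires the size to be a multiple of the alignment. */
    size_t padded = (bytes + DEMO_CACHE_BYTES - 1) &
                    ~(size_t)(DEMO_CACHE_BYTES - 1);

    return aligned_alloc(DEMO_CACHE_BYTES, padded);
}

/* xmalloc_array()-style: alignment comes from the element type alone. */
#define demo_alloc_array(type, num) \
    ((type *)aligned_alloc(alignof(type), sizeof(type) * (num)))

int main(void)
{
    char *a = demo_alloc_bytes(100);        /* over-aligned to 64 bytes */
    char *b = demo_alloc_array(char, 100);  /* alignof(char) == 1 suffices */

    printf("bytes-style buffer at %p (offset within cache line: %zu)\n",
           (void *)a, (size_t)((uintptr_t)a % DEMO_CACHE_BYTES));
    printf("array-style buffer at %p\n", (void *)b);

    free(a);
    free(b);
    return 0;
}
```

The buffers in hvmemul_rep_movs() and hvmemul_rep_stos() are plain byte scratch space, so a char-typed array allocation loses nothing, and the forced cache-line alignment of the "bytes" form is exactly what the commit message argues is neither needed nor wanted here.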