Message ID: 1c3d573d-051f-3d18-cd63-6ccad5911786@suse.com
State: New, archived
Series: assorted replacement of x[mz]alloc_bytes()
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1924,7 +1924,7 @@ static int hvmemul_rep_movs(
         dgpa -= bytes - bytes_per_rep;

     /* Allocate temporary buffer. Fall back to slow emulation if this fails. */
-    buf = xmalloc_bytes(bytes);
+    buf = xmalloc_array(char, bytes);
     if ( buf == NULL )
         return X86EMUL_UNHANDLEABLE;

@@ -2037,7 +2037,7 @@ static int hvmemul_rep_stos(
     for ( ; ; )
     {
         bytes = *reps * bytes_per_rep;
-        buf = xmalloc_bytes(bytes);
+        buf = xmalloc_array(char, bytes);
         if ( buf || *reps <= 1 )
             break;
         *reps >>= 1;
There is a difference in generated code: xmalloc_bytes() forces SMP_CACHE_BYTES alignment. But if code really cared about such higher-than-default alignment, it should request it explicitly rather than relying on a type-unsafe interface. And if e.g. cache line sharing was a concern, the allocator itself should arrange to avoid it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>