x86/shadow: use single (atomic) MOV for emulated writes

Message ID 20200116202926.23230-1-jandryuk@gmail.com (mailing list archive)
State New, archived

Commit Message

Jason Andryuk Jan. 16, 2020, 8:29 p.m. UTC
This makes to the shadow code the change corresponding to what
bf08a8a08a2e "x86/HVM: use single (atomic) MOV for aligned emulated
writes" made to the non-shadow HVM code.

The bf08a8a08a2e commit message:
Using memcpy() may result in multiple individual byte accesses
(depending how memcpy() is implemented and how the resulting insns,
e.g. REP MOVSB, get carried out in hardware), which isn't what we
want/need for carrying out guest insns as correctly as possible. Fall
back to memcpy() only for accesses not 2, 4, or 8 bytes in size.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/arch/x86/mm/shadow/hvm.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

Comments

Tim Deegan Jan. 17, 2020, 7:20 a.m. UTC | #1
At 15:29 -0500 on 16 Jan (1579188566), Jason Andryuk wrote:
> This makes to the shadow code the change corresponding to what
> bf08a8a08a2e "x86/HVM: use single (atomic) MOV for aligned emulated
> writes" made to the non-shadow HVM code.
> 
> The bf08a8a08a2e commit message:
> Using memcpy() may result in multiple individual byte accesses
> (depending how memcpy() is implemented and how the resulting insns,
> e.g. REP MOVSB, get carried out in hardware), which isn't what we
> want/need for carrying out guest insns as correctly as possible. Fall
> back to memcpy() only for accesses not 2, 4, or 8 bytes in size.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Tim Deegan <tim@xen.org>
Patch

diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 48dfad4557..a219266fa2 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -215,7 +215,15 @@  hvm_emulate_write(enum x86_segment seg,
         return ~PTR_ERR(ptr);
 
     paging_lock(v->domain);
-    memcpy(ptr, p_data, bytes);
+
+    /* Where possible use single (and hence generally atomic) MOV insns. */
+    switch ( bytes )
+    {
+    case 2: write_u16_atomic(ptr, *(uint16_t *)p_data); break;
+    case 4: write_u32_atomic(ptr, *(uint32_t *)p_data); break;
+    case 8: write_u64_atomic(ptr, *(uint64_t *)p_data); break;
+    default: memcpy(ptr, p_data, bytes);                break;
+    }
 
     if ( tb_init_done )
         v->arch.paging.mode->shadow.trace_emul_write_val(ptr, addr,
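The idea behind the patch can be sketched outside Xen as well. The following is a minimal, self-contained illustration (not Xen's actual `write_u32_atomic()` implementation, which is built from inline assembly): a volatile store of the access width typically compiles to a single, naturally-atomic MOV on x86 for aligned addresses, whereas a plain `memcpy()` is free to split the write into byte accesses. The helper names here are hypothetical.

```c
#include <stdint.h>
#include <string.h>

/*
 * Sketch only: a volatile store of matching width generally compiles
 * to one MOV insn on x86, so an aligned write is performed as a single
 * access rather than being split up the way memcpy() may split it.
 */
static inline void write_u32_atomic_sketch(void *p, uint32_t v)
{
    *(volatile uint32_t *)p = v;
}

/* Mirrors the patch's structure: single MOV where possible, memcpy otherwise. */
static void emulated_write_sketch(void *ptr, const void *p_data,
                                  unsigned int bytes)
{
    switch ( bytes )
    {
    case 4:
    {
        uint32_t v;

        memcpy(&v, p_data, sizeof(v)); /* safe load of possibly unaligned source */
        write_u32_atomic_sketch(ptr, v);
        break;
    }
    default:
        /* Odd sizes fall back to memcpy(), which may use byte accesses. */
        memcpy(ptr, p_data, bytes);
        break;
    }
}
```

The 2- and 8-byte cases would follow the same pattern with `uint16_t`/`uint64_t` stores; the point is that each supported size is written with exactly one store instruction of that width.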