[v6,1/6] x86/hvm: allow ASID flush when v != current

Message ID 20200303172046.50569-2-roger.pau@citrix.com
  • x86: improve assisted tlb flush and use it in guest mode

Commit Message

Roger Pau Monné March 3, 2020, 5:20 p.m. UTC
Current implementation of hvm_asid_flush_vcpu is not safe to use
unless the target vCPU is either paused or the currently running one,
as it modifies the generation without any locking.

Fix this by using atomic operations when accessing the generation
field, both in hvm_asid_flush_vcpu_asid and in the other ASID
functions. This allows the current ASID generation to be flushed
safely. Note that for the flush to take effect while the vCPU is
currently running, a vmexit is required.

Compilers will normally perform such reads and writes as single
instructions, so the atomic operations serve mostly as a safety
measure.
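
The generation-based scheme described above can be illustrated with a
minimal userspace sketch using C11 atomics. The struct name, the
single shared generation counter, and the helper names here are
illustrative simplifications, not Xen's actual code (Xen uses its own
read_atomic/write_atomic helpers and per-core generation data):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Simplified stand-in for Xen's struct hvm_vcpu_asid. */
struct hvm_vcpu_asid_sketch {
    _Atomic uint64_t generation;
    uint32_t asid;
};

/* Simplified stand-ins for the per-core ASID allocator state. */
static uint64_t core_asid_generation = 1;
static uint32_t next_asid = 1;

/*
 * Flush: mark the vCPU's ASID stale. The atomic store is what makes
 * this safe to call for a vCPU running on another CPU.
 */
static void flush_vcpu_asid(struct hvm_vcpu_asid_sketch *a)
{
    atomic_store(&a->generation, 0);
}

/*
 * On vmentry: if the generation is stale (e.g. zeroed by a flush),
 * allocate a fresh ASID. Returns 1 if a TLB flush is needed.
 */
static int handle_vmenter(struct hvm_vcpu_asid_sketch *a)
{
    if ( atomic_load(&a->generation) == core_asid_generation )
        return 0;               /* ASID still valid, nothing to do. */
    a->asid = next_asid++;
    atomic_store(&a->generation, core_asid_generation);
    return 1;
}
```

The point of the patch is visible in flush_vcpu_asid: because the
store to generation is atomic, a remote CPU can zero it without
racing against the vmentry-path read, and the vCPU picks up a fresh
ASID on its next vmentry.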

Note that the same could be achieved by introducing an extra field to
hvm_vcpu_asid that signals to hvm_asid_handle_vmenter the need to call
hvm_asid_flush_vcpu on the given vCPU before vmentry; this however
seems unnecessary, as hvm_asid_flush_vcpu itself only sets two vCPU
fields to 0, so there's no need to delay this to the vmentry ASID
handling.

This is not a bugfix as no callers that would violate the assumptions
listed in the first paragraph have been found, but a preparatory
change in order to allow remote flushing of HVM vCPUs.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Wei Liu <wl@xen.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
 xen/arch/x86/hvm/asid.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/asid.c b/xen/arch/x86/hvm/asid.c
index 8e00a28443..63ce462d56 100644
--- a/xen/arch/x86/hvm/asid.c
+++ b/xen/arch/x86/hvm/asid.c
@@ -83,7 +83,7 @@  void hvm_asid_init(int nasids)
 
 void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid)
 {
-    asid->generation = 0;
+    write_atomic(&asid->generation, 0);
 }
 
 void hvm_asid_flush_vcpu(struct vcpu *v)
@@ -121,7 +121,7 @@  bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
         goto disabled;
 
     /* Test if VCPU has valid ASID. */
-    if ( asid->generation == data->core_asid_generation )
+    if ( read_atomic(&asid->generation) == data->core_asid_generation )
         return 0;
 
     /* If there are no free ASIDs, need to go to a new generation */
@@ -135,7 +135,7 @@  bool_t hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
 
     /* Now guaranteed to be a free ASID. */
     asid->asid = data->next_asid++;
-    asid->generation = data->core_asid_generation;
+    write_atomic(&asid->generation, data->core_asid_generation);
 
     /*
      * When we assign ASID 1, flush all TLB entries as we are starting a new