
[v9] x86/emulate: Send vm_event from emulate

Message ID 20190909153508.10847-1-aisaila@bitdefender.com (mailing list archive)
State Superseded
Series [v9] x86/emulate: Send vm_event from emulate

Commit Message

Alexandru Stefan ISAILA Sept. 9, 2019, 3:35 p.m. UTC
A/D bit writes (on page walks) can be considered benign by an introspection
agent, so receiving vm_events for them is a pessimization. We try here to
optimize by filtering these events out.
Currently, we are fully emulating the instruction at RIP when the hardware sees
an EPT fault with npfec.kind != npfec_kind_with_gla. This is, however,
incorrect, because the instruction at RIP might legitimately cause an
EPT fault of its own while accessing a _different_ page from the original one,
where the A/D bits were set.
The solution is to perform the whole emulation, while ignoring EPT restrictions
for the walk part, and taking them into account for the "actual" emulating of
the instruction at RIP. When we send out a vm_event, we don't want the emulation
to complete, since in that case we won't be able to veto whatever it is doing.
That would mean that we can't actually prevent any malicious activity, instead
we'd only be able to report on it.
When we see a "send-vm_event" case while emulating, we need to first send the
event out and then suspend the emulation (return X86EMUL_RETRY).
After the emulation stops, hvm_vm_event_do_resume() will be called once the
introspection agent handles the event and resumes the guest. There, the
instruction at RIP will be fully emulated (with the EPT ignored) if the
introspection application allows it, and the guest will continue to run past
the instruction.

A common example: the hardware exits because of an EPT fault caused by a
page walk, and p2m_mem_access_check() decides whether to send a vm_event.
If the vm_event was sent, then once it has been handled the instruction at
RIP is emulated; that instruction might itself hit a protected page and
provoke a second vm_event.

Now, if npfec.kind == npfec_kind_in_gpt and d->arch.monitor.inguest_pagefault_disabled
is true, we are in the page-walk case and can apply this optimization:
emulate the page walk while ignoring the EPT, but do not ignore the EPT for
the emulation of the actual instruction.

In the first case we would see two EPT events; in the second, only one
(assuming the instruction at RIP triggers an EPT event of its own).

We use hvmemul_map_linear_addr() to intercept r/w access and
__hvm_copy() to intercept exec access.

hvm_emulate_send_vm_event() can return false if there was no violation, or
if there was an error from monitor_traps() or p2m_get_mem_access().
Returning false when p2m_get_mem_access() fails is needed because in that
case the EPT entry will have rwx memory access rights.

NOTE: hvm_emulate_send_vm_event() assumes the caller will check
arch.vm_event->send_event

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V8:
	- Replace npfec.kind != npfec_kind_with_gla with
npfec.kind == npfec_kind_in_gpt
	- Update commit message
	- Add unlikely(curr->arch.vm_event)
	- Move goto out in the loop in hvmemul_map_linear_addr()
	- Rename hvm_emulate_send_vm_event() to hvm_monitor_check_ept()
and move it to hvm/monitor.c
---
 xen/arch/x86/hvm/emulate.c        | 17 ++++---
 xen/arch/x86/hvm/hvm.c            |  8 ++++
 xen/arch/x86/hvm/monitor.c        | 75 +++++++++++++++++++++++++++++++
 xen/arch/x86/mm/mem_access.c      |  3 +-
 xen/include/asm-x86/hvm/monitor.h |  3 ++
 xen/include/asm-x86/vm_event.h    |  2 +
 6 files changed, 101 insertions(+), 7 deletions(-)

Comments

Jan Beulich Sept. 11, 2019, 9:57 a.m. UTC | #1
On 09.09.2019 17:35, Alexandru Stefan ISAILA wrote:
> A/D bit writes (on page walks) can be considered benign by an introspection
> agent, so receiving vm_events for them is a pessimization. We try here to
> optimize by filtering these events out.
> Currently, we are fully emulating the instruction at RIP when the hardware sees
> an EPT fault with npfec.kind != npfec_kind_with_gla. This is, however,
> incorrect, because the instruction at RIP might legitimately cause an
> EPT fault of its own while accessing a _different_ page from the original one,
> where A/D were set.
> The solution is to perform the whole emulation, while ignoring EPT restrictions
> for the walk part, and taking them into account for the "actual" emulating of
> the instruction at RIP. When we send out a vm_event, we don't want the emulation
> to complete, since in that case we won't be able to veto whatever it is doing.
> That would mean that we can't actually prevent any malicious activity, instead
> we'd only be able to report on it.
> When we see a "send-vm_event" case while emulating, we need to first send the
> event out and then suspend the emulation (return X86EMUL_RETRY).
> After the emulation stops we'll call hvm_vm_event_do_resume() again after the
> introspection agent treats the event and resumes the guest. There, the
> instruction at RIP will be fully emulated (with the EPT ignored) if the
> introspection application allows it, and the guest will continue to run past
> the instruction.
> 
> A common example is if the hardware exits because of an EPT fault caused by a
> page walk, p2m_mem_access_check() decides if it is going to send a vm_event.
> If the vm_event was sent and it would be treated so it runs the instruction
> at RIP, that instruction might also hit a protected page and provoke a vm_event.
> 
> Now if npfec.kind == npfec_kind_in_gpt and d->arch.monitor.inguest_pagefault_disabled
> is true then we are in the page walk case and we can do this emulation optimization
> and emulate the page walk while ignoring the EPT, but don't ignore the EPT for the
> emulation of the actual instruction.
> 
> In the first case we would have 2 EPT events, in the second case we would have
> 1 EPT event if the instruction at the RIP triggers an EPT event.
> 
> We use hvmemul_map_linear_addr() to intercept r/w access and
> __hvm_copy() to intercept exec access.

Just like said for v8 - this doesn't look to match the implementation.

> hvm_emulate_send_vm_event() can return false if there was no violation,
> if there was an error from monitor_traps() or p2m_get_mem_access().
> Returning false if p2m_get_mem_access() fails is needed because the EPT
> entry will have rwx memory access rights.

I have to admit I still don't understand this reasoning, but I
guess I should leave it to the VM event maintainers to judge.
In particular it's unclear to me why p2m_get_mem_access()
failure would imply rwx access.

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -544,10 +544,11 @@ static void *hvmemul_map_linear_addr(
>      struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      struct vcpu *curr = current;
> -    void *err, *mapping;
> +    void *err = NULL, *mapping;

As also said during v8 review, I don't think this (and the related)
changes is needed anymore now that you've moved your new goto into
the loop.

> @@ -215,6 +217,79 @@ void hvm_monitor_interrupt(unsigned int vector, unsigned int type,
>      monitor_traps(current, 1, &req);
>  }
>  
> +/*
> + * Send memory access vm_events based on pfec. Returns true if the event was
> + * sent and false for p2m_get_mem_access() error, no violation and event send
> + * error. Assumes the caller will check arch.vm_event->send_event.
> + *
> + * NOTE: p2m_get_mem_access() can fail if the entry was not found in the EPT
> + * (in which case access to it is unrestricted, so no violations can occur).
> + * In that case it is fine to continue the emulation.
> + */
> +bool hvm_monitor_check_ept(unsigned long gla, gfn_t gfn, uint32_t pfec,
> +                           uint16_t kind)

Why did you choose to have "ept" in the name and also mention it
in the commit? Is there anything in here which isn't generic p2m?

> --- a/xen/arch/x86/mm/mem_access.c
> +++ b/xen/arch/x86/mm/mem_access.c
> @@ -212,8 +212,9 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>      }
>      if ( vm_event_check_ring(d->vm_event_monitor) &&
>           d->arch.monitor.inguest_pagefault_disabled &&
> -         npfec.kind != npfec_kind_with_gla ) /* don't send a mem_event */
> +         npfec.kind == npfec_kind_in_gpt ) /* don't send a mem_event */
>      {
> +        v->arch.vm_event->send_event = true;

Since I'm being puzzled every time I see this: The comment and
the line you add look to be in curious disagreement. Do you
perhaps want to extend it to include something like "right
away", or make it e.g. "try to avoid sending a mem event"?
Personally I think it wouldn't hurt to even mention the "why"
here.

Jan
Alexandru Stefan ISAILA Sept. 11, 2019, 10:39 a.m. UTC | #2
On 11.09.2019 12:57, Jan Beulich wrote:
> On 09.09.2019 17:35, Alexandru Stefan ISAILA wrote:
>> A/D bit writes (on page walks) can be considered benign by an introspection
>> agent, so receiving vm_events for them is a pessimization. We try here to
>> optimize by filtering these events out.
>> Currently, we are fully emulating the instruction at RIP when the hardware sees
>> an EPT fault with npfec.kind != npfec_kind_with_gla. This is, however,
>> incorrect, because the instruction at RIP might legitimately cause an
>> EPT fault of its own while accessing a _different_ page from the original one,
>> where A/D were set.
>> The solution is to perform the whole emulation, while ignoring EPT restrictions
>> for the walk part, and taking them into account for the "actual" emulating of
>> the instruction at RIP. When we send out a vm_event, we don't want the emulation
>> to complete, since in that case we won't be able to veto whatever it is doing.
>> That would mean that we can't actually prevent any malicious activity, instead
>> we'd only be able to report on it.
>> When we see a "send-vm_event" case while emulating, we need to first send the
>> event out and then suspend the emulation (return X86EMUL_RETRY).
>> After the emulation stops we'll call hvm_vm_event_do_resume() again after the
>> introspection agent treats the event and resumes the guest. There, the
>> instruction at RIP will be fully emulated (with the EPT ignored) if the
>> introspection application allows it, and the guest will continue to run past
>> the instruction.
>>
>> A common example is if the hardware exits because of an EPT fault caused by a
>> page walk, p2m_mem_access_check() decides if it is going to send a vm_event.
>> If the vm_event was sent and it would be treated so it runs the instruction
>> at RIP, that instruction might also hit a protected page and provoke a vm_event.
>>
>> Now if npfec.kind == npfec_kind_in_gpt and d->arch.monitor.inguest_pagefault_disabled
>> is true then we are in the page walk case and we can do this emulation optimization
>> and emulate the page walk while ignoring the EPT, but don't ignore the EPT for the
>> emulation of the actual instruction.
>>
>> In the first case we would have 2 EPT events, in the second case we would have
>> 1 EPT event if the instruction at the RIP triggers an EPT event.
>>
>> We use hvmemul_map_linear_addr() to intercept r/w access and
>> __hvm_copy() to intercept exec access.
> 
> Just like said for v8 - this doesn't look to match the implementation.
> 
>> hvm_emulate_send_vm_event() can return false if there was no violation,
>> if there was an error from monitor_traps() or p2m_get_mem_access().
>> Returning false if p2m_get_mem_access() fails is needed because the EPT
>> entry will have rwx memory access rights.
> 
> I have to admit I still don't understand this reasoning, but I
> guess I should leave it to the VM event maintainers to judge.
> In particular it's unclear to me why p2m_get_mem_access()
> failure would imply rwx access.
> 
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -544,10 +544,11 @@ static void *hvmemul_map_linear_addr(
>>       struct hvm_emulate_ctxt *hvmemul_ctxt)
>>   {
>>       struct vcpu *curr = current;
>> -    void *err, *mapping;
>> +    void *err = NULL, *mapping;
> 
> As also said during v8 review, I don't think this (and the related)
> changes is needed anymore now that you've moved your new goto into
> the loop.

I thought it was simpler to init err with NULL, but you are right, there is
no need for this in this patch. I will revert the changes.

> 
>> @@ -215,6 +217,79 @@ void hvm_monitor_interrupt(unsigned int vector, unsigned int type,
>>       monitor_traps(current, 1, &req);
>>   }
>>   
>> +/*
>> + * Send memory access vm_events based on pfec. Returns true if the event was
>> + * sent and false for p2m_get_mem_access() error, no violation and event send
>> + * error. Assumes the caller will check arch.vm_event->send_event.
>> + *
>> + * NOTE: p2m_get_mem_access() can fail if the entry was not found in the EPT
>> + * (in which case access to it is unrestricted, so no violations can occur).
>> + * In that case it is fine to continue the emulation.
>> + */
>> +bool hvm_monitor_check_ept(unsigned long gla, gfn_t gfn, uint32_t pfec,
>> +                           uint16_t kind)
> 
> Why did you choose to have "ept" in the name and also mention it
> in the commit? Is there anything in here which isn't generic p2m?

The name was suggested by Razvan Cojocaru. I have no preference in the 
name. Maybe Tamas can suggest a good one.

> 
>> --- a/xen/arch/x86/mm/mem_access.c
>> +++ b/xen/arch/x86/mm/mem_access.c
>> @@ -212,8 +212,9 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>>       }
>>       if ( vm_event_check_ring(d->vm_event_monitor) &&
>>            d->arch.monitor.inguest_pagefault_disabled &&
>> -         npfec.kind != npfec_kind_with_gla ) /* don't send a mem_event */
>> +         npfec.kind == npfec_kind_in_gpt ) /* don't send a mem_event */
>>       {
>> +        v->arch.vm_event->send_event = true;
> 
> Since I'm being puzzled every time I see this: The comment and
> the line you add look to be in curious disagreement. Do you
> perhaps want to extend it to include something like "right
> away", or make it e.g. "try to avoid sending a mem event"?
> Personally I think it wouldn't hurt to even mention the "why"
> here.

I agree, I will update that comment.

Thanks,
Alex
Razvan Cojocaru Sept. 11, 2019, 11:21 a.m. UTC | #3
On 9/11/19 1:39 PM, Alexandru Stefan ISAILA wrote:
> 
> 
> On 11.09.2019 12:57, Jan Beulich wrote:
>> On 09.09.2019 17:35, Alexandru Stefan ISAILA wrote:
>>> +/*
>>> + * Send memory access vm_events based on pfec. Returns true if the event was
>>> + * sent and false for p2m_get_mem_access() error, no violation and event send
>>> + * error. Assumes the caller will check arch.vm_event->send_event.
>>> + *
>>> + * NOTE: p2m_get_mem_access() can fail if the entry was not found in the EPT
>>> + * (in which case access to it is unrestricted, so no violations can occur).
>>> + * In that case it is fine to continue the emulation.
>>> + */
>>> +bool hvm_monitor_check_ept(unsigned long gla, gfn_t gfn, uint32_t pfec,
>>> +                           uint16_t kind)
>>
>> Why did you choose to have "ept" in the name and also mention it
>> in the commit? Is there anything in here which isn't generic p2m?
> 
> The name was suggested by Razvan Cojocaru. I have no preference in the
> name. Maybe Tamas can suggest a good one.

I've suggested "ept" in the name because "regular" emulation ignores it, 
and this function takes it into account, hence the "check_ept" (which I 
thought would be read together). It's fine to change it to something else.


Thanks,
Razvan
Jan Beulich Sept. 11, 2019, 11:41 a.m. UTC | #4
On 11.09.2019 13:21, Razvan Cojocaru wrote:
> On 9/11/19 1:39 PM, Alexandru Stefan ISAILA wrote:
>>
>>
>> On 11.09.2019 12:57, Jan Beulich wrote:
>>> On 09.09.2019 17:35, Alexandru Stefan ISAILA wrote:
>>>> +/*
>>>> + * Send memory access vm_events based on pfec. Returns true if the event was
>>>> + * sent and false for p2m_get_mem_access() error, no violation and event send
>>>> + * error. Assumes the caller will check arch.vm_event->send_event.
>>>> + *
>>>> + * NOTE: p2m_get_mem_access() can fail if the entry was not found in the EPT
>>>> + * (in which case access to it is unrestricted, so no violations can occur).
>>>> + * In that case it is fine to continue the emulation.
>>>> + */
>>>> +bool hvm_monitor_check_ept(unsigned long gla, gfn_t gfn, uint32_t pfec,
>>>> +                           uint16_t kind)
>>>
>>> Why did you choose to have "ept" in the name and also mention it
>>> in the commit? Is there anything in here which isn't generic p2m?
>>
>> The name was suggested by Razvan Cojocaru. I have no preference in the
>> name. Maybe Tamas can suggest a good one.
> 
> I've suggested "ept" in the name because "regular" emulation ignores it, 
> and this function takes it into account, hence the "check_ept" (which I 
> thought would be read together). It's fine to change it to something else.

Then "check_p2m" perhaps?

Jan
Razvan Cojocaru Sept. 11, 2019, 11:44 a.m. UTC | #5
On 9/11/19 2:41 PM, Jan Beulich wrote:
> On 11.09.2019 13:21, Razvan Cojocaru wrote:
>> On 9/11/19 1:39 PM, Alexandru Stefan ISAILA wrote:
>>>
>>>
>>> On 11.09.2019 12:57, Jan Beulich wrote:
>>>> On 09.09.2019 17:35, Alexandru Stefan ISAILA wrote:
>>>>> +/*
>>>>> + * Send memory access vm_events based on pfec. Returns true if the event was
>>>>> + * sent and false for p2m_get_mem_access() error, no violation and event send
>>>>> + * error. Assumes the caller will check arch.vm_event->send_event.
>>>>> + *
>>>>> + * NOTE: p2m_get_mem_access() can fail if the entry was not found in the EPT
>>>>> + * (in which case access to it is unrestricted, so no violations can occur).
>>>>> + * In that case it is fine to continue the emulation.
>>>>> + */
>>>>> +bool hvm_monitor_check_ept(unsigned long gla, gfn_t gfn, uint32_t pfec,
>>>>> +                           uint16_t kind)
>>>>
>>>> Why did you choose to have "ept" in the name and also mention it
>>>> in the commit? Is there anything in here which isn't generic p2m?
>>>
>>> The name was suggested by Razvan Cojocaru. I have no preference in the
>>> name. Maybe Tamas can suggest a good one.
>>
>> I've suggested "ept" in the name because "regular" emulation ignores it,
>> and this function takes it into account, hence the "check_ept" (which I
>> thought would be read together). It's fine to change it to something else.
> 
> Then "check_p2m" perhaps?

Sounds good to me.


Thanks,
Razvan

Patch

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index e4b3f330a8..dcbddb925f 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -544,10 +544,11 @@  static void *hvmemul_map_linear_addr(
     struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     struct vcpu *curr = current;
-    void *err, *mapping;
+    void *err = NULL, *mapping;
     unsigned int nr_frames = ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
         (linear >> PAGE_SHIFT) + 1;
     unsigned int i;
+    gfn_t gfn;
 
     /*
      * mfn points to the next free slot.  All used slots have a page reference
@@ -582,7 +583,7 @@  static void *hvmemul_map_linear_addr(
         ASSERT(mfn_x(*mfn) == 0);
 
         res = hvm_translate_get_page(curr, addr, true, pfec,
-                                     &pfinfo, &page, NULL, &p2mt);
+                                     &pfinfo, &page, &gfn, &p2mt);
 
         switch ( res )
         {
@@ -596,7 +597,6 @@  static void *hvmemul_map_linear_addr(
             goto out;
 
         case HVMTRANS_bad_gfn_to_mfn:
-            err = NULL;
             goto out;
 
         case HVMTRANS_gfn_paged_out:
@@ -619,13 +619,18 @@  static void *hvmemul_map_linear_addr(
             }
 
             if ( p2mt == p2m_ioreq_server )
-            {
-                err = NULL;
                 goto out;
-            }
 
             ASSERT(p2mt == p2m_ram_logdirty || !p2m_is_readonly(p2mt));
         }
+
+        if ( unlikely(curr->arch.vm_event) &&
+             curr->arch.vm_event->send_event &&
+             hvm_monitor_check_ept(addr, gfn, pfec, npfec_kind_with_gla) )
+        {
+            err = ERR_PTR(~X86EMUL_RETRY);
+            goto out;
+        }
     }
 
     /* Entire access within a single frame? */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2b8189946b..c859bbe9cf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3224,6 +3224,14 @@  static enum hvm_translation_result __hvm_copy(
             return HVMTRANS_bad_gfn_to_mfn;
         }
 
+        if ( unlikely(v->arch.vm_event) &&
+             v->arch.vm_event->send_event &&
+             hvm_monitor_check_ept(addr, gfn, pfec, npfec_kind_with_gla) )
+        {
+            put_page(page);
+            return HVMTRANS_gfn_paged_out;
+        }
+
         p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
 
         if ( flags & HVMCOPY_to_guest )
diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
index 2a41ccc930..1071f1c7a9 100644
--- a/xen/arch/x86/hvm/monitor.c
+++ b/xen/arch/x86/hvm/monitor.c
@@ -23,8 +23,10 @@ 
  */
 
 #include <xen/vm_event.h>
+#include <xen/mem_access.h>
 #include <xen/monitor.h>
 #include <asm/hvm/monitor.h>
+#include <asm/altp2m.h>
 #include <asm/monitor.h>
 #include <asm/paging.h>
 #include <asm/vm_event.h>
@@ -215,6 +217,79 @@  void hvm_monitor_interrupt(unsigned int vector, unsigned int type,
     monitor_traps(current, 1, &req);
 }
 
+/*
+ * Send memory access vm_events based on pfec. Returns true if the event was
+ * sent, and false on p2m_get_mem_access() error, when there was no violation,
+ * or when sending the event failed. Assumes the caller checks arch.vm_event->send_event.
+ *
+ * NOTE: p2m_get_mem_access() can fail if the entry was not found in the EPT
+ * (in which case access to it is unrestricted, so no violations can occur).
+ * In that case it is fine to continue the emulation.
+ */
+bool hvm_monitor_check_ept(unsigned long gla, gfn_t gfn, uint32_t pfec,
+                           uint16_t kind)
+{
+    xenmem_access_t access;
+    vm_event_request_t req = {};
+    paddr_t gpa = (gfn_to_gaddr(gfn) | (gla & ~PAGE_MASK));
+
+    ASSERT(current->arch.vm_event->send_event);
+
+    current->arch.vm_event->send_event = false;
+
+    if ( p2m_get_mem_access(current->domain, gfn, &access,
+                            altp2m_vcpu_idx(current)) != 0 )
+        return false;
+
+    switch ( access )
+    {
+    case XENMEM_access_x:
+    case XENMEM_access_rx:
+        if ( pfec & PFEC_write_access )
+            req.u.mem_access.flags = MEM_ACCESS_R | MEM_ACCESS_W;
+        break;
+
+    case XENMEM_access_w:
+    case XENMEM_access_rw:
+        if ( pfec & PFEC_insn_fetch )
+            req.u.mem_access.flags = MEM_ACCESS_X;
+        break;
+
+    case XENMEM_access_r:
+    case XENMEM_access_n:
+        if ( pfec & PFEC_write_access )
+            req.u.mem_access.flags |= MEM_ACCESS_R | MEM_ACCESS_W;
+        if ( pfec & PFEC_insn_fetch )
+            req.u.mem_access.flags |= MEM_ACCESS_X;
+        break;
+
+    case XENMEM_access_wx:
+    case XENMEM_access_rwx:
+    case XENMEM_access_rx2rw:
+    case XENMEM_access_n2rwx:
+    case XENMEM_access_default:
+        break;
+    }
+
+    if ( !req.u.mem_access.flags )
+        return false; /* no violation */
+
+    if ( kind == npfec_kind_with_gla )
+        req.u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA |
+                                  MEM_ACCESS_GLA_VALID;
+    else if ( kind == npfec_kind_in_gpt )
+        req.u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT |
+                                  MEM_ACCESS_GLA_VALID;
+
+
+    req.reason = VM_EVENT_REASON_MEM_ACCESS;
+    req.u.mem_access.gfn = gfn_x(gfn);
+    req.u.mem_access.gla = gla;
+    req.u.mem_access.offset = gpa & ~PAGE_MASK;
+
+    return monitor_traps(current, true, &req) >= 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 0144f92b98..c1e21402a6 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -212,8 +212,9 @@  bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     }
     if ( vm_event_check_ring(d->vm_event_monitor) &&
          d->arch.monitor.inguest_pagefault_disabled &&
-         npfec.kind != npfec_kind_with_gla ) /* don't send a mem_event */
+         npfec.kind == npfec_kind_in_gpt ) /* don't send a mem_event */
     {
+        v->arch.vm_event->send_event = true;
         hvm_emulate_one_vm_event(EMUL_KIND_NORMAL, TRAP_invalid_op, X86_EVENT_NO_EC);
 
         return true;
diff --git a/xen/include/asm-x86/hvm/monitor.h b/xen/include/asm-x86/hvm/monitor.h
index f1af4f812a..3dd4906f67 100644
--- a/xen/include/asm-x86/hvm/monitor.h
+++ b/xen/include/asm-x86/hvm/monitor.h
@@ -49,6 +49,9 @@  void hvm_monitor_interrupt(unsigned int vector, unsigned int type,
                            unsigned int err, uint64_t cr2);
 bool hvm_monitor_emul_unimplemented(void);
 
+bool hvm_monitor_check_ept(unsigned long gla, gfn_t gfn, uint32_t pfec,
+                           uint16_t kind);
+
 #endif /* __ASM_X86_HVM_MONITOR_H__ */
 
 /*
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 23e655710b..66db9e1e25 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -36,6 +36,8 @@  struct arch_vm_event {
     bool set_gprs;
     /* A sync vm_event has been sent and we're not done handling it. */
     bool sync_event;
+    /* Send mem access events from emulator */
+    bool send_event;
 };
 
 int vm_event_init_domain(struct domain *d);