
[3/7] vm-event: introduce vm_event_vcpu_enter

Message ID 1466086127-7563-1-git-send-email-czuzu@bitdefender.com (mailing list archive)
State New, archived

Commit Message

Corneliu ZUZU June 16, 2016, 2:08 p.m. UTC
In an effort to improve on the vm-event interface, we introduce a new function
called vm_event_vcpu_enter. Its significance is that of a "final touch" vCPU
function - i.e. it should be called by implementing architectures just before
re-entering vCPUs.
On x86, for example, it is called on the scheduling tail (hvm_do_resume) and just
before re-entering the guest world after a hypervisor trap (vmx_vmenter_helper).

Signed-off-by: Corneliu ZUZU <czuzu@bitdefender.com>
---
 xen/arch/arm/domain.c          |  5 ++++-
 xen/arch/arm/traps.c           |  2 ++
 xen/arch/x86/hvm/emulate.c     |  2 +-
 xen/arch/x86/hvm/event.c       |  1 -
 xen/arch/x86/hvm/hvm.c         |  3 ++-
 xen/arch/x86/hvm/vmx/vmx.c     |  4 ++++
 xen/arch/x86/mm/p2m.c          |  1 -
 xen/arch/x86/vm_event.c        |  4 +---
 xen/common/monitor.c           |  2 +-
 xen/common/vm_event.c          |  1 -
 xen/include/asm-arm/vm_event.h |  6 +++++-
 xen/include/asm-x86/vm_event.h |  6 +++++-
 xen/include/xen/vm_event.h     | 15 +++++++++++++++
 13 files changed, 40 insertions(+), 12 deletions(-)
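The intended semantics of the new hook can be sketched as a minimal, self-contained model (the names mirror the patch, but `struct vcpu` here is a stub for illustration, not Xen's real structure):

```c
#include <stdbool.h>

/* Stub vcpu: only the fields this sketch needs. */
struct vcpu {
    bool is_idle;
    int enter_hook_calls;   /* instrumentation for the sketch only */
};

static bool is_idle_vcpu(const struct vcpu *v)
{
    return v->is_idle;
}

/* Per-arch hook; a no-op on both x86 and ARM in this patch. */
static void arch_vm_event_vcpu_enter(struct vcpu *v)
{
    v->enter_hook_calls++;
}

/* "Final touch" hook: skip idle vCPUs, then defer to the arch. */
static void vm_event_vcpu_enter(struct vcpu *v)
{
    if ( is_idle_vcpu(v) )
        return;
    arch_vm_event_vcpu_enter(v);
}
```

The real implementation (see the patch below) is exactly this shape: a common idle-vCPU filter in xen/vm_event.h delegating to an arch_vm_event_vcpu_enter stub per architecture.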

Comments

Jan Beulich June 16, 2016, 2:51 p.m. UTC | #1
>>> On 16.06.16 at 16:08, <czuzu@bitdefender.com> wrote:
> @@ -509,6 +508,8 @@ void hvm_do_resume(struct vcpu *v)
>          }
>      }
>  
> +    vm_event_vcpu_enter(v);

Why here?

> @@ -3874,6 +3875,9 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
>      }
>  
>   out:
> +    if ( guest_mode(regs) )
> +        vm_event_vcpu_enter(curr);
> +
>      HVMTRACE_ND(VMENTRY, 0, 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
>  
>      __vmwrite(GUEST_RIP,    regs->rip);

Why with a conditional? The registers can't possibly hold non-guest
state when you're here.

> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -36,7 +36,6 @@
>  #include <asm/hvm/nestedhvm.h>
>  #include <asm/altp2m.h>
>  #include <asm/hvm/svm/amd-iommu-proto.h>
> -#include <asm/vm_event.h>
>  #include <xsm/xsm.h>

There are way too many of these #include adjustments here. If
you really mean to clean these up, please don't randomly throw
this into various unrelated patches.

> --- a/xen/common/monitor.c
> +++ b/xen/common/monitor.c
> @@ -22,8 +22,8 @@
>  #include <xen/monitor.h>
>  #include <xen/sched.h>
>  #include <xsm/xsm.h>
> +#include <xen/vm_event.h>
>  #include <asm/monitor.h>
> -#include <asm/vm_event.h>

Please retain at least basic grouping (i.e. all xen/ ones together,
meaning the insertion should move up by one line).

Jan
Tamas K Lengyel June 16, 2016, 4:17 p.m. UTC | #2
On Thu, Jun 16, 2016 at 8:08 AM, Corneliu ZUZU <czuzu@bitdefender.com> wrote:
> In an effort to improve on the vm-event interface, we introduce a new function
> called vm_event_vcpu_enter. Its significance is that of a "final touch" vCPU
> function - i.e. it should be called by implementing architectures just before
> re-entering vCPUs.
> On X86 for example, it is called on the scheduling tail (hvm_do_resume) and just
> before reentering the guest world after a hypervisor trap (vmx_vmenter_helper).
>

I don't think this belongs to the vm_event system. It should probably
be in the monitor subsystem as it has nothing to do with passing
messages, which is essentially what vm_event is for.

Tamas
Corneliu ZUZU June 16, 2016, 8:10 p.m. UTC | #3
On 6/16/2016 5:51 PM, Jan Beulich wrote:
>>>> On 16.06.16 at 16:08, <czuzu@bitdefender.com> wrote:
>> @@ -509,6 +508,8 @@ void hvm_do_resume(struct vcpu *v)
>>           }
>>       }
>>   
>> +    vm_event_vcpu_enter(v);
> Why here?
Why indeed. It made sense because monitor_write_data handling was 
originally there and then the plan was to move it to vm_event_vcpu_enter 
(which happens in the following commit).
The question is though, why was monitor_write_data handled there in the 
first place? Why was it not put e.g. in vmx_do_resume immediately after 
the call to hvm_do_resume and just before
the reset_stack_and_jump...? And what happens with handling 
monitor_write_data if this:

if ( !handle_hvm_io_completion(v) )
         return;

causes a return?
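The concern can be made concrete with a small control-flow model (all names here are hypothetical stand-ins for the real hvm_do_resume logic, not Xen code):

```c
#include <stdbool.h>

static int hook_calls;      /* counts vm_event_vcpu_enter() calls */

/* Stand-in for handle_hvm_io_completion(): false = IO still pending. */
static bool io_completion_done(bool io_pending)
{
    return !io_pending;
}

static void vm_event_hook(void)
{
    hook_calls++;
}

/* Simplified shape of hvm_do_resume(): the early return on pending IO
 * skips everything below it, including any monitor_write_data handling
 * or vm-event hook placed after it. */
static void do_resume_model(bool io_pending)
{
    if ( !io_completion_done(io_pending) )
        return;             /* <-- everything below is skipped */
    vm_event_hook();
}
```

In the model, a resume with IO pending never reaches the hook; the question in the thread is whether the real code path later re-enters do_resume once the IO completes (Razvan's reply suggests it does).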

>> @@ -3874,6 +3875,9 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
>>       }
>>   
>>    out:
>> +    if ( guest_mode(regs) )
>> +        vm_event_vcpu_enter(curr);
>> +
>>       HVMTRACE_ND(VMENTRY, 0, 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
>>   
>>       __vmwrite(GUEST_RIP,    regs->rip);
> Why with a conditional? The registers can't possibly hold non-guest
> state when you're here.

Right, I must've mistakenly taken that from the ARM side at some point. There 
it made sense; here it doesn't.
I'll remove it in V2.

>
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -36,7 +36,6 @@
>>   #include <asm/hvm/nestedhvm.h>
>>   #include <asm/altp2m.h>
>>   #include <asm/hvm/svm/amd-iommu-proto.h>
>> -#include <asm/vm_event.h>
>>   #include <xsm/xsm.h>
> There are way too many of these #include adjustments here. If
> you really mean to clean these up, please don't randomly throw
> this into various unrelated patches.

I haven't thrown anything "randomly into unrelated patches", please 
first ask for my reasoning and then draw such conclusions.
That was removed because xen/vm_event.h includes asm/vm_event.h with 
this patch (because it calls arch_vm_event_vcpu_enter) and this file 
(p2m.c) already included xen/vm_event.h.

>> --- a/xen/common/monitor.c
>> +++ b/xen/common/monitor.c
>> @@ -22,8 +22,8 @@
>>   #include <xen/monitor.h>
>>   #include <xen/sched.h>
>>   #include <xsm/xsm.h>
>> +#include <xen/vm_event.h>
>>   #include <asm/monitor.h>
>> -#include <asm/vm_event.h>
> Please retain at least basic grouping (i.e. all xen/ ones together,
> meaning the insertion should move up by one line).
>
> Jan
>
>

I usually do that, in this case I just didn't notice. Acked.

Corneliu.
Razvan Cojocaru June 16, 2016, 8:33 p.m. UTC | #4
On 06/16/16 23:10, Corneliu ZUZU wrote:
> On 6/16/2016 5:51 PM, Jan Beulich wrote:
>>>>> On 16.06.16 at 16:08, <czuzu@bitdefender.com> wrote:
>>> @@ -509,6 +508,8 @@ void hvm_do_resume(struct vcpu *v)
>>>           }
>>>       }
>>>   +    vm_event_vcpu_enter(v);
>> Why here?
> Why indeed. It made sense because monitor_write_data handling was
> originally there and then the plan was to move it to vm_event_vcpu_enter
> (which happens in the following commit).
> The question is though, why was monitor_write_data handled there in the
> first place? Why was it not put e.g. in vmx_do_resume immediately after
> the call to hvm_do_resume and just before
> the reset_stack_and_jump...? And what happens with handling
> monitor_write_data if this:
> 
> if ( !handle_hvm_io_completion(v) )
>         return;
> 
> causes a return?

It's in hvm_do_resume() because, for one, that's the place that was
suggested (or at least confirmed when I proposed it for such things)
on this list back when I wrote the code. And then it's here because
vmx_do_things()-type functions are, well, VMX, and I had hoped that by
choosing hvm-prefixed functions I'd get SVM support for free.

As for the handle_hvm_io_completion(v) return, my understanding was that
that would eventually cause another exit, and eventually we'd get to the
code below once the IO part is done.


Thanks,
Razvan
Jan Beulich June 17, 2016, 7:17 a.m. UTC | #5
>>> On 16.06.16 at 22:10, <czuzu@bitdefender.com> wrote:
> On 6/16/2016 5:51 PM, Jan Beulich wrote:
>>>>> On 16.06.16 at 16:08, <czuzu@bitdefender.com> wrote:
>>> @@ -509,6 +508,8 @@ void hvm_do_resume(struct vcpu *v)
>>>           }
>>>       }
>>>   
>>> +    vm_event_vcpu_enter(v);
>> Why here?
> Why indeed. It made sense because monitor_write_data handling was 
> originally there and then the plan was to move it to vm_event_vcpu_enter 
> (which happens in the following commit).
> The question is though, why was monitor_write_data handled there in the 
> first place? Why was it not put e.g. in vmx_do_resume immediately after 
> the call to hvm_do_resume and just before
> the reset_stack_and_jump...? And what happens with handling 
> monitor_write_data if this:
> 
> if ( !handle_hvm_io_completion(v) )
>          return;
> 
> causes a return?

I see Razvan responded to this. I don't have a strong opinion
either way; my only request is for the call to be in exactly one
place.

>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -36,7 +36,6 @@
>>>   #include <asm/hvm/nestedhvm.h>
>>>   #include <asm/altp2m.h>
>>>   #include <asm/hvm/svm/amd-iommu-proto.h>
>>> -#include <asm/vm_event.h>
>>>   #include <xsm/xsm.h>
>> There are way too many of these #include adjustments here. If
>> you really mean to clean these up, please don't randomly throw
>> this into various unrelated patches.
> 
> I haven't thrown anything "randomly into unrelated patches", please 
> first ask for my reasoning and then draw such conclusions.

See patch 1. Plus I don't think I (or in fact any reviewer) should ask
for such reasoning: Instead you should state extra cleanup you do
to unrelated (to the purpose of your patch) files in the description.
Or even better, split it off to a follow-on, purely cleanup patch.
(And to be clear, I much appreciate any form of reduction of the
sometimes extremely long lists of #include-s, just not [apparently
or really] randomly mixed with other, substantial changes. That's
namely because it's not clear whether source files should explicitly
include everything they need, or instead be allowed to rely on
headers they include to include further headers they also
_explicitly_ rely on. IOW there's likely a discussion to be had for
this kind of cleanup, and such a discussion should be a separate
thread from the one on the functional adjustments here.)

Jan

> That was removed because xen/vm_event.h includes asm/vm_event.h with 
> this patch (because it calls arch_vm_event_vcpu_enter) and this file 
> (p2m.c) already included xen/vm_event.h.
Julien Grall June 17, 2016, 8:55 a.m. UTC | #6
Hello,

On 16/06/16 15:08, Corneliu ZUZU wrote:
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index d31f821..ba248c8 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -19,6 +19,7 @@
>   #include <xen/errno.h>
>   #include <xen/bitops.h>
>   #include <xen/grant_table.h>
> +#include <xen/vm_event.h>
>
>   #include <asm/current.h>
>   #include <asm/event.h>
> @@ -251,6 +252,8 @@ static void schedule_tail(struct vcpu *prev)
>
>       ctxt_switch_to(current);
>
> +    vm_event_vcpu_enter(current);
> +
>       local_irq_enable();
>
>       context_saved(prev);
> @@ -296,7 +299,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>
>   void continue_running(struct vcpu *same)
>   {
> -    /* Nothing to do */
> +    vm_event_vcpu_enter(same);
>   }
>
>   void sync_local_execstate(void)

 From my understanding of the commit message, vm_event_vcpu_enter should 
be called before returning to the guest. The scheduling functions are 
not called every time Xen returns to the guest. So if you want to do 
this every time Xen re-enters the guest, then it should be done in 
leave_hypervisor_tail.

Regards,
Corneliu ZUZU June 17, 2016, 9:19 a.m. UTC | #7
On 6/16/2016 7:17 PM, Tamas K Lengyel wrote:
> On Thu, Jun 16, 2016 at 8:08 AM, Corneliu ZUZU <czuzu@bitdefender.com> wrote:
>> In an effort to improve on the vm-event interface, we introduce a new function
>> called vm_event_vcpu_enter. Its significance is that of a "final touch" vCPU
>> function - i.e. it should be called by implementing architectures just before
>> re-entering vCPUs.
>> On X86 for example, it is called on the scheduling tail (hvm_do_resume) and just
>> before reentering the guest world after a hypervisor trap (vmx_vmenter_helper).
>>
> I don't think this belongs to the vm_event system. It should probably
> be in the monitor subsystem as it has nothing to do with passing
> messages, which is essentially what vm_event is for.
>
> Tamas
>

I figured its usage is Xen-internal and it's meant to be of eventual 
use - whenever the need arises to do something just before a vCPU is 
re-entered from the vm-event subsystem, you can simply adjust the 
implementation of that function.
I thought non-monitor vm-events could also be 'eventual users'. But then 
again by this reasoning I could broaden its significance even more, 
outside the vm-event world and make it 'usable' by any parts of code in Xen.
Shall I then rename it to vm_event_monitor_vcpu_enter and put it 
alongside the other vm_event_monitor_* functions?

Corneliu.
Corneliu ZUZU June 17, 2016, 10:41 a.m. UTC | #8
On 6/16/2016 11:33 PM, Razvan Cojocaru wrote:
> On 06/16/16 23:10, Corneliu ZUZU wrote:
>> On 6/16/2016 5:51 PM, Jan Beulich wrote:
>>>>>> On 16.06.16 at 16:08, <czuzu@bitdefender.com> wrote:
>>>> @@ -509,6 +508,8 @@ void hvm_do_resume(struct vcpu *v)
>>>>            }
>>>>        }
>>>>    +    vm_event_vcpu_enter(v);
>>> Why here?
>> Why indeed. It made sense because monitor_write_data handling was
>> originally there and then the plan was to move it to vm_event_vcpu_enter
>> (which happens in the following commit).
>> The question is though, why was monitor_write_data handled there in the
>> first place? Why was it not put e.g. in vmx_do_resume immediately after
>> the call to hvm_do_resume and just before
>> the reset_stack_and_jump...? And what happens with handling
>> monitor_write_data if this:
>>
>> if ( !handle_hvm_io_completion(v) )
>>          return;
>>
>> causes a return?
> It's in hvm_do_resume() because, for one, that's the place that was
> suggested (or at least confirmed when I've proposed it for such things)
> on this list back when I wrote the code. And then it's here because
> vmx_do_things()-type functions are, well, VMX, and I had hoped that by
> choosing hvm-prefixed functions I'd get SVM support for free.
>
> As for the handle_hvm_io_completion(v) return, my understanding was that
> that would eventually cause another exit, and eventually we'd get to the
> code below once the IO part is done.
>
>
> Thanks,
> Razvan

Thanks, so then indeed the vm_event_vcpu_enter call should be there to 
avoid wrongly calling it many times before actually entering the vCPU 
(due to IO).
I wonder though if anything wrong would happen if I put the call after 
the "inject pending hw/sw trap" part.

Corneliu.
Corneliu ZUZU June 17, 2016, 11:13 a.m. UTC | #9
On 6/17/2016 10:17 AM, Jan Beulich wrote:
>>>> On 16.06.16 at 22:10, <czuzu@bitdefender.com> wrote:
>> On 6/16/2016 5:51 PM, Jan Beulich wrote:
>>>>>> On 16.06.16 at 16:08, <czuzu@bitdefender.com> wrote:
>>>> @@ -509,6 +508,8 @@ void hvm_do_resume(struct vcpu *v)
>>>>            }
>>>>        }
>>>>    
>>>> +    vm_event_vcpu_enter(v);
>>> Why here?
>> Why indeed. It made sense because monitor_write_data handling was
>> originally there and then the plan was to move it to vm_event_vcpu_enter
>> (which happens in the following commit).
>> The question is though, why was monitor_write_data handled there in the
>> first place? Why was it not put e.g. in vmx_do_resume immediately after
>> the call to hvm_do_resume and just before
>> the reset_stack_and_jump...? And what happens with handling
>> monitor_write_data if this:
>>
>> if ( !handle_hvm_io_completion(v) )
>>           return;
>>
>> causes a return?
> I see Razvan responded to this. I don't have a strong opinion
> either way, my only request if for the call to be in exactly one
> place.
>
>>>> --- a/xen/arch/x86/mm/p2m.c
>>>> +++ b/xen/arch/x86/mm/p2m.c
>>>> @@ -36,7 +36,6 @@
>>>>    #include <asm/hvm/nestedhvm.h>
>>>>    #include <asm/altp2m.h>
>>>>    #include <asm/hvm/svm/amd-iommu-proto.h>
>>>> -#include <asm/vm_event.h>
>>>>    #include <xsm/xsm.h>
>>> There are way too many of these #include adjustments here. If
>>> you really mean to clean these up, please don't randomly throw
>>> this into various unrelated patches.
>> I haven't thrown anything "randomly into unrelated patches", please
>> first ask for my reasoning and then draw such conclusions.
> See patch 1.

"Sorry, that was done out of reflex, should have stated the reasoning."

> Plus I don't think I (or in fact any reviewer) should ask
> for such reasoning: Instead you should state extra cleanup you do
> to unrelated (to the purpose of your patch) files in the description.

Is that still the case when that reasoning is obvious? (at least it 
seemed to me)

but anyway..

> Or even better, split it off to a follow-on, purely cleanup patch.

I agree with this. Will keep in mind with v2.

> (And to be clear, I much appreciate any form of reduction of the
> sometimes extremely long lists of #include-s, just not [apparently
> or really] randomly mixed with other, substantial changes. That's
> namely because it's not clear whether source files should explicitly
> include everything they need, or instead be allowed to rely on
> headers they include to include further headers they also
> _explicitly_ rely on.

Personally I prefer the former, since I think it also cuts down 
compilation time.
Having header H include every header Ni needed by source S makes H 
unnecessarily bulky at compilation time for other sources (different 
from S) that don't need the headers Ni but depend on H nonetheless.

> IOW there's likely a discussion to be had for
> this kind of cleanup, and such a discussion should be a separate
> thread from the one on the functional adjustments here.)

Corneliu.
Jan Beulich June 17, 2016, 11:27 a.m. UTC | #10
>>> On 17.06.16 at 13:13, <czuzu@bitdefender.com> wrote:
> On 6/17/2016 10:17 AM, Jan Beulich wrote:
>> (And to be clear, I much appreciate any form of reduction of the
>> sometimes extremely long lists of #include-s, just not [apparently
>> or really] randomly mixed with other, substantial changes. That's
>> namely because it's not clear whether source files should explicitly
>> include everything they need, or instead be allowed to rely on
>> headers they include to include further headers they also
>> _explicitly_ rely on.
> 
> Personally I prefer the former since I think it also cuts down 
> compilation time.
> Having header H include every header Ni needed by source S makes H 
> unnecessarily bulky at compilation time for other sources <> S that 
> don't need headers Ni but which depend on H nonetheless.

I nowhere said every _header_ should include everything any of
its _consumers_ would require. My point was solely about source
files. For example, if S depends on both H1 and H2, and H2
already includes H1, whether S then needs to just include H2, or
should also explicitly include H1 (such that S doesn't need changing
when the inclusion of H1 by H2 goes away).
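Jan's point can be illustrated with a concrete layout (all file names hypothetical):

```c
/* h1.h */
struct widget { int id; };

/* h2.h */
#include "h1.h"            /* h2 happens to pull in h1 today */
void frob(struct widget *w);

/* s.c -- the open question: */
#include "h2.h"            /* option A: rely on h2's inclusion of h1 */
/* #include "h1.h" */      /* option B: also include h1 explicitly, so
                            * s.c keeps compiling if h2 ever stops
                            * including h1 */
```

Option B is more robust to header refactoring; option A keeps include lists shorter. The thread leaves this policy question open.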

Jan
Corneliu ZUZU June 17, 2016, 11:40 a.m. UTC | #11
On 6/17/2016 11:55 AM, Julien Grall wrote:
> Hello,
>
> On 16/06/16 15:08, Corneliu ZUZU wrote:
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index d31f821..ba248c8 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -19,6 +19,7 @@
>>   #include <xen/errno.h>
>>   #include <xen/bitops.h>
>>   #include <xen/grant_table.h>
>> +#include <xen/vm_event.h>
>>
>>   #include <asm/current.h>
>>   #include <asm/event.h>
>> @@ -251,6 +252,8 @@ static void schedule_tail(struct vcpu *prev)
>>
>>       ctxt_switch_to(current);
>>
>> +    vm_event_vcpu_enter(current);
>> +
>>       local_irq_enable();
>>
>>       context_saved(prev);
>> @@ -296,7 +299,7 @@ void context_switch(struct vcpu *prev, struct 
>> vcpu *next)
>>
>>   void continue_running(struct vcpu *same)
>>   {
>> -    /* Nothing to do */
>> +    vm_event_vcpu_enter(same);
>>   }
>>
>>   void sync_local_execstate(void)
>
> From my understanding of the commit message, vm_event_vcpu_enter
> should be called before returning to the guest. The scheduling
> functions are not called every time Xen returns to the guest. So
> if you want to do this every time Xen re-enters the guest, then
> it should be done in leave_hypervisor_tail.
>
> Regards,
>

Which is also done: vm_event_vcpu_enter is called from 
leave_hypervisor_tail as well.
Are you saying that calling from leave_hypervisor_tail is enough and 
that the schedule_tail call is not necessary?

Thanks,
Corneliu.
Corneliu ZUZU June 17, 2016, 12:13 p.m. UTC | #12
On 6/17/2016 2:27 PM, Jan Beulich wrote:
>>>> On 17.06.16 at 13:13, <czuzu@bitdefender.com> wrote:
>> On 6/17/2016 10:17 AM, Jan Beulich wrote:
>>> (And to be clear, I much appreciate any form of reduction of the
>>> sometimes extremely long lists of #include-s, just not [apparently
>>> or really] randomly mixed with other, substantial changes. That's
>>> namely because it's not clear whether source files should explicitly
>>> include everything they need, or instead be allowed to rely on
>>> headers they include to include further headers they also
>>> _explicitly_ rely on.
>> Personally I prefer the former since I think it also cuts down
>> compilation time.
>> Having header H include every header Ni needed by source S makes H
>> unnecessarily bulky at compilation time for other sources <> S that
>> don't need headers Ni but which depend on H nonetheless.
> I nowhere said every _header_ should include everything any of
> its _consumers_ would require. My point was solely about source
> files. For example, if S depends on both H1 and H2, and H2
> already includes H1, whether S then needs to just include H2, or
> should also explicitly include H1 (such that S doesn't need changing
> when the inclusion of H1 by H2 goes away).
>
> Jan
>
>

Ah, ok, got it.
I restate my view that these "issues" should be treated with automation 
tools rather than leaving the programmer to do primitive work that he 
shouldn't be required to do with a modern programming language.
Clang, for example, offers a powerful parsing library (it can parse 
GCC-dialect code too) with Python bindings; it would be nice to take 
more advantage of that.
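As a toy illustration of that kind of tooling (a real implementation would use libclang's parsing API rather than text scanning; this minimal C sketch only counts #include lines in a source buffer):

```c
#include <string.h>

/* Count the #include directives in a source buffer, line by line.
 * Deliberately naive: it would also count an "#include" appearing
 * inside a comment or string; a proper tool would parse the code. */
static int count_includes(const char *src)
{
    int n = 0;
    const char *line = src;

    while ( line && *line )
    {
        /* Skip leading whitespace on this line. */
        while ( *line == ' ' || *line == '\t' )
            line++;
        if ( strncmp(line, "#include", 8) == 0 )
            n++;
        line = strchr(line, '\n');
        if ( line )
            line++;
    }
    return n;
}
```

An automated pass built on this idea could flag source files whose direct includes are redundant with what their headers already pull in.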

Corneliu.
Julien Grall June 17, 2016, 1:22 p.m. UTC | #13
Hello,

On 17/06/16 12:40, Corneliu ZUZU wrote:
> On 6/17/2016 11:55 AM, Julien Grall wrote:
>> On 16/06/16 15:08, Corneliu ZUZU wrote:
>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>> index d31f821..ba248c8 100644
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -19,6 +19,7 @@
>>>   #include <xen/errno.h>
>>>   #include <xen/bitops.h>
>>>   #include <xen/grant_table.h>
>>> +#include <xen/vm_event.h>
>>>
>>>   #include <asm/current.h>
>>>   #include <asm/event.h>
>>> @@ -251,6 +252,8 @@ static void schedule_tail(struct vcpu *prev)
>>>
>>>       ctxt_switch_to(current);
>>>
>>> +    vm_event_vcpu_enter(current);
>>> +
>>>       local_irq_enable();
>>>
>>>       context_saved(prev);
>>> @@ -296,7 +299,7 @@ void context_switch(struct vcpu *prev, struct
>>> vcpu *next)
>>>
>>>   void continue_running(struct vcpu *same)
>>>   {
>>> -    /* Nothing to do */
>>> +    vm_event_vcpu_enter(same);
>>>   }
>>>
>>>   void sync_local_execstate(void)
>>
>> From my understanding of the commit message, vm_event_vcpu_enter
>> should be called before returning to the guest. The scheduling
>> functions are not called every time Xen returns to the guest. So
>> if you want to do this every time Xen re-enters the guest, then
>> it should be done in leave_hypervisor_tail.
>>
>> Regards,
>>
>
> Which is also done, vm_event_vcpu_enter is called from
> leave_hypervisor_tail as well.

Sorry I did not spot this one.

> Are you saying that calling from leave_hypervisor_tail is enough and
> that the schedule_tail call is not necessary?

leave_hypervisor_tail will always be called before returning to the 
guest. So a call to vm_event_vcpu_enter in this function will be sufficient.
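Julien's argument can be modelled: if the hook lives only on the common exit path, every guest entry sees exactly one call, with or without a context switch (all names here are hypothetical stand-ins, not the real ARM entry/exit code):

```c
#include <stdbool.h>

static int hook_calls;

static void vm_event_hook(void)
{
    hook_calls++;
}

/* Model of the ARM return-to-guest path: leave_hypervisor_tail() is
 * always traversed before entering the guest. */
static void leave_hypervisor_tail_model(void)
{
    vm_event_hook();    /* single call site, per Julien's suggestion */
}

static void enter_guest(bool context_switched)
{
    /* schedule_tail would run here on a context switch, but it no
     * longer needs its own vm_event_vcpu_enter() call. */
    (void)context_switched;
    leave_hypervisor_tail_model();
}
```

Whether a given entry followed a context switch or not, the hook fires exactly once per entry, which is why the schedule_tail call in the patch is redundant.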

Regards,

Patch

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index d31f821..ba248c8 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,6 +19,7 @@ 
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
+#include <xen/vm_event.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -251,6 +252,8 @@  static void schedule_tail(struct vcpu *prev)
 
     ctxt_switch_to(current);
 
+    vm_event_vcpu_enter(current);
+
     local_irq_enable();
 
     context_saved(prev);
@@ -296,7 +299,7 @@  void context_switch(struct vcpu *prev, struct vcpu *next)
 
 void continue_running(struct vcpu *same)
 {
-    /* Nothing to do */
+    vm_event_vcpu_enter(same);
 }
 
 void sync_local_execstate(void)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 7fa2ae5..8c50685 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -32,6 +32,7 @@ 
 #include <xen/domain_page.h>
 #include <xen/perfc.h>
 #include <xen/virtual_region.h>
+#include <xen/vm_event.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/debugger.h>
@@ -2662,6 +2663,7 @@  asmlinkage void leave_hypervisor_tail(void)
     {
         local_irq_disable();
         if (!softirq_pending(smp_processor_id())) {
+            vm_event_vcpu_enter(current);
             gic_inject();
             return;
         }
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 266ed89..9b2872a 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -15,6 +15,7 @@ 
 #include <xen/sched.h>
 #include <xen/paging.h>
 #include <xen/trace.h>
+#include <xen/vm_event.h>
 #include <asm/event.h>
 #include <asm/xstate.h>
 #include <asm/hvm/emulate.h>
@@ -23,7 +24,6 @@ 
 #include <asm/hvm/trace.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/svm/svm.h>
-#include <asm/vm_event.h>
 
 static void hvmtrace_io_assist(const ioreq_t *p)
 {
diff --git a/xen/arch/x86/hvm/event.c b/xen/arch/x86/hvm/event.c
index 9c51890..26165b4 100644
--- a/xen/arch/x86/hvm/event.c
+++ b/xen/arch/x86/hvm/event.c
@@ -25,7 +25,6 @@ 
 #include <asm/hvm/event.h>
 #include <asm/paging.h>
 #include <asm/monitor.h>
-#include <asm/vm_event.h>
 #include <public/vm_event.h>
 
 bool_t hvm_event_cr(unsigned int index, unsigned long value, unsigned long old)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 78db903..770bb50 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -65,7 +65,6 @@ 
 #include <asm/altp2m.h>
 #include <asm/mtrr.h>
 #include <asm/apic.h>
-#include <asm/vm_event.h>
 #include <public/sched.h>
 #include <public/hvm/ioreq.h>
 #include <public/version.h>
@@ -509,6 +508,8 @@  void hvm_do_resume(struct vcpu *v)
         }
     }
 
+    vm_event_vcpu_enter(v);
+
     /* Inject pending hw/sw trap */
     if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 670d7dc..b43b94a 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -25,6 +25,7 @@ 
 #include <xen/domain_page.h>
 #include <xen/hypercall.h>
 #include <xen/perfc.h>
+#include <xen/vm_event.h>
 #include <asm/current.h>
 #include <asm/io.h>
 #include <asm/iocap.h>
@@ -3874,6 +3875,9 @@  void vmx_vmenter_helper(const struct cpu_user_regs *regs)
     }
 
  out:
+    if ( guest_mode(regs) )
+        vm_event_vcpu_enter(curr);
+
     HVMTRACE_ND(VMENTRY, 0, 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
 
     __vmwrite(GUEST_RIP,    regs->rip);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 89462b2..9d37b12 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -36,7 +36,6 @@ 
 #include <asm/hvm/nestedhvm.h>
 #include <asm/altp2m.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
-#include <asm/vm_event.h>
 #include <xsm/xsm.h>
 
 #include "mm-locks.h"
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 75647c4..f7eb24a 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -18,9 +18,7 @@ 
  * License along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/sched.h>
-#include <asm/hvm/hvm.h>
-#include <asm/vm_event.h>
+#include <xen/vm_event.h>
 
 /* Implicitly serialized by the domctl lock. */
 int vm_event_init_domain(struct domain *d)
diff --git a/xen/common/monitor.c b/xen/common/monitor.c
index b30857a..c46df5a 100644
--- a/xen/common/monitor.c
+++ b/xen/common/monitor.c
@@ -22,8 +22,8 @@ 
 #include <xen/monitor.h>
 #include <xen/sched.h>
 #include <xsm/xsm.h>
+#include <xen/vm_event.h>
 #include <asm/monitor.h>
-#include <asm/vm_event.h>
 
 int monitor_domctl(struct domain *d, struct xen_domctl_monitor_op *mop)
 {
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 2906407..15152ba 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -27,7 +27,6 @@ 
 #include <xen/mem_access.h>
 #include <asm/p2m.h>
 #include <asm/altp2m.h>
-#include <asm/vm_event.h>
 #include <xsm/xsm.h>
 
 /* for public/io/ring.h macros */
diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
index 05c3027..4e5a272 100644
--- a/xen/include/asm-arm/vm_event.h
+++ b/xen/include/asm-arm/vm_event.h
@@ -20,7 +20,6 @@ 
 #define __ASM_ARM_VM_EVENT_H__
 
 #include <xen/sched.h>
-#include <xen/vm_event.h>
 #include <public/domctl.h>
 
 static inline int vm_event_init_domain(struct domain *d)
@@ -56,6 +55,11 @@  static inline void vm_event_fill_regs(vm_event_request_t *req)
     /* Not supported on ARM. */
 }
 
+static inline void arch_vm_event_vcpu_enter(struct vcpu *v)
+{
+    /* Nothing to do. */
+}
+
 /*
  * Monitor vm-events.
  */
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index df8e98d..6fb3b58 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -20,7 +20,6 @@ 
 #define __ASM_X86_VM_EVENT_H__
 
 #include <xen/sched.h>
-#include <xen/vm_event.h>
 
 /*
  * Should we emulate the next matching instruction on VCPU resume
@@ -44,6 +43,11 @@  void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp);
 
 void vm_event_fill_regs(vm_event_request_t *req);
 
+static inline void arch_vm_event_vcpu_enter(struct vcpu *v)
+{
+    /* Nothing to do. */
+}
+
 /*
  * Monitor vm-events.
  */
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index a10ee40..f124143 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -24,6 +24,7 @@ 
 #define __VM_EVENT_H__
 
 #include <xen/sched.h>
+#include <asm/vm_event.h>
 #include <public/vm_event.h>
 
 /* Clean up on domain destruction */
@@ -72,6 +73,20 @@  void vm_event_resume(struct domain *d, struct vm_event_domain *ved);
 int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
+/*
+ * "Final touch" vCPU function, should be called just before re-entering vCPUs,
+ * e.g. on x86 it is called by hvm_do_resume (scheduling tail) and
+ * vmx_vmenter_helper (before VMRESUME or VMLAUNCH).
+ */
+static inline void vm_event_vcpu_enter(struct vcpu *v)
+{
+    /* don't track idle vcpus, they're not subject to the vm-event subsystem */
+    if ( is_idle_vcpu(v) )
+        return;
+
+    arch_vm_event_vcpu_enter(v);
+}
+
 void vm_event_vcpu_pause(struct vcpu *v);
 void vm_event_vcpu_unpause(struct vcpu *v);