diff mbox series

x86/hap: be more selective with assisted TLB flush

Message ID 20200429173601.77605-1-roger.pau@citrix.com (mailing list archive)
State New, archived
Series x86/hap: be more selective with assisted TLB flush

Commit Message

Roger Pau Monné April 29, 2020, 5:36 p.m. UTC
When doing an assisted flush on HAP the purpose of the
on_selected_cpus call is just to trigger a vmexit on remote CPUs
that are in guest context, and hence using is_vcpu_dirty_cpu alone
is too lax; also check that the vCPU is running.

While there, also pass NULL as the data parameter of on_selected_cpus;
the dummy handler doesn't consume the data in any way.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/mm/hap/hap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Jan Beulich April 30, 2020, 7:20 a.m. UTC | #1
On 29.04.2020 19:36, Roger Pau Monne wrote:
> When doing an assisted flush on HAP the purpose of the
> on_selected_cpus is just to trigger a vmexit on remote CPUs that are
> in guest context, and hence just using is_vcpu_dirty_cpu is too lax,
> also check that the vCPU is running.

Am I right to understand that the change is relevant only to
cover the period of time between ->is_running becoming false
and ->dirty_cpu becoming VCPU_CPU_CLEAN? I.e. ...

> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -719,7 +719,7 @@ static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
>          hvm_asid_flush_vcpu(v);
>  
>          cpu = read_atomic(&v->dirty_cpu);
> -        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
> +        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )

... the previous logic would have suitably covered the switch-to
path, but doesn't properly cover the switch-from one, due to our
lazy context switch approach? If so, I agree with the change:
Reviewed-by: Jan Beulich <jbeulich@suse.com>
It might be worth mentioning this detail in the description then,
though.

Jan
Roger Pau Monné April 30, 2020, 8:28 a.m. UTC | #2
On Thu, Apr 30, 2020 at 09:20:58AM +0200, Jan Beulich wrote:
> On 29.04.2020 19:36, Roger Pau Monne wrote:
> > When doing an assisted flush on HAP the purpose of the
> > on_selected_cpus is just to trigger a vmexit on remote CPUs that are
> > in guest context, and hence just using is_vcpu_dirty_cpu is too lax,
> > also check that the vCPU is running.
> 
> Am I right to understand that the change is relevant only to
> cover the period of time between ->is_running becoming false
> and ->dirty_cpu becoming VCPU_CPU_CLEAN? I.e. ...
> 
> > --- a/xen/arch/x86/mm/hap/hap.c
> > +++ b/xen/arch/x86/mm/hap/hap.c
> > @@ -719,7 +719,7 @@ static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
> >          hvm_asid_flush_vcpu(v);
> >  
> >          cpu = read_atomic(&v->dirty_cpu);
> > -        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
> > +        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )
> 
> ... the previous logic would have suitably covered the switch-to
> path, but doesn't properly cover the switch-from one, due to our
> lazy context switch approach?

Yes. Also __context_switch is not called from context_switch when
switching to the idle vcpu, and hence dirty_cpu is not cleared.

> If so, I agree with the change:
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> It might be worth mentioning this detail in the description then,
> though.

Would you mind adding to the commit message if you agree:

"Due to the lazy context switching done by Xen, dirty_cpu won't always be
cleared when the guest vCPU is not running, and hence relying on
is_running allows more fine-grained control of whether the vCPU is
actually running."

Thanks, Roger.
Jan Beulich April 30, 2020, 8:33 a.m. UTC | #3
On 30.04.2020 10:28, Roger Pau Monné wrote:
> On Thu, Apr 30, 2020 at 09:20:58AM +0200, Jan Beulich wrote:
>> On 29.04.2020 19:36, Roger Pau Monne wrote:
>>> When doing an assisted flush on HAP the purpose of the
>>> on_selected_cpus is just to trigger a vmexit on remote CPUs that are
>>> in guest context, and hence just using is_vcpu_dirty_cpu is too lax,
>>> also check that the vCPU is running.
>>
>> Am I right to understand that the change is relevant only to
>> cover the period of time between ->is_running becoming false
>> and ->dirty_cpu becoming VCPU_CPU_CLEAN? I.e. ...
>>
>>> --- a/xen/arch/x86/mm/hap/hap.c
>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>> @@ -719,7 +719,7 @@ static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
>>>          hvm_asid_flush_vcpu(v);
>>>  
>>>          cpu = read_atomic(&v->dirty_cpu);
>>> -        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
>>> +        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )
>>
>> ... the previous logic would have suitably covered the switch-to
>> path, but doesn't properly cover the switch-from one, due to our
>> lazy context switch approach?
> 
> Yes. Also __context_switch is not called from context_switch when
> switching to the idle vcpu, and hence dirty_cpu is not cleared.
> 
>> If so, I agree with the change:
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> It might be worth mentioning this detail in the description then,
>> though.
> 
> Would you mind adding to the commit message if you agree:
> 
> "Due to the lazy context switching done by Xen dirty_cpu won't always be
> cleared when the guest vCPU is not running, and hence relying on
> is_running allows more fine grained control of whether the vCPU is
> actually running."

Sure; I'll give it over the weekend though for others to comment, if
so desired.

Jan
Andrew Cooper April 30, 2020, 4:19 p.m. UTC | #4
On 30/04/2020 09:33, Jan Beulich wrote:
> On 30.04.2020 10:28, Roger Pau Monné wrote:
>> On Thu, Apr 30, 2020 at 09:20:58AM +0200, Jan Beulich wrote:
>>> On 29.04.2020 19:36, Roger Pau Monne wrote:
>>>> When doing an assisted flush on HAP the purpose of the
>>>> on_selected_cpus is just to trigger a vmexit on remote CPUs that are
>>>> in guest context, and hence just using is_vcpu_dirty_cpu is too lax,
>>>> also check that the vCPU is running.
>>> Am I right to understand that the change is relevant only to
>>> cover the period of time between ->is_running becoming false
>>> and ->dirty_cpu becoming VCPU_CPU_CLEAN? I.e. ...
>>>
>>>> --- a/xen/arch/x86/mm/hap/hap.c
>>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>>> @@ -719,7 +719,7 @@ static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
>>>>          hvm_asid_flush_vcpu(v);
>>>>  
>>>>          cpu = read_atomic(&v->dirty_cpu);
>>>> -        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
>>>> +        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )
>>> ... the previous logic would have suitably covered the switch-to
>>> path, but doesn't properly cover the switch-from one, due to our
>>> lazy context switch approach?
>> Yes. Also __context_switch is not called from context_switch when
>> switching to the idle vcpu, and hence dirty_cpu is not cleared.
>>
>>> If so, I agree with the change:
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>> It might be worth mentioning this detail in the description then,
>>> though.
>> Would you mind adding to the commit message if you agree:
>>
>> "Due to the lazy context switching done by Xen dirty_cpu won't always be
>> cleared when the guest vCPU is not running, and hence relying on
>> is_running allows more fine grained control of whether the vCPU is
>> actually running."
> Sure; I'll give it over the weekend though for others to comment, if
> so desired.

I think it is worth pointing out that this fixes a large perf regression
on Nehalem/Westmere systems, where the L1 shim using the enlightened
hypercall is 8x slower than the unenlightened way.

~Andrew
Roger Pau Monné April 30, 2020, 6:09 p.m. UTC | #5
On Thu, Apr 30, 2020 at 05:19:19PM +0100, Andrew Cooper wrote:
> On 30/04/2020 09:33, Jan Beulich wrote:
> > On 30.04.2020 10:28, Roger Pau Monné wrote:
> >> On Thu, Apr 30, 2020 at 09:20:58AM +0200, Jan Beulich wrote:
> >>> On 29.04.2020 19:36, Roger Pau Monne wrote:
> >>>> When doing an assisted flush on HAP the purpose of the
> >>>> on_selected_cpus is just to trigger a vmexit on remote CPUs that are
> >>>> in guest context, and hence just using is_vcpu_dirty_cpu is too lax,
> >>>> also check that the vCPU is running.
> >>> Am I right to understand that the change is relevant only to
> >>> cover the period of time between ->is_running becoming false
> >>> and ->dirty_cpu becoming VCPU_CPU_CLEAN? I.e. ...
> >>>
> >>>> --- a/xen/arch/x86/mm/hap/hap.c
> >>>> +++ b/xen/arch/x86/mm/hap/hap.c
> >>>> @@ -719,7 +719,7 @@ static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
> >>>>          hvm_asid_flush_vcpu(v);
> >>>>  
> >>>>          cpu = read_atomic(&v->dirty_cpu);
> >>>> -        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
> >>>> +        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )
> >>> ... the previous logic would have suitably covered the switch-to
> >>> path, but doesn't properly cover the switch-from one, due to our
> >>> lazy context switch approach?
> >> Yes. Also __context_switch is not called from context_switch when
> >> switching to the idle vcpu, and hence dirty_cpu is not cleared.
> >>
> >>> If so, I agree with the change:
> >>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >>> It might be worth mentioning this detail in the description then,
> >>> though.
> >> Would you mind adding to the commit message if you agree:
> >>
> >> "Due to the lazy context switching done by Xen dirty_cpu won't always be
> >> cleared when the guest vCPU is not running, and hence relying on
> >> is_running allows more fine grained control of whether the vCPU is
> >> actually running."
> > Sure; I'll give it over the weekend though for others to comment, if
> > so desired.
> 
> I think it is worth pointing out that this fixes a large perf regression
> on Nehalem/Westmere systems, where L1 Shim using the enlightened
> hypercall is 8x slower than unenlightened way.

I might as well post the actual numbers I have.

I've measured the time of the non-local branch of flush_area_mask
inside the shim running with 32 vCPUs over 100000 executions and
averaged the result on a large Westmere system (80 ways total). The
figures were fetched during the boot of a SLES 11 PV guest. The
results are as follows (less is better):

Non assisted flush with x2APIC:      112406ns
Assisted flush without this patch:   820450ns
Assisted flush with this patch:        8330ns

I can add the figures to the commit message if deemed interesting to
have in the repo. Or the above text can be appended to the commit
message if that's fine.

Roger.
Patch

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 580d1c2164..0275cdf5c8 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -719,7 +719,7 @@  static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
         hvm_asid_flush_vcpu(v);
 
         cpu = read_atomic(&v->dirty_cpu);
-        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
+        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )
             __cpumask_set_cpu(cpu, mask);
     }
 
@@ -729,7 +729,7 @@  static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
      * not currently running will already be flushed when scheduled because of
      * the ASID tickle done in the loop above.
      */
-    on_selected_cpus(mask, dummy_flush, mask, 0);
+    on_selected_cpus(mask, dummy_flush, NULL, 0);
 
     return true;
 }