
[v2] sync CPU state upon final domain destruction

Message ID 5A157E040200007800190FAE@prv-mh.provo.novell.com (mailing list archive)
State New, archived

Commit Message

Jan Beulich Nov. 22, 2017, 12:39 p.m. UTC
See the code comment being added for why we need this.

This is being placed here to strike a balance between the desire to
prevent future similar issues (the risk of which would grow if it was
put further down the call stack, e.g. in vmx_vcpu_destroy()) and the
intention to limit the performance impact (otherwise it could also go
into rcu_do_batch(), paralleling the use in do_tasklet_work()).

Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Move from vmx_vcpu_destroy() to complete_domain_destroy().
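
For illustration, a compilable but heavily simplified sketch of the
destruction path discussed above follows. This is not verbatim Xen code:
every function body is a stub and intermediate call frames are omitted;
it only shows where the three candidate placements for
sync_local_execstate() sit relative to one another.

/*
 * Simplified, compilable sketch of the destruction path -- not verbatim
 * Xen code; all bodies are stubs, intermediate call frames are omitted.
 */
#include <stdio.h>

static void sync_local_execstate(void)
{
    puts("flush lazily retained vCPU state");
}

static void vmx_vcpu_destroy(void)
{
    /* Deepest candidate placement (v1): most targeted, but similar
     * issues elsewhere in the teardown path would not be covered. */
    puts("tear down VMX state");
}

static void complete_domain_destroy(void)
{
    /* Chosen placement (v2): runs once per destroyed domain. */
    sync_local_execstate();
    vmx_vcpu_destroy();
    puts("free remaining vCPU / domain structures");
}

static void rcu_do_batch(void)
{
    /* Outermost candidate: syncing here would run for every RCU
     * callback batch, i.e. a wider performance impact. */
    complete_domain_destroy();
}

int main(void)
{
    rcu_do_batch();    /* RCU eventually invokes the destroy callback */
    return 0;
}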

Comments

Andrew Cooper Nov. 22, 2017, 12:54 p.m. UTC | #1
On 22/11/17 12:39, Jan Beulich wrote:
> See the code comment being added for why we need this.
>
> This is being placed here to strike a balance between the desire to
> prevent future similar issues (the risk of which would grow if it was
> put further down the call stack, e.g. in vmx_vcpu_destroy()) and the
> intention to limit the performance impact (otherwise it could also go
> into rcu_do_batch(), paralleling the use in do_tasklet_work()).
>
> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> v2: Move from vmx_vcpu_destroy() to complete_domain_destroy().
>
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -794,6 +794,14 @@ static void complete_domain_destroy(stru
>      struct vcpu *v;
>      int i;
>  
> +    /*
> +     * Flush all state for the vCPU previously having run on the current CPU.
> +     * This is in particular relevant for x86 HVM ones on VMX, so that this
> +     * flushing of state won't happen from the TLB flush IPI handler behind
> +     * the back of a vmx_vmcs_enter() / vmx_vmcs_exit() section.
> +     */
> +    sync_local_execstate();
> +
>      for ( i = d->max_vcpus - 1; i >= 0; i-- )
>      {
>          if ( (v = d->vcpu[i]) == NULL )
>
>
>
Jan Beulich Nov. 22, 2017, 1 p.m. UTC | #2
>>> On 22.11.17 at 13:39, <JBeulich@suse.com> wrote:
> See the code comment being added for why we need this.
> 
> This is being placed here to strike a balance between the desire to
> prevent future similar issues (the risk of which would grow if it was
> put further down the call stack, e.g. in vmx_vcpu_destroy()) and the
> intention to limit the performance impact (otherwise it could also go
> into rcu_do_batch(), paralleling the use in do_tasklet_work()).
> 
> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I'm sorry, Julien, I did forget to Cc you (for 4.10 inclusion).

> ---
> v2: Move from vmx_vcpu_destroy() to complete_domain_destroy().
> 
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -794,6 +794,14 @@ static void complete_domain_destroy(stru
>      struct vcpu *v;
>      int i;
>  
> +    /*
> +     * Flush all state for the vCPU previously having run on the current CPU.
> +     * This is in particular relevant for x86 HVM ones on VMX, so that this
> +     * flushing of state won't happen from the TLB flush IPI handler behind
> +     * the back of a vmx_vmcs_enter() / vmx_vmcs_exit() section.
> +     */
> +    sync_local_execstate();
> +
>      for ( i = d->max_vcpus - 1; i >= 0; i-- )
>      {
>          if ( (v = d->vcpu[i]) == NULL )
Julien Grall Nov. 22, 2017, 4:04 p.m. UTC | #3
Hi,

On 11/22/2017 01:00 PM, Jan Beulich wrote:
>>>> On 22.11.17 at 13:39, <JBeulich@suse.com> wrote:
>> See the code comment being added for why we need this.
>>
>> This is being placed here to strike a balance between the desire to
>> prevent future similar issues (the risk of which would grow if it was
>> put further down the call stack, e.g. in vmx_vcpu_destroy()) and the
>> intention to limit the performance impact (otherwise it could also go
>> into rcu_do_batch(), paralleling the use in do_tasklet_work()).
>>
>> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I'm sorry, Julien, I did forget to Cc you (for 4.10 inclusion).

Release-acked-by: Julien Grall <julien.grall@linaro.org>

Cheers,

> 
>> ---
>> v2: Move from vmx_vcpu_destroy() to complete_domain_destroy().
>>
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -794,6 +794,14 @@ static void complete_domain_destroy(stru
>>       struct vcpu *v;
>>       int i;
>>   
>> +    /*
>> +     * Flush all state for the vCPU previously having run on the current CPU.
>> +     * This is in particular relevant for x86 HVM ones on VMX, so that this
>> +     * flushing of state won't happen from the TLB flush IPI handler behind
>> +     * the back of a vmx_vmcs_enter() / vmx_vmcs_exit() section.
>> +     */
>> +    sync_local_execstate();
>> +
>>       for ( i = d->max_vcpus - 1; i >= 0; i-- )
>>       {
>>           if ( (v = d->vcpu[i]) == NULL )
> 
> 
> 

Patch

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -794,6 +794,14 @@  static void complete_domain_destroy(stru
     struct vcpu *v;
     int i;
 
+    /*
+     * Flush all state for the vCPU previously having run on the current CPU.
+     * This is in particular relevant for x86 HVM ones on VMX, so that this
+     * flushing of state won't happen from the TLB flush IPI handler behind
+     * the back of a vmx_vmcs_enter() / vmx_vmcs_exit() section.
+     */
+    sync_local_execstate();
+
     for ( i = d->max_vcpus - 1; i >= 0; i-- )
     {
         if ( (v = d->vcpu[i]) == NULL )
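
As a further illustration, the toy program below (again not Xen code; all
names are illustrative stubs and the cross-CPU concurrency is collapsed
into a single thread) models the hazard the new comment describes: the
current CPU may still hold lazily retained state of a vCPU that previously
ran there, and flushing that state from the TLB flush IPI handler could
happen behind the back of a vmx_vmcs_enter() / vmx_vmcs_exit() section.
Syncing eagerly in complete_domain_destroy() keeps the flush out of that
window.

/*
 * Standalone toy model of the lazy-state hazard -- not Xen code.  The
 * "VMCS section" flag stands in for a vmx_vmcs_enter()/vmx_vmcs_exit()
 * critical section that, in reality, may be active on another CPU.
 */
#include <stdbool.h>
#include <stdio.h>

static bool lazy_state_on_cpu = true;   /* state of the vCPU that last ran here */
static bool in_vmcs_section;            /* models an active VMCS section        */

static void flush_lazy_state(const char *ctx)
{
    if ( !lazy_state_on_cpu )
        return;
    printf("flushing lazy vCPU state from %s%s\n", ctx,
           in_vmcs_section ? " (unsafe: VMCS section active)" : " (safe)");
    lazy_state_on_cpu = false;
}

static void sync_local_execstate(void)      /* the call the patch adds */
{
    flush_lazy_state("complete_domain_destroy()");
}

static void tlb_flush_ipi_handler(void)     /* where the flush used to happen */
{
    flush_lazy_state("the TLB flush IPI handler");
}

int main(void)
{
    sync_local_execstate();    /* with the patch: early flush in a safe context */
    in_vmcs_section = true;
    tlb_flush_ipi_handler();   /* a later IPI then finds nothing left to flush  */
    return 0;
}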