
[v5,12/13] pvh/acpi: Save ACPI registers for PVH guests

Message ID 1481930319-4796-13-git-send-email-boris.ostrovsky@oracle.com (mailing list archive)
State New, archived

Commit Message

Boris Ostrovsky Dec. 16, 2016, 11:18 p.m. UTC
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/pmtimer.c | 8 ++++++++
 1 file changed, 8 insertions(+)

Comments

Jan Beulich Dec. 20, 2016, 1:57 p.m. UTC | #1
>>> On 17.12.16 at 00:18, <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/arch/x86/hvm/pmtimer.c
> +++ b/xen/arch/x86/hvm/pmtimer.c
> @@ -257,7 +257,11 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
>      int rc;
>  
>      if ( !has_vpm(d) )
> +    {
> +        if ( !has_acpi_dm_ff(d) )
> +            return hvm_save_entry(PMTIMER, 0, h, acpi);
>          return 0;
> +    }
>  
>      spin_lock(&s->lock);
>  
> @@ -286,7 +290,11 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
>      PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
>  
>      if ( !has_vpm(d) )
> +    {
> +        if ( !has_acpi_dm_ff(d) )
> +            return hvm_load_entry(PMTIMER, h, acpi);
>          return -ENODEV;
> +    }
>  
>      spin_lock(&s->lock);

Seeing this I first of all wonder - would there be any harm in simply
having PVH take (almost) the same route as HVM here? In particular
there's a pmt_update_sci() call, an equivalent of which would seem
to be needed for PVH too.
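
(For illustration, a rough sketch of that direction, keeping the patch's
PVH branch but adding a placeholder helper, pvh_acpi_update_sci(), to stand
in for an HVM-style pmt_update_sci() call; this is not code from the series:)

    if ( !has_vpm(d) )
    {
        if ( !has_acpi_dm_ff(d) )
        {
            int rc = hvm_load_entry(PMTIMER, h, acpi);

            /* Placeholder helper: re-evaluate SCI state after restore. */
            if ( !rc )
                pvh_acpi_update_sci(d);

            return rc;
        }

        return -ENODEV;
    }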

Which in turn gets me to wonder whether some of the code which
is already there couldn't be re-used (handle_evt_io() for example).

And then, seeing the locking here - don't you need some locking
in the earlier patches too, both to serialize accesses from multiple
guest vCPU-s and to arbitrate between Dom0 and the guest?

Jan
Boris Ostrovsky Dec. 20, 2016, 3:09 p.m. UTC | #2
On 12/20/2016 08:57 AM, Jan Beulich wrote:
>>>> On 17.12.16 at 00:18, <boris.ostrovsky@oracle.com> wrote:
>> --- a/xen/arch/x86/hvm/pmtimer.c
>> +++ b/xen/arch/x86/hvm/pmtimer.c
>> @@ -257,7 +257,11 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
>>      int rc;
>>  
>>      if ( !has_vpm(d) )
>> +    {
>> +        if ( !has_acpi_dm_ff(d) )
>> +            return hvm_save_entry(PMTIMER, 0, h, acpi);
>>          return 0;
>> +    }
>>  
>>      spin_lock(&s->lock);
>>  
>> @@ -286,7 +290,11 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
>>      PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
>>  
>>      if ( !has_vpm(d) )
>> +    {
>> +        if ( !has_acpi_dm_ff(d) )
>> +            return hvm_load_entry(PMTIMER, h, acpi);
>>          return -ENODEV;
>> +    }
>>  
>>      spin_lock(&s->lock);
> Seeing this I first of all wonder - would there be any harm in simply
> having PVH take (almost) the same route as HVM here? In particular
> there's a pmt_update_sci() call, an equivalent of which would seem
> to be needed for PVH too.

It is probably harmless, but not very useful either, since we'd be saving
PMTState, which is not in use by PVH.

As for pmt_update_sci(): the only case where an SCI is generated is a
CPU map update, and that interrupt is made pending to the guest
immediately, so if we resend it during restore, won't we be sending it
twice?

>
> Which in turn gets me to wonder whether some of the code which
> is already there couldn't be re-used (handle_evt_io() for example).

My plan is to merge these routines (to some extent) when we make HVM
guests use the same CPU hotplug code as PVH (i.e. stop using QEMU for that).


> And then, seeing the locking here - don't you need some locking
> in the earlier patches too, both to serialize accesses from multiple
> guest vCPU-s and to arbitrate between Dom0 and the guest?

Yes, that was missing.
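
(Sketch only of the kind of serialization being discussed; the handler and
lock-field names below are placeholders, not code from the earlier patches:)

    static int pvh_acpi_portio(int dir, unsigned int port,
                               unsigned int bytes, uint32_t *val)
    {
        struct domain *d = current->domain;

        /* Serialize guest vCPUs against each other and against Dom0. */
        spin_lock(&d->arch.hvm_domain.acpi_lock);    /* placeholder field */

        /* ... read or update the PM1a/GPE0 block and the CPU map ... */

        spin_unlock(&d->arch.hvm_domain.acpi_lock);

        return X86EMUL_OKAY;
    }

    /*
     * The Dom0/toolstack path that modifies the CPU map would take the
     * same lock before updating the map and raising the SCI.
     */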

-boris
Jan Beulich Dec. 20, 2016, 3:40 p.m. UTC | #3
>>> On 20.12.16 at 16:09, <boris.ostrovsky@oracle.com> wrote:
> On 12/20/2016 08:57 AM, Jan Beulich wrote:
>>>>> On 17.12.16 at 00:18, <boris.ostrovsky@oracle.com> wrote:
>>> --- a/xen/arch/x86/hvm/pmtimer.c
>>> +++ b/xen/arch/x86/hvm/pmtimer.c
>>> @@ -257,7 +257,11 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
>>>      int rc;
>>>  
>>>      if ( !has_vpm(d) )
>>> +    {
>>> +        if ( !has_acpi_dm_ff(d) )
>>> +            return hvm_save_entry(PMTIMER, 0, h, acpi);
>>>          return 0;
>>> +    }
>>>  
>>>      spin_lock(&s->lock);
>>>  
>>> @@ -286,7 +290,11 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
>>>      PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
>>>  
>>>      if ( !has_vpm(d) )
>>> +    {
>>> +        if ( !has_acpi_dm_ff(d) )
>>> +            return hvm_load_entry(PMTIMER, h, acpi);
>>>          return -ENODEV;
>>> +    }
>>>  
>>>      spin_lock(&s->lock);
>> Seeing this I first of all wonder - would there be any harm in simply
>> having PVH take (almost) the same route as HVM here? In particular
>> there's a pmt_update_sci() call, an equivalent of which would seem
>> to be needed for PVH too.
> 
> It is probably harmless, but not very useful either, since we'd be saving
> PMTState, which is not in use by PVH.
> 
> As for pmt_update_sci(): the only case where an SCI is generated is a
> CPU map update, and that interrupt is made pending to the guest
> immediately, so if we resend it during restore, won't we be sending it
> twice?

Well, that depends on how far things got prior to the (racing) migration.
I think it's better to send one too many than to risk the guest not
getting to see any.
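
(A sketch of what re-raising the SCI after restore could look like, modelled
on pmt_update_sci(); the hvm_hw_acpi fields and SCI_IRQ value are assumptions
based on pmtimer.c rather than code from this series:)

    static void pvh_acpi_update_sci(struct domain *d)
    {
        const struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;

        /* SCI pending iff an enabled status bit is set after restore. */
        if ( acpi->pm1a_en & acpi->pm1a_sts )
            hvm_isa_irq_assert(d, SCI_IRQ);     /* IRQ 9, as in pmtimer.c */
        else
            hvm_isa_irq_deassert(d, SCI_IRQ);
    }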

Jan

Patch

diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index b70c299..37acf11 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -257,7 +257,11 @@  static int acpi_save(struct domain *d, hvm_domain_context_t *h)
     int rc;
 
     if ( !has_vpm(d) )
+    {
+        if ( !has_acpi_dm_ff(d) )
+            return hvm_save_entry(PMTIMER, 0, h, acpi);
         return 0;
+    }
 
     spin_lock(&s->lock);
 
@@ -286,7 +290,11 @@  static int acpi_load(struct domain *d, hvm_domain_context_t *h)
     PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
 
     if ( !has_vpm(d) )
+    {
+        if ( !has_acpi_dm_ff(d) )
+            return hvm_load_entry(PMTIMER, h, acpi);
         return -ENODEV;
+    }
 
     spin_lock(&s->lock);