diff mbox series

[v3,05/11] x86/vioapic: switch to use the EOI callback mechanism

Message ID 20210331103303.79705-6-roger.pau@citrix.com (mailing list archive)
State Superseded
Series x86/intr: introduce EOI callbacks and fix vPT

Commit Message

Roger Pau Monne March 31, 2021, 10:32 a.m. UTC
Switch the emulated IO-APIC code to use the local APIC EOI callback
mechanism. This allows removing the last hardcoded callback from
vlapic_handle_EOI. Removing the hardcoded vIO-APIC callback also
allows getting rid of setting the EOI exit bitmap based on the
trigger mode, as now all users that require an EOI action use the
newly introduced callback mechanism.

Move and rename vioapic_update_EOI now that it can be made static.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Explicitly convert the last alternative_vcall parameter to a
   boolean in vlapic_set_irq_callback.

Changes since v1:
 - Remove the triggering check in the update_eoi_exit_bitmap call.
 - Register the vlapic callbacks when loading the vIO-APIC state.
 - Reduce scope of ent.
---
 xen/arch/x86/hvm/vioapic.c | 131 ++++++++++++++++++++++++-------------
 xen/arch/x86/hvm/vlapic.c  |  11 ++--
 2 files changed, 92 insertions(+), 50 deletions(-)

Comments

Jan Beulich April 7, 2021, 3:19 p.m. UTC | #1
On 31.03.2021 12:32, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/vioapic.c
> +++ b/xen/arch/x86/hvm/vioapic.c
> @@ -394,6 +394,50 @@ static const struct hvm_mmio_ops vioapic_mmio_ops = {
>      .write = vioapic_write
>  };
>  
> +static void eoi_callback(unsigned int vector, void *data)
> +{
> +    struct domain *d = current->domain;
> +    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> +    unsigned int i;
> +
> +    ASSERT(has_vioapic(d));

On the same grounds on which you dropped checks from hvm_dpci_msi_eoi()
in the previous patch you could imo now drop this assertion.

> @@ -621,7 +624,43 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
>           d->arch.hvm.nr_vioapics != 1 )
>          return -EOPNOTSUPP;
>  
> -    return hvm_load_entry(IOAPIC, h, &s->domU);
> +    rc = hvm_load_entry(IOAPIC, h, &s->domU);
> +    if ( rc )
> +        return rc;
> +
> +    for ( i = 0; i < ARRAY_SIZE(s->domU.redirtbl); i++ )
> +    {
> +        const union vioapic_redir_entry *ent = &s->domU.redirtbl[i];
> +        unsigned int vector = ent->fields.vector;
> +        unsigned int delivery_mode = ent->fields.delivery_mode;
> +        struct vcpu *v;
> +
> +        /*
> +         * Add a callback for each possible vector injected by a redirection
> +         * entry.
> +         */
> +        if ( vector < 16 || !ent->fields.remote_irr ||
> +             (delivery_mode != dest_LowestPrio && delivery_mode != dest_Fixed) )
> +            continue;
> +
> +        for_each_vcpu ( d, v )
> +        {
> +            struct vlapic *vlapic = vcpu_vlapic(v);
> +
> +            /*
> +             * NB: if the vlapic registers were restored before the vio-apic
> +             * ones we could test whether the vector is set in the vlapic IRR
> +             * or ISR registers before unconditionally setting the callback.
> +             * This is harmless as eoi_callback is capable of dealing with
> +             * spurious callbacks.
> +             */
> +            if ( vlapic_match_dest(vlapic, NULL, 0, ent->fields.dest_id,
> +                                   ent->fields.dest_mode) )
> +                vlapic_set_callback(vlapic, vector, eoi_callback, NULL);

eoi_callback()'s behavior is only one of the aspects to consider here.
The other is vlapic_set_callback()'s complaining if it finds a
callback already set. What guarantees that a mistakenly set callback
here won't get in conflict with some future use of the same vector by
the guest?

And btw - like in the earlier patch you could again pass d instead of
NULL here, avoiding the need to establish it from current in the
callback.

> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -192,7 +192,13 @@ void vlapic_set_irq_callback(struct vlapic *vlapic, uint8_t vec, uint8_t trig,
>  
>      if ( hvm_funcs.update_eoi_exit_bitmap )
>          alternative_vcall(hvm_funcs.update_eoi_exit_bitmap, target, vec,
> -                          trig || callback);
> +                          /*
> +                           * NB: need to explicitly convert to boolean to avoid
> +                           * truncation wrongly result in false begin reported
> +                           * for example when the pointer sits on a page
> +                           * boundary.
> +                           */
> +                          !!callback);

I've had quite a bit of difficulty with the comment. Once I realized
that you likely mean "being" instead of "begin" it got a bit better.
I'd like to suggest also s/result/resulting/, a comma after "reported",
and maybe then s/being reported/getting passed/.

As to explicitly converting to bool, wouldn't a cast to bool do? That's
more explicitly an "explicit conversion" than using !!.

Jan
Roger Pau Monne April 7, 2021, 4:46 p.m. UTC | #2
On Wed, Apr 07, 2021 at 05:19:06PM +0200, Jan Beulich wrote:
> On 31.03.2021 12:32, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/vioapic.c
> > +++ b/xen/arch/x86/hvm/vioapic.c
> > @@ -621,7 +624,43 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
> >           d->arch.hvm.nr_vioapics != 1 )
> >          return -EOPNOTSUPP;
> >  
> > -    return hvm_load_entry(IOAPIC, h, &s->domU);
> > +    rc = hvm_load_entry(IOAPIC, h, &s->domU);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    for ( i = 0; i < ARRAY_SIZE(s->domU.redirtbl); i++ )
> > +    {
> > +        const union vioapic_redir_entry *ent = &s->domU.redirtbl[i];
> > +        unsigned int vector = ent->fields.vector;
> > +        unsigned int delivery_mode = ent->fields.delivery_mode;
> > +        struct vcpu *v;
> > +
> > +        /*
> > +         * Add a callback for each possible vector injected by a redirection
> > +         * entry.
> > +         */
> > +        if ( vector < 16 || !ent->fields.remote_irr ||
> > +             (delivery_mode != dest_LowestPrio && delivery_mode != dest_Fixed) )
> > +            continue;
> > +
> > +        for_each_vcpu ( d, v )
> > +        {
> > +            struct vlapic *vlapic = vcpu_vlapic(v);
> > +
> > +            /*
> > +             * NB: if the vlapic registers were restored before the vio-apic
> > +             * ones we could test whether the vector is set in the vlapic IRR
> > +             * or ISR registers before unconditionally setting the callback.
> > +             * This is harmless as eoi_callback is capable of dealing with
> > +             * spurious callbacks.
> > +             */
> > +            if ( vlapic_match_dest(vlapic, NULL, 0, ent->fields.dest_id,
> > +                                   ent->fields.dest_mode) )
> > +                vlapic_set_callback(vlapic, vector, eoi_callback, NULL);
> 
> eoi_callback()'s behavior is only one of the aspects to consider here.
> The other is vlapic_set_callback()'s complaining if it finds a
> callback already set. What guarantees that a mistakenly set callback
> here won't get in conflict with some future use of the same vector by
> the guest?

Such conflict would only manifest as a warning message, but won't
cause any malfunction, as the later callback would override the
current one.

This model I'm proposing doesn't support lapic vector sharing with
different devices that require EOI callbacks, I think we already
discussed this on a previous series and agreed it was fine.

> And btw - like in the earlier patch you could again pass d instead of
> NULL here, avoiding the need to establish it from current in the
> callback.

On the new version the vlapic callback gets passed a vcpu parameter,
as I will drop the prepatches to remove passing a domain parameter to
vioapic_update_EOI.

> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -192,7 +192,13 @@ void vlapic_set_irq_callback(struct vlapic *vlapic, uint8_t vec, uint8_t trig,
> >  
> >      if ( hvm_funcs.update_eoi_exit_bitmap )
> >          alternative_vcall(hvm_funcs.update_eoi_exit_bitmap, target, vec,
> > -                          trig || callback);
> > +                          /*
> > +                           * NB: need to explicitly convert to boolean to avoid
> > +                           * truncation wrongly result in false begin reported
> > +                           * for example when the pointer sits on a page
> > +                           * boundary.
> > +                           */
> > +                          !!callback);
> 
> I've had quite a bit of difficulty with the comment. Once I realized
> that you likely mean "being" instead of "begin" it got a bit better.
> I'd like to suggest also s/result/resulting/, a comma after "reported",
> and maybe then s/being reported/getting passed/.
> 
> As to explicitly converting to bool, wouldn't a cast to bool do? That's
> more explicitly an "explicit conversion" than using !!.

I've always used !! in the past for such cases because it's shorter, I
can explicitly cast if you prefer that instead.

Thanks, Roger.
Jan Beulich April 8, 2021, 6:27 a.m. UTC | #3
On 07.04.2021 18:46, Roger Pau Monné wrote:
> On Wed, Apr 07, 2021 at 05:19:06PM +0200, Jan Beulich wrote:
>> On 31.03.2021 12:32, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/hvm/vioapic.c
>>> +++ b/xen/arch/x86/hvm/vioapic.c
>>> @@ -621,7 +624,43 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
>>>           d->arch.hvm.nr_vioapics != 1 )
>>>          return -EOPNOTSUPP;
>>>  
>>> -    return hvm_load_entry(IOAPIC, h, &s->domU);
>>> +    rc = hvm_load_entry(IOAPIC, h, &s->domU);
>>> +    if ( rc )
>>> +        return rc;
>>> +
>>> +    for ( i = 0; i < ARRAY_SIZE(s->domU.redirtbl); i++ )
>>> +    {
>>> +        const union vioapic_redir_entry *ent = &s->domU.redirtbl[i];
>>> +        unsigned int vector = ent->fields.vector;
>>> +        unsigned int delivery_mode = ent->fields.delivery_mode;
>>> +        struct vcpu *v;
>>> +
>>> +        /*
>>> +         * Add a callback for each possible vector injected by a redirection
>>> +         * entry.
>>> +         */
>>> +        if ( vector < 16 || !ent->fields.remote_irr ||
>>> +             (delivery_mode != dest_LowestPrio && delivery_mode != dest_Fixed) )
>>> +            continue;
>>> +
>>> +        for_each_vcpu ( d, v )
>>> +        {
>>> +            struct vlapic *vlapic = vcpu_vlapic(v);
>>> +
>>> +            /*
>>> +             * NB: if the vlapic registers were restored before the vio-apic
>>> +             * ones we could test whether the vector is set in the vlapic IRR
>>> +             * or ISR registers before unconditionally setting the callback.
>>> +             * This is harmless as eoi_callback is capable of dealing with
>>> +             * spurious callbacks.
>>> +             */
>>> +            if ( vlapic_match_dest(vlapic, NULL, 0, ent->fields.dest_id,
>>> +                                   ent->fields.dest_mode) )
>>> +                vlapic_set_callback(vlapic, vector, eoi_callback, NULL);
>>
>> eoi_callback()'s behavior is only one of the aspects to consider here.
>> The other is vlapic_set_callback()'s complaining if it finds a
>> callback already set. What guarantees that a mistakenly set callback
>> here won't get in conflict with some future use of the same vector by
>> the guest?
> 
> Such conflict would only manifest as a warning message, but won't
> cause any malfunction, as the later callback would override the
> current one.
> 
> This model I'm proposing doesn't support lapic vector sharing with
> different devices that require EOI callbacks, I think we already
> discussed this on a previous series and agreed it was fine.

The problem with such false positive warning messages is that
they'll cause cautious people to investigate, i.e. spend perhaps
a sizable amount of time in understanding what was actually a non-
issue. I view this as a problem, even if the code's functioning is
fine the way it is. I'm not even sure explicitly mentioning the
situation in the comment is going to help, as one may not stumble
across that comment while investigating.

>>> --- a/xen/arch/x86/hvm/vlapic.c
>>> +++ b/xen/arch/x86/hvm/vlapic.c
>>> @@ -192,7 +192,13 @@ void vlapic_set_irq_callback(struct vlapic *vlapic, uint8_t vec, uint8_t trig,
>>>  
>>>      if ( hvm_funcs.update_eoi_exit_bitmap )
>>>          alternative_vcall(hvm_funcs.update_eoi_exit_bitmap, target, vec,
>>> -                          trig || callback);
>>> +                          /*
>>> +                           * NB: need to explicitly convert to boolean to avoid
>>> +                           * truncation wrongly result in false begin reported
>>> +                           * for example when the pointer sits on a page
>>> +                           * boundary.
>>> +                           */
>>> +                          !!callback);
>>
>> I've had quite a bit of difficulty with the comment. Once I realized
>> that you likely mean "being" instead of "begin" it got a bit better.
>> I'd like to suggest also s/result/resulting/, a comma after "reported",
>> and maybe then s/being reported/getting passed/.
>>
>> As to explicitly converting to bool, wouldn't a cast to bool do? That's
>> more explicitly an "explicit conversion" than using !!.
> 
> I've always used !! in the past for such cases because it's shorter, I
> can explicitly cast if you prefer that instead.

I agree with the "shorter" aspect. What I'm afraid of is that someone may,
despite the comment, think the !! is a stray leftover from the bool_t
days. I'd therefore prefer to keep the !! pattern for just the legacy
uses, and see casts used in cases like the one here. However, if both you
and Andrew think otherwise, so be it.

Jan
Roger Pau Monne April 8, 2021, 8:59 a.m. UTC | #4
On Thu, Apr 08, 2021 at 08:27:10AM +0200, Jan Beulich wrote:
> On 07.04.2021 18:46, Roger Pau Monné wrote:
> > On Wed, Apr 07, 2021 at 05:19:06PM +0200, Jan Beulich wrote:
> >> On 31.03.2021 12:32, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/hvm/vioapic.c
> >>> +++ b/xen/arch/x86/hvm/vioapic.c
> >>> @@ -621,7 +624,43 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
> >>>           d->arch.hvm.nr_vioapics != 1 )
> >>>          return -EOPNOTSUPP;
> >>>  
> >>> -    return hvm_load_entry(IOAPIC, h, &s->domU);
> >>> +    rc = hvm_load_entry(IOAPIC, h, &s->domU);
> >>> +    if ( rc )
> >>> +        return rc;
> >>> +
> >>> +    for ( i = 0; i < ARRAY_SIZE(s->domU.redirtbl); i++ )
> >>> +    {
> >>> +        const union vioapic_redir_entry *ent = &s->domU.redirtbl[i];
> >>> +        unsigned int vector = ent->fields.vector;
> >>> +        unsigned int delivery_mode = ent->fields.delivery_mode;
> >>> +        struct vcpu *v;
> >>> +
> >>> +        /*
> >>> +         * Add a callback for each possible vector injected by a redirection
> >>> +         * entry.
> >>> +         */
> >>> +        if ( vector < 16 || !ent->fields.remote_irr ||
> >>> +             (delivery_mode != dest_LowestPrio && delivery_mode != dest_Fixed) )
> >>> +            continue;
> >>> +
> >>> +        for_each_vcpu ( d, v )
> >>> +        {
> >>> +            struct vlapic *vlapic = vcpu_vlapic(v);
> >>> +
> >>> +            /*
> >>> +             * NB: if the vlapic registers were restored before the vio-apic
> >>> +             * ones we could test whether the vector is set in the vlapic IRR
> >>> +             * or ISR registers before unconditionally setting the callback.
> >>> +             * This is harmless as eoi_callback is capable of dealing with
> >>> +             * spurious callbacks.
> >>> +             */
> >>> +            if ( vlapic_match_dest(vlapic, NULL, 0, ent->fields.dest_id,
> >>> +                                   ent->fields.dest_mode) )
> >>> +                vlapic_set_callback(vlapic, vector, eoi_callback, NULL);
> >>
> >> eoi_callback()'s behavior is only one of the aspects to consider here.
> >> The other is vlapic_set_callback()'s complaining if it finds a
> >> callback already set. What guarantees that a mistakenly set callback
> >> here won't get in conflict with some future use of the same vector by
> >> the guest?
> > 
> > Such conflict would only manifest as a warning message, but won't
> > cause any malfunction, as the later callback would override the
> > current one.
> > 
> > This model I'm proposing doesn't support lapic vector sharing with
> > different devices that require EOI callbacks, I think we already
> > discussed this on a previous series and agreed it was fine.
> 
> The problem with such false positive warning messages is that
> they'll cause cautious people to investigate, i.e. spend perhaps
> a sizable amount of time in understanding what was actually a non-
> issue. I view this as a problem, even if the code's functioning is
> fine the way it is. I'm not even sure explicitly mentioning the
> situation in the comment is going to help, as one may not stumble
> across that comment while investigating.

What about making the warning message in case of an override in
vlapic_set_callback conditional on there being a vector pending in IRR
or ISR?

Without having such vector pending the callback is just useless, as
it's not going to be executed, so overriding it in that situation is
perfectly fine. That should prevent the restoring here triggering the
message unless there's indeed a troublesome sharing of a vector.

> >>> --- a/xen/arch/x86/hvm/vlapic.c
> >>> +++ b/xen/arch/x86/hvm/vlapic.c
> >>> @@ -192,7 +192,13 @@ void vlapic_set_irq_callback(struct vlapic *vlapic, uint8_t vec, uint8_t trig,
> >>>  
> >>>      if ( hvm_funcs.update_eoi_exit_bitmap )
> >>>          alternative_vcall(hvm_funcs.update_eoi_exit_bitmap, target, vec,
> >>> -                          trig || callback);
> >>> +                          /*
> >>> +                           * NB: need to explicitly convert to boolean to avoid
> >>> +                           * truncation wrongly result in false begin reported
> >>> +                           * for example when the pointer sits on a page
> >>> +                           * boundary.
> >>> +                           */
> >>> +                          !!callback);
> >>
> >> I've had quite a bit of difficulty with the comment. Once I realized
> >> that you likely mean "being" instead of "begin" it got a bit better.
> >> I'd like to suggest also s/result/resulting/, a comma after "reported",
> >> and maybe then s/being reported/getting passed/.
> >>
> >> As to explicitly converting to bool, wouldn't a cast to bool do? That's
> >> more explicitly an "explicit conversion" than using !!.
> > 
> > I've always used !! in the past for such cases because it's shorter, I
> > can explicitly cast if you prefer that instead.
> 
> I agree with the "shorter" aspect. What I'm afraid of is that someone may,
> despite the comment, think the !! is a stray leftover from the bool_t
> days. I'd therefore prefer to keep the !! pattern for just the legacy
> uses, and see casts used in cases like the one here. However, if both you
> and Andrew think otherwise, so be it.

I'm fine with casting to boolean.

Thanks, Roger.
Jan Beulich April 8, 2021, 10:52 a.m. UTC | #5
On 08.04.2021 10:59, Roger Pau Monné wrote:
> On Thu, Apr 08, 2021 at 08:27:10AM +0200, Jan Beulich wrote:
>> On 07.04.2021 18:46, Roger Pau Monné wrote:
>>> On Wed, Apr 07, 2021 at 05:19:06PM +0200, Jan Beulich wrote:
>>>> On 31.03.2021 12:32, Roger Pau Monne wrote:
>>>>> --- a/xen/arch/x86/hvm/vioapic.c
>>>>> +++ b/xen/arch/x86/hvm/vioapic.c
>>>>> @@ -621,7 +624,43 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
>>>>>           d->arch.hvm.nr_vioapics != 1 )
>>>>>          return -EOPNOTSUPP;
>>>>>  
>>>>> -    return hvm_load_entry(IOAPIC, h, &s->domU);
>>>>> +    rc = hvm_load_entry(IOAPIC, h, &s->domU);
>>>>> +    if ( rc )
>>>>> +        return rc;
>>>>> +
>>>>> +    for ( i = 0; i < ARRAY_SIZE(s->domU.redirtbl); i++ )
>>>>> +    {
>>>>> +        const union vioapic_redir_entry *ent = &s->domU.redirtbl[i];
>>>>> +        unsigned int vector = ent->fields.vector;
>>>>> +        unsigned int delivery_mode = ent->fields.delivery_mode;
>>>>> +        struct vcpu *v;
>>>>> +
>>>>> +        /*
>>>>> +         * Add a callback for each possible vector injected by a redirection
>>>>> +         * entry.
>>>>> +         */
>>>>> +        if ( vector < 16 || !ent->fields.remote_irr ||
>>>>> +             (delivery_mode != dest_LowestPrio && delivery_mode != dest_Fixed) )
>>>>> +            continue;
>>>>> +
>>>>> +        for_each_vcpu ( d, v )
>>>>> +        {
>>>>> +            struct vlapic *vlapic = vcpu_vlapic(v);
>>>>> +
>>>>> +            /*
>>>>> +             * NB: if the vlapic registers were restored before the vio-apic
>>>>> +             * ones we could test whether the vector is set in the vlapic IRR
>>>>> +             * or ISR registers before unconditionally setting the callback.
>>>>> +             * This is harmless as eoi_callback is capable of dealing with
>>>>> +             * spurious callbacks.
>>>>> +             */
>>>>> +            if ( vlapic_match_dest(vlapic, NULL, 0, ent->fields.dest_id,
>>>>> +                                   ent->fields.dest_mode) )
>>>>> +                vlapic_set_callback(vlapic, vector, eoi_callback, NULL);
>>>>
>>>> eoi_callback()'s behavior is only one of the aspects to consider here.
>>>> The other is vlapic_set_callback()'s complaining if it finds a
>>>> callback already set. What guarantees that a mistakenly set callback
>>>> here won't get in conflict with some future use of the same vector by
>>>> the guest?
>>>
>>> Such conflict would only manifest as a warning message, but won't
>>> cause any malfunction, as the later callback would override the
>>> current one.
>>>
>>> This model I'm proposing doesn't support lapic vector sharing with
>>> different devices that require EOI callbacks, I think we already
>>> discussed this on a previous series and agreed it was fine.
>>
>> The problem with such false positive warning messages is that
>> they'll cause cautious people to investigate, i.e. spend perhaps
>> a sizable amount of time in understanding what was actually a non-
>> issue. I view this as a problem, even if the code's functioning is
>> fine the way it is. I'm not even sure explicitly mentioning the
>> situation in the comment is going to help, as one may not stumble
>> across that comment while investigating.
> 
> What about making the warning message in case of an override in
> vlapic_set_callback conditional on there being a vector pending in IRR
> or ISR?
> 
> Without having such vector pending the callback is just useless, as
> it's not going to be executed, so overriding it in that situation is
> perfectly fine. That should prevent the restoring here triggering the
> message unless there's indeed a troublesome sharing of a vector.

Ah yes, since the callbacks are self-clearing, this gating looks quite
reasonable to me.

Jan

Patch

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index dcc2de76489..d29b6bfdb7d 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -394,6 +394,50 @@  static const struct hvm_mmio_ops vioapic_mmio_ops = {
     .write = vioapic_write
 };
 
+static void eoi_callback(unsigned int vector, void *data)
+{
+    struct domain *d = current->domain;
+    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
+    unsigned int i;
+
+    ASSERT(has_vioapic(d));
+
+    spin_lock(&d->arch.hvm.irq_lock);
+
+    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
+    {
+        struct hvm_vioapic *vioapic = domain_vioapic(d, i);
+        unsigned int pin;
+
+        for ( pin = 0; pin < vioapic->nr_pins; pin++ )
+        {
+            union vioapic_redir_entry *ent = &vioapic->redirtbl[pin];
+
+            if ( ent->fields.vector != vector )
+                continue;
+
+            ent->fields.remote_irr = 0;
+
+            if ( is_iommu_enabled(d) )
+            {
+                spin_unlock(&d->arch.hvm.irq_lock);
+                hvm_dpci_eoi(vioapic->base_gsi + pin);
+                spin_lock(&d->arch.hvm.irq_lock);
+            }
+
+            if ( (ent->fields.trig_mode == VIOAPIC_LEVEL_TRIG) &&
+                 !ent->fields.mask && !ent->fields.remote_irr &&
+                 hvm_irq->gsi_assert_count[vioapic->base_gsi + pin] )
+            {
+                ent->fields.remote_irr = 1;
+                vioapic_deliver(vioapic, pin);
+            }
+        }
+    }
+
+    spin_unlock(&d->arch.hvm.irq_lock);
+}
+
 static void ioapic_inj_irq(
     struct hvm_vioapic *vioapic,
     struct vlapic *target,
@@ -407,7 +451,8 @@  static void ioapic_inj_irq(
     ASSERT((delivery_mode == dest_Fixed) ||
            (delivery_mode == dest_LowestPrio));
 
-    vlapic_set_irq(target, vector, trig_mode);
+    vlapic_set_irq_callback(target, vector, trig_mode,
+                            trig_mode ? eoi_callback : NULL, NULL);
 }
 
 static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
@@ -514,50 +559,6 @@  void vioapic_irq_positive_edge(struct domain *d, unsigned int irq)
     }
 }
 
-void vioapic_update_EOI(unsigned int vector)
-{
-    struct domain *d = current->domain;
-    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
-    union vioapic_redir_entry *ent;
-    unsigned int i;
-
-    ASSERT(has_vioapic(d));
-
-    spin_lock(&d->arch.hvm.irq_lock);
-
-    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
-    {
-        struct hvm_vioapic *vioapic = domain_vioapic(d, i);
-        unsigned int pin;
-
-        for ( pin = 0; pin < vioapic->nr_pins; pin++ )
-        {
-            ent = &vioapic->redirtbl[pin];
-            if ( ent->fields.vector != vector )
-                continue;
-
-            ent->fields.remote_irr = 0;
-
-            if ( is_iommu_enabled(d) )
-            {
-                spin_unlock(&d->arch.hvm.irq_lock);
-                hvm_dpci_eoi(vioapic->base_gsi + pin);
-                spin_lock(&d->arch.hvm.irq_lock);
-            }
-
-            if ( (ent->fields.trig_mode == VIOAPIC_LEVEL_TRIG) &&
-                 !ent->fields.mask && !ent->fields.remote_irr &&
-                 hvm_irq->gsi_assert_count[vioapic->base_gsi + pin] )
-            {
-                ent->fields.remote_irr = 1;
-                vioapic_deliver(vioapic, pin);
-            }
-        }
-    }
-
-    spin_unlock(&d->arch.hvm.irq_lock);
-}
-
 int vioapic_get_mask(const struct domain *d, unsigned int gsi)
 {
     unsigned int pin = 0; /* See gsi_vioapic */
@@ -611,6 +612,8 @@  static int ioapic_save(struct vcpu *v, hvm_domain_context_t *h)
 static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
 {
     struct hvm_vioapic *s;
+    unsigned int i;
+    int rc;
 
     if ( !has_vioapic(d) )
         return -ENODEV;
@@ -621,7 +624,43 @@  static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
          d->arch.hvm.nr_vioapics != 1 )
         return -EOPNOTSUPP;
 
-    return hvm_load_entry(IOAPIC, h, &s->domU);
+    rc = hvm_load_entry(IOAPIC, h, &s->domU);
+    if ( rc )
+        return rc;
+
+    for ( i = 0; i < ARRAY_SIZE(s->domU.redirtbl); i++ )
+    {
+        const union vioapic_redir_entry *ent = &s->domU.redirtbl[i];
+        unsigned int vector = ent->fields.vector;
+        unsigned int delivery_mode = ent->fields.delivery_mode;
+        struct vcpu *v;
+
+        /*
+         * Add a callback for each possible vector injected by a redirection
+         * entry.
+         */
+        if ( vector < 16 || !ent->fields.remote_irr ||
+             (delivery_mode != dest_LowestPrio && delivery_mode != dest_Fixed) )
+            continue;
+
+        for_each_vcpu ( d, v )
+        {
+            struct vlapic *vlapic = vcpu_vlapic(v);
+
+            /*
+             * NB: if the vlapic registers were restored before the vio-apic
+             * ones we could test whether the vector is set in the vlapic IRR
+             * or ISR registers before unconditionally setting the callback.
+             * This is harmless as eoi_callback is capable of dealing with
+             * spurious callbacks.
+             */
+            if ( vlapic_match_dest(vlapic, NULL, 0, ent->fields.dest_id,
+                                   ent->fields.dest_mode) )
+                vlapic_set_callback(vlapic, vector, eoi_callback, NULL);
+        }
+    }
+
+    return 0;
 }
 
 HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 10b216345a7..63fa3780767 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -192,7 +192,13 @@  void vlapic_set_irq_callback(struct vlapic *vlapic, uint8_t vec, uint8_t trig,
 
     if ( hvm_funcs.update_eoi_exit_bitmap )
         alternative_vcall(hvm_funcs.update_eoi_exit_bitmap, target, vec,
-                          trig || callback);
+                          /*
+                           * NB: need to explicitly convert to boolean to avoid
+                           * truncation wrongly result in false begin reported
+                           * for example when the pointer sits on a page
+                           * boundary.
+                           */
+                          !!callback);
 
     if ( hvm_funcs.deliver_posted_intr )
         alternative_vcall(hvm_funcs.deliver_posted_intr, target, vec);
@@ -496,9 +502,6 @@  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
     unsigned long flags;
     unsigned int index = vector - 16;
 
-    if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
-        vioapic_update_EOI(vector);
-
     spin_lock_irqsave(&vlapic->callback_lock, flags);
     callback = vlapic->callbacks[index].callback;
     vlapic->callbacks[index].callback = NULL;