
[v10,26/32] ARM: vITS: handle MOVI command

Message ID 20170526173540.10066-27-andre.przywara@arm.com (mailing list archive)
State New, archived

Commit Message

Andre Przywara May 26, 2017, 5:35 p.m. UTC
The MOVI command moves the interrupt affinity from one redistributor
(read: VCPU) to another.
For now migration of "live" LPIs is not yet implemented, but we store
the changed affinity in our virtual ITTE and the pending_irq.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

Comments

Stefano Stabellini May 30, 2017, 10:35 p.m. UTC | #1
On Fri, 26 May 2017, Andre Przywara wrote:
> The MOVI command moves the interrupt affinity from one redistributor
> (read: VCPU) to another.
> For now migration of "live" LPIs is not yet implemented, but we store
> the changed affinity in our virtual ITTE and the pending_irq.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 66 insertions(+)
> 
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index c350fa5..3332c09 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -666,6 +666,66 @@ out_remove_mapping:
>      return ret;
>  }
>  
> +static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    uint16_t collid = its_cmd_get_collection(cmdptr);
> +    unsigned long flags;
> +    struct pending_irq *p;
> +    struct vcpu *ovcpu, *nvcpu;
> +    uint32_t vlpi;
> +    int ret = -1;
> +
> +    spin_lock(&its->its_lock);
> +    /* Check for a mapped LPI and get the LPI number. */
> +    if ( !read_itte_locked(its, devid, eventid, &ovcpu, &vlpi) )
> +        goto out_unlock;
> +
> +    if ( vlpi == INVALID_LPI )
> +        goto out_unlock;
> +
> +    /* Check the new collection ID and get the new VCPU pointer */
> +    nvcpu = get_vcpu_from_collection(its, collid);
> +    if ( !nvcpu )
> +        goto out_unlock;
> +
> +    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
> +                                        devid, eventid);
> +    if ( unlikely(!p) )
> +        goto out_unlock;
> +
> +    /*
> +     * TODO: This relies on the VCPU being correct in the ITS tables.
> +     * This can be fixed by either using a per-IRQ lock or by using
> +     * the VCPU ID from the pending_irq instead.
> +     */
> +    spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);
> +
> +    /* Update our cached vcpu_id in the pending_irq. */
> +    p->lpi_vcpu_id = nvcpu->vcpu_id;
> +
> +    spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
> +
> +    /*
> +     * TODO: lookup currently-in-guest virtual IRQs and migrate them,
> +     * as the locking may be fragile otherwise.
> +     * This is not easy to do at the moment, but should become easier
> +     * with the introduction of a per-IRQ lock.
> +     */

Sure, but at least we can handle the inflight-but-not-in-guest case. It
is just a matter of adding (within the arch.vgic.lock locked region):

    if ( !list_empty(&p->lr_queue) )
    {
        gic_remove_irq(old, p);
        clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
        list_del_init(&p->lr_queue);
        list_del_init(&p->inflight);

        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
        vgic_vcpu_inject_irq(new, irq);
    }

That is simple and there are no problems with locking. The problem is
with the other case: !list_empty(&p->inflight) &&
list_empty(&p->lr_queue), which is the one for which you need to keep
this TODO comment.
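
(In its_handle_movi() above, old/new/irq would correspond to ovcpu, nvcpu
and vlpi.) For the remaining case, what the existing non-LPI path does
today is roughly the following -- an untested sketch loosely based on
vgic_migrate_irq, with the exact flag name to be double-checked against
the tree:

    /*
     * Sketch only: the interrupt is in an LR (inflight, but no longer on
     * the lr_queue), so it cannot simply be re-injected.  Mark it as
     * migrating and let the EOI/maintenance path move it to the new VCPU
     * later, which is roughly what vgic_migrate_irq does for SPIs.
     */
    if ( !list_empty(&p->inflight) && list_empty(&p->lr_queue) )
        set_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);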


> +    /* Now store the new collection in the translation table. */
> +    if ( !write_itte_locked(its, devid, eventid, collid, vlpi, &nvcpu) )
> +        goto out_unlock;
> +
> +    ret = 0;
> +
> +out_unlock:
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}
> +
>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>  
> @@ -711,6 +771,12 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>          case GITS_CMD_MAPTI:
>              ret = its_handle_mapti(its, command);
>              break;
> +        case GITS_CMD_MOVALL:
> +            gdprintk(XENLOG_G_INFO, "vGITS: ignoring MOVALL command\n");
> +            break;
> +        case GITS_CMD_MOVI:
> +            ret = its_handle_movi(its, command);
> +            break;
>          case GITS_CMD_SYNC:
>              /* We handle ITS commands synchronously, so we ignore SYNC. */
>              break;
> -- 
> 2.9.0
>
Julien Grall May 31, 2017, 11:23 a.m. UTC | #2
Hi Stefano,

On 30/05/17 23:35, Stefano Stabellini wrote:
>> +    /*
>> +     * TODO: lookup currently-in-guest virtual IRQs and migrate them,
>> +     * as the locking may be fragile otherwise.
>> +     * This is not easy to do at the moment, but should become easier
>> +     * with the introduction of a per-IRQ lock.
>> +     */
>
> Sure, but at least we can handle the inflight-but-not-in-guest case. It
> is just a matter of adding (within the arch.vgic.lock locked region):
>
>     if ( !list_empty(&p->lr_queue) )
>     {
>         gic_remove_irq(old, p);
>         clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
>         list_del_init(&p->lr_queue);
>         list_del_init(&p->inflight);
>
>         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
>         vgic_vcpu_inject_irq(new, irq);
>     }

Please no more hardcoding of migration code. If we are going to support 
migration we should rework the current function vgic_migrate_irq to 
support LPIs.

But ... I don't see any drawback in not supporting this today. Per the 
specification, when you migrate an interrupt the only thing you have to 
ensure is that the pending interrupt fires only once, either on the old 
or the new vCPU.

IMHO, if the interrupt has already been queued then it is too late. We 
should aim at simplifying the code if there is no drawback in doing so. In 
this case, what would be the drawback of leaving it pending on the old vCPU?

Cheers,
Stefano Stabellini May 31, 2017, 5:53 p.m. UTC | #3
On Wed, 31 May 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 30/05/17 23:35, Stefano Stabellini wrote:
> > > +    /*
> > > +     * TODO: lookup currently-in-guest virtual IRQs and migrate them,
> > > +     * as the locking may be fragile otherwise.
> > > +     * This is not easy to do at the moment, but should become easier
> > > +     * with the introduction of a per-IRQ lock.
> > > +     */
> > 
> > Sure, but at least we can handle the inflight-but-not-in-guest case. It
> > is just a matter of adding (within the arch.vgic.lock locked region):
> > 
> >     if ( !list_empty(&p->lr_queue) )
> >     {
> >         gic_remove_irq(old, p);
> >         clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> >         list_del_init(&p->lr_queue);
> >         list_del_init(&p->inflight);
> > 
> >         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
> >         vgic_vcpu_inject_irq(new, irq);
> >     }
> 
> Please no more hardcoding of migration code. If we are going to support
> migration we should rework the current function vgic_migrate_irq to support
> LPIs.

I agree, it is true, in fact that is what I actually suggested last
time. My preference would be to just call vgic_migrate_irq, providing an
empty implementation of irq_set_affinity for LPIs.
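
As a purely illustrative sketch (hypothetical names, not the actual patch):
the host LPI IRQ controller could expose a no-op set_affinity callback,
and its_handle_movi() could then delegate the move to the common code,
e.g.:

    /*
     * Hypothetical no-op affinity hook for host LPIs: the physical LPI
     * keeps targeting the doorbell, so there is nothing to reprogram.
     */
    static void gicv3_lpi_set_affinity(struct irq_desc *desc,
                                       const cpumask_t *mask)
    {
    }

    /* ... and in its_handle_movi(), instead of open-coding the update
     * (return value handling omitted for brevity): */
    vgic_migrate_irq(ovcpu, nvcpu, vlpi);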


> But ... I don't see any drawback in not supporting this today. Per the
> specification, when you migrate an interrupt the only thing you have to
> ensure is that the pending interrupt fires only once, either on the old
> or the new vCPU.
> 
> IMHO, if the interrupt has already been queued then it is too late. We should
> aim at simplifying the code if there is no drawback in doing so. In this case,
> what would be the drawback of leaving it pending on the old vCPU?

It is inconsistent with what we do elsewhere (vgic_migrate_irq). It is
also inconsistent with the TODO comment (!list_empty(&p->lr_queue)
interrupts are not yet currently-in-guest).

I don't want to introduce any more hardcoding, I just would like the
existing vgic_migrate_irq to be called.
Julien Grall May 31, 2017, 6:49 p.m. UTC | #4
Hi Stefano,

On 31/05/2017 18:53, Stefano Stabellini wrote:
> On Wed, 31 May 2017, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 30/05/17 23:35, Stefano Stabellini wrote:
>>>> +    /*
>>>> +     * TODO: lookup currently-in-guest virtual IRQs and migrate them,
>>>> +     * as the locking may be fragile otherwise.
>>>> +     * This is not easy to do at the moment, but should become easier
>>>> +     * with the introduction of a per-IRQ lock.
>>>> +     */
>>>
>>> Sure, but at least we can handle the inflight-but-not-in-guest case. It
>>> is just a matter of adding (within the arch.vgic.lock locked region):
>>>
>>>     if ( !list_empty(&p->lr_queue) )
>>>     {
>>>         gic_remove_irq(old, p);
>>>         clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
>>>         list_del_init(&p->lr_queue);
>>>         list_del_init(&p->inflight);
>>>
>>>         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
>>>         vgic_vcpu_inject_irq(new, irq);
>>>     }
>>
>> Please no more hardcoding of migration code. If we are going to support
>> migration we should rework the current function vgic_migrate_irq to support
>> LPIs.
>
> I agree, it is true, in fact that is what I actually suggested last
> time. My preference would be to just call vgic_migrate_irq, providing an
> empty implementation of irq_set_affinity for LPIs.
>
>
>> But ... I don't see any drawback in not supporting this today. Per the
>> specification, when you migrate an interrupt the only thing you have to
>> ensure is that the pending interrupt fires only once, either on the old
>> or the new vCPU.
>>
>> IMHO, if the interrupt has already been queued then it is too late. We should
>> aim at simplifying the code if there is no drawback in doing so. In this case,
>> what would be the drawback of leaving it pending on the old vCPU?
>
> It is inconsistent with what we do elsewhere (vgic_migrate_irq).

So? Just because the current code does something does not mean we should 
keep the same behavior here, which BTW cannot be noticed by a guest.

> It is
> also inconsistent with the TODO comment (!list_empty(&p->lr_queue)
> interrupts are not yet currently-in-guest).

A TODO can easily be updated.

>
> I don't want to introduce any more hardcoding, I just would like the
> existing vgic_migrate_irq to be called.

It is going to need a bit of rework to get it working with LPIs, as the 
code is currently gated on (p->desc). For what benefit?

Not much, as the current code already works well on migration...
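
For context, the gate in question looks roughly like this (paraphrased
from xen/arch/arm/vgic.c, not verbatim):

    void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
    {
        struct pending_irq *p = irq_to_pending(old, irq);

        /*
         * Purely virtual interrupts (no physical irq_desc) take this
         * early exit today; a vLPI would currently go down the same
         * path, which is why some rework would be needed.
         */
        if ( p->desc == NULL )
            return;

        /* ... migration of the hardware interrupt continues below ... */
    }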

Cheers,
Julien Grall June 2, 2017, 5:17 p.m. UTC | #5
Hi,

On 05/31/2017 07:49 PM, Julien Grall wrote:
>> I don't want to introduce any more hardcoding, I just would like the
>> existing vgic_migrate_irq to be called.
> 
> It is going to need a bit of rework to get it working with LPIs, as the 
> code is currently gated on (p->desc). For what benefit?
> 
> Not much, as the current code already works well on migration...

So I had a chat with Stefano about that. The consensus is, correct me 
if I misunderstood, to either fully implement the migration using 
vgic_migrate_irq or not do it at all.

AFAICT, the current approach does not implement the migration, but the 
comment is quite confusing. So if we don't implement it, the comment 
needs to be updated.
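
If we go for the second option, the updated comment could read something
like this (just a suggestion):

    /*
     * We deliberately do not migrate a vLPI that is already inflight on
     * the old VCPU: per the spec it is sufficient that a pending LPI
     * fires at most once, on either the old or the new VCPU, so we leave
     * it pending where it is.  New triggers of this event will be
     * delivered to the new VCPU via the updated pending_irq and ITTE.
     */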

Cheers.
Stefano Stabellini June 2, 2017, 8:36 p.m. UTC | #6
On Fri, 2 Jun 2017, Julien Grall wrote:
> Hi,
> 
> On 05/31/2017 07:49 PM, Julien Grall wrote:
> > > I don't want to introduce any more hardcoding, I just would like the
> > > existing vgic_migrate_irq to be called.
> > 
> > It is going to need a bit of rework to get it working with LPIs, as the code
> > is currently gated on (p->desc). For what benefit?
> > 
> > Not much, as the current code already works well on migration...
> 
> So I had a chat with Stefano about that. The consensus is, correct me if I
> misunderstood, to either fully implement the migration using vgic_migrate_irq
> or not do it at all.
> 
> AFAICT, the current approach does not implement the migration, but the comment
> is quite confusing. So if we don't implement it, the comment needs to be
> updated.

Yes

Patch

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index c350fa5..3332c09 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -666,6 +666,66 @@  out_remove_mapping:
     return ret;
 }
 
+static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
+{
+    uint32_t devid = its_cmd_get_deviceid(cmdptr);
+    uint32_t eventid = its_cmd_get_id(cmdptr);
+    uint16_t collid = its_cmd_get_collection(cmdptr);
+    unsigned long flags;
+    struct pending_irq *p;
+    struct vcpu *ovcpu, *nvcpu;
+    uint32_t vlpi;
+    int ret = -1;
+
+    spin_lock(&its->its_lock);
+    /* Check for a mapped LPI and get the LPI number. */
+    if ( !read_itte_locked(its, devid, eventid, &ovcpu, &vlpi) )
+        goto out_unlock;
+
+    if ( vlpi == INVALID_LPI )
+        goto out_unlock;
+
+    /* Check the new collection ID and get the new VCPU pointer */
+    nvcpu = get_vcpu_from_collection(its, collid);
+    if ( !nvcpu )
+        goto out_unlock;
+
+    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
+                                        devid, eventid);
+    if ( unlikely(!p) )
+        goto out_unlock;
+
+    /*
+     * TODO: This relies on the VCPU being correct in the ITS tables.
+     * This can be fixed by either using a per-IRQ lock or by using
+     * the VCPU ID from the pending_irq instead.
+     */
+    spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);
+
+    /* Update our cached vcpu_id in the pending_irq. */
+    p->lpi_vcpu_id = nvcpu->vcpu_id;
+
+    spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
+
+    /*
+     * TODO: lookup currently-in-guest virtual IRQs and migrate them,
+     * as the locking may be fragile otherwise.
+     * This is not easy to do at the moment, but should become easier
+     * with the introduction of a per-IRQ lock.
+     */
+
+    /* Now store the new collection in the translation table. */
+    if ( !write_itte_locked(its, devid, eventid, collid, vlpi, &nvcpu) )
+        goto out_unlock;
+
+    ret = 0;
+
+out_unlock:
+    spin_unlock(&its->its_lock);
+
+    return ret;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
 
@@ -711,6 +771,12 @@  static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
         case GITS_CMD_MAPTI:
             ret = its_handle_mapti(its, command);
             break;
+        case GITS_CMD_MOVALL:
+            gdprintk(XENLOG_G_INFO, "vGITS: ignoring MOVALL command\n");
+            break;
+        case GITS_CMD_MOVI:
+            ret = its_handle_movi(its, command);
+            break;
         case GITS_CMD_SYNC:
             /* We handle ITS commands synchronously, so we ignore SYNC. */
             break;