[XEN,v3,07/11] xen: arm: vgic: allow delivery of PPIs to guests

Message ID 20191115201037.44982-3-stewart.hildebrand@dornerworks.com
State New
Series
  • xen: arm: context switch vtimer PPI state

Commit Message

Stewart Hildebrand Nov. 15, 2019, 8:10 p.m. UTC
Allow vgic_get_hw_irq_desc to be called with a vcpu argument.

Use vcpu argument in vgic_connect_hw_irq.

vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
ASSERTs.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>

---
v3: new patch

---
Note: I have only modified the old vgic to allow delivery of PPIs.
---
 xen/arch/arm/gic-vgic.c | 24 ++++++++++++++++--------
 xen/arch/arm/vgic.c     |  6 +++---
 2 files changed, 19 insertions(+), 11 deletions(-)
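For reference, the INTID ranges that the ASSERTs in this patch rely on: SGIs are IRQs 0-15, PPIs are 16-31, SPIs start at 32. NR_SGIS is a real Xen constant; the `is_*` helpers below are purely illustrative, not Xen code:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_SGIS 16  /* SGIs occupy IRQ 0-15 on the GIC */
#define NR_PPIS 16  /* PPIs occupy IRQ 16-31 */

/* Illustrative range checks behind ASSERT(irq >= NR_SGIS) and friends. */
static bool is_sgi(unsigned int irq) { return irq < NR_SGIS; }
static bool is_ppi(unsigned int irq)
{
    return irq >= NR_SGIS && irq < NR_SGIS + NR_PPIS;
}
static bool is_spi(unsigned int irq) { return irq >= NR_SGIS + NR_PPIS; }
```

With these ranges, `ASSERT(irq >= NR_SGIS)` admits both PPIs and SPIs while still rejecting SGIs, which is exactly the relaxation the patch makes.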

Comments

Julien Grall Nov. 23, 2019, 8:35 p.m. UTC | #1
Hi,

On 15/11/2019 20:10, Stewart Hildebrand wrote:
> Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
> 
> Use vcpu argument in vgic_connect_hw_irq.
> 
> vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
> ASSERTs.
> 
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
> 
> ---
> v3: new patch
> 
> ---
> Note: I have only modified the old vgic to allow delivery of PPIs.

The new vGIC should also be modified to support delivery of PPIs.

> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 82f524a35c..c3933c2687 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -410,10 +410,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>               irq_set_affinity(p->desc, cpumask_of(v_target->processor));
>               spin_lock_irqsave(&p->desc->lock, flags);
>               /*
> -             * The irq cannot be a PPI, we only support delivery of SPIs
> -             * to guests.
> +             * The irq cannot be a SGI, we only support delivery of SPIs
> +             * and PPIs to guests.
>                */
> -            ASSERT(irq >= 32);
> +            ASSERT(irq >= NR_SGIS);

We usually put ASSERT() in places where we know the code wouldn't be able 
to work correctly if the ASSERT were hit. In this particular case:

>               if ( irq_type_set_by_domain(d) )
>                   gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));

1) We don't want to allow any domain (including Dom0) to modify the 
interrupt type (i.e. level/edge) for PPIs as this is shared. You will 
also most likely need to modify the counterpart in setup_guest_irq().

>               p->desc->handler->enable(p->desc);

2) On GICv3, the re-distributor of vCPU A is accessible by vCPU B. So 
vCPU B could enable the SGI for vCPU A. But this would be called on the 
wrong pCPU, leading to inconsistency between the hardware state and the 
internal vGIC state.
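To illustrate the inconsistency: the physical enable is performed on the pCPU that took the trap, while the vGIC state describes the target vCPU. A toy model (hypothetical names and a deliberate simplification of the real vgic_enable_irqs() path) showing how the two diverge:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_PCPUS 4

/* Per-pCPU hardware enable bit for one banked PPI (toy model). */
static bool hw_ppi_enabled[NR_PCPUS];
/* The vGIC's view: "the PPI is enabled for the target vCPU". */
static bool vgic_ppi_enabled;

/*
 * Toy enable path: the HW enable (desc->handler->enable()) runs on
 * the pCPU that trapped the MMIO access, not necessarily the pCPU
 * where the target vCPU is running.
 */
static void guest_enables_ppi(int trapping_pcpu)
{
    vgic_ppi_enabled = true;
    hw_ppi_enabled[trapping_pcpu] = true;
}

/* True if HW and vGIC agree for the pCPU running the target vCPU. */
static bool state_consistent(int target_pcpu)
{
    return hw_ppi_enabled[target_pcpu] == vgic_ppi_enabled;
}
```

If vCPU A runs on pCPU 0 while vCPU B, on pCPU 1, writes A's GICR_ISENABLER0, the enable lands on pCPU 1 and pCPU 0's banked state no longer matches the vGIC's view.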

Cheers,
Julien Grall Nov. 25, 2019, 3:05 p.m. UTC | #2
(+ Andre)

On 23/11/2019 20:35, Julien Grall wrote:
> Hi,
> 
> On 15/11/2019 20:10, Stewart Hildebrand wrote:
>> Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
>>
>> Use vcpu argument in vgic_connect_hw_irq.
>>
>> vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
>> ASSERTs.
>>
>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
>>
>> ---
>> v3: new patch
>>
>> ---
>> Note: I have only modified the old vgic to allow delivery of PPIs.
> 
> The new vGIC should also be modified to support delivery of PPIs.
> 
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index 82f524a35c..c3933c2687 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -410,10 +410,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t 
>> r, int n)
>>               irq_set_affinity(p->desc, cpumask_of(v_target->processor));
>>               spin_lock_irqsave(&p->desc->lock, flags);
>>               /*
>> -             * The irq cannot be a PPI, we only support delivery of SPIs
>> -             * to guests.
>> +             * The irq cannot be a SGI, we only support delivery of SPIs
>> +             * and PPIs to guests.
>>                */
>> -            ASSERT(irq >= 32);
>> +            ASSERT(irq >= NR_SGIS);
> 
> We usually put ASSERT() in place we know that code wouldn't be able to 
> work correctly if there ASSERT were hit. In this particular case:
> 
>>               if ( irq_type_set_by_domain(d) )
>>                   gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));
> 
> 1) We don't want to allow any domain (including Dom0) to modify the 
> interrupt type (i.e. level/edge) for PPIs as this is shared. You will 
> also most likely need to modify the counterpart in setup_guest_irq().
> 
>>               p->desc->handler->enable(p->desc);
> 
> 2) On GICv3, the re-distributor of vCPU A is accessible by vCPU B. So 
> vCPU B could enable the SGI for vCPU A. But this would be called on the 
> wrong pCPU leading to inconsistency between the hardware state of the 
> internal vGIC state.

I thought a bit more about the issue over the week-end. The current vGIC is 
fairly messy. I can see two solutions for how to solve this:
     1) Send an IPI to the pCPU where vCPU A is running and 
disable/enable the interrupt there. The other side would need to check 
that the vCPU was actually running, to avoid disabling the PPI for the 
wrong pCPU
     2) Keep the HW interrupt always enabled

We propagated the enable/disable because of some messy parts in the vGIC:
     - vgic_inject_irq() will not queue any pending interrupt if the 
vCPU is offline. While interrupts cannot be delivered, we still need to 
keep them pending, as they will never occur again otherwise. This is 
because they are active on the host side and the guest has no way to 
deactivate them.
     - Our implementation of PSCI CPU will remove all pending interrupts 
(see vgic_clear_pending_irqs()). I am not entirely sure of the 
implications here, because of the previous point.

There are probably more. Aside from the issues with it, I don't really see 
a good advantage to propagating the interrupt state, as the interrupts 
(PPIs, SPIs) have an active state. So they can only be received once until 
the guest actually handles them.

So my preference would still be 2), because this makes the code simpler and 
avoids IPIs and other potential locking trouble.
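Option 2) amounts to making the guest's enable/disable purely virtual for PPIs. A minimal sketch of that idea (toy structure; the names are modelled on GIC_IRQ_GUEST_ENABLED but this is not the actual Xen code):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_pending_irq {
    bool guest_enabled;   /* stands in for GIC_IRQ_GUEST_ENABLED */
    bool hw_enabled;      /* physical enable, left alone for PPIs */
};

/* Under option 2), disabling a PPI only touches the virtual state. */
static void toy_vgic_disable_ppi(struct toy_pending_irq *p)
{
    p->guest_enabled = false;
    /* No desc->handler->disable(): the HW interrupt stays enabled. */
}

static void toy_vgic_enable_ppi(struct toy_pending_irq *p)
{
    p->guest_enabled = true;
    /* No IPI and no cross-pCPU poking of the (re-)distributor needed. */
}
```

Injection would then honour guest_enabled when deciding whether to queue the interrupt, instead of relying on the HW enable bit tracking the guest's view.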

On a side note, there are more issues with enable/disable in the current 
vGIC, as a pending interrupt already in the LR will not get dropped...

All of this is quite nasty. The sooner the new vGIC is finished the 
sooner we can kill the current one.

Cheers,
Stefano Stabellini Nov. 26, 2019, 1:20 a.m. UTC | #3
On Mon, 25 Nov 2019, Julien Grall wrote:
> (+ Andre)
> 
> On 23/11/2019 20:35, Julien Grall wrote:
> > Hi,
> > 
> > On 15/11/2019 20:10, Stewart Hildebrand wrote:
> > > Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
> > > 
> > > Use vcpu argument in vgic_connect_hw_irq.
> > > 
> > > vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
> > > ASSERTs.
> > > 
> > > Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
> > > 
> > > ---
> > > v3: new patch
> > > 
> > > ---
> > > Note: I have only modified the old vgic to allow delivery of PPIs.
> > 
> > The new vGIC should also be modified to support delivery of PPIs.
> > 
> > > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > > index 82f524a35c..c3933c2687 100644
> > > --- a/xen/arch/arm/vgic.c
> > > +++ b/xen/arch/arm/vgic.c
> > > @@ -410,10 +410,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r,
> > > int n)
> > >               irq_set_affinity(p->desc, cpumask_of(v_target->processor));
> > >               spin_lock_irqsave(&p->desc->lock, flags);
> > >               /*
> > > -             * The irq cannot be a PPI, we only support delivery of SPIs
> > > -             * to guests.
> > > +             * The irq cannot be a SGI, we only support delivery of SPIs
> > > +             * and PPIs to guests.
> > >                */
> > > -            ASSERT(irq >= 32);
> > > +            ASSERT(irq >= NR_SGIS);
> > 
> > We usually put ASSERT() in place we know that code wouldn't be able to work
> > correctly if there ASSERT were hit. In this particular case:
> > 
> > >               if ( irq_type_set_by_domain(d) )
> > >                   gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));
> > 
> > 1) We don't want to allow any domain (including Dom0) to modify the
> > interrupt type (i.e. level/edge) for PPIs as this is shared. You will also
> > most likely need to modify the counterpart in setup_guest_irq().
> > 
> > >               p->desc->handler->enable(p->desc);
> > 
> > 2) On GICv3, the re-distributor of vCPU A is accessible by vCPU B. So vCPU B
> > could enable the SGI for vCPU A. But this would be called on the wrong pCPU
> > leading to inconsistency between the hardware state of the internal vGIC
> > state.

Is it actually meant to work from a GIC specification perspective? It
sounds "wrong" somehow to me, but I went through the spec and it doesn't
explicitly say that cpuB couldn't enable an SGI/PPI of cpuA. I am still
a bit shocked by this revelation.

[I haven't had a chance to think carefully about what you wrote below
yet. I'll follow-up.]



> I thought a bit more of the issue over the week-end. The current vGIC is
> fairly messy. I can see two solutions on how to solve this:
>     1) Send an IPI to the pCPU where the vCPU A is running and disable/enable
> the interrupt. The other side would need to the vCPU was actually running to
> avoid disabling the PPI for the wrong pCPU
>     2) Keep the HW interrupt always enabled
> 
> We propagated the enable/disable because of some messy part in the vGIC:
>     - vgic_inject_irq() will not queue any pending interrupt if the vCPU is
> offline. While interrupt cannot be delivered, we still need to keep them
> pending as they will never occur again otherwise. This is because they are
> active on the host side and the guest has no way to deactivate them.
>     - Our implementation of PSCI CPU will remove all pending interrupts (see
> vgic_clear_pending_irqs()). I am not entirely sure the implication here
> because of the previous.
> 
> There are a probably more. Aside the issues with it, I don't really see good
> advantage to propagate the interrupt state as the interrupts (PPIs, SPIs) have
> active state. So they can only be received once until the guest actually
> handles it.
> 
> So my preference would still be 2) because this makes the code simpler, avoid
> IPI and other potential locking trouble.
> 
> On a side note, there are more issues with enable/disable on the current vGIC
> as a pending interrupt already in the LR will not get dropped...
> 
> All of this is quite nasty. The sooner the new vGIC is finished the sooner we
> can kill the current one.
Julien Grall Nov. 26, 2019, 1:58 p.m. UTC | #4
Hi,

On 26/11/2019 01:20, Stefano Stabellini wrote:
> On Mon, 25 Nov 2019, Julien Grall wrote:
>> (+ Andre)
>>
>> On 23/11/2019 20:35, Julien Grall wrote:
>>> Hi,
>>>
>>> On 15/11/2019 20:10, Stewart Hildebrand wrote:
>>>> Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
>>>>
>>>> Use vcpu argument in vgic_connect_hw_irq.
>>>>
>>>> vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
>>>> ASSERTs.
>>>>
>>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
>>>>
>>>> ---
>>>> v3: new patch
>>>>
>>>> ---
>>>> Note: I have only modified the old vgic to allow delivery of PPIs.
>>>
>>> The new vGIC should also be modified to support delivery of PPIs.
>>>
>>>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>>>> index 82f524a35c..c3933c2687 100644
>>>> --- a/xen/arch/arm/vgic.c
>>>> +++ b/xen/arch/arm/vgic.c
>>>> @@ -410,10 +410,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r,
>>>> int n)
>>>>                irq_set_affinity(p->desc, cpumask_of(v_target->processor));
>>>>                spin_lock_irqsave(&p->desc->lock, flags);
>>>>                /*
>>>> -             * The irq cannot be a PPI, we only support delivery of SPIs
>>>> -             * to guests.
>>>> +             * The irq cannot be a SGI, we only support delivery of SPIs
>>>> +             * and PPIs to guests.
>>>>                 */
>>>> -            ASSERT(irq >= 32);
>>>> +            ASSERT(irq >= NR_SGIS);
>>>
>>> We usually put ASSERT() in place we know that code wouldn't be able to work
>>> correctly if there ASSERT were hit. In this particular case:
>>>
>>>>                if ( irq_type_set_by_domain(d) )
>>>>                    gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));
>>>
>>> 1) We don't want to allow any domain (including Dom0) to modify the
>>> interrupt type (i.e. level/edge) for PPIs as this is shared. You will also
>>> most likely need to modify the counterpart in setup_guest_irq().
>>>
>>>>                p->desc->handler->enable(p->desc);
>>>
>>> 2) On GICv3, the re-distributor of vCPU A is accessible by vCPU B. So vCPU B
>>> could enable the SGI for vCPU A. But this would be called on the wrong pCPU
>>> leading to inconsistency between the hardware state of the internal vGIC
>>> state.
> 
> Is it actually meant to work from a GIC specification perspective? It
> sounds "wrong" somehow to me, but I went through the spec and it doesn't
> say explicitly that cpuB couldn't enable a SGI/PPI of cpuA. I am still
> a bit shocked by this revelation.

To be honest, I can see reasons to allow this, but this is a different 
subject.

In this case the re-distributor is per-CPU but accessible by any 
CPU. For instance, Linux will access it to find the re-distributor 
associated with a given CPU at boot.
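That boot-time discovery works because each re-distributor frame advertises its owner: GICR_TYPER carries the affinity value of the CPU the frame belongs to (bits [63:32]), so any CPU can walk the frames and match. A rough sketch of the walk (toy frame array, not the Linux implementation):

```c
#include <assert.h>
#include <stdint.h>

#define NR_REDIST 4

/*
 * Toy model: bits [63:32] of GICR_TYPER hold the affinity value of
 * the CPU this re-distributor frame belongs to.
 */
static const uint64_t gicr_typer[NR_REDIST] = {
    (uint64_t)0x0 << 32, (uint64_t)0x1 << 32,
    (uint64_t)0x2 << 32, (uint64_t)0x3 << 32,
};

/* Any CPU can perform this walk: the frames are globally accessible. */
static int find_redist_for(uint32_t mpidr_aff)
{
    for ( int i = 0; i < NR_REDIST; i++ )
        if ( (uint32_t)(gicr_typer[i] >> 32) == mpidr_aff )
            return i;
    return -1;
}
```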

FWIW, the vGIC implementation in KVM handles it the same way.

Cheers,
Stefano Stabellini Nov. 26, 2019, 10:36 p.m. UTC | #5
On Mon, 25 Nov 2019, Julien Grall wrote:
> On 23/11/2019 20:35, Julien Grall wrote:
> > Hi,
> > 
> > On 15/11/2019 20:10, Stewart Hildebrand wrote:
> > > Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
> > > 
> > > Use vcpu argument in vgic_connect_hw_irq.
> > > 
> > > vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
> > > ASSERTs.
> > > 
> > > Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
> > > 
> > > ---
> > > v3: new patch
> > > 
> > > ---
> > > Note: I have only modified the old vgic to allow delivery of PPIs.
> > 
> > The new vGIC should also be modified to support delivery of PPIs.
> > 
> > > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > > index 82f524a35c..c3933c2687 100644
> > > --- a/xen/arch/arm/vgic.c
> > > +++ b/xen/arch/arm/vgic.c
> > > @@ -410,10 +410,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r,
> > > int n)
> > >               irq_set_affinity(p->desc, cpumask_of(v_target->processor));
> > >               spin_lock_irqsave(&p->desc->lock, flags);
> > >               /*
> > > -             * The irq cannot be a PPI, we only support delivery of SPIs
> > > -             * to guests.
> > > +             * The irq cannot be a SGI, we only support delivery of SPIs
> > > +             * and PPIs to guests.
> > >                */
> > > -            ASSERT(irq >= 32);
> > > +            ASSERT(irq >= NR_SGIS);
> > 
> > We usually put ASSERT() in place we know that code wouldn't be able to work
> > correctly if there ASSERT were hit. In this particular case:
> > 
> > >               if ( irq_type_set_by_domain(d) )
> > >                   gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));
> > 
> > 1) We don't want to allow any domain (including Dom0) to modify the
> > interrupt type (i.e. level/edge) for PPIs as this is shared. You will also
> > most likely need to modify the counterpart in setup_guest_irq().
> > 
> > >               p->desc->handler->enable(p->desc);
> > 
> > 2) On GICv3, the re-distributor of vCPU A is accessible by vCPU B. So vCPU B
> > could enable the SGI for vCPU A. But this would be called on the wrong pCPU
> > leading to inconsistency between the hardware state of the internal vGIC
> > state.
> 
> I thought a bit more of the issue over the week-end. The current vGIC is
> fairly messy. I can see two solutions on how to solve this:
>     1) Send an IPI to the pCPU where the vCPU A is running and disable/enable
> the interrupt. The other side would need to the vCPU was actually running to
> avoid disabling the PPI for the wrong pCPU
>     2) Keep the HW interrupt always enabled
> 
> We propagated the enable/disable because of some messy part in the vGIC:
>     - vgic_inject_irq() will not queue any pending interrupt if the vCPU is
> offline. While interrupt cannot be delivered, we still need to keep them
> pending as they will never occur again otherwise. This is because they are
> active on the host side and the guest has no way to deactivate them.
>     - Our implementation of PSCI CPU will remove all pending interrupts (see
> vgic_clear_pending_irqs()). I am not entirely sure the implication here
> because of the previous.
> 
> There are a probably more. Aside the issues with it, I don't really see good
> advantage to propagate the interrupt state as the interrupts (PPIs, SPIs) have
> active state. So they can only be received once until the guest actually
> handles it.
> 
> So my preference would still be 2) because this makes the code simpler, avoid
> IPI and other potential locking trouble.

Yes, I think that is a good suggestion. I take it that you mean that in
vgic_disable_irqs for PPIs we would only clear GIC_IRQ_GUEST_ENABLED
and then basically return, right?
Julien Grall Nov. 26, 2019, 10:42 p.m. UTC | #6
On 26/11/2019 22:36, Stefano Stabellini wrote:
> On Mon, 25 Nov 2019, Julien Grall wrote:
>> On 23/11/2019 20:35, Julien Grall wrote:
>>> Hi,
>>>
>>> On 15/11/2019 20:10, Stewart Hildebrand wrote:
>>>> Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
>>>>
>>>> Use vcpu argument in vgic_connect_hw_irq.
>>>>
>>>> vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
>>>> ASSERTs.
>>>>
>>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
>>>>
>>>> ---
>>>> v3: new patch
>>>>
>>>> ---
>>>> Note: I have only modified the old vgic to allow delivery of PPIs.
>>>
>>> The new vGIC should also be modified to support delivery of PPIs.
>>>
>>>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>>>> index 82f524a35c..c3933c2687 100644
>>>> --- a/xen/arch/arm/vgic.c
>>>> +++ b/xen/arch/arm/vgic.c
>>>> @@ -410,10 +410,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r,
>>>> int n)
>>>>                irq_set_affinity(p->desc, cpumask_of(v_target->processor));
>>>>                spin_lock_irqsave(&p->desc->lock, flags);
>>>>                /*
>>>> -             * The irq cannot be a PPI, we only support delivery of SPIs
>>>> -             * to guests.
>>>> +             * The irq cannot be a SGI, we only support delivery of SPIs
>>>> +             * and PPIs to guests.
>>>>                 */
>>>> -            ASSERT(irq >= 32);
>>>> +            ASSERT(irq >= NR_SGIS);
>>>
>>> We usually put ASSERT() in place we know that code wouldn't be able to work
>>> correctly if there ASSERT were hit. In this particular case:
>>>
>>>>                if ( irq_type_set_by_domain(d) )
>>>>                    gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));
>>>
>>> 1) We don't want to allow any domain (including Dom0) to modify the
>>> interrupt type (i.e. level/edge) for PPIs as this is shared. You will also
>>> most likely need to modify the counterpart in setup_guest_irq().
>>>
>>>>                p->desc->handler->enable(p->desc);
>>>
>>> 2) On GICv3, the re-distributor of vCPU A is accessible by vCPU B. So vCPU B
>>> could enable the SGI for vCPU A. But this would be called on the wrong pCPU
>>> leading to inconsistency between the hardware state of the internal vGIC
>>> state.
>>
>> I thought a bit more of the issue over the week-end. The current vGIC is
>> fairly messy. I can see two solutions on how to solve this:
>>      1) Send an IPI to the pCPU where the vCPU A is running and disable/enable
>> the interrupt. The other side would need to the vCPU was actually running to
>> avoid disabling the PPI for the wrong pCPU
>>      2) Keep the HW interrupt always enabled
>>
>> We propagated the enable/disable because of some messy part in the vGIC:
>>      - vgic_inject_irq() will not queue any pending interrupt if the vCPU is
>> offline. While interrupt cannot be delivered, we still need to keep them
>> pending as they will never occur again otherwise. This is because they are
>> active on the host side and the guest has no way to deactivate them.
>>      - Our implementation of PSCI CPU will remove all pending interrupts (see
>> vgic_clear_pending_irqs()). I am not entirely sure the implication here
>> because of the previous.
>>
>> There are a probably more. Aside the issues with it, I don't really see good
>> advantage to propagate the interrupt state as the interrupts (PPIs, SPIs) have
>> active state. So they can only be received once until the guest actually
>> handles it.
>>
>> So my preference would still be 2) because this makes the code simpler, avoid
>> IPI and other potential locking trouble.
> 
> Yes, I think that is a good suggestion. I take that you mean that in
> vgic_disable_irqs for PPIs we would only clear GIC_IRQ_GUEST_ENABLED
> then return basically, right?
Not really, I am only suggesting to remove the part

if ( desc != NULL )
   ...

But this change alone is not enough. It would require some modifications 
in the rest of the vGIC (see my previous e-mail) and likely some 
investigation to understand the implications of keeping the interrupt 
enabled in the HW (I am a bit worried we may have baked this assumption 
into other parts of the vGIC :().

Cheers,
Stefano Stabellini Nov. 26, 2019, 11:16 p.m. UTC | #7
On Fri, 15 Nov 2019, Stewart Hildebrand wrote:
> Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
> 
> Use vcpu argument in vgic_connect_hw_irq.
> 
> vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
> ASSERTs.
> 
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
> 
> ---
> v3: new patch
> 
> ---
> Note: I have only modified the old vgic to allow delivery of PPIs.
> ---
>  xen/arch/arm/gic-vgic.c | 24 ++++++++++++++++--------
>  xen/arch/arm/vgic.c     |  6 +++---
>  2 files changed, 19 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c
> index 98c021f1a8..2c66a8fa92 100644
> --- a/xen/arch/arm/gic-vgic.c
> +++ b/xen/arch/arm/gic-vgic.c
> @@ -418,7 +418,7 @@ struct irq_desc *vgic_get_hw_irq_desc(struct domain *d, struct vcpu *v,
>  {
>      struct pending_irq *p;
>  
> -    ASSERT(!v && virq >= 32);
> +    ASSERT((!v && (virq >= 32)) || (!d && v && (virq >= 16) && (virq < 32)));

I don't think !d is necessary for this to work as intended so I would
limit the ASSERT to

  ASSERT((!v && (virq >= 32)) || (v && (virq >= 16) && (virq < 32)));

the caller can always pass v->domain


>      if ( !v )
>          v = d->vcpu[0];
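The two candidate ASSERT conditions can be compared side by side as standalone predicates (illustrative functions, not the Xen macros; `has_d`/`has_v` stand for "d/v is non-NULL"):

```c
#include <assert.h>
#include <stdbool.h>

/* Stewart's v3 condition: a vcpu caller must also pass d == NULL. */
static bool cond_v3(bool has_d, bool has_v, unsigned int virq)
{
    return (!has_v && virq >= 32) ||
           (!has_d && has_v && virq >= 16 && virq < 32);
}

/* Stefano's relaxed condition: d may be non-NULL alongside v. */
static bool cond_relaxed(bool has_d, bool has_v, unsigned int virq)
{
    return (!has_v && virq >= 32) ||
           (has_v && virq >= 16 && virq < 32);
}
```

The only behavioural difference is the PPI lookup with both d and v passed: the relaxed form permits it, the v3 form insists on d == NULL to rule out d != v->domain.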
Julien Grall Nov. 27, 2019, 12:13 a.m. UTC | #8
On Tue, 26 Nov 2019, 23:18 Stefano Stabellini, <sstabellini@kernel.org>
wrote:

> On Fri, 15 Nov 2019, Stewart Hildebrand wrote:
> > Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
> >
> > Use vcpu argument in vgic_connect_hw_irq.
> >
> > vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
> > ASSERTs.
> >
> > Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
> >
> > ---
> > v3: new patch
> >
> > ---
> > Note: I have only modified the old vgic to allow delivery of PPIs.
> > ---
> >  xen/arch/arm/gic-vgic.c | 24 ++++++++++++++++--------
> >  xen/arch/arm/vgic.c     |  6 +++---
> >  2 files changed, 19 insertions(+), 11 deletions(-)
> >
> > diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c
> > index 98c021f1a8..2c66a8fa92 100644
> > --- a/xen/arch/arm/gic-vgic.c
> > +++ b/xen/arch/arm/gic-vgic.c
> > @@ -418,7 +418,7 @@ struct irq_desc *vgic_get_hw_irq_desc(struct domain
> *d, struct vcpu *v,
> >  {
> >      struct pending_irq *p;
> >
> > -    ASSERT(!v && virq >= 32);
> > +    ASSERT((!v && (virq >= 32)) || (!d && v && (virq >= 16) && (virq <
> 32)));
>
> I don't think !d is necessary for this to work as intended so I would
> limit the ASSERT to
>
>   ASSERT((!v && (virq >= 32)) || (v && (virq >= 16) && (virq < 32)));
>
> the caller can always pass v->domain
>

But then you run the risk of hitting d != v->domain. So at least with the
ASSERT you document the expectation.

Cheers,


>
> >      if ( !v )
> >          v = d->vcpu[0];
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel
Stefano Stabellini Nov. 27, 2019, 6:48 p.m. UTC | #9
On Tue, 26 Nov 2019, Julien Grall wrote:
> On 26/11/2019 22:36, Stefano Stabellini wrote:
> > On Mon, 25 Nov 2019, Julien Grall wrote:
> > > On 23/11/2019 20:35, Julien Grall wrote:
> > > > Hi,
> > > > 
> > > > On 15/11/2019 20:10, Stewart Hildebrand wrote:
> > > > > Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
> > > > > 
> > > > > Use vcpu argument in vgic_connect_hw_irq.
> > > > > 
> > > > > vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce
> > > > > with
> > > > > ASSERTs.
> > > > > 
> > > > > Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
> > > > > 
> > > > > ---
> > > > > v3: new patch
> > > > > 
> > > > > ---
> > > > > Note: I have only modified the old vgic to allow delivery of PPIs.
> > > > 
> > > > The new vGIC should also be modified to support delivery of PPIs.
> > > > 
> > > > > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > > > > index 82f524a35c..c3933c2687 100644
> > > > > --- a/xen/arch/arm/vgic.c
> > > > > +++ b/xen/arch/arm/vgic.c
> > > > > @@ -410,10 +410,10 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t
> > > > > r,
> > > > > int n)
> > > > >                irq_set_affinity(p->desc,
> > > > > cpumask_of(v_target->processor));
> > > > >                spin_lock_irqsave(&p->desc->lock, flags);
> > > > >                /*
> > > > > -             * The irq cannot be a PPI, we only support delivery of
> > > > > SPIs
> > > > > -             * to guests.
> > > > > +             * The irq cannot be a SGI, we only support delivery of
> > > > > SPIs
> > > > > +             * and PPIs to guests.
> > > > >                 */
> > > > > -            ASSERT(irq >= 32);
> > > > > +            ASSERT(irq >= NR_SGIS);
> > > > 
> > > > We usually put ASSERT() in place we know that code wouldn't be able to
> > > > work
> > > > correctly if there ASSERT were hit. In this particular case:
> > > > 
> > > > >                if ( irq_type_set_by_domain(d) )
> > > > >                    gic_set_irq_type(p->desc, vgic_get_virq_type(v, n,
> > > > > i));
> > > > 
> > > > 1) We don't want to allow any domain (including Dom0) to modify the
> > > > interrupt type (i.e. level/edge) for PPIs as this is shared. You will
> > > > also
> > > > most likely need to modify the counterpart in setup_guest_irq().
> > > > 
> > > > >                p->desc->handler->enable(p->desc);
> > > > 
> > > > 2) On GICv3, the re-distributor of vCPU A is accessible by vCPU B. So
> > > > vCPU B
> > > > could enable the SGI for vCPU A. But this would be called on the wrong
> > > > pCPU
> > > > leading to inconsistency between the hardware state of the internal vGIC
> > > > state.
> > > 
> > > I thought a bit more of the issue over the week-end. The current vGIC is
> > > fairly messy. I can see two solutions on how to solve this:
> > >      1) Send an IPI to the pCPU where the vCPU A is running and
> > > disable/enable
> > > the interrupt. The other side would need to the vCPU was actually running
> > > to
> > > avoid disabling the PPI for the wrong pCPU
> > >      2) Keep the HW interrupt always enabled
> > > 
> > > We propagated the enable/disable because of some messy part in the vGIC:
> > >      - vgic_inject_irq() will not queue any pending interrupt if the vCPU
> > > is
> > > offline. While interrupt cannot be delivered, we still need to keep them
> > > pending as they will never occur again otherwise. This is because they are
> > > active on the host side and the guest has no way to deactivate them.
> > >      - Our implementation of PSCI CPU will remove all pending interrupts
> > > (see
> > > vgic_clear_pending_irqs()). I am not entirely sure the implication here
> > > because of the previous.
> > > 
> > > There are a probably more. Aside the issues with it, I don't really see
> > > good
> > > advantage to propagate the interrupt state as the interrupts (PPIs, SPIs)
> > > have
> > > active state. So they can only be received once until the guest actually
> > > handles it.
> > > 
> > > So my preference would still be 2) because this makes the code simpler,
> > > avoid
> > > IPI and other potential locking trouble.
> > 
> > Yes, I think that is a good suggestion. I take that you mean that in
> > vgic_disable_irqs for PPIs we would only clear GIC_IRQ_GUEST_ENABLED
> > then return basically, right?
> Not really, I am only suggesting to remove the part
> 
> if ( desc != NULL )
>   ...

I think we are saying the same thing


> But this change alone is not enough. It would require some modification in the
> rest of the vGIC (see my previous e-mail) and likely some investigation to
> understand the implication of keeping the interrupt enabled from the HW (I am
> a bit worry we may have backed this assumption into other part of the vGIC
> :().

I can see that at least save_and_mask_hwppi and restore_hwppi would need
to be modified to account for the fact that GICD_ISENABLER would say "it
is enabled" but actually GIC_IRQ_GUEST_ENABLED is unset.
Julien Grall Nov. 27, 2019, 7:17 p.m. UTC | #10
Hi,

On 27/11/2019 18:48, Stefano Stabellini wrote:
>>>
>>> Yes, I think that is a good suggestion. I take that you mean that in
>>> vgic_disable_irqs for PPIs we would only clear GIC_IRQ_GUEST_ENABLED
>>> then return basically, right?
>> Not really, I am only suggesting to remove the part
>>
>> if ( desc != NULL )
>>    ...
> 
> I think we are saying the same thing

The function is doing a bit more, hence why I wasn't sure :).

>> But this change alone is not enough. It would require some modification in the
>> rest of the vGIC (see my previous e-mail) and likely some investigation to
>> understand the implication of keeping the interrupt enabled from the HW (I am
>> a bit worry we may have backed this assumption into other part of the vGIC
>> :().
> 
> I can see that at least save_and_mask_hwppi and restore_hwppi would need
> to be modified to account for the fact that GICD_ISENABLER would say "it
> is enabled" but actually GIC_IRQ_GUEST_ENABLED is unset.
It depends how we decide to implement the two functions. We may want to 
completely decouple the GIC state from the vGIC state. For instance, you 
may still want to mask the interrupt regardless of the vGIC state when 
the vCPU is scheduled out. This would prevent a non-quiescent device 
from generating interrupts while we can't deal with them.

But as we seem to consider that the device will be quiescent and also 
clear the pending bit, then I think we can completely avoid 
masking/unmasking the interrupt. This would save a couple of accesses to 
the GIC interface.

Cheers,
Stefano Stabellini Nov. 28, 2019, 1:07 a.m. UTC | #11
On Wed, 27 Nov 2019, Julien Grall wrote:
> On Tue, 26 Nov 2019, 23:18 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>       On Fri, 15 Nov 2019, Stewart Hildebrand wrote:
>       > Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
>       >
>       > Use vcpu argument in vgic_connect_hw_irq.
>       >
>       > vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
>       > ASSERTs.
>       >
>       > Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
>       >
>       > ---
>       > v3: new patch
>       >
>       > ---
>       > Note: I have only modified the old vgic to allow delivery of PPIs.
>       > ---
>       >  xen/arch/arm/gic-vgic.c | 24 ++++++++++++++++--------
>       >  xen/arch/arm/vgic.c     |  6 +++---
>       >  2 files changed, 19 insertions(+), 11 deletions(-)
>       >
>       > diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c
>       > index 98c021f1a8..2c66a8fa92 100644
>       > --- a/xen/arch/arm/gic-vgic.c
>       > +++ b/xen/arch/arm/gic-vgic.c
>       > @@ -418,7 +418,7 @@ struct irq_desc *vgic_get_hw_irq_desc(struct domain *d, struct vcpu *v,
>       >  {
>       >      struct pending_irq *p;
>       > 
>       > -    ASSERT(!v && virq >= 32);
>       > +    ASSERT((!v && (virq >= 32)) || (!d && v && (virq >= 16) && (virq < 32)));
> 
>       I don't think !d is necessary for this to work as intended so I would
>       limit the ASSERT to
> 
>         ASSERT((!v && (virq >= 32)) || (v && (virq >= 16) && (virq < 32)));
> 
>       the caller can always pass v->domain
> 
> But then you have the risk to run into d != v->domain. So at least with the ASSERT you document the expectation.

Yes, that was not my intention.

It makes sense in certain scenarios for v to be NULL. What I was trying
to say is that when v is not-NULL, then also d should be not-NULL for
consistency. I don't think it makes sense to pass v corresponding to
vcpu1 of domain2 and d == NULL, right?

I don't know if you want to add a (d == v->domain) check to the ASSERT
as it is pretty busy already. I am OK either way.
Julien Grall Nov. 28, 2019, 9:53 a.m. UTC | #12
On 28/11/2019 01:07, Stefano Stabellini wrote:
> On Wed, 27 Nov 2019, Julien Grall wrote:
>> On Tue, 26 Nov 2019, 23:18 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>>        On Fri, 15 Nov 2019, Stewart Hildebrand wrote:
>>        > Allow vgic_get_hw_irq_desc to be called with a vcpu argument.
>>        >
>>        > Use vcpu argument in vgic_connect_hw_irq.
>>        >
>>        > vgic_connect_hw_irq is called for PPIs and SPIs, not SGIs. Enforce with
>>        > ASSERTs.
>>        >
>>        > Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
>>        >
>>        > ---
>>        > v3: new patch
>>        >
>>        > ---
>>        > Note: I have only modified the old vgic to allow delivery of PPIs.
>>        > ---
>>        >  xen/arch/arm/gic-vgic.c | 24 ++++++++++++++++--------
>>        >  xen/arch/arm/vgic.c     |  6 +++---
>>        >  2 files changed, 19 insertions(+), 11 deletions(-)
>>        >
>>        > diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c
>>        > index 98c021f1a8..2c66a8fa92 100644
>>        > --- a/xen/arch/arm/gic-vgic.c
>>        > +++ b/xen/arch/arm/gic-vgic.c
>>        > @@ -418,7 +418,7 @@ struct irq_desc *vgic_get_hw_irq_desc(struct domain *d, struct vcpu *v,
>>        >  {
>>        >      struct pending_irq *p;
>>        >
>>        > -    ASSERT(!v && virq >= 32);
>>        > +    ASSERT((!v && (virq >= 32)) || (!d && v && (virq >= 16) && (virq < 32)));
>>
>>        I don't think !d is necessary for this to work as intended so I would
>>        limit the ASSERT to
>>
>>          ASSERT((!v && (virq >= 32)) || (v && (virq >= 16) && (virq < 32)));
>>
>>        the caller can always pass v->domain
>>
>> But then you have the risk to run into d != v->domain. So at least with the ASSERT you document the expectation.
> 
> Yes, that was not my intention.
> 
> It makes sense in certain scenarios for v to be NULL. What I was trying
> to say is that when v is not-NULL, then also d should be not-NULL for
> consistency. I don't think it makes sense to pass v corresponding to
> vcpu1 of domain2 and d == NULL, right?

While I usually like consistency, 'd' is only used to find 'v' when 'v' 
is NULL. So I see limited reason to require the caller to set 'd' in 
this case.

> 
> I don't know if you want to add a (d == v->domain) check to the ASSERT
> as it is pretty busy already. I am OK either way.
>

Patch
diff mbox series

diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c
index 98c021f1a8..2c66a8fa92 100644
--- a/xen/arch/arm/gic-vgic.c
+++ b/xen/arch/arm/gic-vgic.c
@@ -418,7 +418,7 @@  struct irq_desc *vgic_get_hw_irq_desc(struct domain *d, struct vcpu *v,
 {
     struct pending_irq *p;
 
-    ASSERT(!v && virq >= 32);
+    ASSERT((!v && (virq >= 32)) || (!d && v && (virq >= 16) && (virq < 32)));
 
     if ( !v )
         v = d->vcpu[0];
@@ -434,15 +434,23 @@  int vgic_connect_hw_irq(struct domain *d, struct vcpu *v, unsigned int virq,
                         struct irq_desc *desc, bool connect)
 {
     unsigned long flags;
-    /*
-     * Use vcpu0 to retrieve the pending_irq struct. Given that we only
-     * route SPIs to guests, it doesn't make any difference.
-     */
-    struct vcpu *v_target = vgic_get_target_vcpu(d->vcpu[0], virq);
-    struct vgic_irq_rank *rank = vgic_rank_irq(v_target, virq);
-    struct pending_irq *p = irq_to_pending(v_target, virq);
+    struct vcpu *v_target;
+    struct vgic_irq_rank *rank;
+    struct pending_irq *p;
     int ret = 0;
 
+    if (v)
+        v_target = v;
+    else
+        /* Use vcpu0 to retrieve the pending_irq struct. */
+        v_target = vgic_get_target_vcpu(d->vcpu[0], virq);
+
+    rank = vgic_rank_irq(v_target, virq);
+    p = irq_to_pending(v_target, virq);
+
+    ASSERT(virq >= NR_SGIS);
+    ASSERT(p->irq >= NR_SGIS);
+
     /* "desc" is optional when we disconnect an IRQ. */
     ASSERT(!connect || desc);
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 82f524a35c..c3933c2687 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -410,10 +410,10 @@  void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
             irq_set_affinity(p->desc, cpumask_of(v_target->processor));
             spin_lock_irqsave(&p->desc->lock, flags);
             /*
-             * The irq cannot be a PPI, we only support delivery of SPIs
-             * to guests.
+             * The irq cannot be a SGI, we only support delivery of SPIs
+             * and PPIs to guests.
              */
-            ASSERT(irq >= 32);
+            ASSERT(irq >= NR_SGIS);
             if ( irq_type_set_by_domain(d) )
                 gic_set_irq_type(p->desc, vgic_get_virq_type(v, n, i));
             p->desc->handler->enable(p->desc);