[v4,5/6] KVM: arm64: GICv4.1: Restore VLPI pending state to physical side

Message ID: 20210313083900.234-6-lushenming@huawei.com (mailing list archive)
State: New, archived
Series: KVM: arm64: Add VLPI migration support on GICv4.1

Commit Message

Shenming Lu March 13, 2021, 8:38 a.m. UTC
From: Zenghui Yu <yuzenghui@huawei.com>

When setting the forwarding path of a VLPI (i.e., switching it to HW
mode), we can also transfer the pending state from irq->pending_latch
to the VPT. This matters especially during migration, where the
pending state of each VLPI is first restored into KVM's vgic. We
currently send "INT+VSYNC" to make the VLPI pending on the physical
side.

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Shenming Lu <lushenming@huawei.com>
---
 arch/arm64/kvm/vgic/vgic-v4.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

Comments

Marc Zyngier March 15, 2021, 8:30 a.m. UTC | #1
On 2021-03-13 08:38, Shenming Lu wrote:
> From: Zenghui Yu <yuzenghui@huawei.com>
> 
> When setting the forwarding path of a VLPI (i.e., switching it to HW
> mode), we can also transfer the pending state from irq->pending_latch
> to the VPT. This matters especially during migration, where the
> pending state of each VLPI is first restored into KVM's vgic. We
> currently send "INT+VSYNC" to make the VLPI pending on the physical
> side.
> 
> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
> Signed-off-by: Shenming Lu <lushenming@huawei.com>
> ---
>  arch/arm64/kvm/vgic/vgic-v4.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
> index ac029ba3d337..3b82ab80c2f3 100644
> --- a/arch/arm64/kvm/vgic/vgic-v4.c
> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
> @@ -449,6 +449,24 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
>  	irq->host_irq	= virq;
>  	atomic_inc(&map.vpe->vlpi_count);
> 
> +	/* Transfer pending state */
> +	if (irq->pending_latch) {
> +		unsigned long flags;
> +
> +		ret = irq_set_irqchip_state(irq->host_irq,
> +					    IRQCHIP_STATE_PENDING,
> +					    irq->pending_latch);
> +		WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
> +
> +		/*
> +		 * Clear pending_latch and communicate this state
> +		 * change via vgic_queue_irq_unlock.
> +		 */
> +		raw_spin_lock_irqsave(&irq->irq_lock, flags);
> +		irq->pending_latch = false;
> +		vgic_queue_irq_unlock(kvm, irq, flags);
> +	}
> +
>  out:
>  	mutex_unlock(&its->its_lock);
>  	return ret;

The read side of the pending state isn't locked, but the write side is.
I'd rather you lock the whole sequence for peace of mind.

Thanks,

         M.
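
To make the concern concrete: in the hunk above, irq->pending_latch is
read outside irq->irq_lock, while the clear is done under it, leaving a
window for a concurrent writer. A minimal annotated illustration of the
race, using the names from the patch (an illustration, not kernel code):

	CPU0: kvm_vgic_v4_set_forwarding()   CPU1: vgic pending-state restore
	----------------------------------   --------------------------------
	if (irq->pending_latch)          /* unlocked read sees true */
	    irq_set_irqchip_state(...);  /* injects into the VPT */
	                                     raw_spin_lock(&irq->irq_lock);
	                                     irq->pending_latch = true;
	                                     vgic_queue_irq_unlock(...);
	    raw_spin_lock_irqsave(&irq->irq_lock, flags);
	    irq->pending_latch = false;  /* CPU1's update is silently lost */
	    vgic_queue_irq_unlock(kvm, irq, flags);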
Shenming Lu March 15, 2021, 9:11 a.m. UTC | #2
On 2021/3/15 16:30, Marc Zyngier wrote:
> On 2021-03-13 08:38, Shenming Lu wrote:
>> [...]
> 
> The read side of the pending state isn't locked, but the write side is.
> I'd rather you lock the whole sequence for peace of mind.

Did you mean to lock before emitting the mapping request, or just
before reading the pending state?

Thanks,
Shenming

> 
> Thanks,
> 
>         M.
Marc Zyngier March 15, 2021, 9:20 a.m. UTC | #3
On 2021-03-15 09:11, Shenming Lu wrote:
> On 2021/3/15 16:30, Marc Zyngier wrote:
>> On 2021-03-13 08:38, Shenming Lu wrote:
>>> [...]
>> 
>> The read side of the pending state isn't locked, but the write side
>> is. I'd rather you lock the whole sequence for peace of mind.
> 
> Did you mean to lock before emitting the mapping request, or just
> before reading the pending state?

Just before reading the pending state, so that we can't get a concurrent
modification of that state while we are making the interrupt pending in
the VPT and clearing it in the emulation.

Thanks,

         M.
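
What "lock before reading" might look like, as a sketch against the v4
hunk (assuming the flags declaration is hoisted to function scope;
illustrative only, since the reworked patch had not been posted at this
point in the thread):

	/* Transfer pending state */
	raw_spin_lock_irqsave(&irq->irq_lock, flags);
	if (irq->pending_latch) {
		/* The read and the injection now both happen under irq_lock */
		ret = irq_set_irqchip_state(irq->host_irq,
					    IRQCHIP_STATE_PENDING,
					    irq->pending_latch);
		WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);

		/*
		 * Clear pending_latch and communicate this state
		 * change via vgic_queue_irq_unlock, which drops the
		 * lock on its way out.
		 */
		irq->pending_latch = false;
		vgic_queue_irq_unlock(kvm, irq, flags);
	} else {
		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
	}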
Shenming Lu March 15, 2021, 9:25 a.m. UTC | #4
On 2021/3/15 17:20, Marc Zyngier wrote:
> On 2021-03-15 09:11, Shenming Lu wrote:
>> On 2021/3/15 16:30, Marc Zyngier wrote:
>>> On 2021-03-13 08:38, Shenming Lu wrote:
>>>> [...]
>>>
>>> The read side of the pending state isn't locked, but the write side is.
>>> I'd rather you lock the whole sequence for peace of mind.
>>
>> Did you mean to lock before emitting the mapping request, or just
>> before reading the pending state?
> 
> Just before reading the pending state, so that we can't get a concurrent
> modification of that state while we are making the interrupt pending in
> the VPT and clearing it in the emulation.

Got it. I will correct it right now.

Thanks,
Shenming

> 
> Thanks,
> 
>         M.
Marc Zyngier March 15, 2021, 9:30 a.m. UTC | #5
On 2021-03-15 09:25, Shenming Lu wrote:
> On 2021/3/15 17:20, Marc Zyngier wrote:
>> On 2021-03-15 09:11, Shenming Lu wrote:
>>> On 2021/3/15 16:30, Marc Zyngier wrote:
>>>> On 2021-03-13 08:38, Shenming Lu wrote:
>>>>> [...]
>>>> 
>>>> The read side of the pending state isn't locked, but the write
>>>> side is. I'd rather you lock the whole sequence for peace of mind.
>>> 
>>> Did you mean to lock before emitting the mapping request, or just
>>> before reading the pending state?
>> 
>> Just before reading the pending state, so that we can't get a
>> concurrent modification of that state while we are making the
>> interrupt pending in the VPT and clearing it in the emulation.
> 
> Got it. I will correct it right now.

Please hold off sending a new version for a few days. My inbox is 
exploding...

Thanks,

         M.

Patch

diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
index ac029ba3d337..3b82ab80c2f3 100644
--- a/arch/arm64/kvm/vgic/vgic-v4.c
+++ b/arch/arm64/kvm/vgic/vgic-v4.c
@@ -449,6 +449,24 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
 	irq->host_irq	= virq;
 	atomic_inc(&map.vpe->vlpi_count);
 
+	/* Transfer pending state */
+	if (irq->pending_latch) {
+		unsigned long flags;
+
+		ret = irq_set_irqchip_state(irq->host_irq,
+					    IRQCHIP_STATE_PENDING,
+					    irq->pending_latch);
+		WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
+
+		/*
+		 * Clear pending_latch and communicate this state
+		 * change via vgic_queue_irq_unlock.
+		 */
+		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+		irq->pending_latch = false;
+		vgic_queue_irq_unlock(kvm, irq, flags);
+	}
+
 out:
 	mutex_unlock(&its->its_lock);
 	return ret;
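
For context on how this path is reached: the forwarding is typically
established from the irqbypass glue when a VFIO producer is paired with
a KVM irqfd consumer. A sketch of that caller, based on the arm64 KVM
code of this era (shape reproduced from memory, so treat the details as
approximate):

int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
				     struct irq_bypass_producer *prod)
{
	struct kvm_kernel_irqfd *irqfd =
		container_of(cons, struct kvm_kernel_irqfd, consumer);

	/*
	 * prod->irq is the host IRQ backing the assigned device's MSI.
	 * Forwarding switches the VLPI to HW mode, which is when the
	 * pending_latch transfer added by this patch takes place.
	 */
	return kvm_vgic_v4_set_forwarding(irqfd->kvm, prod->irq,
					  &irqfd->irq_entry);
}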