
[v2] KVM: arm/arm64: Let vcpu thread modify its own active state

Message ID 1488807757-86131-1-git-send-email-christoffer.dall@linaro.org (mailing list archive)
State New, archived

Commit Message

Christoffer Dall March 6, 2017, 1:42 p.m. UTC
From: Jintack Lim <jintack@cs.columbia.edu>

Currently, if a vcpu thread tries to change the active state of an
interrupt which is already on the same vcpu's AP list, it will loop
forever. Since the VGIC mmio handler is called after a vcpu has already
synced back the LR state to the struct vgic_irq, we can just let it
proceed safely.

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
Changes since v1:
 - Reworked comment
 - Consider userspace accesses
 - Get the right requester VCPU for GICv3 private IRQ accesses
 - Tested using kvm-unit-tests and verified that it deadlocked without
   this patch and passed the test with this patch :)

 virt/kvm/arm/vgic/vgic-mmio.c | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)
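
To see the deadlock this removes, compare the two wait-loop conditions
(a sketch distilled from the diff below; requester_vcpu is the thread
performing the access):

	/* Before: if the requester is the VCPU that owns the IRQ, this
	 * spins forever -- that thread is running right now (its cpu
	 * field is never -1 while it executes this very handler), so
	 * the condition can never become false. */
	while (irq->vcpu && irq->vcpu->cpu != -1)
		cond_resched_lock(&irq->irq_lock);

	/* After: the extra check lets the owning VCPU thread proceed,
	 * which is safe because its LR state was already synced back
	 * to the struct vgic_irq before the MMIO handler was called. */
	while (irq->vcpu &&
	       irq->vcpu != requester_vcpu &&
	       irq->vcpu->cpu != -1)
		cond_resched_lock(&irq->irq_lock);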

Comments

Marc Zyngier March 6, 2017, 2:38 p.m. UTC | #1
On Mon, Mar 06 2017 at  1:42:37 pm GMT, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> From: Jintack Lim <jintack@cs.columbia.edu>
>
> Currently, if a vcpu thread tries to change the active state of an
> interrupt which is already on the same vcpu's AP list, Since the VGIC
> mmio handler is called after a vcpu has already synced back the LR state
> to the struct vgic_irq, we can just let it proceed safely.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
Jintack Lim March 6, 2017, 7:20 p.m. UTC | #2
Hi Christoffer,

thanks for submitting this patch.

On Mon, Mar 6, 2017 at 8:42 AM, Christoffer Dall
<christoffer.dall@linaro.org> wrote:
> From: Jintack Lim <jintack@cs.columbia.edu>
>
> Currently, if a vcpu thread tries to change the active state of an
> interrupt which is already on the same vcpu's AP list,

"it'll loop forever" is remove accidentally in the commit message in v2?

> Since the VGIC
> mmio handler is called after a vcpu has already synced back the LR state
> to the struct vgic_irq, we can just let it proceed safely.
>
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> ---
> Changes since v1:
>  - Reworked comment
>  - Consider userspace accesses
>  - Get the right requester VCPU for GICv3 private IRQ accesses
>  - Tested using kvm-unit-tests and verified that it deadlocked without
>    this patch and passed the test with this patch :)

nice!

>
>  virt/kvm/arm/vgic/vgic-mmio.c | 32 ++++++++++++++++++++++++--------
>  1 file changed, 24 insertions(+), 8 deletions(-)
>
> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> index 3654b4c..2a5db13 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> @@ -180,21 +180,37 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
>  static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>                                     bool new_active_state)
>  {
> +       struct kvm_vcpu *requester_vcpu;
>         spin_lock(&irq->irq_lock);
> +
> +       /*
> +        * The vcpu parameter here can mean multiple things depending on how
> +        * this function is called; when handling a trap from the kernel it
> +        * depends on the GIC version, and these functions are also called as
> +        * part of save/restore from userspace.
> +        *
> +        * Therefore, we have to figure out the requester in a reliable way.
> +        *
> +        * When accessing VGIC state from user space, the requester_vcpu is
> +        * NULL, which is fine, because we guarantee that no VCPUs are running
> +        * when accessing VGIC state from user space so irq->vcpu->cpu is
> +        * always -1.
> +        */
> +       requester_vcpu = kvm_arm_get_running_vcpu();
> +
>         /*
>          * If this virtual IRQ was written into a list register, we
>          * have to make sure the CPU that runs the VCPU thread has
> -        * synced back LR state to the struct vgic_irq.  We can only
> -        * know this for sure, when either this irq is not assigned to
> -        * anyone's AP list anymore, or the VCPU thread is not
> -        * running on any CPUs.
> +        * synced back the LR state to the struct vgic_irq.
>          *
> -        * In the opposite case, we know the VCPU thread may be on its
> -        * way back from the guest and still has to sync back this
> -        * IRQ, so we release and re-acquire the spin_lock to let the
> -        * other thread sync back the IRQ.
> +        * As long as the conditions below are true, we know the VCPU thread
> +        * may be on its way back from the guest (we kicked the VCPU thread in
> +        * vgic_change_active_prepare)  and still has to sync back this IRQ,
> +        * so we release and re-acquire the spin_lock to let the other thread
> +        * sync back the IRQ.
>          */
>         while (irq->vcpu && /* IRQ may have state in an LR somewhere */
> +              irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
>                irq->vcpu->cpu != -1) /* VCPU thread is running */
>                 cond_resched_lock(&irq->irq_lock);
>
> --
> 2.5.0
>
>
Christoffer Dall March 7, 2017, 9:57 a.m. UTC | #3
On Mon, Mar 06, 2017 at 02:20:44PM -0500, Jintack Lim wrote:
> Hi Christoffer,
> 
> thanks for submitting this patch.
> 
> On Mon, Mar 6, 2017 at 8:42 AM, Christoffer Dall
> <christoffer.dall@linaro.org> wrote:
> > From: Jintack Lim <jintack@cs.columbia.edu>
> >
> > Currently, if a vcpu thread tries to change the active state of an
> > interrupt which is already on the same vcpu's AP list,
> 
> "it'll loop forever" is remove accidentally in the commit message in v2?

yes, we can fix this up when applying the patch.  Thanks.

> 
> > Since the VGIC
> > mmio handler is called after a vcpu has already synced back the LR state
> > to the struct vgic_irq, we can just let it proceed safely.
> >
> > Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> > ---
> > Changes since v1:
> >  - Reworked comment
> >  - Consider userspace accesses
> >  - Get the right requester VCPU for GICv3 private IRQ accesses
> >  - Tested using kvm-unit-tests and verified that it deadlocked without
> >    this patch and passed the test with this patch :)
> 
> nice!
> 
> >
> >  virt/kvm/arm/vgic/vgic-mmio.c | 32 ++++++++++++++++++++++++--------
> >  1 file changed, 24 insertions(+), 8 deletions(-)
> >
> > diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> > index 3654b4c..2a5db13 100644
> > --- a/virt/kvm/arm/vgic/vgic-mmio.c
> > +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> > @@ -180,21 +180,37 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
> >  static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> >                                     bool new_active_state)
> >  {
> > +       struct kvm_vcpu *requester_vcpu;
> >         spin_lock(&irq->irq_lock);
> > +
> > +       /*
> > +        * The vcpu parameter here can mean multiple things depending on how
> > +        * this function is called; when handling a trap from the kernel it
> > +        * depends on the GIC version, and these functions are also called as
> > +        * part of save/restore from userspace.
> > +        *
> > +        * Therefore, we have to figure out the requester in a reliable way.
> > +        *
> > +        * When accessing VGIC state from user space, the requester_vcpu is
> > +        * NULL, which is fine, because we guarantee that no VCPUs are running
> > +        * when accessing VGIC state from user space so irq->vcpu->cpu is
> > +        * always -1.
> > +        */
> > +       requester_vcpu = kvm_arm_get_running_vcpu();
> > +
> >         /*
> >          * If this virtual IRQ was written into a list register, we
> >          * have to make sure the CPU that runs the VCPU thread has
> > -        * synced back LR state to the struct vgic_irq.  We can only
> > -        * know this for sure, when either this irq is not assigned to
> > -        * anyone's AP list anymore, or the VCPU thread is not
> > -        * running on any CPUs.
> > +        * synced back the LR state to the struct vgic_irq.
> >          *
> > -        * In the opposite case, we know the VCPU thread may be on its
> > -        * way back from the guest and still has to sync back this
> > -        * IRQ, so we release and re-acquire the spin_lock to let the
> > -        * other thread sync back the IRQ.
> > +        * As long as the conditions below are true, we know the VCPU thread
> > +        * may be on its way back from the guest (we kicked the VCPU thread in
> > +        * vgic_change_active_prepare)  and still has to sync back this IRQ,
> > +        * so we release and re-acquire the spin_lock to let the other thread
> > +        * sync back the IRQ.
> >          */
> >         while (irq->vcpu && /* IRQ may have state in an LR somewhere */
> > +              irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
> >                irq->vcpu->cpu != -1) /* VCPU thread is running */
> >                 cond_resched_lock(&irq->irq_lock);
> >
> > --
> > 2.5.0
> >
> >
>

Patch

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index 3654b4c..2a5db13 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -180,21 +180,37 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
 static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 				    bool new_active_state)
 {
+	struct kvm_vcpu *requester_vcpu;
 	spin_lock(&irq->irq_lock);
+
+	/*
+	 * The vcpu parameter here can mean multiple things depending on how
+	 * this function is called; when handling a trap from the kernel it
+	 * depends on the GIC version, and these functions are also called as
+	 * part of save/restore from userspace.
+	 *
+	 * Therefore, we have to figure out the requester in a reliable way.
+	 *
+	 * When accessing VGIC state from user space, the requester_vcpu is
+	 * NULL, which is fine, because we guarantee that no VCPUs are running
+	 * when accessing VGIC state from user space so irq->vcpu->cpu is
+	 * always -1.
+	 */
+	requester_vcpu = kvm_arm_get_running_vcpu();
+
 	/*
 	 * If this virtual IRQ was written into a list register, we
 	 * have to make sure the CPU that runs the VCPU thread has
-	 * synced back LR state to the struct vgic_irq.  We can only
-	 * know this for sure, when either this irq is not assigned to
-	 * anyone's AP list anymore, or the VCPU thread is not
-	 * running on any CPUs.
+	 * synced back the LR state to the struct vgic_irq.
 	 *
-	 * In the opposite case, we know the VCPU thread may be on its
-	 * way back from the guest and still has to sync back this
-	 * IRQ, so we release and re-acquire the spin_lock to let the
-	 * other thread sync back the IRQ.
+	 * As long as the conditions below are true, we know the VCPU thread
+	 * may be on its way back from the guest (we kicked the VCPU thread in
+	 * vgic_change_active_prepare)  and still has to sync back this IRQ,
+	 * so we release and re-acquire the spin_lock to let the other thread
+	 * sync back the IRQ.
 	 */
 	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
+	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
 	       irq->vcpu->cpu != -1) /* VCPU thread is running */
 		cond_resched_lock(&irq->irq_lock);
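
For illustration, the exit conditions of that loop can be exercised in a
standalone model (a minimal userspace sketch, not kernel code; the struct
definitions and the must_wait() helper are invented for this example):

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct vcpu { int cpu; };            /* cpu == -1: thread not running */
	struct virq { struct vcpu *vcpu; };  /* vcpu set: IRQ may sit in an LR */

	/* Mirrors the while-condition in vgic_mmio_change_active(). */
	static bool must_wait(const struct virq *irq, const struct vcpu *requester)
	{
		return irq->vcpu &&              /* IRQ may have state in an LR */
		       irq->vcpu != requester && /* we are not that VCPU thread */
		       irq->vcpu->cpu != -1;     /* and that thread is running */
	}

	int main(void)
	{
		struct vcpu owner = { .cpu = 2 };
		struct vcpu other = { .cpu = 3 };
		struct virq irq = { .vcpu = &owner };

		/* Another VCPU thread changes the state while the owner
		 * runs: must wait for the owner to sync back its LR state. */
		printf("other vcpu: wait=%d\n", must_wait(&irq, &other)); /* 1 */

		/* The owner changes its own active state: proceed, since
		 * the LR state was synced back before the MMIO handler ran.
		 * Pre-patch, this was the case that spun forever. */
		printf("self:       wait=%d\n", must_wait(&irq, &owner)); /* 0 */

		/* Userspace access: requester is NULL, but no VCPUs run
		 * during such accesses, so owner.cpu is -1 and we proceed. */
		owner.cpu = -1;
		printf("userspace:  wait=%d\n", must_wait(&irq, NULL));   /* 0 */
		return 0;
	}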