Message ID | 20170609174141.5068-9-andre.przywara@arm.com (mailing list archive) |
---|---|
State | New, archived |
Hi Andre,

On 09/06/17 18:41, Andre Przywara wrote:
> For LPIs the struct pending_irq's are dynamically allocated and the
> pointers will be stored in a radix tree. Since an LPI can be "unmapped"
> at any time, teach the VGIC how to deal with irq_to_pending() returning
> a NULL pointer.
> We just do nothing in this case or clean up the LR if the virtual LPI
> number was still in an LR.
>
> Those are all call sites for irq_to_pending(), as per:
> "git grep irq_to_pending", and their evaluations:
> (PROTECTED means: added NULL check and bailing out)
>
> xen/arch/arm/gic.c:
> gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
> gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
> gic_remove_from_lr_pending(): PROTECTED, called within VCPU VGIC lock
> gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
> gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
> gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock
>
> xen/arch/arm/vgic.c:
> vgic_migrate_irq(): not called for LPIs (virtual IRQs), added ASSERT()
> arch_move_irqs(): not iterating over LPIs, LPI ASSERT already in place
> vgic_disable_irqs(): not called for LPIs, added ASSERT()
> vgic_enable_irqs(): not called for LPIs, added ASSERT()
> vgic_vcpu_inject_irq(): PROTECTED, moved under VCPU VGIC lock
>
> xen/include/asm-arm/event.h:
> local_events_need_delivery_nomask(): only called for a PPI, added ASSERT()
>
> xen/include/asm-arm/vgic.h:
> (prototype)
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,
On Fri, 9 Jun 2017, Andre Przywara wrote:
> For LPIs the struct pending_irq's are dynamically allocated and the
> pointers will be stored in a radix tree. Since an LPI can be "unmapped"
> at any time, teach the VGIC how to deal with irq_to_pending() returning
> a NULL pointer.
> We just do nothing in this case or clean up the LR if the virtual LPI
> number was still in an LR.
>
> Those are all call sites for irq_to_pending(), as per:
> "git grep irq_to_pending", and their evaluations:
> (PROTECTED means: added NULL check and bailing out)
>
> xen/arch/arm/gic.c:
> gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
> gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
> gic_remove_from_lr_pending(): PROTECTED, called within VCPU VGIC lock
> gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
> gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
> gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock
>
> xen/arch/arm/vgic.c:
> vgic_migrate_irq(): not called for LPIs (virtual IRQs), added ASSERT()
> arch_move_irqs(): not iterating over LPIs, LPI ASSERT already in place
> vgic_disable_irqs(): not called for LPIs, added ASSERT()
> vgic_enable_irqs(): not called for LPIs, added ASSERT()
> vgic_vcpu_inject_irq(): PROTECTED, moved under VCPU VGIC lock
>
> xen/include/asm-arm/event.h:
> local_events_need_delivery_nomask(): only called for a PPI, added ASSERT()
>
> xen/include/asm-arm/vgic.h:
> (prototype)
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>  xen/arch/arm/gic.c          | 26 ++++++++++++++++++++++++--
>  xen/arch/arm/vgic.c         | 21 +++++++++++++++++++++
>  xen/include/asm-arm/event.h |  3 +++
>  3 files changed, 48 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e0c54d5..36e340b 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -148,6 +148,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
>      /* Caller has already checked that the IRQ is an SPI */
>      ASSERT(virq >= 32);
>      ASSERT(virq < vgic_num_irqs(d));
> +    ASSERT(!is_lpi(virq));
>  
>      vgic_lock_rank(v_target, rank, flags);
>  
> @@ -184,6 +185,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
>      ASSERT(spin_is_locked(&desc->lock));
>      ASSERT(test_bit(_IRQ_GUEST, &desc->status));
>      ASSERT(p->desc == desc);
> +    ASSERT(!is_lpi(virq));
>  
>      vgic_lock_rank(v_target, rank, flags);
>  
> @@ -420,6 +422,10 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
>  {
>      struct pending_irq *n = irq_to_pending(v, virtual_irq);
>  
> +    /* If an LPI has been removed meanwhile, there is nothing left to raise. */
> +    if ( unlikely(!n) )
> +        return;
> +
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>  
>      if ( list_empty(&n->lr_queue) )
> @@ -439,20 +445,25 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>  {
>      int i;
>      unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> +    struct pending_irq *p = irq_to_pending(v, virtual_irq);
>  
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>  
> +    if ( unlikely(!p) )
> +        /* An unmapped LPI does not need to be raised. */
> +        return;
> +
>      if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
>      {
>          i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
>          if (i < nr_lrs) {
>              set_bit(i, &this_cpu(lr_mask));
> -            gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
> +            gic_set_lr(i, p, GICH_LR_PENDING);
>              return;
>          }
>      }
>  
> -    gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
> +    gic_add_to_lr_pending(v, p);
>  }
>  
>  static void gic_update_one_lr(struct vcpu *v, int i)
> @@ -467,6 +478,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>      gic_hw_ops->read_lr(i, &lr_val);
>      irq = lr_val.virq;
>      p = irq_to_pending(v, irq);
> +    /* An LPI might have been unmapped, in which case we just clean up here. */
> +    if ( unlikely(!p) )
> +    {
> +        ASSERT(is_lpi(irq));
> +
> +        gic_hw_ops->clear_lr(i);
> +        clear_bit(i, &this_cpu(lr_mask));
> +
> +        return;
> +    }
> +
>      if ( lr_val.state & GICH_LR_ACTIVE )
>      {
>          set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index f0d288d..1edf93d 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -236,6 +236,9 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
>      unsigned long flags;
>      struct pending_irq *p;
>  
> +    /* This will never be called for an LPI, as we don't migrate them. */
> +    ASSERT(!is_lpi(irq));
> +
>      spin_lock_irqsave(&old->arch.vgic.lock, flags);
>  
>      p = irq_to_pending(old, irq);
> @@ -320,6 +323,9 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>      int i = 0;
>      struct vcpu *v_target;
>  
> +    /* LPIs will never be disabled via this function. */
> +    ASSERT(!is_lpi(32 * n + 31));
> +
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          v_target = vgic_get_target_vcpu(v, irq);
> @@ -367,6 +373,9 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>      struct vcpu *v_target;
>      struct domain *d = v->domain;
>  
> +    /* LPIs will never be enabled via this function. */
> +    ASSERT(!is_lpi(32 * n + 31));
> +
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          v_target = vgic_get_target_vcpu(v, irq);
> @@ -447,6 +456,12 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
>      return true;
>  }
>  
> +/*
> + * Returns the pointer to the struct pending_irq belonging to the given
> + * interrupt.
> + * This can return NULL if called for an LPI which has been unmapped
> + * meanwhile.
> + */
>  struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
>  {
>      struct pending_irq *n;
> @@ -490,6 +505,12 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
>  
>      n = irq_to_pending(v, virq);
> +    /* If an LPI has been removed, there is nothing to inject here. */
> +    if ( unlikely(!n) )
> +    {
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +        return;
> +    }
>  
>      /* vcpu offline */
>      if ( test_bit(_VPF_down, &v->pause_flags) )
> diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> index 5330dfe..caefa50 100644
> --- a/xen/include/asm-arm/event.h
> +++ b/xen/include/asm-arm/event.h
> @@ -19,6 +19,9 @@ static inline int local_events_need_delivery_nomask(void)
>      struct pending_irq *p = irq_to_pending(current,
>                                             current->domain->arch.evtchn_irq);
>  
> +    /* Does not work for LPIs. */
> +    ASSERT(!is_lpi(current->domain->arch.evtchn_irq));
> +
>      /* XXX: if the first interrupt has already been delivered, we should
>       * check whether any other interrupts with priority higher than the
>       * one in GICV_IAR are in the lr_pending queue or in the LR
> -- 
> 2.9.0
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e0c54d5..36e340b 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -148,6 +148,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
     /* Caller has already checked that the IRQ is an SPI */
     ASSERT(virq >= 32);
     ASSERT(virq < vgic_num_irqs(d));
+    ASSERT(!is_lpi(virq));
 
     vgic_lock_rank(v_target, rank, flags);
 
@@ -184,6 +185,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
     ASSERT(spin_is_locked(&desc->lock));
     ASSERT(test_bit(_IRQ_GUEST, &desc->status));
     ASSERT(p->desc == desc);
+    ASSERT(!is_lpi(virq));
 
     vgic_lock_rank(v_target, rank, flags);
 
@@ -420,6 +422,10 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
 {
     struct pending_irq *n = irq_to_pending(v, virtual_irq);
 
+    /* If an LPI has been removed meanwhile, there is nothing left to raise. */
+    if ( unlikely(!n) )
+        return;
+
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
     if ( list_empty(&n->lr_queue) )
@@ -439,20 +445,25 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
 {
     int i;
     unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
+    struct pending_irq *p = irq_to_pending(v, virtual_irq);
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
+    if ( unlikely(!p) )
+        /* An unmapped LPI does not need to be raised. */
+        return;
+
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
+            gic_set_lr(i, p, GICH_LR_PENDING);
             return;
         }
     }
 
-    gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
+    gic_add_to_lr_pending(v, p);
 }
 
 static void gic_update_one_lr(struct vcpu *v, int i)
@@ -467,6 +478,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
     gic_hw_ops->read_lr(i, &lr_val);
     irq = lr_val.virq;
     p = irq_to_pending(v, irq);
+    /* An LPI might have been unmapped, in which case we just clean up here. */
+    if ( unlikely(!p) )
+    {
+        ASSERT(is_lpi(irq));
+
+        gic_hw_ops->clear_lr(i);
+        clear_bit(i, &this_cpu(lr_mask));
+
+        return;
+    }
+
     if ( lr_val.state & GICH_LR_ACTIVE )
     {
         set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index f0d288d..1edf93d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -236,6 +236,9 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
     unsigned long flags;
     struct pending_irq *p;
 
+    /* This will never be called for an LPI, as we don't migrate them. */
+    ASSERT(!is_lpi(irq));
+
     spin_lock_irqsave(&old->arch.vgic.lock, flags);
 
     p = irq_to_pending(old, irq);
@@ -320,6 +323,9 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     int i = 0;
     struct vcpu *v_target;
 
+    /* LPIs will never be disabled via this function. */
+    ASSERT(!is_lpi(32 * n + 31));
+
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         v_target = vgic_get_target_vcpu(v, irq);
@@ -367,6 +373,9 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     struct vcpu *v_target;
     struct domain *d = v->domain;
 
+    /* LPIs will never be enabled via this function. */
+    ASSERT(!is_lpi(32 * n + 31));
+
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         v_target = vgic_get_target_vcpu(v, irq);
@@ -447,6 +456,12 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
     return true;
 }
 
+/*
+ * Returns the pointer to the struct pending_irq belonging to the given
+ * interrupt.
+ * This can return NULL if called for an LPI which has been unmapped
+ * meanwhile.
+ */
 struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
 {
     struct pending_irq *n;
@@ -490,6 +505,12 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
     n = irq_to_pending(v, virq);
+    /* If an LPI has been removed, there is nothing to inject here. */
+    if ( unlikely(!n) )
+    {
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        return;
+    }
 
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
index 5330dfe..caefa50 100644
--- a/xen/include/asm-arm/event.h
+++ b/xen/include/asm-arm/event.h
@@ -19,6 +19,9 @@ static inline int local_events_need_delivery_nomask(void)
     struct pending_irq *p = irq_to_pending(current,
                                            current->domain->arch.evtchn_irq);
 
+    /* Does not work for LPIs. */
+    ASSERT(!is_lpi(current->domain->arch.evtchn_irq));
+
     /* XXX: if the first interrupt has already been delivered, we should
      * check whether any other interrupts with priority higher than the
      * one in GICV_IAR are in the lr_pending queue or in the LR
For LPIs the struct pending_irq's are dynamically allocated and the
pointers will be stored in a radix tree. Since an LPI can be "unmapped"
at any time, teach the VGIC how to deal with irq_to_pending() returning
a NULL pointer.
We just do nothing in this case or clean up the LR if the virtual LPI
number was still in an LR.

Those are all call sites for irq_to_pending(), as per:
"git grep irq_to_pending", and their evaluations:
(PROTECTED means: added NULL check and bailing out)

xen/arch/arm/gic.c:
gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
gic_remove_from_lr_pending(): PROTECTED, called within VCPU VGIC lock
gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock

xen/arch/arm/vgic.c:
vgic_migrate_irq(): not called for LPIs (virtual IRQs), added ASSERT()
arch_move_irqs(): not iterating over LPIs, LPI ASSERT already in place
vgic_disable_irqs(): not called for LPIs, added ASSERT()
vgic_enable_irqs(): not called for LPIs, added ASSERT()
vgic_vcpu_inject_irq(): PROTECTED, moved under VCPU VGIC lock

xen/include/asm-arm/event.h:
local_events_need_delivery_nomask(): only called for a PPI, added ASSERT()

xen/include/asm-arm/vgic.h:
(prototype)

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c          | 26 ++++++++++++++++++++++++--
 xen/arch/arm/vgic.c         | 21 +++++++++++++++++++++
 xen/include/asm-arm/event.h |  3 +++
 3 files changed, 48 insertions(+), 2 deletions(-)