From patchwork Wed Jun 14 16:51:57 2017
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 9786983
From: Andre Przywara <andre.przywara@arm.com>
To: Julien Grall, Stefano Stabellini
Date: Wed, 14 Jun 2017 17:51:57 +0100
Message-Id: <20170614165223.7543-9-andre.przywara@arm.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170614165223.7543-1-andre.przywara@arm.com>
References: <20170614165223.7543-1-andre.przywara@arm.com>
Cc: xen-devel@lists.xenproject.org, Vijaya Kumar K, Vijay Kilari,
    Shanker Donthineni, Manish Jaggi
Subject: [Xen-devel] [PATCH v12 08/34] ARM: GIC: Add checks for NULL pointer pending_irq's

For LPIs, the struct pending_irq instances are dynamically allocated and
the pointers are stored in a radix tree. Since an LPI can be "unmapped"
at any time, teach the VGIC how to deal with irq_to_pending() returning
a NULL pointer: we simply do nothing in this case, or clean up the LR if
the virtual LPI number was still in an LR.

These are all call sites of irq_to_pending(), as found by
"git grep irq_to_pending", and how each one is handled
(PROTECTED means: added a NULL check and bail out):

xen/arch/arm/gic.c:
gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
gic_remove_from_lr_pending(): PROTECTED, called within VCPU VGIC lock
gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock

xen/arch/arm/vgic.c:
vgic_migrate_irq(): not called for LPIs (virtual IRQs), added ASSERT()
arch_move_irqs(): not iterating over LPIs, LPI ASSERT already in place
vgic_disable_irqs(): not called for LPIs, added ASSERT()
vgic_enable_irqs(): not called for LPIs, added ASSERT()
vgic_vcpu_inject_irq(): PROTECTED, moved under VCPU VGIC lock

xen/include/asm-arm/event.h:
local_events_need_delivery_nomask(): only called for a PPI, added ASSERT()

xen/include/asm-arm/vgic.h:
(prototype)

A standalone sketch of these two caller patterns follows after the
diffstat below.

Signed-off-by: Andre Przywara
Reviewed-by: Julien Grall
Acked-by: Stefano Stabellini
---
 xen/arch/arm/gic.c          | 26 ++++++++++++++++++++++++--
 xen/arch/arm/vgic.c         | 21 +++++++++++++++++++++
 xen/include/asm-arm/event.h |  3 +++
 3 files changed, 48 insertions(+), 2 deletions(-)
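Not part of the patch: below is a minimal, self-contained C sketch of the
two caller patterns described above, an ASSERT() on paths that can never
see an LPI and a NULL check with early bail-out on paths that can. All
names here (struct pending_entry, lookup_pending(), handle_static_irq(),
handle_maybe_lpi(), NR_STATIC) are hypothetical stand-ins for
irq_to_pending() and its callers; they do not exist in the Xen tree.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_STATIC 32                     /* statically allocated IRQ range */

struct pending_entry {
    unsigned int irq;
};

static struct pending_entry statics[NR_STATIC];
static struct pending_entry *lpi_entry;  /* dynamically allocated, may vanish */

static int is_dynamic(unsigned int irq)
{
    return irq >= NR_STATIC;
}

/* Stand-in for irq_to_pending(): may return NULL, but only for dynamic IRQs. */
static struct pending_entry *lookup_pending(unsigned int irq)
{
    if ( is_dynamic(irq) )
        return lpi_entry;                /* NULL once the LPI has been unmapped */
    return &statics[irq];
}

/* Pattern 1: a path that never sees a dynamic IRQ asserts instead of checking. */
static void handle_static_irq(unsigned int irq)
{
    struct pending_entry *p;

    assert(!is_dynamic(irq));            /* mirrors ASSERT(!is_lpi(virq)) */
    p = lookup_pending(irq);
    printf("raising IRQ %u\n", p->irq);
}

/* Pattern 2: a path that can see dynamic IRQs checks for NULL and bails out. */
static void handle_maybe_lpi(unsigned int irq)
{
    struct pending_entry *p = lookup_pending(irq);

    if ( p == NULL )                     /* unmapped meanwhile: nothing to do */
        return;
    printf("raising IRQ %u\n", p->irq);
}

int main(void)
{
    for ( unsigned int i = 0; i < NR_STATIC; i++ )
        statics[i].irq = i;

    lpi_entry = malloc(sizeof(*lpi_entry));
    if ( lpi_entry == NULL )
        return 1;
    lpi_entry->irq = 8192;

    handle_static_irq(27);
    handle_maybe_lpi(8192);

    free(lpi_entry);
    lpi_entry = NULL;                    /* the LPI gets "unmapped" */
    handle_maybe_lpi(8192);              /* silently does nothing */

    return 0;
}

The real hunks below additionally clean up the LR (gic_update_one_lr())
or drop the VGIC VCPU lock (vgic_vcpu_inject_irq()) before bailing out.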
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index a59591d..e1dfd66 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -148,6 +148,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
     /* Caller has already checked that the IRQ is an SPI */
     ASSERT(virq >= 32);
     ASSERT(virq < vgic_num_irqs(d));
+    ASSERT(!is_lpi(virq));

     vgic_lock_rank(v_target, rank, flags);
@@ -184,6 +185,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
     ASSERT(spin_is_locked(&desc->lock));
     ASSERT(test_bit(_IRQ_GUEST, &desc->status));
     ASSERT(p->desc == desc);
+    ASSERT(!is_lpi(virq));

     vgic_lock_rank(v_target, rank, flags);
@@ -420,6 +422,10 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
 {
     struct pending_irq *n = irq_to_pending(v, virtual_irq);

+    /* If an LPI has been removed meanwhile, there is nothing left to raise. */
+    if ( unlikely(!n) )
+        return;
+
     ASSERT(spin_is_locked(&v->arch.vgic.lock));

     if ( list_empty(&n->lr_queue) )
@@ -439,20 +445,25 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
 {
     int i;
     unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
+    struct pending_irq *p = irq_to_pending(v, virtual_irq);

     ASSERT(spin_is_locked(&v->arch.vgic.lock));

+    if ( unlikely(!p) )
+        /* An unmapped LPI does not need to be raised. */
+        return;
+
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
+            gic_set_lr(i, p, GICH_LR_PENDING);
             return;
         }
     }

-    gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
+    gic_add_to_lr_pending(v, p);
 }

 static void gic_update_one_lr(struct vcpu *v, int i)
@@ -467,6 +478,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
     gic_hw_ops->read_lr(i, &lr_val);
     irq = lr_val.virq;
     p = irq_to_pending(v, irq);
+    /* An LPI might have been unmapped, in which case we just clean up here. */
+    if ( unlikely(!p) )
+    {
+        ASSERT(is_lpi(irq));
+
+        gic_hw_ops->clear_lr(i);
+        clear_bit(i, &this_cpu(lr_mask));
+
+        return;
+    }
+
     if ( lr_val.state & GICH_LR_ACTIVE )
     {
         set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 9771463..9cc9563 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -236,6 +236,9 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
     unsigned long flags;
     struct pending_irq *p;

+    /* This will never be called for an LPI, as we don't migrate them. */
+    ASSERT(!is_lpi(irq));
+
     spin_lock_irqsave(&old->arch.vgic.lock, flags);

     p = irq_to_pending(old, irq);
@@ -320,6 +323,9 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     int i = 0;
     struct vcpu *v_target;

+    /* LPIs will never be disabled via this function. */
+    ASSERT(!is_lpi(32 * n + 31));
+
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         v_target = vgic_get_target_vcpu(v, irq);
@@ -367,6 +373,9 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     struct vcpu *v_target;
     struct domain *d = v->domain;

+    /* LPIs will never be enabled via this function. */
+    ASSERT(!is_lpi(32 * n + 31));
+
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         v_target = vgic_get_target_vcpu(v, irq);
@@ -447,6 +456,12 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
     return true;
 }

+/*
+ * Returns the pointer to the struct pending_irq belonging to the given
+ * interrupt.
+ * This can return NULL if called for an LPI which has been unmapped
+ * meanwhile.
+ */
 struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
 {
     struct pending_irq *n;
@@ -490,6 +505,12 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
     spin_lock_irqsave(&v->arch.vgic.lock, flags);

     n = irq_to_pending(v, virq);
+    /* If an LPI has been removed, there is nothing to inject here. */
+    if ( unlikely(!n) )
+    {
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        return;
+    }

     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
index 5330dfe..caefa50 100644
--- a/xen/include/asm-arm/event.h
+++ b/xen/include/asm-arm/event.h
@@ -19,6 +19,9 @@ static inline int local_events_need_delivery_nomask(void)
     struct pending_irq *p = irq_to_pending(current,
                                            current->domain->arch.evtchn_irq);

+    /* Does not work for LPIs. */
+    ASSERT(!is_lpi(current->domain->arch.evtchn_irq));
+
     /* XXX: if the first interrupt has already been delivered, we should
      * check whether any other interrupts with priority higher than the
      * one in GICV_IAR are in the lr_pending queue or in the LR