From patchwork Thu May 4 15:31:17 2017
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 9712157
From: Andre Przywara <andre.przywara@arm.com>
To: Julien Grall, Stefano Stabellini
Date: Thu, 4 May 2017 16:31:17 +0100
Message-Id: <20170504153123.1204-5-andre.przywara@arm.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170504153123.1204-1-andre.przywara@arm.com>
References: <20170504153123.1204-1-andre.przywara@arm.com>
Cc: xen-devel@lists.xenproject.org
Subject: [Xen-devel] [RFC PATCH 04/10] ARM: vGIC: add struct pending_irq locking

Introduce the proper locking sequence for the new pending_irq lock.
This takes the lock around multiple accesses to struct members and
makes sure we observe the locking order (VGIC VCPU lock first, then
pending_irq lock).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
(An illustrative sketch of the lock nesting rules follows after the diff.)

 xen/arch/arm/gic.c  | 26 ++++++++++++++++++++++++++
 xen/arch/arm/vgic.c | 12 +++++++++++-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 67375a2..e175e9b 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -351,6 +351,7 @@ void gic_disable_cpu(void)
 static inline void gic_set_lr(int lr, struct pending_irq *p,
         unsigned int state)
 {
+    ASSERT(spin_is_locked(&p->lock));
     ASSERT(!local_irq_is_enabled());
 
     gic_hw_ops->update_lr(lr, p, state);
@@ -413,6 +414,7 @@ void gic_raise_guest_irq(struct vcpu *v, struct pending_irq *p)
     unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
+    ASSERT(spin_is_locked(&p->lock));
 
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
@@ -439,6 +441,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
     gic_hw_ops->read_lr(i, &lr_val);
     irq = lr_val.virq;
     p = irq_to_pending(v, irq);
+    spin_lock(&p->lock);
     if ( lr_val.state & GICH_LR_ACTIVE )
     {
         set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
@@ -495,6 +498,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
             }
         }
     }
+    spin_unlock(&p->lock);
 }
 
 void gic_clear_lrs(struct vcpu *v)
@@ -545,14 +549,30 @@ static void gic_restore_pending_irqs(struct vcpu *v)
             /* No more free LRs: find a lower priority irq to evict */
             list_for_each_entry_reverse( p_r, inflight_r, inflight )
             {
+                if ( p_r->irq < p->irq )
+                {
+                    spin_lock(&p_r->lock);
+                    spin_lock(&p->lock);
+                }
+                else
+                {
+                    spin_lock(&p->lock);
+                    spin_lock(&p_r->lock);
+                }
                 if ( p_r->priority == p->priority )
+                {
+                    spin_unlock(&p->lock);
+                    spin_unlock(&p_r->lock);
                     goto out;
+                }
                 if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
                      !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
                     goto found;
             }
             /* We didn't find a victim this time, and we won't next
              * time, so quit */
+            spin_unlock(&p->lock);
+            spin_unlock(&p_r->lock);
             goto out;
 
 found:
@@ -562,12 +582,18 @@ found:
                 clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
                 gic_add_to_lr_pending(v, p_r);
                 inflight_r = &p_r->inflight;
+
+                spin_unlock(&p_r->lock);
             }
+            else
+                spin_lock(&p->lock);
 
             gic_set_lr(lr, p, GICH_LR_PENDING);
             list_del_init(&p->lr_queue);
             set_bit(lr, &this_cpu(lr_mask));
 
+            spin_unlock(&p->lock);
+
             /* We can only evict nr_lrs entries */
             lrs--;
             if ( lrs == 0 )
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index f4ae454..44363bb 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -356,11 +356,16 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         v_target = vgic_get_target_vcpu(v, irq);
+
+        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
         p = irq_to_pending(v_target, irq);
+        spin_lock(&p->lock);
+
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
+
         if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
             gic_raise_guest_irq(v_target, p);
+        spin_unlock(&p->lock);
         spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
         if ( p->desc != NULL )
         {
@@ -482,10 +487,12 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
         return;
     }
 
+    spin_lock(&n->lock);
     set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);
 
     if ( !list_empty(&n->inflight) )
     {
+        spin_unlock(&n->lock);
         gic_raise_inflight_irq(v, n);
         goto out;
     }
@@ -501,10 +508,13 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
         if ( iter->priority > priority )
         {
             list_add_tail(&n->inflight, &iter->inflight);
+            spin_unlock(&n->lock);
             goto out;
         }
     }
     list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+    spin_unlock(&n->lock);
+
 out:
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     /* we have a new higher priority irq, inject it into the guest */
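
For readers who want the locking rules at a glance, here is the illustrative
sketch referred to above. It is a minimal, standalone example and not Xen
code: the struct and field names are cut-down stand-ins for Xen's
struct pending_irq and v->arch.vgic, pthread mutexes stand in for Xen's
spinlocks, and update_one_irq()/lock_irq_pair() are hypothetical helpers.
Only the nesting discipline is meant to match what the patch does.

#include <pthread.h>

struct pending_irq {                /* stand-in for Xen's struct pending_irq */
    unsigned int irq;               /* virtual IRQ number */
    pthread_mutex_t lock;           /* stands in for the new per-IRQ lock */
    unsigned long status;
};

struct vgic_cpu {                   /* stand-in for v->arch.vgic */
    pthread_mutex_t lock;           /* per-VCPU vGIC lock */
};

/* Rule 1: the per-VCPU vGIC lock is always taken before a pending_irq lock. */
void update_one_irq(struct vgic_cpu *vgic, struct pending_irq *p)
{
    pthread_mutex_lock(&vgic->lock);
    pthread_mutex_lock(&p->lock);
    /* ... manipulate p->status, LR state, inflight/lr_pending lists ... */
    pthread_mutex_unlock(&p->lock);
    pthread_mutex_unlock(&vgic->lock);
}

/*
 * Rule 2: when two pending_irq locks nest (as for p and p_r in
 * gic_restore_pending_irqs), they are taken in ascending IRQ-number order.
 * a and b are assumed to refer to different IRQs.
 */
void lock_irq_pair(struct pending_irq *a, struct pending_irq *b)
{
    if ( a->irq < b->irq )
    {
        pthread_mutex_lock(&a->lock);
        pthread_mutex_lock(&b->lock);
    }
    else
    {
        pthread_mutex_lock(&b->lock);
        pthread_mutex_lock(&a->lock);
    }
}

Because every path takes the VCPU vGIC lock before any pending_irq lock, and
nested pending_irq locks are always acquired in ascending IRQ order, no two
paths can acquire these locks in conflicting order, which is why
gic_restore_pending_irqs sorts the p/p_r pair by IRQ number before comparing
priorities.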