From patchwork Thu Dec 15 01:04:53 2016
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 9475281
Date: Wed, 14 Dec 2016 17:04:53 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
To: julien.grall@arm.com
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org
Subject: [Xen-devel] [PATCH] xen/arm: fix rank/vgic locks inversion bug

The locking order is: first rank lock, then vgic lock. The order is
respected everywhere except for gic_update_one_lr, which is called with
the vgic lock held but calls vgic_get_target_vcpu, which tries to
obtain the rank lock. This can cause deadlocks.

We already have a version of vgic_get_target_vcpu that doesn't take the
rank lock: __vgic_get_target_vcpu. Also, the only routines that modify
the target vcpu are vgic_store_itargetsr and vgic_store_irouter. They
call vgic_migrate_irq, which already takes the vgic lock.

Solve the lock inversion problem by not taking the rank lock in
gic_update_one_lr (calling __vgic_get_target_vcpu instead of
vgic_get_target_vcpu). We make this safe by placing the modifications
to rank->vcpu within regions protected by the vgic lock.

Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
---
There are supposed to be 2 Coverity IDs associated with this, but the
system is currently down.
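
To make the inversion easier to see outside of Xen, below is a minimal
standalone sketch (plain pthreads, not Xen code): rank_lock/vgic_lock and
the two thread functions are illustrative stand-ins for the rank and vgic
locks, for vgic_store_itargetsr, and for the pre-patch gic_update_one_lr.
One thread takes the locks in the documented order, the other in the
inverted order, which is the AB-BA pattern that can deadlock:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t rank_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vgic_lock = PTHREAD_MUTEX_INITIALIZER;

/* Path that follows the documented order: rank lock, then vgic lock. */
static void *itargetsr_writer(void *arg)
{
    (void)arg;
    for ( int i = 0; i < 100000; i++ )
    {
        pthread_mutex_lock(&rank_lock);
        pthread_mutex_lock(&vgic_lock);   /* like vgic_migrate_irq */
        pthread_mutex_unlock(&vgic_lock);
        pthread_mutex_unlock(&rank_lock);
    }
    return NULL;
}

/* Pre-patch gic_update_one_lr: vgic lock held, then takes the rank lock. */
static void *lr_updater(void *arg)
{
    (void)arg;
    for ( int i = 0; i < 100000; i++ )
    {
        pthread_mutex_lock(&vgic_lock);
        pthread_mutex_lock(&rank_lock);   /* like vgic_get_target_vcpu */
        pthread_mutex_unlock(&rank_lock);
        pthread_mutex_unlock(&vgic_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, itargetsr_writer, NULL);
    pthread_create(&b, NULL, lr_updater, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("finished without deadlocking this time\n");
    return 0;
}

With the patch below, the second path no longer takes the rank lock (it
uses __vgic_get_target_vcpu while already holding the vgic lock), so the
inverted ordering disappears.
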
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index a5348f2..3483192 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -506,7 +506,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
             list_del_init(&p->inflight);
             if ( test_and_clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
             {
-                struct vcpu *v_target = vgic_get_target_vcpu(v, irq);
+                struct vcpu *v_target = __vgic_get_target_vcpu(v, irq);
                 irq_set_affinity(p->desc, cpumask_of(v_target->processor));
             }
         }
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 3dbcfe8..666ebdb 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -95,6 +95,7 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
 {
     unsigned int i;
     unsigned int virq;
+    unsigned long flags;
 
     ASSERT(spin_is_locked(&rank->lock));
 
@@ -116,6 +117,7 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
     {
         unsigned int new_target, old_target;
         uint8_t new_mask;
+        struct vcpu *old;
 
         /*
          * Don't need to mask as we rely on new_mask to fit for only one
@@ -153,16 +155,18 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
             new_target--;
 
         old_target = rank->vcpu[offset];
+        old = d->vcpu[old_target];
 
+        spin_lock_irqsave(&old->arch.vgic.lock, flags);
         /* Only migrate the vIRQ if the target vCPU has changed */
         if ( new_target != old_target )
        {
-            vgic_migrate_irq(d->vcpu[old_target],
+            vgic_migrate_irq(old,
                              d->vcpu[new_target],
                              virq);
         }
-
         rank->vcpu[offset] = new_target;
+        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
     }
 }
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index d61479d..880c643 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -122,6 +122,7 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
 {
     struct vcpu *new_vcpu, *old_vcpu;
     unsigned int virq;
+    unsigned long flags;
 
     /* There is 1 vIRQ per IROUTER */
     virq = offset / NR_BYTES_PER_IROUTER;
@@ -150,11 +151,13 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
     if ( !new_vcpu )
         return;
 
+    spin_lock_irqsave(&old_vcpu->arch.vgic.lock, flags);
     /* Only migrate the IRQ if the target vCPU has changed */
     if ( new_vcpu != old_vcpu )
         vgic_migrate_irq(old_vcpu, new_vcpu, virq);
 
     rank->vcpu[offset] = new_vcpu->vcpu_id;
+    spin_unlock_irqrestore(&old_vcpu->arch.vgic.lock, flags);
 }
 
 static inline bool vgic_reg64_check_access(struct hsr_dabt dabt)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 364d5f0..4f7971f 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -219,7 +219,7 @@ int vcpu_vgic_free(struct vcpu *v)
 }
 
 /* The function should be called by rank lock taken. */
-static struct vcpu *__vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
+struct vcpu *__vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
 {
     struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
 
@@ -257,9 +257,11 @@ static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
 
 void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 {
-    unsigned long flags;
     struct pending_irq *p = irq_to_pending(old, irq);
 
+    ASSERT(spin_is_locked(&old->arch.vgic.lock));
+    ASSERT(!local_irq_is_enabled());
+
     /* nothing to do for virtual interrupts */
     if ( p->desc == NULL )
         return;
@@ -270,12 +272,9 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 
     perfc_incr(vgic_irq_migrates);
 
-    spin_lock_irqsave(&old->arch.vgic.lock, flags);
-
     if ( list_empty(&p->inflight) )
     {
         irq_set_affinity(p->desc, cpumask_of(new->processor));
-        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         return;
     }
     /* If the IRQ is still lr_pending, re-inject it to the new vcpu */
@@ -285,7 +284,6 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
         list_del_init(&p->lr_queue);
         list_del_init(&p->inflight);
         irq_set_affinity(p->desc, cpumask_of(new->processor));
-        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         vgic_vcpu_inject_irq(new, irq);
         return;
     }
@@ -293,8 +291,6 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
      * and wait for the EOI */
     if ( !list_empty(&p->inflight) )
         set_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
-
-    spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
 }
 
 void arch_move_irqs(struct vcpu *v)
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 672f649..c911b33 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -293,6 +293,7 @@ extern int domain_vgic_init(struct domain *d, unsigned int nr_spis);
 extern void domain_vgic_free(struct domain *d);
 extern int vcpu_vgic_init(struct vcpu *v);
 extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq);
+extern struct vcpu *__vgic_get_target_vcpu(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);