From patchwork Fri Jul 21 20:00:05 2017
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 9857593
From: Andre Przywara <andre.przywara@arm.com>
To: Julien Grall, Stefano Stabellini
Cc: xen-devel@lists.xenproject.org
Date: Fri, 21 Jul 2017 21:00:05 +0100
Message-Id: <20170721200010.29010-18-andre.przywara@arm.com>
In-Reply-To: <20170721200010.29010-1-andre.przywara@arm.com>
References: <20170721200010.29010-1-andre.przywara@arm.com>
Subject: [Xen-devel] [RFC PATCH v2 17/22] ARM: vGIC: introduce vgic_lock_vcpu_irq()
Since a VCPU can own multiple IRQs, the natural locking order is to take
a VCPU lock first, then the individual per-IRQ locks.
However there are situations where the target VCPU is not known without
looking into the struct pending_irq first, which usually means we need to
take the IRQ lock first.
To solve this problem, we provide a function called vgic_lock_vcpu_irq(),
which takes a locked struct pending_irq and returns with *both* the VCPU
and the IRQ lock held.
This is done by looking up the target VCPU, then briefly dropping the
IRQ lock, taking the VCPU lock, then grabbing the per-IRQ lock again.
Before returning there is a check whether something has changed in the
brief period where we didn't hold the IRQ lock, retrying in this (very
rare) case.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 1ba0010..0e6dfe5 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -224,6 +224,48 @@ int vcpu_vgic_free(struct vcpu *v)
     return 0;
 }
 
+/**
+ * vgic_lock_vcpu_irq(): lock both the pending_irq and the corresponding VCPU
+ *
+ * @v: the VCPU (for private IRQs)
+ * @p: pointer to the locked struct pending_irq
+ * @flags: pointer to the IRQ flags used when locking the VCPU
+ *
+ * The function takes a locked IRQ and returns with both the IRQ and the
+ * corresponding VCPU locked. This is non-trivial due to the locking order
+ * being actually the other way round (VCPU first, then IRQ).
+ *
+ * Returns: pointer to the VCPU this IRQ is targeting.
+ */
+struct vcpu *vgic_lock_vcpu_irq(struct vcpu *v, struct pending_irq *p,
+                                unsigned long *flags)
+{
+    struct vcpu *target_vcpu;
+
+    ASSERT(spin_is_locked(&p->lock));
+
+    target_vcpu = vgic_get_target_vcpu(v, p);
+    spin_unlock(&p->lock);
+
+    do
+    {
+        struct vcpu *current_vcpu;
+
+        spin_lock_irqsave(&target_vcpu->arch.vgic.lock, *flags);
+        spin_lock(&p->lock);
+
+        current_vcpu = vgic_get_target_vcpu(v, p);
+
+        if ( target_vcpu->vcpu_id == current_vcpu->vcpu_id )
+            return target_vcpu;
+
+        spin_unlock(&p->lock);
+        spin_unlock_irqrestore(&target_vcpu->arch.vgic.lock, *flags);
+
+        target_vcpu = current_vcpu;
+    } while (1);
+}
+
 struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p)
 {
     struct vgic_irq_rank *rank = vgic_rank_irq(v, p->irq);