From patchwork Fri Jul 21 19:59:49 2017
X-Patchwork-Submitter: Andre Przywara <andre.przywara@arm.com>
X-Patchwork-Id: 9857609
From: Andre Przywara <andre.przywara@arm.com>
To: Julien Grall, Stefano Stabellini
Date: Fri, 21 Jul 2017 20:59:49 +0100
Message-Id: <20170721200010.29010-2-andre.przywara@arm.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170721200010.29010-1-andre.przywara@arm.com>
References: <20170721200010.29010-1-andre.przywara@arm.com>
Cc: xen-devel@lists.xenproject.org
Subject: [Xen-devel] [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock

Currently we protect the pending_irq structure with the corresponding
VGIC VCPU lock. There are problems in certain corner cases (for
instance if an IRQ is migrating), so let's introduce a per-IRQ lock,
which will protect the consistency of this structure independently of
any VCPU.
For now this just introduces and initializes the lock; it also adds
wrapper macros to simplify its usage (and help debugging).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c        |  1 +
 xen/include/asm-arm/vgic.h | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 1e5107b..38dacd3 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -69,6 +69,7 @@ void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
     memset(p, 0, sizeof(*p));
     INIT_LIST_HEAD(&p->inflight);
     INIT_LIST_HEAD(&p->lr_queue);
+    spin_lock_init(&p->lock);
     p->irq = virq;
     p->lpi_vcpu_id = INVALID_VCPU_ID;
 }
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index d4ed23d..1c38b9a 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -90,6 +90,14 @@ struct pending_irq
      * TODO: when implementing irq migration, taking only the current
      * vgic lock is not going to be enough. */
     struct list_head lr_queue;
+    /* The lock protects the consistency of this structure. A single status
+     * bit can be read and/or set without holding the lock, using the atomic
+     * set_bit/clear_bit/test_bit functions; however, accessing multiple bits
+     * or bits in relation to other members of this struct requires the lock.
+     * The list_head members are protected by their corresponding VCPU lock;
+     * holding this pending_irq lock is not sufficient to query or change
+     * list order or affiliation. */
+    spinlock_t lock;
 };
 
 #define NR_INTERRUPT_PER_RANK  32
@@ -156,6 +164,9 @@ struct vgic_ops {
 #define vgic_lock(v)   spin_lock_irq(&(v)->domain->arch.vgic.lock)
 #define vgic_unlock(v) spin_unlock_irq(&(v)->domain->arch.vgic.lock)
 
+#define vgic_irq_lock(p, flags) spin_lock_irqsave(&(p)->lock, flags)
+#define vgic_irq_unlock(p, flags) spin_unlock_irqrestore(&(p)->lock, flags)
+
 #define vgic_lock_rank(v, r, flags)   spin_lock_irqsave(&(r)->lock, flags)
 #define vgic_unlock_rank(v, r, flags) spin_unlock_irqrestore(&(r)->lock, flags)
 
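For illustration only, a minimal sketch of how a caller might use the new
wrapper macros once the lock is actually taken by later patches in this
series: the function name and the new_priority parameter below are made up,
and the priority and status fields plus the GIC_IRQ_GUEST_QUEUED bit are
assumed to be the existing members of struct pending_irq that are not shown
in this hunk.

/* Hypothetical caller, not part of this patch. It changes several members
 * of a struct pending_irq together, so per the comment above it must hold
 * the new per-IRQ lock; a single status bit on its own could instead be
 * flipped with the atomic set_bit/clear_bit helpers without the lock. */
static void example_set_priority_and_queue(struct pending_irq *p,
                                           uint8_t new_priority)
{
    unsigned long flags;

    vgic_irq_lock(p, flags);

    /* Multiple members are updated as one consistent change. */
    p->priority = new_priority;
    set_bit(GIC_IRQ_GUEST_QUEUED, &p->status);

    vgic_irq_unlock(p, flags);
}

Note that, as the struct comment says, this lock would still not be enough
to move the pending_irq between the inflight or lr_queue lists; that still
requires the corresponding VCPU's VGIC lock.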