From patchwork Fri Jul 21 19:59:57 2017
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 9857615
From: Andre Przywara <andre.przywara@arm.com>
To: Julien Grall, Stefano Stabellini
Cc: xen-devel@lists.xenproject.org
Subject: [Xen-devel] [RFC PATCH v2 09/22] ARM: vITS: protect LPI priority update with pending_irq lock
Date: Fri, 21 Jul 2017 20:59:57 +0100
Message-Id: <20170721200010.29010-10-andre.przywara@arm.com>
In-Reply-To: <20170721200010.29010-1-andre.przywara@arm.com>
References: <20170721200010.29010-1-andre.przywara@arm.com>
X-Mailer: git-send-email 2.9.0

As the priority value is now officially a member of struct pending_irq,
we need to take its lock when manipulating it via ITS commands.
Make sure we take the IRQ lock after the VCPU lock when we need both.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)
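
[ Note (illustrative only, not part of the patch): the locking discipline
  the commit message describes, sketched in one place. It assumes
  vgic_irq_lock()/vgic_irq_unlock() are the pending_irq lock helpers
  introduced earlier in this series and that, as their use below
  suggests, they behave like irqsave-style wrappers around p->lock.

      unsigned long flags, vcpu_flags;

      /* First take the per-VCPU VGIC lock... */
      spin_lock_irqsave(&vcpu->arch.vgic.lock, vcpu_flags);
      /* ...then the per-IRQ lock, always nested inside it. */
      vgic_irq_lock(p, flags);

      /* Manipulate p->priority and other pending_irq state here. */

      /* Drop the locks in strict reverse order. */
      vgic_irq_unlock(p, flags);
      spin_unlock_irqrestore(&vcpu->arch.vgic.lock, vcpu_flags);
]
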
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 66095d4..705708a 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -402,6 +402,7 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
     uint8_t property;
     int ret;
 
+    ASSERT(spin_is_locked(&p->lock));
     /*
      * If no redistributor has its LPIs enabled yet, we can't access the
      * property table. In this case we just can't update the properties,
@@ -419,7 +420,7 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
     if ( ret )
         return ret;
 
-    write_atomic(&p->priority, property & LPI_PROP_PRIO_MASK);
+    p->priority = property & LPI_PROP_PRIO_MASK;
 
     if ( property & LPI_PROP_ENABLED )
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
@@ -457,7 +458,7 @@ static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
     uint32_t devid = its_cmd_get_deviceid(cmdptr);
     uint32_t eventid = its_cmd_get_id(cmdptr);
     struct pending_irq *p;
-    unsigned long flags;
+    unsigned long flags, vcpu_flags;
     struct vcpu *vcpu;
     uint32_t vlpi;
     int ret = -1;
@@ -485,7 +486,8 @@ static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
     if ( unlikely(!p) )
         goto out_unlock_its;
 
-    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+    spin_lock_irqsave(&vcpu->arch.vgic.lock, vcpu_flags);
+    vgic_irq_lock(p, flags);
 
     /* Read the property table and update our cached status. */
     if ( update_lpi_property(d, p) )
@@ -497,7 +499,8 @@ static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
     ret = 0;
 
 out_unlock:
-    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+    vgic_irq_unlock(p, flags);
+    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, vcpu_flags);
 
 out_unlock_its:
     spin_unlock(&its->its_lock);
@@ -517,7 +520,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
     struct pending_irq *pirqs[16];
     uint64_t vlpi = 0;          /* 64-bit to catch overflows */
     unsigned int nr_lpis, i;
-    unsigned long flags;
+    unsigned long flags, vcpu_flags;
     int ret = 0;
 
     /*
@@ -542,7 +545,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
     vcpu = get_vcpu_from_collection(its, collid);
     spin_unlock(&its->its_lock);
 
-    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+    spin_lock_irqsave(&vcpu->arch.vgic.lock, vcpu_flags);
     read_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
 
     do
@@ -555,9 +558,13 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
 
         for ( i = 0; i < nr_lpis; i++ )
         {
+            vgic_irq_lock(pirqs[i], flags);
             /* We only care about LPIs on our VCPU. */
             if ( pirqs[i]->lpi_vcpu_id != vcpu->vcpu_id )
+            {
+                vgic_irq_unlock(pirqs[i], flags);
                 continue;
+            }
 
             vlpi = pirqs[i]->irq;
             /* If that fails for a single LPI, carry on to handle the rest. */
@@ -566,6 +573,8 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
                 update_lpi_vgic_status(vcpu, pirqs[i]);
             else
                 ret = err;
+
+            vgic_irq_unlock(pirqs[i], flags);
         }
         /*
          * Loop over the next gang of pending_irqs until we reached the end of
@@ -576,7 +585,7 @@ static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
              (nr_lpis == ARRAY_SIZE(pirqs)) );
 
     read_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
-    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, vcpu_flags);
 
     return ret;
 }
@@ -712,6 +721,7 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
     uint32_t intid = its_cmd_get_physical_id(cmdptr), _intid;
     uint16_t collid = its_cmd_get_collection(cmdptr);
     struct pending_irq *pirq;
+    unsigned long flags;
     struct vcpu *vcpu = NULL;
     int ret = -1;
 
@@ -765,7 +775,9 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
      * We don't need the VGIC VCPU lock here, because the pending_irq isn't
      * in the radix tree yet.
      */
+    vgic_irq_lock(pirq, flags);
     ret = update_lpi_property(its->d, pirq);
+    vgic_irq_unlock(pirq, flags);
     if ( ret )
         goto out_remove_host_entry;
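
[ For readers without the earlier patches of this series at hand: the
  sketch below is a guess at the shape of the vgic_irq_lock() helpers,
  inferred purely from their use above (irqsave semantics, operating on
  the pending_irq->lock field that the ASSERT in update_lpi_property()
  checks). The series' actual definition may differ.

      #define vgic_irq_lock(p, flags) \
          spin_lock_irqsave(&(p)->lock, flags)
      #define vgic_irq_unlock(p, flags) \
          spin_unlock_irqrestore(&(p)->lock, flags)
]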