From patchwork Fri Jun 9 17:41:21 2017
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 9779033
From: Andre Przywara
To: Julien Grall, Stefano Stabellini
Cc: xen-devel@lists.xenproject.org, Vijaya Kumar K, Vijay Kilari,
    Shanker Donthineni, Manish Jaggi
Date: Fri, 9 Jun 2017 18:41:21 +0100
Message-Id: <20170609174141.5068-15-andre.przywara@arm.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170609174141.5068-1-andre.przywara@arm.com>
References: <20170609174141.5068-1-andre.przywara@arm.com>
Subject: [Xen-devel] [PATCH v11 14/34] ARM: GICv3: forward pending LPIs to guests

Upon receiving an LPI on the host, we need to find the right VCPU and
virtual IRQ number to get this IRQ injected.
Iterate our two-level LPI table to find the domain ID and the virtual
LPI number quickly when the host takes an LPI. We then look up the
right VCPU in the struct pending_irq.
We use the existing injection function to let the GIC emulation deal
with this interrupt.

This introduces do_LPI() as a new hardware gic_ops callback.

Signed-off-by: Andre Przywara
Acked-by: Julien Grall
---
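Note (not an actual hunk of this patch): the commit message refers to the
two-level host LPI table, but the table walk itself lives in
gic_get_host_lpi(), which the diff below only calls. Purely for
illustration, the sketch below shows one plausible shape of that lookup.
The union layout mirrors the fields gicv3_do_LPI() reads (data, virt_lpi,
dom_id); LPI_OFFSET, HOST_LPIS_PER_PAGE, lpi_data and example_get_host_lpi
are names assumed for this sketch only, and PAGE_SIZE is taken to be the
usual Xen page size macro.

#define LPI_OFFSET          8192    /* architecturally the first LPI INTID */
#define HOST_LPIS_PER_PAGE  (PAGE_SIZE / sizeof(union host_lpi))

union host_lpi {
    uint64_t data;
    struct {
        uint32_t virt_lpi;          /* virtual LPI number to inject */
        uint16_t dom_id;            /* owning domain */
        uint16_t pad;
    };
};

static struct {
    union host_lpi **host_lpis;     /* first level: one pointer per page of entries */
    unsigned long max_host_lpi_ids; /* number of host LPIs we can handle */
} lpi_data;

static union host_lpi *example_get_host_lpi(uint32_t plpi)
{
    union host_lpi *block;

    if ( plpi < LPI_OFFSET || plpi >= lpi_data.max_host_lpi_ids + LPI_OFFSET )
        return NULL;

    plpi -= LPI_OFFSET;

    /* First level: find the page that holds this LPI's entry. */
    block = lpi_data.host_lpis[plpi / HOST_LPIS_PER_PAGE];
    if ( !block )
        return NULL;

    /* Second level: index into that page. */
    return &block[plpi % HOST_LPIS_PER_PAGE];
}

Keeping one 64-bit entry per host LPI is what lets gicv3_do_LPI() below
resolve the owning domain and virtual LPI with a single lock-free
read_u64_atomic() on the interrupt path.
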
 xen/arch/arm/gic-v2.c            |  7 ++++
 xen/arch/arm/gic-v3-lpi.c        | 79 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/gic-v3.c            |  1 +
 xen/arch/arm/gic.c               |  8 +++-
 xen/include/asm-arm/domain.h     |  3 +-
 xen/include/asm-arm/gic.h        |  2 +
 xen/include/asm-arm/gic_v3_its.h | 10 +++++
 7 files changed, 108 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 270a136..ffbe47c 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1217,6 +1217,12 @@ static int __init gicv2_init(void)
     return 0;
 }
 
+static void gicv2_do_LPI(unsigned int lpi)
+{
+    /* No LPIs in a GICv2 */
+    BUG();
+}
+
 const static struct gic_hw_operations gicv2_ops = {
     .info                = &gicv2_info,
     .init                = gicv2_init,
@@ -1244,6 +1250,7 @@ const static struct gic_hw_operations gicv2_ops = {
     .make_hwdom_madt     = gicv2_make_hwdom_madt,
     .map_hwdom_extra_mappings = gicv2_map_hwdown_extra_mappings,
     .iomem_deny_access   = gicv2_iomem_deny_access,
+    .do_LPI              = gicv2_do_LPI,
 };
 
 /* Set up the GIC */
diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index dbaf45a..03d23b6 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -136,6 +136,85 @@ uint64_t gicv3_get_redist_address(unsigned int cpu, bool use_pta)
         return per_cpu(lpi_redist, cpu).redist_id << 16;
 }
 
+void vgic_vcpu_inject_lpi(struct domain *d, unsigned int virq)
+{
+    /*
+     * TODO: this assumes that the struct pending_irq stays valid all of
+     * the time. We cannot properly protect this with the current locking
+     * scheme, but the future per-IRQ lock will solve this problem.
+     */
+    struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
+    unsigned int vcpu_id;
+
+    if ( !p )
+        return;
+
+    vcpu_id = ACCESS_ONCE(p->lpi_vcpu_id);
+    if ( vcpu_id >= d->max_vcpus )
+        return;
+
+    vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
+}
+
+/*
+ * Handle incoming LPIs, which are a bit special, because they are potentially
+ * numerous and also only get injected into guests. Treat them specially here,
+ * by just looking up their target vCPU and virtual LPI number and handing
+ * them over to the injection function.
+ * Please note that LPIs are edge-triggered only and have no active state,
+ * so spurious interrupts on the host side are no issue (we can just ignore
+ * them).
+ * Also a guest cannot expect that interrupts which fire before they have
+ * been fully configured will reach the CPU, so we don't need to care about
+ * this special case.
+ */
+void gicv3_do_LPI(unsigned int lpi)
+{
+    struct domain *d;
+    union host_lpi *hlpip, hlpi;
+
+    irq_enter();
+
+    /* EOI the LPI already. */
+    WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
+
+    /* Find out if a guest mapped something to this physical LPI. */
+    hlpip = gic_get_host_lpi(lpi);
+    if ( !hlpip )
+        goto out;
+
+    hlpi.data = read_u64_atomic(&hlpip->data);
+
+    /*
+     * Unmapped events are marked with an invalid LPI ID. We can safely
+     * ignore them, as they have no further state and no-one can expect
+     * to see them if they have not been mapped.
+     */
+    if ( hlpi.virt_lpi == INVALID_LPI )
+        goto out;
+
+    d = rcu_lock_domain_by_id(hlpi.dom_id);
+    if ( !d )
+        goto out;
+
+    /*
+     * TODO: Investigate what to do here for potential interrupt storms.
+     * As we keep all host LPIs enabled, for disabling LPIs we would need
+     * to queue an ITS host command, which we avoid so far during a guest's
+     * runtime. Also re-enabling would trigger a host command upon the
+     * guest sending a command, which could be an attack vector for
+     * hogging the host command queue.
+     * See the thread around here for some background:
+     * https://lists.xen.org/archives/html/xen-devel/2016-12/msg00003.html
+     */
+    vgic_vcpu_inject_lpi(d, hlpi.virt_lpi);
+
+    rcu_unlock_domain(d);
+
+out:
+    irq_exit();
+}
+
 static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
 {
     uint64_t val;
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index fc3614e..d539d6c 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1692,6 +1692,7 @@ static const struct gic_hw_operations gicv3_ops = {
     .make_hwdom_dt_node  = gicv3_make_hwdom_dt_node,
     .make_hwdom_madt     = gicv3_make_hwdom_madt,
     .iomem_deny_access   = gicv3_iomem_deny_access,
+    .do_LPI              = gicv3_do_LPI,
 };
 
 static int __init gicv3_dt_preinit(struct dt_device_node *node, const void *data)
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 36e340b..9597ef8 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -732,7 +732,13 @@ void gic_interrupt(struct cpu_user_regs *regs, int is_fiq)
             do_IRQ(regs, irq, is_fiq);
             local_irq_disable();
         }
-        else if (unlikely(irq < 16))
+        else if ( is_lpi(irq) )
+        {
+            local_irq_enable();
+            gic_hw_ops->do_LPI(irq);
+            local_irq_disable();
+        }
+        else if ( unlikely(irq < 16) )
         {
             do_sgi(regs, irq);
         }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 3d8e84c..ebaea35 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -260,7 +260,8 @@ struct arch_vcpu
 
         /* GICv3: redistributor base and flags for this vCPU */
         paddr_t rdist_base;
-#define VGIC_V3_RDIST_LAST  (1 << 0)        /* last vCPU of the rdist */
+#define VGIC_V3_RDIST_LAST      (1 << 0)    /* last vCPU of the rdist */
+#define VGIC_V3_LPIS_ENABLED    (1 << 1)
         uint8_t flags;
     } vgic;
 
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 5d5b4cc..783937b 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -367,6 +367,8 @@ struct gic_hw_operations {
     int (*map_hwdom_extra_mappings)(struct domain *d);
     /* Deny access to GIC regions */
    int (*iomem_deny_access)(const struct domain *d);
+    /* Handle LPIs, which require special handling */
+    void (*do_LPI)(unsigned int lpi);
 };
 
 void register_gic_ops(const struct gic_hw_operations *ops);
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 29559a3..a659184 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -134,6 +134,8 @@ void gicv3_its_dt_init(const struct dt_device_node *node);
 
 bool gicv3_its_host_has_its(void);
 
+void gicv3_do_LPI(unsigned int lpi);
+
 int gicv3_lpi_init_rdist(void __iomem * rdist_base);
 
 /* Initialize the host structures for LPIs and the host ITSes. */
@@ -164,6 +166,8 @@ int gicv3_its_map_guest_device(struct domain *d,
 int gicv3_allocate_host_lpi_block(struct domain *d, uint32_t *first_lpi);
 void gicv3_free_host_lpi_block(uint32_t first_lpi);
 
+void vgic_vcpu_inject_lpi(struct domain *d, unsigned int virq);
+
 #else
 
 static inline void gicv3_its_dt_init(const struct dt_device_node *node)
@@ -175,6 +179,12 @@ static inline bool gicv3_its_host_has_its(void)
     return false;
 }
 
+static inline void gicv3_do_LPI(unsigned int lpi)
+{
+    /* We don't enable LPIs without an ITS. */
+    BUG();
+}
+
 static inline int gicv3_lpi_init_rdist(void __iomem * rdist_base)
 {
     return -ENODEV;
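
Another note outside the patch itself: the is_lpi() predicate used in the
gic.c hunk is assumed to exist already. Since the GICv3 architecture
reserves INTIDs from 8192 upwards for LPIs, such a check only needs a
range comparison; a minimal sketch, with LPI_OFFSET again being an
illustrative name:

#define LPI_OFFSET  8192    /* lowest INTID the GICv3 architecture uses for LPIs */

static inline bool is_lpi(unsigned int irq)
{
    return irq >= LPI_OFFSET;
}

This keeps the new branch in gic_interrupt() as cheap as the existing
SGI/SPI range checks on the interrupt path.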