From patchwork Thu Mar 16 11:20:12 2017
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 9627921
From: Andre Przywara <andre.przywara@arm.com>
To: Stefano Stabellini, Julien Grall
Date: Thu, 16 Mar 2017 11:20:12 +0000
Message-Id: <20170316112030.20419-10-andre.przywara@arm.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170316112030.20419-1-andre.przywara@arm.com>
References: <20170316112030.20419-1-andre.przywara@arm.com>
Cc: xen-devel@lists.xenproject.org, Shanker Donthineni, Vijay Kilari
Subject: [Xen-devel] [PATCH v2 09/27] ARM: GICv3: introduce separate pending_irq structs for LPIs

For the same reason that allocating a struct irq_desc for each possible
LPI is not an option, having a struct pending_irq for each LPI is also
not feasible. However, we only need these structs while an interrupt is
on a vCPU (or is about to be injected).

Maintain a per-vCPU list of such structs covering the lifecycle of a
guest LPI: we allocate new entries when necessary, but reuse previously
used entries whenever possible.

I added some locking around this list here, however my gut feeling is
that we don't need it because this is a per-vCPU structure anyway.
If someone could confirm this, I'd be grateful.

Teach the existing VGIC functions to find the right pointer when they
are given a virtual LPI number.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c           |  3 +++
 xen/arch/arm/vgic-v3.c       |  3 +++
 xen/arch/arm/vgic.c          | 64 +++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/domain.h |  2 ++
 xen/include/asm-arm/vgic.h   | 14 ++++++++++
 5 files changed, 83 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index a5348f2..bd3c032 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -509,6 +509,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
                 struct vcpu *v_target = vgic_get_target_vcpu(v, irq);
                 irq_set_affinity(p->desc, cpumask_of(v_target->processor));
             }
+            /* If this was an LPI, mark this struct as available again. */
+            if ( is_lpi(p->irq) )
+                p->irq = 0;
         }
     }
 }
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 1fadb00..b0653c2 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1426,6 +1426,9 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
     if ( v->vcpu_id == last_cpu || (v->vcpu_id == (d->max_vcpus - 1)) )
         v->arch.vgic.flags |= VGIC_V3_RDIST_LAST;
 
+    spin_lock_init(&v->arch.vgic.pending_lpi_list_lock);
+    INIT_LIST_HEAD(&v->arch.vgic.pending_lpi_list);
+
     return 0;
 }
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 364d5f0..e5cfa54 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -30,6 +30,8 @@
 #include
 #include
+#include
+#include
 #include
 
 static inline struct vgic_irq_rank *vgic_get_rank(struct vcpu *v, int rank)
@@ -61,7 +63,7 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
     return vgic_get_rank(v, rank);
 }
 
-static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
+void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
 {
     INIT_LIST_HEAD(&p->inflight);
     INIT_LIST_HEAD(&p->lr_queue);
@@ -244,10 +246,14 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
 
 static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
 {
-    struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
+    struct vgic_irq_rank *rank;
     unsigned long flags;
     int priority;
 
+    if ( is_lpi(virq) )
+        return vgic_lpi_get_priority(v->domain, virq);
+
+    rank = vgic_rank_irq(v, virq);
     vgic_lock_rank(v, rank, flags);
     priority = rank->priority[virq & INTERRUPT_RANK_MASK];
     vgic_unlock_rank(v, rank, flags);
@@ -446,13 +452,63 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
     return true;
 }
 
+/*
+ * Holding struct pending_irq's for each possible virtual LPI in each domain
+ * requires too much Xen memory, also a malicious guest could potentially
+ * spam Xen with LPI map requests. We cannot cover those with (guest allocated)
+ * ITS memory, so we use a dynamic scheme of allocating struct pending_irq's
+ * on demand.
+ */
+struct pending_irq *lpi_to_pending(struct vcpu *v, unsigned int lpi,
+                                   bool allocate)
+{
+    struct lpi_pending_irq *lpi_irq, *empty = NULL;
+
+    spin_lock(&v->arch.vgic.pending_lpi_list_lock);
+    list_for_each_entry(lpi_irq, &v->arch.vgic.pending_lpi_list, entry)
+    {
+        if ( lpi_irq->pirq.irq == lpi )
+        {
+            spin_unlock(&v->arch.vgic.pending_lpi_list_lock);
+            return &lpi_irq->pirq;
+        }
+
+        if ( lpi_irq->pirq.irq == 0 && !empty )
+            empty = lpi_irq;
+    }
+
+    if ( !allocate )
+    {
+        spin_unlock(&v->arch.vgic.pending_lpi_list_lock);
+        return NULL;
+    }
+
+    if ( !empty )
+    {
+        empty = xzalloc(struct lpi_pending_irq);
+        vgic_init_pending_irq(&empty->pirq, lpi);
+        list_add_tail(&empty->entry, &v->arch.vgic.pending_lpi_list);
+    } else
+    {
+        empty->pirq.status = 0;
+        empty->pirq.irq = lpi;
+    }
+
+    spin_unlock(&v->arch.vgic.pending_lpi_list_lock);
+
+    return &empty->pirq;
+}
+
 struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
 {
     struct pending_irq *n;
+
     /* Pending irqs allocation strategy: the first vgic.nr_spis irqs
      * are used for SPIs; the rests are used for per cpu irqs */
     if ( irq < 32 )
         n = &v->arch.vgic.pending_irqs[irq];
+    else if ( is_lpi(irq) )
+        n = lpi_to_pending(v, irq, true);
     else
         n = &v->domain->arch.vgic.pending_irqs[irq - 32];
     return n;
@@ -480,7 +536,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
 void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 {
     uint8_t priority;
-    struct pending_irq *iter, *n = irq_to_pending(v, virq);
+    struct pending_irq *iter, *n;
     unsigned long flags;
     bool running;
 
@@ -488,6 +544,8 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
+    n = irq_to_pending(v, virq);
+
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
     {
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 00b9c1a..f44a84b 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,6 +257,8 @@ struct arch_vcpu
         paddr_t rdist_base;
 #define VGIC_V3_RDIST_LAST (1 << 0)        /* last vCPU of the rdist */
         uint8_t flags;
+        struct list_head pending_lpi_list;
+        spinlock_t pending_lpi_list_lock;  /* protects the pending_lpi_list */
     } vgic;
 
     /* Timer registers  */
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 467333c..8f1099c 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -83,6 +83,12 @@ struct pending_irq
     struct list_head lr_queue;
 };
 
+struct lpi_pending_irq
+{
+    struct list_head entry;
+    struct pending_irq pirq;
+};
+
 #define NR_INTERRUPT_PER_RANK  32
 #define INTERRUPT_RANK_MASK (NR_INTERRUPT_PER_RANK - 1)
 
@@ -296,13 +302,21 @@ extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
+extern void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
+extern struct pending_irq *lpi_to_pending(struct vcpu *v, unsigned int irq,
+                                          bool allocate);
 extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
 extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
 extern bool vgic_emulate(struct cpu_user_regs *regs, union hsr hsr);
 extern void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n);
 extern void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n);
+/* placeholder function until the property table gets introduced */
+static inline int vgic_lpi_get_priority(struct domain *d, uint32_t vlpi)
+{
+    return 0xa;
+}
 extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
 int vgic_v2_init(struct domain *d, int *mmio_count);
 int vgic_v3_init(struct domain *d, int *mmio_count);
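
For reference, a minimal usage sketch (not part of the patch itself) of how a
caller could look up the state of a virtual LPI without forcing an allocation.
The function name query_lpi_state() is made up for illustration; it relies only
on is_lpi() and the lpi_to_pending() helper introduced above:

/* Illustration only: find the pending_irq for a virtual LPI that is already
 * on the per-vCPU list, without allocating a new entry. */
static struct pending_irq *query_lpi_state(struct vcpu *v, unsigned int vlpi)
{
    if ( !is_lpi(vlpi) )
        return NULL;

    /* allocate == false: return NULL rather than xzalloc()ing a new entry. */
    return lpi_to_pending(v, vlpi, false);
}

Injection paths instead go through vgic_vcpu_inject_irq(), which reaches
lpi_to_pending() with allocate == true via irq_to_pending(); once the LPI
leaves the LR, gic_update_one_lr() sets p->irq back to 0 so the entry can be
reused.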