From patchwork Thu May  4 15:31:22 2017
From: Andre Przywara <andre.przywara@arm.com>
To: Julien Grall, Stefano Stabellini
Cc: xen-devel@lists.xenproject.org
Date: Thu, 4 May 2017 16:31:22 +0100
Message-Id: <20170504153123.1204-10-andre.przywara@arm.com>
In-Reply-To: <20170504153123.1204-1-andre.przywara@arm.com>
References: <20170504153123.1204-1-andre.przywara@arm.com>
Subject: [Xen-devel] [RFC PATCH 09/10] ARM: vGIC: introduce vgic_get/put_pending_irq
So far there is always a statically allocated struct pending_irq for
each interrupt that we deal with. To prepare for dynamically allocated
LPIs, introduce get/put wrappers for obtaining a pending_irq pointer.
For now get() just returns the same pointer and put() is empty, but
this change makes it easy to introduce ref-counting later, to prevent
use-after-free of struct pending_irq's once LPIs get unmapped from a
domain.
For convenience we also introduce a put_unlock() version, which drops
the pending_irq lock before calling the actual put() function.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c          | 24 +++++++++++++++------
 xen/arch/arm/vgic-v2.c      |  4 ++--
 xen/arch/arm/vgic-v3.c      |  4 ++--
 xen/arch/arm/vgic.c         | 52 +++++++++++++++++++++++++++++++--------------
 xen/include/asm-arm/event.h | 20 +++++++++++------
 xen/include/asm-arm/vgic.h  |  7 +++++-
 6 files changed, 77 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 737da6b..7147b6c 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -136,9 +136,7 @@ void gic_route_irq_to_xen(struct irq_desc *desc, unsigned int priority)
 int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
                            struct irq_desc *desc, unsigned int priority)
 {
-    /* Use vcpu0 to retrieve the pending_irq struct. Given that we only
-     * route SPIs to guests, it doesn't make any difference. */
-    struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
+    struct pending_irq *p = vgic_get_pending_irq(d, NULL, virq);
 
     ASSERT(spin_is_locked(&desc->lock));
     /* Caller has already checked that the IRQ is an SPI */
@@ -148,7 +146,10 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
     if ( p->desc ||
          /* The VIRQ should not be already enabled by the guest */
          test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
+    {
+        vgic_put_pending_irq(d, p);
         return -EBUSY;
+    }
 
     desc->handler = gic_hw_ops->gic_guest_irq_type;
     set_bit(_IRQ_GUEST, &desc->status);
@@ -159,6 +160,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
 
     p->desc = desc;
 
+    vgic_put_pending_irq(d, p);
     return 0;
 }
 
@@ -166,7 +168,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
 int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
                               struct irq_desc *desc)
 {
-    struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
+    struct pending_irq *p = vgic_get_pending_irq(d, NULL, virq);
 
     ASSERT(spin_is_locked(&desc->lock));
     ASSERT(test_bit(_IRQ_GUEST, &desc->status));
@@ -189,7 +191,10 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
          */
         if ( test_bit(_IRQ_INPROGRESS, &desc->status) ||
              !test_bit(_IRQ_DISABLED, &desc->status) )
+        {
+            vgic_put_pending_irq(d, p);
             return -EBUSY;
+        }
     }
 
     clear_bit(_IRQ_GUEST, &desc->status);
@@ -197,6 +202,8 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
 
     p->desc = NULL;
 
+    vgic_put_pending_irq(d, p);
+
     return 0;
 }
 
@@ -383,13 +390,14 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
 
 void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
 {
-    struct pending_irq *p = irq_to_pending(v, virtual_irq);
+    struct pending_irq *p = vgic_get_pending_irq(v->domain, v, virtual_irq);
     unsigned long flags;
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
     if ( !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+    vgic_put_pending_irq(v->domain, p);
 }
 
 void gic_raise_inflight_irq(struct vcpu *v, struct pending_irq *n)
@@ -406,6 +414,7 @@ void gic_raise_inflight_irq(struct vcpu *v, struct pending_irq *n)
         gdprintk(XENLOG_DEBUG, "trying to inject irq=%u into d%dv%d, when it is still lr_pending\n",
                  n->irq, v->domain->domain_id, v->vcpu_id);
 #endif
+    vgic_put_pending_irq(v->domain, n);
 }
 
 void gic_raise_guest_irq(struct vcpu *v, struct pending_irq *p)
@@ -440,8 +449,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
 
     gic_hw_ops->read_lr(i, &lr_val);
     irq = lr_val.virq;
-    p = irq_to_pending(v, irq);
+    p = vgic_get_pending_irq(v->domain, v, irq);
     spin_lock(&p->lock);
+
     if ( lr_val.state & GICH_LR_ACTIVE )
     {
         set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
@@ -499,7 +509,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
             }
         }
     }
-    spin_unlock(&p->lock);
+    vgic_put_pending_irq_unlock(v->domain, p);
 }
 
 void gic_clear_lrs(struct vcpu *v)
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index bf755ae..36ed04f 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -75,11 +75,11 @@ static uint32_t vgic_fetch_itargetsr(struct vcpu *v, unsigned int offset)
 
     for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++ )
     {
-        struct pending_irq *p = irq_to_pending(v, offset);
+        struct pending_irq *p = vgic_get_pending_irq(v->domain, v, offset);
 
         spin_lock(&p->lock);
         reg |= (1 << p->vcpu_id) << (i * NR_BITS_PER_TARGET);
-        spin_unlock(&p->lock);
+        vgic_put_pending_irq_unlock(v->domain, p);
     }
 
     return reg;
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 15a512a..fff518e 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -105,10 +105,10 @@ static uint64_t vgic_fetch_irouter(struct vcpu *v, unsigned int offset)
     /* There is exactly 1 vIRQ per IROUTER */
     offset /= NR_BYTES_PER_IROUTER;
 
-    p = irq_to_pending(v, offset);
+    p = vgic_get_pending_irq(v->domain, v, offset);
     spin_lock(&p->lock);
     aff = vcpuid_to_vaffinity(p->vcpu_id);
-    spin_unlock(&p->lock);
+    vgic_put_pending_irq_unlock(v->domain, p);
 
     return aff;
 }
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 530ac55..c7d645e 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -245,17 +245,16 @@ static void set_config(struct pending_irq *p, unsigned int config)
         set_bit(GIC_IRQ_GUEST_EDGE, &p->status);
 }
 
-
 #define DEFINE_GATHER_IRQ_INFO(name, get_val, shift)                          \
 uint32_t gather_irq_info_##name(struct vcpu *v, unsigned int irq)             \
 {                                                                             \
     uint32_t ret = 0, i;                                                      \
     for ( i = 0; i < (32 / shift); i++ )                                      \
     {                                                                         \
-        struct pending_irq *p = irq_to_pending(v, irq + i);                   \
+        struct pending_irq *p = vgic_get_pending_irq(v->domain, v, irq + i);  \
         spin_lock(&p->lock);                                                  \
         ret |= get_val(p) << (shift * i);                                     \
-        spin_unlock(&p->lock);                                                \
+        vgic_put_pending_irq_unlock(v->domain, p);                            \
     }                                                                         \
     return ret;                                                               \
 }
@@ -267,10 +266,10 @@ void scatter_irq_info_##name(struct vcpu *v, unsigned int irq,                \
     unsigned int i;                                                           \
     for ( i = 0; i < (32 / shift); i++ )                                      \
     {                                                                         \
-        struct pending_irq *p = irq_to_pending(v, irq + i);                   \
+        struct pending_irq *p = vgic_get_pending_irq(v->domain, v, irq + i);  \
         spin_lock(&p->lock);                                                  \
         set_val(p, (value >> (shift * i)) & ((1 << shift) - 1));              \
-        spin_unlock(&p->lock);                                                \
+        vgic_put_pending_irq_unlock(v->domain, p);                            \
     }                                                                         \
 }
@@ -332,13 +331,13 @@ void arch_move_irqs(struct vcpu *v)
 
     for ( i = 32; i < vgic_num_irqs(d); i++ )
     {
-        p = irq_to_pending(d->vcpu[0], i);
+        p = vgic_get_pending_irq(d, NULL, i);
 
         spin_lock(&p->lock);
         v_target = vgic_get_target_vcpu(d, p);
 
         if ( v_target == v && !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
             irq_set_affinity(p->desc, cpu_mask);
-        spin_unlock(&p->lock);
+        vgic_put_pending_irq_unlock(d, p);
     }
 }
@@ -351,7 +350,7 @@ void vgic_disable_irqs(struct vcpu *v, unsigned int irq, uint32_t r)
     struct vcpu *v_target;
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 )
     {
-        p = irq_to_pending(v, irq + i);
+        p = vgic_get_pending_irq(v->domain, v, irq + i);
         v_target = vgic_get_target_vcpu(v->domain, p);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         gic_remove_from_queues(v_target, irq + i);
@@ -361,6 +360,7 @@ void vgic_disable_irqs(struct vcpu *v, unsigned int irq, uint32_t r)
             p->desc->handler->disable(p->desc);
             spin_unlock_irqrestore(&p->desc->lock, flags);
         }
+        vgic_put_pending_irq(v->domain, p);
         i++;
     }
 }
@@ -376,7 +376,7 @@ void vgic_enable_irqs(struct vcpu *v, unsigned int irq, uint32_t r)
     struct domain *d = v->domain;
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 )
     {
-        p = irq_to_pending(v, irq + i);
+        p = vgic_get_pending_irq(d, v, irq + i);
         v_target = vgic_get_target_vcpu(d, p);
         spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
@@ -404,6 +404,7 @@ void vgic_enable_irqs(struct vcpu *v, unsigned int irq, uint32_t r)
             p->desc->handler->enable(p->desc);
             spin_unlock_irqrestore(&p->desc->lock, flags);
         }
+        vgic_put_pending_irq(v->domain, p);
         i++;
     }
 }
@@ -461,23 +462,39 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
     return true;
 }
 
-struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
+struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq)
+{
+    ASSERT(irq >= NR_LOCAL_IRQS);
+
+    return &d->arch.vgic.pending_irqs[irq - 32];
+}
+
+struct pending_irq *vgic_get_pending_irq(struct domain *d, struct vcpu *v,
+                                         unsigned int irq)
 {
     struct pending_irq *n;
+
     /* Pending irqs allocation strategy: the first vgic.nr_spis irqs
      * are used for SPIs; the rests are used for per cpu irqs */
     if ( irq < 32 )
+    {
+        ASSERT(v);
         n = &v->arch.vgic.pending_irqs[irq];
+    }
     else
-        n = &v->domain->arch.vgic.pending_irqs[irq - 32];
+        n = spi_to_pending(d, irq);
+
     return n;
 }
 
-struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq)
+void vgic_put_pending_irq(struct domain *d, struct pending_irq *p)
 {
-    ASSERT(irq >= NR_LOCAL_IRQS);
+}
 
-    return &d->arch.vgic.pending_irqs[irq - 32];
+void vgic_put_pending_irq_unlock(struct domain *d, struct pending_irq *p)
+{
+    spin_unlock(&p->lock);
+    vgic_put_pending_irq(d, p);
 }
 
 void vgic_clear_pending_irqs(struct vcpu *v)
@@ -494,7 +511,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
 
 void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 {
-    struct pending_irq *iter, *n = irq_to_pending(v, virq);
+    struct pending_irq *iter, *n = vgic_get_pending_irq(v->domain, v, virq);
     unsigned long flags;
     bool running;
 
@@ -504,6 +521,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
     if ( test_bit(_VPF_down, &v->pause_flags) )
     {
         spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        vgic_put_pending_irq(v->domain, n);
         return;
     }
@@ -536,6 +554,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
     spin_unlock(&n->lock);
 
 out:
+    vgic_put_pending_irq(v->domain, n);
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     /* we have a new higher priority irq, inject it into the guest */
     running = v->is_running;
@@ -550,12 +569,13 @@ out:
 void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq)
 {
     struct vcpu *v;
-    struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
+    struct pending_irq *p = vgic_get_pending_irq(d, NULL, virq);
 
     /* the IRQ needs to be an SPI */
     ASSERT(virq >= 32 && virq <= vgic_num_irqs(d));
 
     v = vgic_get_target_vcpu(d, p);
+    vgic_put_pending_irq(d, p);
     vgic_vcpu_inject_irq(v, virq);
 }
diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
index 5330dfe..df672e2 100644
--- a/xen/include/asm-arm/event.h
+++ b/xen/include/asm-arm/event.h
@@ -16,8 +16,7 @@ static inline int vcpu_event_delivery_is_enabled(struct vcpu *v)
 
 static inline int local_events_need_delivery_nomask(void)
 {
-    struct pending_irq *p = irq_to_pending(current,
-                                           current->domain->arch.evtchn_irq);
+    int ret = 0;
 
     /* XXX: if the first interrupt has already been delivered, we should
      * check whether any other interrupts with priority higher than the
@@ -28,11 +27,20 @@ static inline int local_events_need_delivery_nomask(void)
      * case.
      */
     if ( gic_events_need_delivery() )
-        return 1;
+    {
+        ret = 1;
+    }
+    else
+    {
+        struct pending_irq *p;
 
-    if ( vcpu_info(current, evtchn_upcall_pending) &&
-        list_empty(&p->inflight) )
-        return 1;
+        p = vgic_get_pending_irq(current->domain, current,
+                                 current->domain->arch.evtchn_irq);
+        if ( vcpu_info(current, evtchn_upcall_pending) &&
+             list_empty(&p->inflight) )
+            ret = 1;
+        vgic_put_pending_irq(current->domain, p);
+    }
 
-    return 0;
+    return ret;
 }
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 186e6df..36e4de2 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -299,7 +299,12 @@ extern struct vcpu *vgic_get_target_vcpu(struct domain *d,
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
-extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
+extern struct pending_irq *vgic_get_pending_irq(struct domain *d,
+                                                struct vcpu *v,
+                                                unsigned int irq);
+extern void vgic_put_pending_irq(struct domain *d, struct pending_irq *p);
+extern void vgic_put_pending_irq_unlock(struct domain *d,
+                                        struct pending_irq *p);
 extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
 extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
 extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
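[Editor's illustration, not part of the patch.] The wrappers are deliberately empty for now; the value of the patch is that every caller already brackets its access between get() and put(), so a later patch only has to fill in two function bodies. A minimal sketch of the ref-counting this enables, with hypothetical, simplified fields (not Xen's actual struct pending_irq), might look like:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified version of the structure: only the two
 * fields the ref-counting sketch needs. */
struct pending_irq {
    int refcount;   /* would be an atomic count in real code */
    int unmapped;   /* set once the LPI is unmapped from the domain */
};

/* get(): take a reference, refusing IRQs that are going away, so a
 * caller can never start using a pending_irq that is being freed. */
struct pending_irq *get_pending_irq(struct pending_irq *p)
{
    if ( p->unmapped )
        return NULL;
    p->refcount++;
    return p;
}

/* put(): drop the reference; the last put() of an unmapped IRQ would
 * be the point where the structure can actually be freed. */
void put_pending_irq(struct pending_irq *p)
{
    p->refcount--;
}
```

Because each caller already pairs get() and put() around the critical region after this patch, use-after-free of unmapped LPIs can later be prevented without touching any call site again.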