From patchwork Fri Jul 21 20:00:08 2017
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 9857611
From: Andre Przywara
To: Julien Grall, Stefano Stabellini
Date: Fri, 21 Jul 2017 21:00:08 +0100
Message-Id: <20170721200010.29010-21-andre.przywara@arm.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170721200010.29010-1-andre.przywara@arm.com>
References: <20170721200010.29010-1-andre.przywara@arm.com>
Cc: xen-devel@lists.xenproject.org
Subject: [Xen-devel] [RFC PATCH v2 20/22] ARM: vGIC: move virtual IRQ enable bit from rank to pending_irq

The enabled bits for a group of IRQs are still stored in the irq_rank
structure, although we already have the same information in pending_irq,
in the GIC_IRQ_GUEST_ENABLED bit of the "status" field.

Remove the storage from the irq_rank and just utilize the existing
wrappers to cover enabling/disabling of multiple IRQs.

This also marks the removal of the last member of struct vgic_irq_rank.

Signed-off-by: Andre Przywara
---
 xen/arch/arm/vgic-v2.c     |  41 +++------
 xen/arch/arm/vgic-v3.c     |  41 +++------
 xen/arch/arm/vgic.c        | 201 +++++++++++++++++++++++++++------------------
 xen/include/asm-arm/vgic.h |  10 +--
 4 files changed, 152 insertions(+), 141 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index c7ed3ce..3320642 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -166,9 +166,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
                                    register_t *r, void *priv)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
-    unsigned long flags;
     unsigned int irq;
     perfc_incr(vgicd_reads);
@@ -222,20 +220,16 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
     case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ISENABLER, DABT_WORD);
-        if ( rank == NULL) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        *r = vreg_reg32_extract(rank->ienable, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (gicd_reg - GICD_ISENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_enabled(v, irq);
         return 1;
     case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ICENABLER, DABT_WORD);
-        if ( rank == NULL) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        *r = vreg_reg32_extract(rank->ienable, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (gicd_reg - GICD_ICENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_enabled(v, irq);
         return 1;
     /* Read the pending status of an IRQ via GICD is not supported */
@@ -386,10 +380,7 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
                                     register_t r, void *priv)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
-    uint32_t tr;
-    unsigned long flags;
     unsigned int irq;
     perfc_incr(vgicd_writes);
@@ -426,24 +417,16 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
     case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ISENABLER, DABT_WORD);
-        if ( rank == NULL) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        tr = rank->ienable;
-        vreg_reg32_setbits(&rank->ienable, r, info);
-        vgic_enable_irqs(v, (rank->ienable) & (~tr), rank->index);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (gicd_reg - GICD_ISENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_enable(v, irq, r);
         return 1;
     case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, gicd_reg - GICD_ICENABLER, DABT_WORD);
-        if ( rank == NULL) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        tr = rank->ienable;
-        vreg_reg32_clearbits(&rank->ienable, r, info);
-        vgic_disable_irqs(v, (~rank->ienable) & tr, rank->index);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (gicd_reg - GICD_ICENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_disable(v, irq, r);
         return 1;
     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index e9d46af..00cc1e5 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -676,8 +676,6 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
                                             register_t *r)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
-    unsigned long flags;
     unsigned int irq;
     switch ( reg )
@@ -689,20 +687,16 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
     case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, reg - GICD_ISENABLER, DABT_WORD);
-        if ( rank == NULL ) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        *r = vreg_reg32_extract(rank->ienable, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (reg - GICD_ISENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_enabled(v, irq);
         return 1;
     case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, reg - GICD_ICENABLER, DABT_WORD);
-        if ( rank == NULL ) goto read_as_zero;
-        vgic_lock_rank(v, rank, flags);
-        *r = vreg_reg32_extract(rank->ienable, info);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (reg - GICD_ICENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto read_as_zero;
+        *r = vgic_fetch_irq_enabled(v, irq);
         return 1;
     /* Read the pending status of an IRQ via GICD/GICR is not supported */
@@ -752,9 +746,6 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
                                              register_t r)
 {
     struct hsr_dabt dabt = info->dabt;
-    struct vgic_irq_rank *rank;
-    uint32_t tr;
-    unsigned long flags;
     unsigned int irq;
     switch ( reg )
@@ -765,24 +756,16 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
     case VRANGE32(GICD_ISENABLER, GICD_ISENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, reg - GICD_ISENABLER, DABT_WORD);
-        if ( rank == NULL ) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        tr = rank->ienable;
-        vreg_reg32_setbits(&rank->ienable, r, info);
-        vgic_enable_irqs(v, (rank->ienable) & (~tr), rank->index);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (reg - GICD_ISENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_enable(v, irq, r);
         return 1;
     case VRANGE32(GICD_ICENABLER, GICD_ICENABLERN):
         if ( dabt.size != DABT_WORD ) goto bad_width;
-        rank = vgic_rank_offset(v, 1, reg - GICD_ICENABLER, DABT_WORD);
-        if ( rank == NULL ) goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        tr = rank->ienable;
-        vreg_reg32_clearbits(&rank->ienable, r, info);
-        vgic_disable_irqs(v, (~rank->ienable) & tr, rank->index);
-        vgic_unlock_rank(v, rank, flags);
+        irq = (reg - GICD_ICENABLER) * 8;
+        if ( irq >= v->domain->arch.vgic.nr_spis + 32 ) goto write_ignore;
+        vgic_store_irq_disable(v, irq, r);
         return 1;
     case VRANGE32(GICD_ISPENDR, GICD_ISPENDRN):
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index a49fcde..dd969e2 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -261,6 +261,60 @@ struct vcpu *vgic_get_target_vcpu(struct domain *d, struct pending_irq *p)
     return d->vcpu[p->vcpu_id];
 }
+/* Takes a locked pending_irq and enables the interrupt, also unlocking it. */
+static void vgic_enable_irq_unlock(struct domain *d, struct pending_irq *p)
+{
+    struct vcpu *v_target;
+    unsigned long flags;
+    struct irq_desc *desc;
+    int int_type;
+
+    v_target = vgic_lock_vcpu_irq(d, p, &flags);
+
+    set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+    int_type = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status) ? IRQ_TYPE_LEVEL_HIGH :
+                                                           IRQ_TYPE_EDGE_RISING;
+    if ( !list_empty(&p->inflight) &&
+         !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+        gic_raise_guest_irq(v_target, p->irq, p->cur_priority);
+    desc = p->desc;
+    spin_unlock(&p->lock);
+    spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
+
+    if ( desc != NULL )
+    {
+        spin_lock_irqsave(&desc->lock, flags);
+        irq_set_affinity(desc, cpumask_of(v_target->processor));
+        if ( irq_type_set_by_domain(d) )
+            gic_set_irq_type(desc, int_type);
+        desc->handler->enable(desc);
+        spin_unlock_irqrestore(&desc->lock, flags);
+    }
+}
+
+/* Takes a locked pending_irq and disables the interrupt, also unlocking it. */
+static void vgic_disable_irq_unlock(struct domain *d, struct pending_irq *p)
+{
+    struct vcpu *v_target;
+    unsigned long flags;
+    struct irq_desc *desc;
+
+    v_target = vgic_lock_vcpu_irq(d, p, &flags);
+
+    clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+    gic_remove_from_lr_pending(v_target, p);
+    desc = p->desc;
+    spin_unlock(&p->lock);
+    spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
+
+    if ( desc != NULL )
+    {
+        spin_lock_irqsave(&desc->lock, flags);
+        desc->handler->disable(desc);
+        spin_unlock_irqrestore(&desc->lock, flags);
+    }
+}
+
 #define MAX_IRQS_PER_IPRIORITYR 4
 uint32_t vgic_fetch_irq_priority(struct vcpu *v, unsigned int nrirqs,
                                  unsigned int first_irq)
@@ -347,6 +401,75 @@ void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
     local_irq_restore(flags);
 }
+#define IRQS_PER_ENABLER 32
+/**
+ * vgic_fetch_irq_enabled: assemble the enabled bits for a group of 32 IRQs
+ * @v: the VCPU for private IRQs, any VCPU of a domain for SPIs
+ * @first_irq: the first IRQ to be queried, must be aligned to 32
+ */
+uint32_t vgic_fetch_irq_enabled(struct vcpu *v, unsigned int first_irq)
+{
+    struct pending_irq *pirqs[IRQS_PER_ENABLER];
+    unsigned long flags;
+    uint32_t reg = 0;
+    unsigned int i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, IRQS_PER_ENABLER, first_irq, pirqs);
+
+    for ( i = 0; i < 32; i++ )
+        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &pirqs[i]->status) )
+            reg |= BIT(i);
+
+    vgic_unlock_irqs(pirqs, IRQS_PER_ENABLER);
+    local_irq_restore(flags);
+
+    return reg;
+}
+
+void vgic_store_irq_enable(struct vcpu *v, unsigned int first_irq,
+                           uint32_t value)
+{
+    struct pending_irq *pirqs[IRQS_PER_ENABLER];
+    unsigned long flags;
+    int i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, IRQS_PER_ENABLER, first_irq, pirqs);
+
+    /* This goes backwards, as it unlocks the IRQs during the process */
+    for ( i = IRQS_PER_ENABLER - 1; i >= 0; i-- )
+    {
+        if ( !test_bit(GIC_IRQ_GUEST_ENABLED, &pirqs[i]->status) &&
+             (value & BIT(i)) )
+            vgic_enable_irq_unlock(v->domain, pirqs[i]);
+        else
+            spin_unlock(&pirqs[i]->lock);
+    }
+    local_irq_restore(flags);
+}
+
+void vgic_store_irq_disable(struct vcpu *v, unsigned int first_irq,
+                            uint32_t value)
+{
+    struct pending_irq *pirqs[IRQS_PER_ENABLER];
+    unsigned long flags;
+    int i;
+
+    local_irq_save(flags);
+    vgic_lock_irqs(v, IRQS_PER_ENABLER, first_irq, pirqs);
+
+    /* This goes backwards, as it unlocks the IRQs during the process */
+    for ( i = 31; i >= 0; i-- )
+    {
+        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &pirqs[i]->status) &&
+             (value & BIT(i)) )
+            vgic_disable_irq_unlock(v->domain, pirqs[i]);
+        else
+            spin_unlock(&pirqs[i]->lock);
+    }
+    local_irq_restore(flags);
+}
+
 bool vgic_migrate_irq(struct pending_irq *p, unsigned long *flags,
                       struct vcpu *new)
 {
@@ -437,40 +560,6 @@ void arch_move_irqs(struct vcpu *v)
     }
 }
-void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
-{
-    const unsigned long mask = r;
-    struct pending_irq *p;
-    struct irq_desc *desc;
-    unsigned int irq;
-    unsigned long flags;
-    int i = 0;
-    struct vcpu *v_target;
-
-    /* LPIs will never be disabled via this function. */
-    ASSERT(!is_lpi(32 * n + 31));
-
-    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
-        irq = i + (32 * n);
-        p = irq_to_pending(v, irq);
-        v_target = vgic_get_target_vcpu(v->domain, p);
-
-        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
-        clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        gic_remove_from_lr_pending(v_target, p);
-        desc = p->desc;
-        spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
-
-        if ( desc != NULL )
-        {
-            spin_lock_irqsave(&desc->lock, flags);
-            desc->handler->disable(desc);
-            spin_unlock_irqrestore(&desc->lock, flags);
-        }
-        i++;
-    }
-}
-
 void vgic_lock_irqs(struct vcpu *v, unsigned int nrirqs, unsigned int first_irq,
                     struct pending_irq **pirqs)
 {
@@ -491,50 +580,6 @@ void vgic_unlock_irqs(struct pending_irq **pirqs, unsigned int nrirqs)
         spin_unlock(&pirqs[i]->lock);
 }
-void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
-{
-    const unsigned long mask = r;
-    struct pending_irq *p;
-    unsigned int irq, int_type;
-    unsigned long flags, vcpu_flags;
-    int i = 0;
-    struct vcpu *v_target;
-    struct domain *d = v->domain;
-
-    /* LPIs will never be enabled via this function. */
-    ASSERT(!is_lpi(32 * n + 31));
-
-    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
-        irq = i + (32 * n);
-        p = irq_to_pending(v, irq);
-        v_target = vgic_get_target_vcpu(v->domain, p);
-        spin_lock_irqsave(&v_target->arch.vgic.lock, vcpu_flags);
-        vgic_irq_lock(p, flags);
-        set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        int_type = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status) ?
-                              IRQ_TYPE_LEVEL_HIGH : IRQ_TYPE_EDGE_RISING;
-        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_raise_guest_irq(v_target, irq, p->cur_priority);
-        vgic_irq_unlock(p, flags);
-        spin_unlock_irqrestore(&v_target->arch.vgic.lock, vcpu_flags);
-        if ( p->desc != NULL )
-        {
-            spin_lock_irqsave(&p->desc->lock, flags);
-            irq_set_affinity(p->desc, cpumask_of(v_target->processor));
-            /*
-             * The irq cannot be a PPI, we only support delivery of SPIs
-             * to guests.
-             */
-            ASSERT(irq >= 32);
-            if ( irq_type_set_by_domain(d) )
-                gic_set_irq_type(p->desc, int_type);
-            p->desc->handler->enable(p->desc);
-            spin_unlock_irqrestore(&p->desc->lock, flags);
-        }
-        i++;
-    }
-}
-
 bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
                  int virq, const struct sgi_target *target)
 {
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index fe4d53d..233ff1f 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -109,9 +109,6 @@ struct vgic_irq_rank {
     spinlock_t lock; /* Covers access to all other members of this struct */
     uint8_t index;
-
-    uint32_t ienable;
-
 };
 struct sgi_target {
@@ -187,6 +184,11 @@ void vgic_store_irq_priority(struct vcpu *v, unsigned int nrirqs,
 uint32_t vgic_fetch_irq_config(struct vcpu *v, unsigned int first_irq);
 void vgic_store_irq_config(struct vcpu *v, unsigned int first_irq,
                            uint32_t reg);
+uint32_t vgic_fetch_irq_enabled(struct vcpu *v, unsigned int first_irq);
+void vgic_store_irq_enable(struct vcpu *v, unsigned int first_irq,
+                           uint32_t value);
+void vgic_store_irq_disable(struct vcpu *v, unsigned int first_irq,
+                            uint32_t value);
 enum gic_sgi_mode;
@@ -218,8 +220,6 @@ extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
 extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
 extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
 extern bool vgic_emulate(struct cpu_user_regs *regs, union hsr hsr);
-extern void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n);
-extern void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n);
 extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
 int vgic_v2_init(struct domain *d, int *mmio_count);
 int vgic_v3_init(struct domain *d, int *mmio_count);
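
For readers who want to see the register/bit mapping in isolation, here is a
small standalone sketch (not part of the patch; struct fake_pending_irq and
FAKE_ENABLED_BIT below are simplified stand-ins for Xen's pending_irq and
GIC_IRQ_GUEST_ENABLED, not the real definitions) of how one 32-bit
ISENABLER/ICENABLER word is assembled from per-IRQ enable bits, which is
what vgic_fetch_irq_enabled() now does against the locked pending_irq array:

/* Illustrative model only -- not Xen code. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define IRQS_PER_ENABLER 32
#define FAKE_ENABLED_BIT 0              /* models GIC_IRQ_GUEST_ENABLED */

struct fake_pending_irq {
    unsigned long status;               /* per-IRQ status word */
};

/* Assemble one enable register from 32 consecutive IRQs. */
static uint32_t fetch_enabled(const struct fake_pending_irq *pirqs)
{
    uint32_t reg = 0;
    unsigned int i;

    for ( i = 0; i < IRQS_PER_ENABLER; i++ )
        if ( pirqs[i].status & (1UL << FAKE_ENABLED_BIT) )
            reg |= 1U << i;

    return reg;
}

int main(void)
{
    struct fake_pending_irq group[IRQS_PER_ENABLER] = { { 0 } };

    group[3].status  |= 1UL << FAKE_ENABLED_BIT;    /* e.g. SPI 35 if the group starts at IRQ 32 */
    group[17].status |= 1UL << FAKE_ENABLED_BIT;    /* e.g. SPI 49 */

    printf("enable word reads back as 0x%08" PRIx32 "\n", fetch_enabled(group));
    return 0;
}

Running this prints 0x00020008, mirroring what a guest would read back from
the corresponding GICD_ISENABLER register after enabling those two interrupts;
the store path simply walks the same 32 bits in the other direction.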