From patchwork Fri Nov 8 17:49:50 2019
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 11235321
From: Andre Przywara
To: Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, Julien Thierry, Suzuki K Poulose
Subject: [PATCH 1/3] kvm: arm: VGIC: Prepare for handling two interrupt groups
Date: Fri, 8 Nov 2019 17:49:50 +0000
Message-Id: <20191108174952.740-2-andre.przywara@arm.com>
In-Reply-To: <20191108174952.740-1-andre.przywara@arm.com>
References: <20191108174952.740-1-andre.przywara@arm.com>

The GIC specification describes support for two distinct interrupt
groups, which can be enabled independently from each other.
At the moment our VGIC emulation does not really honour this, so using
Group0 interrupts with the GICv3 emulation does not work as expected.

Prepare the code for dealing with the *two* interrupt groups:
Allow handling the two enable bits in the distributor, by providing
accessors. At the moment this still only honours group1, because we need
more code to properly differentiate the group enables.
Also provide a stub function to later implement the re-scanning of all
IRQs, should the group enable bit for a group change.

This patch does not change the current behaviour yet, but just provides
the infrastructure bits separately, mostly for review purposes.

Signed-off-by: Andre Przywara
---
 include/kvm/arm_vgic.h           |  7 ++-
 virt/kvm/arm/vgic/vgic-debug.c   |  2 +-
 virt/kvm/arm/vgic/vgic-mmio-v2.c | 23 ++++----
 virt/kvm/arm/vgic/vgic-mmio-v3.c | 22 +++----
 virt/kvm/arm/vgic/vgic.c         | 76 +++++++++++++++++++++++++++++++-
 virt/kvm/arm/vgic/vgic.h         |  3 ++
 6 files changed, 110 insertions(+), 23 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 9d53f545a3d5..0f845430c732 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -29,6 +29,9 @@
 #define VGIC_MIN_LPI		8192
 #define KVM_IRQCHIP_NUM_PINS	(1020 - 32)
 
+#define GIC_GROUP0		0
+#define GIC_GROUP1		1
+
 #define irq_is_ppi(irq) ((irq) >= VGIC_NR_SGIS && (irq) < VGIC_NR_PRIVATE_IRQS)
 #define irq_is_spi(irq) ((irq) >= VGIC_NR_PRIVATE_IRQS && \
 			 (irq) <= VGIC_MAX_SPI)
@@ -227,8 +230,8 @@ struct vgic_dist {
 		struct list_head rd_regions;
 	};
 
-	/* distributor enabled */
-	bool enabled;
+	/* group0 and/or group1 IRQs enabled on the distributor level */
+	u8 groups_enable;
 
 	struct vgic_irq *spis;
 
diff --git a/virt/kvm/arm/vgic/vgic-debug.c b/virt/kvm/arm/vgic/vgic-debug.c
index cc12fe9b2df3..ab64e908024e 100644
--- a/virt/kvm/arm/vgic/vgic-debug.c
+++ b/virt/kvm/arm/vgic/vgic-debug.c
@@ -150,7 +150,7 @@ static void print_dist_state(struct seq_file *s, struct vgic_dist *dist)
 	seq_printf(s, "nr_spis:\t%d\n", dist->nr_spis);
 	if (v3)
 		seq_printf(s, "nr_lpis:\t%d\n", dist->lpi_list_count);
-	seq_printf(s, "enabled:\t%d\n", dist->enabled);
+	seq_printf(s, "groups enabled:\t%d\n", dist->groups_enable);
 	seq_printf(s, "\n");
 
 	seq_printf(s, "P=pending_latch, L=line_level, A=active\n");
diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
index 5945f062d749..fe8528bd5b55 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
@@ -26,11 +26,14 @@
 static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu,
 					    gpa_t addr, unsigned int len)
 {
 	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
-	u32 value;
+	u32 value = 0;
 
 	switch (addr & 0x0c) {
 	case GIC_DIST_CTRL:
-		value = vgic->enabled ? GICD_ENABLE : 0;
+		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP0))
+			value |= GICD_ENABLE;
+		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP1))
+			value |= BIT(1);
 		break;
 	case GIC_DIST_CTR:
 		value = vgic->nr_spis + VGIC_NR_PRIVATE_IRQS;
@@ -42,8 +45,6 @@ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu,
 		     (vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) |
 		     (IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT);
 		break;
-	default:
-		return 0;
 	}
 
 	return value;
@@ -53,14 +54,18 @@ static void vgic_mmio_write_v2_misc(struct kvm_vcpu *vcpu,
 				    gpa_t addr, unsigned int len,
 				    unsigned long val)
 {
-	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
-	bool was_enabled = dist->enabled;
+	struct kvm *kvm = vcpu->kvm;
+	int grp0_changed, grp1_changed;
 
 	switch (addr & 0x0c) {
 	case GIC_DIST_CTRL:
-		dist->enabled = val & GICD_ENABLE;
-		if (!was_enabled && dist->enabled)
-			vgic_kick_vcpus(vcpu->kvm);
+		grp0_changed = vgic_dist_enable_group(kvm, GIC_GROUP0,
+						      val & GICD_ENABLE);
+		grp1_changed = vgic_dist_enable_group(kvm, GIC_GROUP1,
+						      val & BIT(1));
+		if (grp0_changed || grp1_changed)
+			vgic_rescan_pending_irqs(kvm, grp0_changed > 0 ||
+						      grp1_changed > 0);
 		break;
 	case GIC_DIST_CTR:
 	case GIC_DIST_IIDR:
diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index 7dfd15dbb308..73e3410af332 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -66,7 +66,9 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
 
 	switch (addr & 0x0c) {
 	case GICD_CTLR:
-		if (vgic->enabled)
+		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP0))
+			value |= GICD_CTLR_ENABLE_SS_G0;
+		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP1))
 			value |= GICD_CTLR_ENABLE_SS_G1;
 		value |= GICD_CTLR_ARE_NS | GICD_CTLR_DS;
 		break;
@@ -85,8 +87,6 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
 		     (vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) |
 		     (IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT);
 		break;
-	default:
-		return 0;
 	}
 
 	return value;
@@ -96,15 +96,19 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
 				    gpa_t addr, unsigned int len,
 				    unsigned long val)
 {
-	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
-	bool was_enabled = dist->enabled;
+	struct kvm *kvm = vcpu->kvm;
+	int grp0_changed, grp1_changed;
 
 	switch (addr & 0x0c) {
 	case GICD_CTLR:
-		dist->enabled = val & GICD_CTLR_ENABLE_SS_G1;
-
-		if (!was_enabled && dist->enabled)
-			vgic_kick_vcpus(vcpu->kvm);
+		grp0_changed = vgic_dist_enable_group(kvm, GIC_GROUP0,
+						      val & GICD_CTLR_ENABLE_SS_G0);
+		grp1_changed = vgic_dist_enable_group(kvm, GIC_GROUP1,
+						      val & GICD_CTLR_ENABLE_SS_G1);
+
+		if (grp0_changed || grp1_changed)
+			vgic_rescan_pending_irqs(kvm, grp0_changed > 0 ||
+						      grp1_changed > 0);
 		break;
 	case GICD_TYPER:
 	case GICD_IIDR:
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 99b02ca730a8..3b88e14d239f 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -201,6 +201,12 @@ void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active)
 				      active));
 }
 
+static bool vgic_irq_is_grp_enabled(struct kvm *kvm, struct vgic_irq *irq)
+{
+	/* Placeholder implementation until we properly support Group0. */
+	return kvm->arch.vgic.groups_enable;
+}
+
 /**
  * kvm_vgic_target_oracle - compute the target vcpu for an irq
  *
@@ -228,7 +234,8 @@ static struct kvm_vcpu *vgic_target_oracle(struct vgic_irq *irq)
 	 */
 	if (irq->enabled && irq_is_pending(irq)) {
 		if (unlikely(irq->target_vcpu &&
-			     !irq->target_vcpu->kvm->arch.vgic.enabled))
+			     !vgic_irq_is_grp_enabled(irq->target_vcpu->kvm,
+						      irq)))
 			return NULL;
 
 		return irq->target_vcpu;
@@ -303,6 +310,71 @@ static void vgic_sort_ap_list(struct kvm_vcpu *vcpu)
 	list_sort(NULL, &vgic_cpu->ap_list_head, vgic_irq_cmp);
 }
 
+int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
+{
+	struct vgic_dist *dist = &kvm->arch.vgic;
+	u32 group_mask = 1U << group;
+	u32 new_bit = (unsigned)status << group;
+	u8 was_enabled = dist->groups_enable & group_mask;
+
+	if (new_bit == was_enabled)
+		return 0;
+
+	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
+	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
+		if (group == GIC_GROUP0)
+			return 0;
+	} else {
+		if (group == GIC_GROUP1)
+			return 0;
+	}
+
+	dist->groups_enable &= ~group_mask;
+	dist->groups_enable |= new_bit;
+	if (new_bit > was_enabled)
+		return 1;
+	else
+		return -1;
+
+	return 0;
+}
+
+/*
+ * The group enable status of at least one of the groups has changed.
+ * If enabled is true, at least one of the groups got enabled.
+ * Adjust the forwarding state of every IRQ (on ap_list or not) accordingly.
+ */
+void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled)
+{
+	/*
+	 * TODO: actually scan *all* IRQs of the VM for pending IRQs.
+	 * If a pending IRQ's group is now enabled, add it to its ap_list.
+	 * If a pending IRQ's group is now disabled, kick the VCPU to
+	 * let it remove this IRQ from its ap_list. We have to let the
+	 * VCPU do it itself, because we can't know the exact state of an
+	 * IRQ pending on a running VCPU.
+	 */
+
+	/* For now just kick all VCPUs, as the old code did. */
+	vgic_kick_vcpus(kvm);
+}
+
+bool vgic_dist_group_enabled(struct kvm *kvm, int group)
+{
+	struct vgic_dist *dist = &kvm->arch.vgic;
+
+	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
+	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
+		if (group == GIC_GROUP0)
+			return false;
+	} else {
+		if (group == GIC_GROUP1)
+			return false;
+	}
+
+	return dist->groups_enable & (1U << group);
+}
+
 /*
  * Only valid injection if changing level for level-triggered IRQs or for a
  * rising edge, and in-kernel connected IRQ lines can only be controlled by
@@ -949,7 +1021,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
 	unsigned long flags;
 	struct vgic_vmcr vmcr;
 
-	if (!vcpu->kvm->arch.vgic.enabled)
+	if (!vcpu->kvm->arch.vgic.groups_enable)
 		return false;
 
 	if (vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last)
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index c7fefd6b1c80..219eb23d580d 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -168,7 +168,10 @@ void vgic_irq_set_phys_pending(struct vgic_irq *irq, bool pending);
 void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active);
 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
 			   unsigned long flags);
+bool vgic_dist_group_enabled(struct kvm *kvm, int group);
+int vgic_dist_enable_group(struct kvm *kvm, int group, bool status);
 void vgic_kick_vcpus(struct kvm *kvm);
+void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled);
 int vgic_check_ioaddr(struct kvm *kvm, phys_addr_t *ioaddr,
 		      phys_addr_t addr, phys_addr_t alignment);

From patchwork Fri Nov 8 17:49:51 2019
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 11235323
From: Andre Przywara
To: Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, Julien Thierry, Suzuki K Poulose
Subject: [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled
Date: Fri, 8 Nov 2019 17:49:51 +0000
Message-Id: <20191108174952.740-3-andre.przywara@arm.com>
In-Reply-To: <20191108174952.740-1-andre.przywara@arm.com>
References: <20191108174952.740-1-andre.przywara@arm.com>

Our current VGIC emulation code treats the "EnableGrpX" bits in
GICD_CTLR as a single global interrupt delivery switch, where in fact
the GIC architecture asks for this being separate for the two interrupt
groups.

To implement this properly, we have to slightly adjust our design, to
*not* let IRQs from a disabled interrupt group be added to the ap_list.
As a consequence, enabling one group requires us to re-evaluate every
pending IRQ and potentially add it to its respective ap_list.
Similarly, disabling an interrupt group requires pending IRQs to be
removed from the ap_list (as long as they have not been activated yet).

Implement a rather simple, yet not terribly efficient algorithm to
achieve this: For each VCPU we iterate over all IRQs, checking for
pending ones and adding them to the list. We hold the ap_list_lock for
this, to make this atomic from a VCPU's point of view.
When an interrupt group gets disabled, we can't directly remove affected
IRQs from the ap_list, as a running VCPU might have already activated
them, which wouldn't be immediately visible to the host. Instead simply
kick all VCPUs, so that they clean their ap_lists automatically when
running vgic_prune_ap_list().

Signed-off-by: Andre Przywara
---
 virt/kvm/arm/vgic/vgic.c | 88 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 80 insertions(+), 8 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 3b88e14d239f..28d9ff282017 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -339,6 +339,38 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
 	return 0;
 }
 
+/*
+ * Check whether a given IRQ needs to be queued to this ap_list, and do
+ * so if that's the case.
+ * Requires the ap_list_lock to be held (but not the irq lock).
+ *
+ * Returns 1 if that IRQ has been added to the ap_list, and 0 if not.
+ */
+static int queue_enabled_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
+			     int intid)
+{
+	struct vgic_irq *irq = vgic_get_irq(kvm, vcpu, intid);
+	int ret = 0;
+
+	raw_spin_lock(&irq->irq_lock);
+	if (!irq->vcpu && vcpu == vgic_target_oracle(irq)) {
+		/*
+		 * Grab a reference to the irq to reflect the
+		 * fact that it is now in the ap_list.
+		 */
+		vgic_get_irq_kref(irq);
+		list_add_tail(&irq->ap_list,
+			      &vcpu->arch.vgic_cpu.ap_list_head);
+		irq->vcpu = vcpu;
+
+		ret = 1;
+	}
+	raw_spin_unlock(&irq->irq_lock);
+	vgic_put_irq(kvm, irq);
+
+	return ret;
+}
+
 /*
  * The group enable status of at least one of the groups has changed.
  * If enabled is true, at least one of the groups got enabled.
@@ -346,17 +378,57 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
  */
 void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled)
 {
+	int cpuid;
+	struct kvm_vcpu *vcpu;
+
 	/*
-	 * TODO: actually scan *all* IRQs of the VM for pending IRQs.
-	 * If a pending IRQ's group is now enabled, add it to its ap_list.
-	 * If a pending IRQ's group is now disabled, kick the VCPU to
-	 * let it remove this IRQ from its ap_list. We have to let the
-	 * VCPU do it itself, because we can't know the exact state of an
-	 * IRQ pending on a running VCPU.
+	 * If no group got enabled, we only have to potentially remove
+	 * interrupts from ap_lists. We can't do this here, because a running
+	 * VCPU might have ACKed an IRQ already, which wouldn't immediately
+	 * be reflected in the ap_list.
+	 * So kick all VCPUs, which will let them re-evaluate their ap_lists
+	 * by running vgic_prune_ap_list(), removing no longer enabled
+	 * IRQs.
+	 */
+	if (!enabled) {
+		vgic_kick_vcpus(kvm);
+
+		return;
+	}
+
+	/*
+	 * At least one group went from disabled to enabled. Now we need
+	 * to scan *all* IRQs of the VM for newly group-enabled IRQs.
+	 * If a pending IRQ's group is now enabled, add it to the ap_list.
+	 *
+	 * For each VCPU this needs to be atomic, as we need *all* newly
+	 * enabled IRQs to be in the ap_list to determine the highest
+	 * priority one.
+	 * So grab the ap_list_lock, then iterate over all private IRQs and
+	 * all SPIs. Once the ap_list is updated, kick that VCPU to
+	 * forward any new IRQs to the guest.
 	 */
+	kvm_for_each_vcpu(cpuid, vcpu, kvm) {
+		unsigned long flags;
+		int i;
 
-	/* For now just kick all VCPUs, as the old code did. */
-	vgic_kick_vcpus(kvm);
+		raw_spin_lock_irqsave(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
+
+		for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++)
+			queue_enabled_irq(kvm, vcpu, i);
+
+		for (i = VGIC_NR_PRIVATE_IRQS;
+		     i < kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS; i++)
+			queue_enabled_irq(kvm, vcpu, i);
+
+		raw_spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock,
+					   flags);
+
+		if (kvm_vgic_vcpu_pending_irq(vcpu)) {
+			kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
+			kvm_vcpu_kick(vcpu);
+		}
+	}
 }
 
 bool vgic_dist_group_enabled(struct kvm *kvm, int group)

From patchwork Fri Nov 8 17:49:52 2019
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 11235327
From: Andre Przywara
To: Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, Julien Thierry, Suzuki K Poulose
Subject: [PATCH 3/3] kvm: arm: VGIC: Enable proper Group0 handling
Date: Fri, 8 Nov 2019 17:49:52 +0000
Message-Id: <20191108174952.740-4-andre.przywara@arm.com>
In-Reply-To: <20191108174952.740-1-andre.przywara@arm.com>
References: <20191108174952.740-1-andre.przywara@arm.com>

Now that our VGIC emulation can properly deal with two separate
interrupt groups and their enable bits, we can allow a guest to control
the Group0 enable bit in the distributor.
When evaluating the state of an interrupt, we now take its individual
group enable bit into account, and drop the global "distributor
disabled" notion when checking for VCPUs with a pending interrupt.
This allows the guest to control both groups independently.

Signed-off-by: Andre Przywara
---
 virt/kvm/arm/vgic/vgic.c | 24 +-----------------------
 1 file changed, 1 insertion(+), 23 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 28d9ff282017..ed3a10b9ea93 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -203,8 +203,7 @@ void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active)
 
 static bool vgic_irq_is_grp_enabled(struct kvm *kvm, struct vgic_irq *irq)
 {
-	/* Placeholder implementation until we properly support Group0. */
-	return kvm->arch.vgic.groups_enable;
+	return vgic_dist_group_enabled(kvm, irq->group);
 }
 
 /**
@@ -320,15 +319,6 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
 	if (new_bit == was_enabled)
 		return 0;
 
-	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
-	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
-		if (group == GIC_GROUP0)
-			return 0;
-	} else {
-		if (group == GIC_GROUP1)
-			return 0;
-	}
-
 	dist->groups_enable &= ~group_mask;
 	dist->groups_enable |= new_bit;
 	if (new_bit > was_enabled)
@@ -435,15 +425,6 @@ bool vgic_dist_group_enabled(struct kvm *kvm, int group)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
 
-	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
-	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
-		if (group == GIC_GROUP0)
-			return false;
-	} else {
-		if (group == GIC_GROUP1)
-			return false;
-	}
-
 	return dist->groups_enable & (1U << group);
 }
 
@@ -1093,9 +1074,6 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
 	unsigned long flags;
 	struct vgic_vmcr vmcr;
 
-	if (!vcpu->kvm->arch.vgic.groups_enable)
-		return false;
-
 	if (vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last)
 		return true;