From patchwork Thu Aug 20 16:28:58 2015
From: Marc Zyngier
To: Paolo Bonzini, Gleb Natapov
Cc: Christoffer Dall, Alex Bennée, Mario Smarduch, "Suzuki K. Poulose",
	Vladimir Murzin, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 20/25] KVM: arm/arm64: vgic: Allow HW interrupts to be queued to a guest
Date: Thu, 20 Aug 2015 17:28:58 +0100
Message-Id: <1440088143-4722-21-git-send-email-marc.zyngier@arm.com>
In-Reply-To: <1440088143-4722-1-git-send-email-marc.zyngier@arm.com>
References: <1440088143-4722-1-git-send-email-marc.zyngier@arm.com>

To allow a HW interrupt to be injected into a guest, we look up the
guest virtual interrupt in the irq_phys_map list and, if we have a
match, encode both interrupts in the LR. We also mark the interrupt as
"active" at the host distributor level.

When the guest EOIs the virtual interrupt, the host interrupt is
deactivated.

Reviewed-by: Christoffer Dall
Signed-off-by: Marc Zyngier
---
 virt/kvm/arm/vgic.c | 89 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 86 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 51c9900..9d009d2 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1140,6 +1140,39 @@ static void vgic_queue_irq_to_lr(struct kvm_vcpu *vcpu, int irq,
 	if (!vgic_irq_is_edge(vcpu, irq))
 		vlr.state |= LR_EOI_INT;
 
+	if (vlr.irq >= VGIC_NR_SGIS) {
+		struct irq_phys_map *map;
+		map = vgic_irq_map_search(vcpu, irq);
+
+		/*
+		 * If we have a mapping, and the virtual interrupt is
+		 * being injected, then we must set the state to
+		 * active in the physical world. Otherwise the
+		 * physical interrupt will fire and the guest will
+		 * exit before processing the virtual interrupt.
+		 */
+		if (map) {
+			int ret;
+
+			BUG_ON(!map->active);
+			vlr.hwirq = map->phys_irq;
+			vlr.state |= LR_HW;
+			vlr.state &= ~LR_EOI_INT;
+
+			ret = irq_set_irqchip_state(map->irq,
+						    IRQCHIP_STATE_ACTIVE,
+						    true);
+			WARN_ON(ret);
+
+			/*
+			 * Make sure we're not going to sample this
+			 * again, as a HW-backed interrupt cannot be
+			 * in the PENDING_ACTIVE stage.
+			 */
+			vgic_irq_set_queued(vcpu, irq);
+		}
+	}
+
 	vgic_set_lr(vcpu, lr_nr, vlr);
 	vgic_sync_lr_elrsr(vcpu, lr_nr, vlr);
 }
@@ -1364,6 +1397,39 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 	return level_pending;
 }
 
+/*
+ * Save the physical active state, and reset it to inactive.
+ *
+ * Return 1 if HW interrupt went from active to inactive, and 0 otherwise.
+ */
+static int vgic_sync_hwirq(struct kvm_vcpu *vcpu, struct vgic_lr vlr)
+{
+	struct irq_phys_map *map;
+	int ret;
+
+	if (!(vlr.state & LR_HW))
+		return 0;
+
+	map = vgic_irq_map_search(vcpu, vlr.irq);
+	BUG_ON(!map || !map->active);
+
+	ret = irq_get_irqchip_state(map->irq,
+				    IRQCHIP_STATE_ACTIVE,
+				    &map->active);
+
+	WARN_ON(ret);
+
+	if (map->active) {
+		ret = irq_set_irqchip_state(map->irq,
+					    IRQCHIP_STATE_ACTIVE,
+					    false);
+		WARN_ON(ret);
+		return 0;
+	}
+
+	return 1;
+}
+
 /* Sync back the VGIC state after a guest run */
 static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
@@ -1378,14 +1444,31 @@ static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 	elrsr = vgic_get_elrsr(vcpu);
 	elrsr_ptr = u64_to_bitmask(&elrsr);
 
-	/* Clear mappings for empty LRs */
-	for_each_set_bit(lr, elrsr_ptr, vgic->nr_lr) {
+	/* Deal with HW interrupts, and clear mappings for empty LRs */
+	for (lr = 0; lr < vgic->nr_lr; lr++) {
 		struct vgic_lr vlr;
 
-		if (!test_and_clear_bit(lr, vgic_cpu->lr_used))
+		if (!test_bit(lr, vgic_cpu->lr_used))
 			continue;
 
 		vlr = vgic_get_lr(vcpu, lr);
+		if (vgic_sync_hwirq(vcpu, vlr)) {
+			/*
+			 * So this is a HW interrupt that the guest
+			 * EOI-ed. Clean the LR state and allow the
+			 * interrupt to be sampled again.
+			 */
+			vlr.state = 0;
+			vlr.hwirq = 0;
+			vgic_set_lr(vcpu, lr, vlr);
+			vgic_irq_clear_queued(vcpu, vlr.irq);
+			set_bit(lr, elrsr_ptr);
+		}
+
+		if (!test_bit(lr, elrsr_ptr))
+			continue;
+
+		clear_bit(lr, vgic_cpu->lr_used);
 
 		BUG_ON(vlr.irq >= dist->nr_irqs);
 		vgic_cpu->vgic_irq_lr_map[vlr.irq] = LR_EMPTY;
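
For context (not part of this patch): vgic_irq_map_search() above only finds a
mapping that the owner of the physical interrupt has registered beforehand; in
this series that owner is the arch timer. Below is a rough sketch of what such
a caller might look like, assuming the kvm_vgic_map_phys_irq() and
kvm_vgic_unmap_phys_irq() helpers introduced earlier in the series; the
function names here are illustrative only.

#include <linux/kvm_host.h>
#include <kvm/arm_vgic.h>

/*
 * Illustrative sketch: associate the host interrupt 'phys_irq' with the
 * guest interrupt 'virt_irq' on this vcpu. Once the mapping exists,
 * vgic_queue_irq_to_lr() will find it, set LR_HW in the list register
 * and mark 'phys_irq' active at the host distributor; the guest's EOI
 * of 'virt_irq' then deactivates the host interrupt, which
 * vgic_sync_hwirq() picks up on the next exit.
 */
static struct irq_phys_map *example_map_hw_irq(struct kvm_vcpu *vcpu,
					       int virt_irq, int phys_irq)
{
	struct irq_phys_map *map;

	map = kvm_vgic_map_phys_irq(vcpu, virt_irq, phys_irq);
	if (WARN_ON(!map))
		return NULL;

	return map;
}

/* Illustrative teardown: drop the mapping when the device goes away. */
static void example_unmap_hw_irq(struct kvm_vcpu *vcpu,
				 struct irq_phys_map *map)
{
	WARN_ON(kvm_vgic_unmap_phys_irq(vcpu, map));
}

How map->active is driven (the BUG_ON(!map->active) in this patch assumes the
caller keeps it up to date) belongs to other patches in the series and is not
shown here.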