From patchwork Thu Nov 19 14:52:27 2015
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 7657821
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: andre.przywara@arm.com, linux-kernel@vger.kernel.org, patches@linaro.org
Subject: [PATCH] KVM: arm/arm64: vgic: leave the LR active state on GICD_ICENABLERn access
Date: Thu, 19 Nov 2015 14:52:27 +0000
Message-Id: <1447944747-17689-1-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1

Currently, on a clear-enable (GICD_ICENABLERn) MMIO access we retire the
corresponding LR whatever its state. More precisely, we do not sync the
ACTIVE state back to the distributor; we simply erase the LR state. In the
case of a forwarded IRQ, the physical IRQ source is also erased, meaning
the physical IRQ will never be deactivated. In the case of a non-forwarded
IRQ, the LR can be reused (since its state was reset) and the guest can
deactivate an IRQ that is no longer marked in that LR.

This patch adds a parameter to vgic_retire_lr that makes it possible to
select which LR state bits must be retired: the unqueue path retires/syncs
all LRs, while the disable path leaves active LRs in place.

Signed-off-by: Eric Auger
---
 virt/kvm/arm/vgic.c | 45 +++++++++++++++++++++++----------------------
 1 file changed, 23 insertions(+), 22 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 5335383..bc30d93 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -105,7 +105,7 @@
 #include "vgic.h"
 
 static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu);
-static void vgic_retire_lr(int lr_nr, struct kvm_vcpu *vcpu);
+static void vgic_retire_lr(int lr_nr, struct kvm_vcpu *vcpu, unsigned state);
 static struct vgic_lr vgic_get_lr(const struct kvm_vcpu *vcpu, int lr);
 static void vgic_set_lr(struct kvm_vcpu *vcpu, int lr, struct vgic_lr lr_desc);
 static u64 vgic_get_elrsr(struct kvm_vcpu *vcpu);
@@ -713,18 +713,10 @@ void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
 			add_sgi_source(vcpu, lr.irq, lr.source);
 
 		/*
-		 * If the LR holds an active (10) or a pending and active (11)
-		 * interrupt then move the active state to the
-		 * distributor tracking bit.
+		 * retire pending, active, active and pending LR's and
+		 * sync their state back to the distributor
 		 */
-		if (lr.state & LR_STATE_ACTIVE)
-			vgic_irq_set_active(vcpu, lr.irq);
-
-		/*
-		 * Reestablish the pending state on the distributor and the
-		 * CPU interface and mark the LR as free for other use.
-		 */
-		vgic_retire_lr(i, vcpu);
+		vgic_retire_lr(i, vcpu, LR_STATE_ACTIVE | LR_STATE_PENDING);
 
 		/* Finally update the VGIC state. */
 		vgic_update_state(vcpu->kvm);
@@ -1077,22 +1069,25 @@ static inline void vgic_enable(struct kvm_vcpu *vcpu)
 	vgic_ops->enable(vcpu);
 }
 
-static void vgic_retire_lr(int lr_nr, struct kvm_vcpu *vcpu)
+static void vgic_retire_lr(int lr_nr, struct kvm_vcpu *vcpu, unsigned state)
 {
 	struct vgic_lr vlr = vgic_get_lr(vcpu, lr_nr);
 
-	vgic_irq_clear_queued(vcpu, vlr.irq);
+	if (vlr.state & LR_STATE_ACTIVE & state) {
+		vgic_irq_set_active(vcpu, vlr.irq);
+		vlr.state &= ~LR_STATE_ACTIVE;
+	}
 
-	/*
-	 * We must transfer the pending state back to the distributor before
-	 * retiring the LR, otherwise we may loose edge-triggered interrupts.
-	 */
-	if (vlr.state & LR_STATE_PENDING) {
+	if (vlr.state & LR_STATE_PENDING & state) {
 		vgic_dist_irq_set_pending(vcpu, vlr.irq);
-		vlr.hwirq = 0;
+		vlr.state &= ~LR_STATE_PENDING;
 	}
 
-	vlr.state = 0;
+	if (!(vlr.state & LR_STATE_MASK)) {
+		vlr.hwirq = 0;
+		vlr.state = 0;
+		vgic_irq_clear_queued(vcpu, vlr.irq);
+	}
 	vgic_set_lr(vcpu, lr_nr, vlr);
 }
 
@@ -1114,8 +1109,14 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu)
 	for_each_clear_bit(lr, elrsr_ptr, vgic->nr_lr) {
 		struct vgic_lr vlr = vgic_get_lr(vcpu, lr);
 
+		/*
+		 * retire pending only LR's and sync their state
+		 * back to the distributor. Active LR's cannot be
+		 * retired since the guest will attempt to deactivate
+		 * the IRQ.
+		 */
 		if (!vgic_irq_is_enabled(vcpu, vlr.irq))
-			vgic_retire_lr(lr, vcpu);
+			vgic_retire_lr(lr, vcpu, LR_STATE_PENDING);
 	}
 }
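
For readers who want to experiment with the retire policy outside the
kernel, here is a minimal, self-contained sketch of the state-selective
retire logic described above. It is an illustration only, not part of the
patch: the sim_lr structure, its field names and sim_retire_lr are
made-up stand-ins for struct vgic_lr and the distributor bookkeeping
helpers.

/*
 * Illustration only -- simplified model of the state-selective
 * vgic_retire_lr() behaviour. sim_lr and its fields are hypothetical
 * stand-ins, not the kernel's types.
 */
#include <stdio.h>

#define LR_STATE_PENDING  (1 << 0)
#define LR_STATE_ACTIVE   (1 << 1)
#define LR_STATE_MASK     (LR_STATE_PENDING | LR_STATE_ACTIVE)

struct sim_lr {
	unsigned state;    /* pending/active bits held in the list register */
	int dist_pending;  /* distributor-side pending tracking bit         */
	int dist_active;   /* distributor-side active tracking bit          */
	int queued;        /* "already queued in an LR" flag                */
	int hwirq;         /* linked physical IRQ (forwarded case)          */
};

/*
 * Retire only the LR state bits selected by @state, syncing them back to
 * the (simulated) distributor. The hwirq link and the queued flag are
 * dropped only once the LR holds no state at all.
 */
static void sim_retire_lr(struct sim_lr *lr, unsigned state)
{
	if (lr->state & LR_STATE_ACTIVE & state) {
		lr->dist_active = 1;
		lr->state &= ~LR_STATE_ACTIVE;
	}

	if (lr->state & LR_STATE_PENDING & state) {
		lr->dist_pending = 1;
		lr->state &= ~LR_STATE_PENDING;
	}

	if (!(lr->state & LR_STATE_MASK)) {
		lr->hwirq = 0;
		lr->queued = 0;
	}
}

int main(void)
{
	/*
	 * Forwarded IRQ, pending and active, hit by a GICD_ICENABLERn write:
	 * only the pending state is retired; the active state and the
	 * physical IRQ link survive so the guest can still deactivate it.
	 */
	struct sim_lr lr = { LR_STATE_PENDING | LR_STATE_ACTIVE, 0, 0, 1, 27 };

	sim_retire_lr(&lr, LR_STATE_PENDING);
	printf("disable: state=%x hwirq=%d queued=%d\n",
	       lr.state, lr.hwirq, lr.queued);

	/* unqueue path: retire everything and sync it to the distributor. */
	sim_retire_lr(&lr, LR_STATE_ACTIVE | LR_STATE_PENDING);
	printf("unqueue: state=%x hwirq=%d queued=%d dist_active=%d\n",
	       lr.state, lr.hwirq, lr.queued, lr.dist_active);

	return 0;
}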