From patchwork Mon Oct 1 09:14:38 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1530121
Subject: [PATCH v2 09/10] ARM: KVM: vgic: reduce the number of vcpu kicks
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
From: Christoffer Dall
Cc: Marc Zyngier
Date: Mon, 01 Oct 2012 05:14:38 -0400
Message-ID: <20121001091438.49503.13879.stgit@ubuntu>
In-Reply-To: <20121001091244.49503.96318.stgit@ubuntu>
References: <20121001091244.49503.96318.stgit@ubuntu>
User-Agent: StGit/0.15
X-Mailing-List: kvm@vger.kernel.org

From: Marc Zyngier

If a level interrupt is already programmed to fire on a vcpu, there is
no reason to kick it after injecting a new interrupt: we are guaranteed
to exit when that level interrupt is EOId (VGIC_LR_EOI is set), and the
exit forces a reload of the VGIC, which injects the new interrupts.
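
For illustration only (not part of the patch): a minimal userspace model
of the kick decision described above, built on C11 atomics. The types and
the should_kick()/active_irq() helpers are simplified stand-ins for their
KVM counterparts (vcpu->mode, cmpxchg(), vgic_active_irq()); only the
decision logic mirrors the patch.

/* Minimal userspace sketch -- NOT kernel code -- of the "skip the kick"
 * decision: if a level interrupt with EOI maintenance is already in
 * flight, the vcpu is guaranteed to exit anyway, so no IPI is needed.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum vcpu_mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE, EXITING_GUEST_MODE };

struct vcpu_model {
	_Atomic enum vcpu_mode mode;
	atomic_int irq_active_count;	/* level IRQs with VGIC_LR_EOI set */
};

/* Stand-in for vgic_active_irq(): a level IRQ is currently in flight. */
static bool active_irq(struct vcpu_model *v)
{
	return atomic_load(&v->irq_active_count) != 0;
}

/* Stand-in for the reworked kvm_arch_vcpu_should_kick(). */
static bool should_kick(struct vcpu_model *v)
{
	enum vcpu_mode in_guest = IN_GUEST_MODE;
	enum vcpu_mode exiting = EXITING_GUEST_MODE;

	/* kvm_vcpu_exiting_guest_mode(): IN_GUEST_MODE -> EXITING_GUEST_MODE */
	if (!atomic_compare_exchange_strong(&v->mode, &in_guest, EXITING_GUEST_MODE))
		return false;	/* vcpu was not running the guest: nothing to kick */

	/*
	 * A level IRQ with EOI maintenance is pending: the vcpu will exit
	 * on its own when that IRQ is EOId, so undo the mode change and
	 * skip the IPI.
	 */
	if (active_irq(v) &&
	    atomic_compare_exchange_strong(&v->mode, &exiting, IN_GUEST_MODE))
		return false;

	return true;
}

int main(void)
{
	struct vcpu_model v = { .mode = IN_GUEST_MODE, .irq_active_count = 1 };

	printf("level IRQ in flight -> kick? %d\n", should_kick(&v));

	atomic_store(&v.mode, IN_GUEST_MODE);
	atomic_store(&v.irq_active_count, 0);
	printf("no level IRQ        -> kick? %d\n", should_kick(&v));
	return 0;
}
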
Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_vgic.h |   10 ++++++++++
 arch/arm/kvm/arm.c              |   10 +++++++++-
 arch/arm/kvm/vgic.c             |   10 ++++++++--
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index c8327f3..588c637 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -214,6 +214,9 @@ struct vgic_cpu {
 	u32		vgic_elrsr[2];	/* Saved only */
 	u32		vgic_apr;
 	u32		vgic_lr[64];	/* A15 has only 4... */
+
+	/* Number of level-triggered interrupts in progress */
+	atomic_t	irq_active_count;
 #endif
 };
 
@@ -250,6 +253,8 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		      struct kvm_exit_mmio *mmio);
 
 #define irqchip_in_kernel(k)	(!!((k)->arch.vgic.vctrl_base))
+#define vgic_active_irq(v)	(atomic_read(&(v)->arch.vgic_cpu.irq_active_count) != 0)
+
 #else
 static inline int kvm_vgic_hyp_init(void)
 {
@@ -286,6 +291,11 @@ static inline int irqchip_in_kernel(struct kvm *kvm)
 {
 	return 0;
 }
+
+static inline int vgic_active_irq(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
 #endif
 
 #endif
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f88fd18..b03e604 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -94,7 +94,15 @@ int kvm_arch_hardware_enable(void *garbage)
 
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
+	if (kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE) {
+		if (vgic_active_irq(vcpu) &&
+		    cmpxchg(&vcpu->mode, EXITING_GUEST_MODE, IN_GUEST_MODE) == EXITING_GUEST_MODE)
+			return 0;
+
+		return 1;
+	}
+
+	return 0;
 }
 
 void kvm_arch_hardware_disable(void *garbage)
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index fc2a138..63fe0dd 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -674,8 +674,10 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 		kvm_debug("LR%d piggyback for IRQ%d %x\n",
 			  lr, irq, vgic_cpu->vgic_lr[lr]);
 		BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
 		vgic_cpu->vgic_lr[lr] |= VGIC_LR_PENDING_BIT;
-		if (is_level)
+		if (is_level) {
 			vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+			atomic_inc(&vgic_cpu->irq_active_count);
+		}
 		return true;
 	}
@@ -687,8 +689,10 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 
 	kvm_debug("LR%d allocated for IRQ%d %x\n", lr, irq, sgi_source_id);
 	vgic_cpu->vgic_lr[lr] = MK_LR_PEND(sgi_source_id, irq);
-	if (is_level)
+	if (is_level) {
 		vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+		atomic_inc(&vgic_cpu->irq_active_count);
+	}
 
 	vgic_cpu->vgic_irq_lr_map[irq] = lr;
 	clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
@@ -963,6 +967,8 @@ static irqreturn_t vgic_maintenance_handler(int irq, void *data)
 			vgic_bitmap_set_irq_val(&dist->irq_active,
 						vcpu->vcpu_id, irq, 0);
 
+			atomic_dec(&vgic_cpu->irq_active_count);
+			smp_mb();
 			vgic_cpu->vgic_lr[lr] &= ~VGIC_LR_EOI;
 			writel_relaxed(vgic_cpu->vgic_lr[lr],
 				       dist->vctrl_base + GICH_LR0 + (lr << 2));
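
To illustrate the bookkeeping side of the patch (again a standalone
sketch, not kernel code): irq_active_count is incremented whenever a
level interrupt is placed in a list register with EOI maintenance
requested, and decremented in the maintenance handler when that
interrupt is EOId. The MODEL_* bit values and helper names below are
invented for the example; only the inc/dec pairing mirrors the patch.

/* Standalone sketch of the irq_active_count lifecycle; the bit values
 * below are placeholders, not the real GICH_LR layout.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MODEL_LR_PENDING	(1u << 28)	/* placeholder for VGIC_LR_PENDING_BIT */
#define MODEL_LR_EOI		(1u << 19)	/* placeholder for VGIC_LR_EOI */

struct vgic_cpu_model {
	unsigned int lr[4];		/* "list registers" */
	atomic_int irq_active_count;	/* level IRQs awaiting EOI maintenance */
};

/* Mirrors the vgic_queue_irq() change: count level IRQs that set EOI. */
static void queue_irq(struct vgic_cpu_model *vc, int lr, int irq, bool is_level)
{
	vc->lr[lr] = MODEL_LR_PENDING | (unsigned int)irq;
	if (is_level) {
		vc->lr[lr] |= MODEL_LR_EOI;
		atomic_fetch_add(&vc->irq_active_count, 1);
	}
}

/* Mirrors the vgic_maintenance_handler() change: drop the count at EOI. */
static void eoi_maintenance(struct vgic_cpu_model *vc, int lr)
{
	atomic_fetch_sub(&vc->irq_active_count, 1);
	/* the patch pairs the decrement with smp_mb() before clearing EOI */
	atomic_thread_fence(memory_order_seq_cst);
	vc->lr[lr] &= ~MODEL_LR_EOI;
}

int main(void)
{
	struct vgic_cpu_model vc = { .irq_active_count = 0 };

	queue_irq(&vc, 0, 27, true);	/* level IRQ -> count becomes 1 */
	printf("after queue: %d active\n", atomic_load(&vc.irq_active_count));

	eoi_maintenance(&vc, 0);	/* EOI -> count back to 0, kicks resume */
	printf("after EOI:   %d active\n", atomic_load(&vc.irq_active_count));
	return 0;
}

While the count is non-zero, kvm_arch_vcpu_should_kick() can skip the
IPI, because the pending EOI maintenance interrupt already guarantees an
exit and a reload of the VGIC.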