From patchwork Tue Mar 21 21:10:59 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9637637
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Andre Przywara, Christoffer Dall, kvm@vger.kernel.org,
	Eric Auger
Subject: [PATCH v2 10/10] KVM: arm/arm64: vgic: Improve sync_hwstate performance
Date: Tue, 21 Mar 2017 22:10:59 +0100
Message-Id: <20170321211059.8719-11-cdall@linaro.org>
In-Reply-To: <20170321211059.8719-1-cdall@linaro.org>
References: <20170321211059.8719-1-cdall@linaro.org>

There is no need to call any functions to fold LRs when we don't use any
LRs, and we don't need to mess with overflow flags, take spinlocks, or
prune the AP list if the AP list is empty.

Note: list_empty() is a single atomic read (it uses READ_ONCE) and can
therefore check whether a list is empty without taking the spinlock that
protects the list.
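For reference, list_empty() boils down to a single READ_ONCE() load of the
list head's next pointer; the sketch below is paraphrased from
include/linux/list.h and is not part of this patch:

	/*
	 * Sketch: an empty list head points back at itself, so one
	 * atomic read of head->next is enough to test emptiness, with
	 * no need to hold the lock that protects the list.
	 */
	static inline int list_empty(const struct list_head *head)
	{
		return READ_ONCE(head->next) == head;
	}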
Signed-off-by: Christoffer Dall
Reviewed-by: Marc Zyngier
---
Changes since v1:
 - Moved clearing used_lrs into fold_lr_state
 - Moved hunk that removes check for vgic_initialized into previous patch

 virt/kvm/arm/vgic/vgic-v2.c |  7 +++++--
 virt/kvm/arm/vgic/vgic-v3.c |  7 +++++--
 virt/kvm/arm/vgic/vgic.c    | 10 ++++++----
 3 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index f9c4329..a1fc07a 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -59,12 +59,13 @@ static bool lr_signals_eoi_mi(u32 lr_val)
  */
 void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
 {
-	struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	struct vgic_v2_cpu_if *cpuif = &vgic_cpu->vgic_v2;
 	int lr;
 
 	cpuif->vgic_hcr &= ~GICH_HCR_UIE;
 
-	for (lr = 0; lr < vcpu->arch.vgic_cpu.used_lrs; lr++) {
+	for (lr = 0; lr < vgic_cpu->used_lrs; lr++) {
 		u32 val = cpuif->vgic_lr[lr];
 		u32 intid = val & GICH_LR_VIRTUALID;
 		struct vgic_irq *irq;
@@ -106,6 +107,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
 		spin_unlock(&irq->irq_lock);
 		vgic_put_irq(vcpu->kvm, irq);
 	}
+
+	vgic_cpu->used_lrs = 0;
 }
 
 /*
diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index c7ac9b1..2bcee2e 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -36,13 +36,14 @@ static bool lr_signals_eoi_mi(u64 lr_val)
 
 void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
 {
-	struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	struct vgic_v3_cpu_if *cpuif = &vgic_cpu->vgic_v3;
 	u32 model = vcpu->kvm->arch.vgic.vgic_model;
 	int lr;
 
 	cpuif->vgic_hcr &= ~ICH_HCR_UIE;
 
-	for (lr = 0; lr < vcpu->arch.vgic_cpu.used_lrs; lr++) {
+	for (lr = 0; lr < vgic_cpu->used_lrs; lr++) {
 		u64 val = cpuif->vgic_lr[lr];
 		u32 intid;
 		struct vgic_irq *irq;
@@ -92,6 +93,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
 		spin_unlock(&irq->irq_lock);
 		vgic_put_irq(vcpu->kvm, irq);
 	}
+
+	vgic_cpu->used_lrs = 0;
 }
 
 /* Requires the irq to be locked already */
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 04a405a..3d0979c 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -633,11 +633,13 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 
-	vgic_fold_lr_state(vcpu);
-	vgic_prune_ap_list(vcpu);
+	/* An empty ap_list_head implies used_lrs == 0 */
+	if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
+		return;
 
-	/* Make sure we can fast-path in flush_hwstate */
-	vgic_cpu->used_lrs = 0;
+	if (vgic_cpu->used_lrs)
+		vgic_fold_lr_state(vcpu);
+	vgic_prune_ap_list(vcpu);
 }
 
 /* Flush our emulation state into the GIC hardware before entering the guest. */