From patchwork Tue Mar 21 21:10:51 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9637611
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 02/10] KVM: arm/arm64: vgic: Avoid flushing vgic state when there's no pending IRQ
Date: Tue, 21 Mar 2017 22:10:51 +0100
Message-Id: <20170321211059.8719-3-cdall@linaro.org>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170321211059.8719-1-cdall@linaro.org>
References: <20170321211059.8719-1-cdall@linaro.org>
Cc: Christoffer Dall, kvm@vger.kernel.org, Marc Zyngier, Andre Przywara, Eric Auger, Shih-Wei Li

From: Shih-Wei Li

We do not need to flush the vgic state on every world switch unless there are pending IRQs queued on the vgic's AP list. We can thus reduce the overhead by not grabbing the spinlock and not making the extra function call to vgic_flush_lr_state.

Note: list_empty is a single atomic read (it uses READ_ONCE) and can therefore check whether a list is empty without taking the spinlock that protects the list.
Signed-off-by: Shih-Wei Li
Signed-off-by: Christoffer Dall
Reviewed-by: Marc Zyngier
---
Changes since v1:
 - Added comment in kvm_vgic_flush_hwstate

 virt/kvm/arm/vgic/vgic.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 2ac0def..1043291 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -637,12 +637,17 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 /* Sync back the hardware VGIC state into our emulation after a guest's run. */
 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+
 	if (unlikely(!vgic_initialized(vcpu->kvm)))
 		return;
 
 	vgic_process_maintenance_interrupt(vcpu);
 	vgic_fold_lr_state(vcpu);
 	vgic_prune_ap_list(vcpu);
+
+	/* Make sure we can fast-path in flush_hwstate */
+	vgic_cpu->used_lrs = 0;
 }
 
 /* Flush our emulation state into the GIC hardware before entering the guest. */
@@ -651,6 +656,18 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 	if (unlikely(!vgic_initialized(vcpu->kvm)))
 		return;
 
+	/*
+	 * If there are no virtual interrupts active or pending for this
+	 * VCPU, then there is no work to do and we can bail out without
+	 * taking any lock. There is a potential race with someone injecting
+	 * interrupts to the VCPU, but it is a benign race as the VCPU will
+	 * either observe the new interrupt before or after doing this check,
+	 * and introducing additional synchronization mechanism doesn't change
+	 * this.
+	 */
+	if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
+		return;
+
 	spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
 	vgic_flush_lr_state(vcpu);
 	spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);