From patchwork Tue Mar 21 21:10:51 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9637629
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Andre Przywara, Eric Auger,
	Shih-Wei Li, Christoffer Dall
Subject: [PATCH v2 02/10] KVM: arm/arm64: vgic: Avoid flushing vgic state when there's no pending IRQ
Date: Tue, 21 Mar 2017 22:10:51 +0100
Message-Id: <20170321211059.8719-3-cdall@linaro.org>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170321211059.8719-1-cdall@linaro.org>
References: <20170321211059.8719-1-cdall@linaro.org>

From: Shih-Wei Li

We do not need to flush the vgic state on each world switch unless
there is a pending IRQ queued on the vgic's ap list. We can thus
reduce the overhead by not grabbing the spinlock and not making the
extra function call to vgic_flush_lr_state.

Note: list_empty is a single atomic read (it uses READ_ONCE) and can
therefore check whether a list is empty without taking the spinlock
protecting the list.

Signed-off-by: Shih-Wei Li
Signed-off-by: Christoffer Dall
Reviewed-by: Marc Zyngier
---
Changes since v1:
 - Added comment in kvm_vgic_flush_hwstate

 virt/kvm/arm/vgic/vgic.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 2ac0def..1043291 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -637,12 +637,17 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 /* Sync back the hardware VGIC state into our emulation after a guest's run. */
 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+
 	if (unlikely(!vgic_initialized(vcpu->kvm)))
 		return;
 
 	vgic_process_maintenance_interrupt(vcpu);
 	vgic_fold_lr_state(vcpu);
 	vgic_prune_ap_list(vcpu);
+
+	/* Make sure we can fast-path in flush_hwstate */
+	vgic_cpu->used_lrs = 0;
 }
 
 /* Flush our emulation state into the GIC hardware before entering the guest. */
@@ -651,6 +656,18 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 	if (unlikely(!vgic_initialized(vcpu->kvm)))
 		return;
 
+	/*
+	 * If there are no virtual interrupts active or pending for this
+	 * VCPU, then there is no work to do and we can bail out without
+	 * taking any lock.  There is a potential race with someone injecting
+	 * interrupts to the VCPU, but it is a benign race as the VCPU will
+	 * either observe the new interrupt before or after doing this check,
+	 * and introducing an additional synchronization mechanism doesn't
+	 * change this.
+	 */
+	if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
+		return;
+
 	spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
 	vgic_flush_lr_state(vcpu);
 	spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
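
For reference, the "single atomic read" claim in the commit message
refers to the generic list_empty() helper in include/linux/list.h,
which in kernels of this vintage is essentially the following (a
sketch for illustration, not part of this patch):

	/* include/linux/list.h (simplified) */
	static inline int list_empty(const struct list_head *head)
	{
		/*
		 * READ_ONCE() forces a single, non-torn load of
		 * head->next, so this test is safe to perform without
		 * holding the lock that protects the list.
		 */
		return READ_ONCE(head->next) == head;
	}

This is why the fast path in kvm_vgic_flush_hwstate() can test
ap_list_head without taking ap_list_lock: a concurrent injection can
at worst make the check observe a momentarily stale result, which the
added comment argues is benign either way.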