From patchwork Wed Mar 4 10:14:35 2015
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 5935121
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: andre.przywara@arm.com, linux-kernel@vger.kernel.org, patches@linaro.org,
	gleb@kernel.org, pbonzini@redhat.com
Subject: [PATCH v9 4/5] KVM: arm/arm64: remove coarse grain dist locking at
	kvm_vgic_sync_hwstate
Date: Wed, 4 Mar 2015 11:14:35 +0100
Message-Id: <1425464076-20558-5-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1425464076-20558-1-git-send-email-eric.auger@linaro.org>
References: <1425464076-20558-1-git-send-email-eric.auger@linaro.org>

To prepare for the irqfd addition, the coarse-grained locking at the
kvm_vgic_sync_hwstate level is removed and finer-grained locking is
introduced in vgic_process_maintenance only.
Signed-off-by: Eric Auger
Acked-by: Christoffer Dall
---
 virt/kvm/arm/vgic.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 0cc6ab6..4e9b6d3 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1081,6 +1081,7 @@ epilog:
 static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 {
 	u32 status = vgic_get_interrupt_status(vcpu);
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	bool level_pending = false;
 
 	kvm_debug("STATUS = %08x\n", status);
@@ -1098,6 +1099,7 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 			struct vgic_lr vlr = vgic_get_lr(vcpu, lr);
 			WARN_ON(vgic_irq_is_edge(vcpu, vlr.irq));
 
+			spin_lock(&dist->lock);
 			vgic_irq_clear_queued(vcpu, vlr.irq);
 			WARN_ON(vlr.state & LR_STATE_MASK);
 			vlr.state = 0;
@@ -1125,6 +1127,8 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 				vgic_cpu_irq_clear(vcpu, vlr.irq);
 			}
 
+			spin_unlock(&dist->lock);
+
 			/*
 			 * Despite being EOIed, the LR may not have
 			 * been marked as empty.
@@ -1139,10 +1143,7 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 	return level_pending;
 }
 
-/*
- * Sync back the VGIC state after a guest run. The distributor lock is
- * needed so we don't get preempted in the middle of the state processing.
- */
+/* Sync back the VGIC state after a guest run */
 static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
@@ -1189,14 +1190,10 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 
 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
-	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
-
 	if (!irqchip_in_kernel(vcpu->kvm))
 		return;
 
-	spin_lock(&dist->lock);
 	__kvm_vgic_sync_hwstate(vcpu);
-	spin_unlock(&dist->lock);
 }
 
 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
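
As a rough illustration of the locking change, here is a minimal user-space
sketch: a pthread mutex stands in for dist->lock and hypothetical stub
functions (process_maintenance_eoi, fold_lr_state) stand in for the real
vgic helpers, so it only models how the lock scope shrinks from the whole
sync path to the EOI maintenance processing; it is not the actual KVM code.

/*
 * Illustrative model only: a pthread mutex and stub functions stand in for
 * the kernel's spinlock_t and the real vgic helpers.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t dist_lock = PTHREAD_MUTEX_INITIALIZER; /* models dist->lock */

/* Stands in for the EOI handling done in vgic_process_maintenance(). */
static void process_maintenance_eoi(int irq)
{
	printf("clear queued state for irq %d (needs dist lock)\n", irq);
}

/* Stands in for the rest of __kvm_vgic_sync_hwstate(). */
static void fold_lr_state(void)
{
	printf("fold LR state back for the next guest run\n");
}

/* Before the patch: the whole sync path ran under the distributor lock. */
static void sync_hwstate_coarse(void)
{
	pthread_mutex_lock(&dist_lock);
	process_maintenance_eoi(42);
	fold_lr_state();
	pthread_mutex_unlock(&dist_lock);
}

/* After the patch: only the maintenance/EOI processing takes the lock. */
static void sync_hwstate_fine(void)
{
	pthread_mutex_lock(&dist_lock);
	process_maintenance_eoi(42);
	pthread_mutex_unlock(&dist_lock);

	fold_lr_state();
}

int main(void)
{
	sync_hwstate_coarse();
	sync_hwstate_fine();
	return 0;
}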