From patchwork Mon Aug 13 14:57:52 2018
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 10564475
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Subject: [PATCH 34/37] KVM: arm/arm64: vgic: Do not use spin_lock_irqsave/restore with irq disabled
Date: Mon, 13 Aug 2018 15:57:52 +0100
Message-Id: <20180813145755.16566-35-marc.zyngier@arm.com>
In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com>
References: <20180813145755.16566-1-marc.zyngier@arm.com>
Cc: Mark Rutland, Andrew Jones, Kees Cook, kvm@vger.kernel.org,
 "Gustavo A. R. Silva", Andre Przywara, Punit Agrawal, Christoffer Dall,
 Dongjiu Geng, Jia He, Eric Auger, James Morse, Catalin Marinas,
 Suzuki Poulose, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org

From: Jia He

kvm_vgic_sync_hwstate is only called with IRQs disabled. There is thus
no need to use spin_lock_irqsave/restore in vgic_fold_lr_state and
vgic_prune_ap_list. This patch replaces them with the non-irq-safe
versions.
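To illustrate the pattern: spin_lock_irqsave() disables interrupts and
saves the previous IRQ state so it can be restored on unlock, which is
wasted work when the caller already runs with interrupts off. Below is
a minimal sketch of the before/after shapes, not taken from the patch
itself; the lock and function names (demo_lock, demo_sync_state_*) are
hypothetical, and WARN_ON_ONCE() stands in for the vgic-local
DEBUG_SPINLOCK_BUG_ON() assertion used in the diff:

#include <linux/spinlock.h>
#include <linux/irqflags.h>
#include <linux/bug.h>

static DEFINE_SPINLOCK(demo_lock);	/* hypothetical lock, for illustration */

/* Before: correct in any context, but saves/restores IRQ state each time. */
static void demo_sync_state_old(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	/* ... critical section ... */
	spin_unlock_irqrestore(&demo_lock, flags);
}

/*
 * After: the caller guarantees IRQs are already disabled, so assert that
 * precondition once and take the lock without touching the IRQ flags.
 */
static void demo_sync_state_new(void)
{
	WARN_ON_ONCE(!irqs_disabled());	/* stand-in for DEBUG_SPINLOCK_BUG_ON() */

	spin_lock(&demo_lock);
	/* ... critical section ... */
	spin_unlock(&demo_lock);
}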
Signed-off-by: Jia He
Acked-by: Christoffer Dall
[maz: commit message tidy-up]
Signed-off-by: Marc Zyngier
---
 virt/kvm/arm/vgic/vgic-v2.c |  7 ++++---
 virt/kvm/arm/vgic/vgic-v3.c |  7 ++++---
 virt/kvm/arm/vgic/vgic.c    | 13 +++++++------
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index df5e6a6e3186..69b892abd7dc 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -62,7 +62,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 	struct vgic_v2_cpu_if *cpuif = &vgic_cpu->vgic_v2;
 	int lr;
-	unsigned long flags;
+
+	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
 
 	cpuif->vgic_hcr &= ~GICH_HCR_UIE;
 
@@ -83,7 +84,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
 
 		irq = vgic_get_irq(vcpu->kvm, vcpu, intid);
 
-		spin_lock_irqsave(&irq->irq_lock, flags);
+		spin_lock(&irq->irq_lock);
 
 		/* Always preserve the active bit */
 		irq->active = !!(val & GICH_LR_ACTIVE_BIT);
@@ -126,7 +127,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
 			vgic_irq_set_phys_active(irq, false);
 		}
 
-		spin_unlock_irqrestore(&irq->irq_lock, flags);
+		spin_unlock(&irq->irq_lock);
 		vgic_put_irq(vcpu->kvm, irq);
 	}

diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index 530b8491c892..9c0dd234ebe8 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -46,7 +46,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
 	struct vgic_v3_cpu_if *cpuif = &vgic_cpu->vgic_v3;
 	u32 model = vcpu->kvm->arch.vgic.vgic_model;
 	int lr;
-	unsigned long flags;
+
+	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
 
 	cpuif->vgic_hcr &= ~ICH_HCR_UIE;
 
@@ -75,7 +76,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
 		if (!irq)	/* An LPI could have been unmapped. */
 			continue;
 
-		spin_lock_irqsave(&irq->irq_lock, flags);
+		spin_lock(&irq->irq_lock);
 
 		/* Always preserve the active bit */
 		irq->active = !!(val & ICH_LR_ACTIVE_BIT);
@@ -118,7 +119,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
 			vgic_irq_set_phys_active(irq, false);
 		}
 
-		spin_unlock_irqrestore(&irq->irq_lock, flags);
+		spin_unlock(&irq->irq_lock);
 		vgic_put_irq(vcpu->kvm, irq);
 	}

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index c22cea678a66..7cfdfbc910e0 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -593,10 +593,11 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 	struct vgic_irq *irq, *tmp;
-	unsigned long flags;
+
+	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
 
 retry:
-	spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
+	spin_lock(&vgic_cpu->ap_list_lock);
 
 	list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) {
 		struct kvm_vcpu *target_vcpu, *vcpuA, *vcpuB;
@@ -637,7 +638,7 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
 		/* This interrupt looks like it has to be migrated. */
 
 		spin_unlock(&irq->irq_lock);
-		spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+		spin_unlock(&vgic_cpu->ap_list_lock);
 
 		/*
 		 * Ensure locking order by always locking the smallest
@@ -651,7 +652,7 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
 			vcpuB = vcpu;
 		}
 
-		spin_lock_irqsave(&vcpuA->arch.vgic_cpu.ap_list_lock, flags);
+		spin_lock(&vcpuA->arch.vgic_cpu.ap_list_lock);
 		spin_lock_nested(&vcpuB->arch.vgic_cpu.ap_list_lock,
 				 SINGLE_DEPTH_NESTING);
 		spin_lock(&irq->irq_lock);
@@ -676,7 +677,7 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
 
 		spin_unlock(&irq->irq_lock);
 		spin_unlock(&vcpuB->arch.vgic_cpu.ap_list_lock);
-		spin_unlock_irqrestore(&vcpuA->arch.vgic_cpu.ap_list_lock, flags);
+		spin_unlock(&vcpuA->arch.vgic_cpu.ap_list_lock);
 
 		if (target_vcpu_needs_kick) {
 			kvm_make_request(KVM_REQ_IRQ_PENDING, target_vcpu);
@@ -686,7 +687,7 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
 		goto retry;
 	}
 
-	spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+	spin_unlock(&vgic_cpu->ap_list_lock);
 }
 
 static inline void vgic_fold_lr_state(struct kvm_vcpu *vcpu)
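For reference, DEBUG_SPINLOCK_BUG_ON() as used above is a vgic-local
helper, not a core kernel API. A sketch of its definition (based on
virt/kvm/arm/vgic/vgic.h of this era; verify against your tree) shows
that it compiles to nothing unless CONFIG_DEBUG_SPINLOCK is set, so the
!irqs_disabled() checks cost nothing in production builds:

/* Sketch of the vgic-local assertion helper: the check only exists
 * in lock-debugging builds and vanishes entirely otherwise.
 */
#ifdef CONFIG_DEBUG_SPINLOCK
#define DEBUG_SPINLOCK_BUG_ON(p) BUG_ON(p)
#else
#define DEBUG_SPINLOCK_BUG_ON(p)
#endif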