From patchwork Tue May 16 10:04:31 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9728805
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Eric Auger, Andrew Jones, kvm@vger.kernel.org,
 Christoffer Dall
Subject: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
Date: Tue, 16 May 2017 12:04:31 +0200
Message-Id: <20170516100431.4101-4-cdall@linaro.org>
In-Reply-To: <20170516100431.4101-1-cdall@linaro.org>
References: <20170516100431.4101-1-cdall@linaro.org>

We don't need to stop a specific VCPU when changing the active state,
because private IRQs can only be modified by the VCPU itself for itself,
or through userspace accesses, which already require all VCPUs to be
stopped.

However, it is also possible for two VCPUs to modify the active state of
SPIs at the same time, which can leave a thread stuck in the loop that
checks the other VCPU threads for a potentially very long time, or
modify the active state of a running VCPU. Fix this by serializing all
accesses that set or clear the active state of interrupts using the KVM
mutex.
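As an illustration of the pattern (a userspace sketch only, not the
kernel code: pthreads stand in for VCPU threads, a plain flag stands in
for the SPI active state, and halt_guest()/resume_guest() are empty
stubs), serializing the whole halt/modify/resume sequence under a single
mutex keeps two threads from racing through it:

/*
 * Userspace sketch of the race being plugged; not kernel code.
 * Two "VCPU" threads each run a halt/modify/resume sequence that is
 * only safe while no other thread is inside the same window.
 * Build with: cc -pthread race.c -o race
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t kvm_lock = PTHREAD_MUTEX_INITIALIZER;
static int active_state;	/* stand-in for an SPI's active state */

static void halt_guest(void)   { /* stand-in for kvm_arm_halt_guest() */ }
static void resume_guest(void) { /* stand-in for kvm_arm_resume_guest() */ }

static void *vcpu_thread(void *arg)
{
	(void)arg;

	/*
	 * Without kvm_lock, both threads could be inside the
	 * halt/modify/resume window at the same time -- the race this
	 * patch closes by taking kvm->lock around the sequence.
	 */
	pthread_mutex_lock(&kvm_lock);
	halt_guest();
	active_state ^= 1;	/* the actual state change */
	resume_guest();
	pthread_mutex_unlock(&kvm_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, vcpu_thread, NULL);
	pthread_create(&t2, NULL, vcpu_thread, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("final active_state: %d\n", active_state);
	return 0;
}

With the mutex held, the second thread cannot enter the window until the
first has fully resumed the guest, which is what taking kvm->lock in the
cactive/sactive handlers below achieves.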
Reported-by: Andrew Jones
Signed-off-by: Christoffer Dall
Reviewed-by: Marc Zyngier
---
 arch/arm/include/asm/kvm_host.h   |  2 --
 arch/arm64/include/asm/kvm_host.h |  2 --
 virt/kvm/arm/arm.c                | 20 ++++----------------
 virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
 virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
 5 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f0e6657..12274d4 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
-void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
-void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
 
 int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
 unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5e19165..32cbe8a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
-void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
-void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
 
 u64 __kvm_call_hyp(void *hypfn, ...);
 #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 3417e18..3c387fd 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
 	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
 }
 
-void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.pause = true;
-	kvm_vcpu_kick(vcpu);
-}
-
-void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
-{
-	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
-
-	vcpu->arch.pause = false;
-	swake_up(wq);
-}
-
 void kvm_arm_resume_guest(struct kvm *kvm)
 {
 	int i;
 	struct kvm_vcpu *vcpu;
 
-	kvm_for_each_vcpu(i, vcpu, kvm)
-		kvm_arm_resume_vcpu(vcpu);
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		vcpu->arch.pause = false;
+		swake_up(kvm_arch_vcpu_wq(vcpu));
+	}
 }
 
 static void vcpu_sleep(struct kvm_vcpu *vcpu)

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index 64cbcb4..c1e4bdd 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
  * be migrated while we don't hold the IRQ locks and we don't want to be
  * chasing moving targets.
  *
- * For private interrupts, we only have to make sure the single and only VCPU
- * that can potentially queue the IRQ is stopped.
+ * For private interrupts we don't have to do anything because userspace
+ * accesses to the VGIC state already require all VCPUs to be stopped, and
+ * only the VCPU itself can modify its private interrupts active state, which
+ * guarantees that the VCPU is not running.
  */
 static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
 {
-	if (intid < VGIC_NR_PRIVATE_IRQS)
-		kvm_arm_halt_vcpu(vcpu);
-	else
+	if (intid > VGIC_NR_PRIVATE_IRQS)
 		kvm_arm_halt_guest(vcpu->kvm);
 }
 
 /* See vgic_change_active_prepare */
 static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
 {
-	if (intid < VGIC_NR_PRIVATE_IRQS)
-		kvm_arm_resume_vcpu(vcpu);
-	else
+	if (intid > VGIC_NR_PRIVATE_IRQS)
 		kvm_arm_resume_guest(vcpu->kvm);
 }
 
@@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
 {
 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
 
+	mutex_lock(&vcpu->kvm->lock);
 	vgic_change_active_prepare(vcpu, intid);
 
 	__vgic_mmio_write_cactive(vcpu, addr, len, val);
 
 	vgic_change_active_finish(vcpu, intid);
+	mutex_unlock(&vcpu->kvm->lock);
 }
 
 void vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu,
@@ -305,11 +305,13 @@ void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
 {
 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
 
+	mutex_lock(&vcpu->kvm->lock);
 	vgic_change_active_prepare(vcpu, intid);
 
 	__vgic_mmio_write_sactive(vcpu, addr, len, val);
 
 	vgic_change_active_finish(vcpu, intid);
+	mutex_unlock(&vcpu->kvm->lock);
 }
 
 void vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu,

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 83b24d2..aea080a 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -35,11 +35,12 @@ struct vgic_global kvm_vgic_global_state __ro_after_init = {
 
 /*
  * Locking order is always:
- * its->cmd_lock (mutex)
- * its->its_lock (mutex)
- * vgic_cpu->ap_list_lock
- * kvm->lpi_list_lock
- * vgic_irq->irq_lock
+ * kvm->lock (mutex)
+ * its->cmd_lock (mutex)
+ * its->its_lock (mutex)
+ * vgic_cpu->ap_list_lock
+ * kvm->lpi_list_lock
+ * vgic_irq->irq_lock
  *
  * If you need to take multiple locks, always take the upper lock first,
  * then the lower ones, e.g. first take the its_lock, then the irq_lock.
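To make the new ordering concrete, here is a minimal userspace sketch
(pthread mutexes named after a subset of the locks in the comment above;
none of this is kernel API) of the documented discipline: take upper
locks first, release them in reverse order.

/*
 * Userspace sketch of the documented lock ordering; not kernel code.
 * kvm_lock is now the outermost lock, matching the updated comment
 * in vgic.c. Build with: cc -pthread order.c -o order
 */
#include <pthread.h>

static pthread_mutex_t kvm_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t its_cmd_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t irq_lock     = PTHREAD_MUTEX_INITIALIZER;

int main(void)
{
	/* Take the upper lock first, then the lower ones... */
	pthread_mutex_lock(&kvm_lock);
	pthread_mutex_lock(&its_cmd_lock);
	pthread_mutex_lock(&irq_lock);

	/* ...do the work that needs all three... */

	/* ...and release them in the reverse order. */
	pthread_mutex_unlock(&irq_lock);
	pthread_mutex_unlock(&its_cmd_lock);
	pthread_mutex_unlock(&kvm_lock);
	return 0;
}

As long as every thread acquires the locks in this top-down order, no
two threads can each hold a lock the other is waiting for, which is why
the ordering comment matters when kvm->lock gains new users.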