From patchwork Fri Mar 15 12:06:09 2019
X-Patchwork-Submitter: Cédric Le Goater <clg@kaod.org>
X-Patchwork-Id: 10854653
From: Cédric Le Goater <clg@kaod.org>
To: kvm-ppc@vger.kernel.org
Cc: Paul Mackerras, David Gibson, kvm@vger.kernel.org,
    Michael Ellerman, linuxppc-dev@lists.ozlabs.org,
    Cédric Le Goater
Subject: [PATCH v3 17/17] KVM: PPC: Book3S HV: XIVE: clear the vCPU interrupt presenters
Date: Fri, 15 Mar 2019 13:06:09 +0100
Message-Id: <20190315120609.25910-18-clg@kaod.org>
In-Reply-To: <20190315120609.25910-1-clg@kaod.org>
References: <20190315120609.25910-1-clg@kaod.org>

When the VM boots, the CAS negotiation process determines which
interrupt mode to use and invokes a machine reset. At that point, the
previous KVM interrupt device is 'destroyed' before the chosen one is
created. Upon destruction, the vCPU interrupt presenters using the KVM
device should be cleared first; the machine will reconnect them to the
new device after it is created.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson
---

Changes since v2:

 - removed comments on a possible race in kvmppc_native_connect_vcpu()
   for the XIVE KVM device. This is still an issue in the
   XICS-over-XIVE device.
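
As a reading aid only (not part of the patch): the three device 'free'
handlers changed below all follow the same shape. A minimal sketch,
where example_device_free() and example_cleanup_vcpu() are placeholder
names standing in for the per-device variants in the hunks that follow:

static void example_device_free(struct kvm_device *dev)
{
	struct kvmppc_xive *xive = dev->private;
	struct kvm *kvm = xive->kvm;
	struct kvm_vcpu *vcpu;
	int i;

	/*
	 * vCPUs still online means the device is being destroyed on a
	 * machine reset (CAS renegotiation), not on VM teardown, so
	 * the vCPU interrupt presenters must be detached first.
	 */
	if (atomic_read(&kvm->online_vcpus) != 0) {
		/* let any in-flight interrupt delivery finish (HV only) */
		if (is_kvmppc_hv_enabled(kvm))
			kick_all_cpus_sync();

		kvm_for_each_vcpu(i, vcpu, kvm)
			example_cleanup_vcpu(vcpu); /* per-device variant */
	}

	/* ... the usual device teardown (debugfs_remove(), etc.) ... */
}

The XICS and XIVE handlers kick all CPUs under HV before detaching; the
XIVE native handler detaches directly, as its hunk below shows.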
 arch/powerpc/kvm/book3s_xics.c        | 19 +++++++++++++
 arch/powerpc/kvm/book3s_xive.c        | 39 +++++++++++++++++++++++++--
 arch/powerpc/kvm/book3s_xive_native.c | 12 +++++++++
 3 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_xics.c b/arch/powerpc/kvm/book3s_xics.c
index f27ee57ab46e..81cdabf4295f 100644
--- a/arch/powerpc/kvm/book3s_xics.c
+++ b/arch/powerpc/kvm/book3s_xics.c
@@ -1342,6 +1342,25 @@ static void kvmppc_xics_free(struct kvm_device *dev)
 	struct kvmppc_xics *xics = dev->private;
 	int i;
 	struct kvm *kvm = xics->kvm;
+	struct kvm_vcpu *vcpu;
+
+	/*
+	 * When destroying the VM, the vCPUs are destroyed first and
+	 * the vCPU list should be empty. If this is not the case,
+	 * then we are simply destroying the device and we should
+	 * clean up the vCPU interrupt presenters first.
+	 */
+	if (atomic_read(&kvm->online_vcpus) != 0) {
+		/*
+		 * call kick_all_cpus_sync() to ensure that all CPUs
+		 * have executed any pending interrupts
+		 */
+		if (is_kvmppc_hv_enabled(kvm))
+			kick_all_cpus_sync();
+
+		kvm_for_each_vcpu(i, vcpu, kvm)
+			kvmppc_xics_free_icp(vcpu);
+	}
 
 	debugfs_remove(xics->dentry);
 
diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
index 480a3fc6b9fd..cf6a4c6c5a28 100644
--- a/arch/powerpc/kvm/book3s_xive.c
+++ b/arch/powerpc/kvm/book3s_xive.c
@@ -1100,11 +1100,19 @@ void kvmppc_xive_disable_vcpu_interrupts(struct kvm_vcpu *vcpu)
 void kvmppc_xive_cleanup_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu;
-	struct kvmppc_xive *xive = xc->xive;
+	struct kvmppc_xive *xive;
 	int i;
 
+	if (!kvmppc_xics_enabled(vcpu))
+		return;
+
+	if (!xc)
+		return;
+
 	pr_devel("cleanup_vcpu(cpu=%d)\n", xc->server_num);
 
+	xive = xc->xive;
+
 	/* Ensure no interrupt is still routed to that VP */
 	xc->valid = false;
 	kvmppc_xive_disable_vcpu_interrupts(vcpu);
@@ -1141,6 +1149,10 @@ void kvmppc_xive_cleanup_vcpu(struct kvm_vcpu *vcpu)
 	}
 	/* Free the VP */
 	kfree(xc);
+
+	/* Cleanup the vcpu */
+	vcpu->arch.irq_type = KVMPPC_IRQ_DEFAULT;
+	vcpu->arch.xive_vcpu = NULL;
 }
 
 int kvmppc_xive_connect_vcpu(struct kvm_device *dev,
@@ -1158,7 +1170,7 @@ int kvmppc_xive_connect_vcpu(struct kvm_device *dev,
 	}
 	if (xive->kvm != vcpu->kvm)
 		return -EPERM;
-	if (vcpu->arch.irq_type)
+	if (vcpu->arch.irq_type != KVMPPC_IRQ_DEFAULT)
 		return -EBUSY;
 	if (kvmppc_xive_find_server(vcpu->kvm, cpu)) {
 		pr_devel("Duplicate !\n");
@@ -1828,8 +1840,31 @@ static void kvmppc_xive_free(struct kvm_device *dev)
 {
 	struct kvmppc_xive *xive = dev->private;
 	struct kvm *kvm = xive->kvm;
+	struct kvm_vcpu *vcpu;
 	int i;
 
+	/*
+	 * When destroying the VM, the vCPUs are destroyed first and
+	 * the vCPU list should be empty. If this is not the case,
+	 * then we are simply destroying the device and we should
+	 * clean up the vCPU interrupt presenters first.
+	 */
+	if (atomic_read(&kvm->online_vcpus) != 0) {
+		/*
+		 * call kick_all_cpus_sync() to ensure that all CPUs
+		 * have executed any pending interrupts
+		 */
+		if (is_kvmppc_hv_enabled(kvm))
+			kick_all_cpus_sync();
+
+		/*
+		 * TODO: There is still a race window with the early
+		 * checks in kvmppc_native_connect_vcpu()
+		 */
+		kvm_for_each_vcpu(i, vcpu, kvm)
+			kvmppc_xive_cleanup_vcpu(vcpu);
+	}
+
 	debugfs_remove(xive->dentry);
 
 	if (kvm)
diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
index 67a1bb26a4cc..8f7be5e23177 100644
--- a/arch/powerpc/kvm/book3s_xive_native.c
+++ b/arch/powerpc/kvm/book3s_xive_native.c
@@ -956,8 +956,20 @@ static void kvmppc_xive_native_free(struct kvm_device *dev)
 {
 	struct kvmppc_xive *xive = dev->private;
 	struct kvm *kvm = xive->kvm;
+	struct kvm_vcpu *vcpu;
 	int i;
 
+	/*
+	 * When destroying the VM, the vCPUs are destroyed first and
+	 * the vCPU list should be empty. If this is not the case,
+	 * then we are simply destroying the device and we should
+	 * clean up the vCPU interrupt presenters first.
+	 */
+	if (atomic_read(&kvm->online_vcpus) != 0) {
+		kvm_for_each_vcpu(i, vcpu, kvm)
+			kvmppc_xive_native_cleanup_vcpu(vcpu);
+	}
+
 	debugfs_remove(xive->dentry);
 
 	pr_devel("Destroying xive native device\n");