From patchwork Thu Oct 8 12:04:12 2015
X-Patchwork-Submitter: Pavel Fedin
X-Patchwork-Id: 7352261
From: Pavel Fedin
To: 'Andre Przywara', 'Marc Zyngier', 'Christoffer Dall'
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
References: <20151008101408.GA20936@cbox> <56164BC4.80104@arm.com> <5616503A.7060504@arm.com>
In-reply-to: <5616503A.7060504@arm.com>
Subject: RE: [PATCH 0/2] KVM: arm/arm64: Clean up some obsolete code
Date: Thu, 08 Oct 2015 15:04:12 +0300
Message-id: <008c01d101c1$735a03d0$5a0e0b70$@samsung.com>

Hello!

> Yes, I am looking at merging this. From the discussion with Pavel I
> remember some things that I disagreed with, so I may propose a follow-up
> patch. I will give this a try tomorrow.

I am replying to this thread because it concerns the whole changeset.
After the merge, some pieces are still missing for my live migration work
in progress (see patch below). Together with this, vITS v3 backported to
v4.2.2 and...

Tested-by: Pavel Fedin
---

From b08e9ba1da69f9cf5731c89a4ff39561cd16e6ea Mon Sep 17 00:00:00 2001
From: Pavel Fedin
Date: Thu, 8 Oct 2015 14:43:23 +0300
Subject: [PATCH] Missing vITS pieces for live migration and safety

1. Split up vits_init() and perform allocations on PENDBASER access.
   This fixes a crash when LPI registers are set from userspace before
   any vCPU has been run; the bug is triggered by the reset routine in
   qemu.
2. Properly handle LPIs in vgic_unqueue_irqs(). Memory is no longer
   corrupted if an LPI happens to end up there.

Signed-off-by: Pavel Fedin
---
 virt/kvm/arm/its-emul.c     | 26 +++++++++++++++++++-------
 virt/kvm/arm/its-emul.h     |  1 +
 virt/kvm/arm/vgic-v3-emul.c | 11 ++++++++++-
 virt/kvm/arm/vgic.c         | 15 +++++++++++----
 4 files changed, 41 insertions(+), 12 deletions(-)

diff --git a/virt/kvm/arm/its-emul.c b/virt/kvm/arm/its-emul.c
index b40a7fc..b1d61df 100644
--- a/virt/kvm/arm/its-emul.c
+++ b/virt/kvm/arm/its-emul.c
@@ -1117,7 +1117,9 @@ int vits_init(struct kvm *kvm)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
 	struct vgic_its *its = &dist->its;
-	int ret;
+
+	if (dist->pendbaser)
+		return 0;
 
 	dist->pendbaser = kcalloc(dist->nr_cpus, sizeof(u64), GFP_KERNEL);
 	if (!dist->pendbaser)
@@ -1132,18 +1134,27 @@ int vits_init(struct kvm *kvm)
 	INIT_LIST_HEAD(&its->device_list);
 	INIT_LIST_HEAD(&its->collection_list);
 
-	ret = vgic_register_kvm_io_dev(kvm, dist->vgic_its_base,
-				       KVM_VGIC_V3_ITS_SIZE, vgicv3_its_ranges,
-				       -1, &its->iodev);
-	if (ret)
-		return ret;
-
 	its->enabled = false;
 	dist->msis_require_devid = true;
 
 	return 0;
 }
 
+int vits_map_resources(struct kvm *kvm)
+{
+	struct vgic_dist *dist = &kvm->arch.vgic;
+	struct vgic_its *its = &dist->its;
+	int ret;
+
+	ret = vits_init(kvm);
+	if (ret)
+		return ret;
+
+	return vgic_register_kvm_io_dev(kvm, dist->vgic_its_base,
+					KVM_VGIC_V3_ITS_SIZE, vgicv3_its_ranges,
+					-1, &its->iodev);
+}
+
 void vits_destroy(struct kvm *kvm)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
@@ -1182,6 +1193,7 @@ void vits_destroy(struct kvm *kvm)
 	kfree(its->buffer_page);
 	kfree(dist->pendbaser);
+	dist->pendbaser = NULL;
 
 	its->enabled = false;
 	spin_unlock(&its->lock);
 }
diff --git a/virt/kvm/arm/its-emul.h b/virt/kvm/arm/its-emul.h
index 95e56a7..236f153 100644
--- a/virt/kvm/arm/its-emul.h
+++ b/virt/kvm/arm/its-emul.h
@@ -34,6 +34,7 @@ void vgic_enable_lpis(struct kvm_vcpu *vcpu);
 
 int vits_init(struct kvm *kvm);
+int vits_map_resources(struct kvm *kvm);
 void vits_destroy(struct kvm *kvm);
 
 int vits_inject_msi(struct kvm *kvm, struct kvm_msi *msi);
diff --git a/virt/kvm/arm/vgic-v3-emul.c b/virt/kvm/arm/vgic-v3-emul.c
index 3e262f3..b340202 100644
--- a/virt/kvm/arm/vgic-v3-emul.c
+++ b/virt/kvm/arm/vgic-v3-emul.c
@@ -693,6 +693,15 @@ static bool handle_mmio_pendbaser_redist(struct kvm_vcpu *vcpu,
 	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	int mode = ACCESS_READ_VALUE;
 
+	/*
+	 * This makes sure that dist->pendbaser is allocated.
+	 * We don't use any locks here because the actual initialization will
+	 * happen either during register access from userspace, or upon first
+	 * run. Both situations are already single-threaded.
+	 */
+	if (vits_init(vcpu->kvm))
+		return false;
+
 	/* Storing a value with LPIs already enabled is undefined */
 	mode |= dist->lpis_enabled ? ACCESS_WRITE_IGNORED :
 				     ACCESS_WRITE_VALUE;
 	vgic_reg64_access(mmio, offset,
@@ -880,7 +889,7 @@ static int vgic_v3_map_resources(struct kvm *kvm,
 	}
 
 	if (vgic_has_its(kvm)) {
-		ret = vits_init(kvm);
+		ret = vits_map_resources(kvm);
 		if (ret)
 			goto out_unregister;
 	}
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index b32baa0..8dbbb1a 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -711,6 +711,7 @@ bool vgic_handle_cfg_reg(u32 *reg, struct kvm_exit_mmio *mmio,
  */
 void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
 {
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 	u64 elrsr = vgic_get_elrsr(vcpu);
 	unsigned long *elrsr_ptr = u64_to_bitmask(&elrsr);
@@ -737,8 +738,10 @@ void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
 		 * interrupt then move the active state to the
 		 * distributor tracking bit.
 		 */
-		if (lr.state & LR_STATE_ACTIVE)
-			vgic_irq_set_active(vcpu, lr.irq);
+		if (lr.state & LR_STATE_ACTIVE) {
+			if (lr.irq < dist->nr_irqs)
+				vgic_irq_set_active(vcpu, lr.irq);
+		}
 
 		/*
 		 * Reestablish the pending state on the distributor and the
@@ -746,8 +749,12 @@ void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
 		 * is fine, then we are only setting a few bits that were
 		 * already set.
 		 */
-		if (lr.state & LR_STATE_PENDING)
-			vgic_dist_irq_set_pending(vcpu, lr.irq);
+		if (lr.state & LR_STATE_PENDING) {
+			if (lr.irq < dist->nr_irqs)
+				vgic_dist_irq_set_pending(vcpu, lr.irq);
+			else
+				vgic_unqueue_lpi(vcpu, lr.irq);
+		}
 
 		/*
 		 * Mark the LR as free for other use.
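
For readers outside the KVM code base: the pattern the patch relies on (make
init idempotent by using the allocation pointer itself as the "already
initialized" flag, and NULL the pointer after freeing so a later re-init is
safe) can be sketched in isolation. Everything below (struct lazy_dist,
lazy_init(), lazy_destroy()) is a hypothetical stand-in for the kernel
structures, not the actual KVM code:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for struct vgic_dist: a per-CPU table that must
 * exist before register accesses can be emulated. */
struct lazy_dist {
	uint64_t *pendbaser;	/* NULL until first initialization */
	int nr_cpus;
};

/* Idempotent init: safe to call from every access path.  The pointer
 * doubles as the "already initialized" flag, mirroring the
 * dist->pendbaser check added by the patch. */
int lazy_init(struct lazy_dist *d)
{
	if (d->pendbaser)
		return 0;		/* already set up: nothing to do */
	d->pendbaser = calloc(d->nr_cpus, sizeof(uint64_t));
	return d->pendbaser ? 0 : -1;	/* would be -ENOMEM in the kernel */
}

/* Teardown must reset the pointer to NULL; otherwise a later
 * lazy_init() would mistake stale freed memory for valid state. */
void lazy_destroy(struct lazy_dist *d)
{
	free(d->pendbaser);
	d->pendbaser = NULL;
}
```

With this shape, calling lazy_init() from both the userspace register path
and the first-run path is harmless: whichever runs first allocates, the
other returns immediately, and destroy followed by re-init works.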