From patchwork Fri Sep 25 00:47:19 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zachary Amsden
X-Patchwork-Id: 50086
Received: from vger.kernel.org (vger.kernel.org [209.132.176.167])
	by demeter.kernel.org (8.14.2/8.14.2) with ESMTP id n8P0osWv005496
	for ; Fri, 25 Sep 2009 00:50:54 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752970AbZIYAtl (ORCPT ); Thu, 24 Sep 2009 20:49:41 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1752850AbZIYAtj (ORCPT ); Thu, 24 Sep 2009 20:49:39 -0400
Received: from mx1.redhat.com ([209.132.183.28]:47615 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752681AbZIYAth (ORCPT ); Thu, 24 Sep 2009 20:49:37 -0400
Received: from int-mx04.intmail.prod.int.phx2.redhat.com
	(int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.17])
	by mx1.redhat.com (8.13.8/8.13.8) with ESMTP id n8P0nfNw000305;
	Thu, 24 Sep 2009 20:49:41 -0400
Received: from localhost.localdomain (vpn-12-216.rdu.redhat.com [10.11.12.216])
	by int-mx04.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id n8P0nUR3032078; Thu, 24 Sep 2009 20:49:39 -0400
From: Zachary Amsden
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Zachary Amsden, Avi Kivity, Marcelo Tosatti
Subject: [PATCH: kvm 4/5] Fix hotremove of CPUs for KVM.
Date: Thu, 24 Sep 2009 14:47:19 -1000
Message-Id: <1253839640-12695-5-git-send-email-zamsden@redhat.com>
In-Reply-To: <1253839640-12695-4-git-send-email-zamsden@redhat.com>
References: <20090924151049.GB14102@amt.cnet>
	<1253839640-12695-1-git-send-email-zamsden@redhat.com>
	<1253839640-12695-2-git-send-email-zamsden@redhat.com>
	<1253839640-12695-3-git-send-email-zamsden@redhat.com>
	<1253839640-12695-4-git-send-email-zamsden@redhat.com>
Organization: Frobozz Magic Timekeeping Company
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.17
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

In the process of bringing down CPUs, the SVM / VMX structures associated
with those CPUs are not freed. This may cause leaks when unloading and
reloading the KVM module, as only the structures associated with online
CPUs are cleaned up. So, clean up all possible CPUs, not just online ones.

Signed-off-by: Zachary Amsden
---
 arch/x86/kvm/svm.c |    2 +-
 arch/x86/kvm/vmx.c |    7 +++++--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8f99d0c..13ca268 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -525,7 +525,7 @@ static __exit void svm_hardware_unsetup(void)
 {
 	int cpu;
 
-	for_each_online_cpu(cpu)
+	for_each_possible_cpu(cpu)
 		svm_cpu_uninit(cpu);
 
 	__free_pages(pfn_to_page(iopm_base >> PAGE_SHIFT), IOPM_ALLOC_ORDER);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b8a8428..603bde3 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1350,8 +1350,11 @@ static void free_kvm_area(void)
 {
 	int cpu;
 
-	for_each_online_cpu(cpu)
-		free_vmcs(per_cpu(vmxarea, cpu));
+	for_each_possible_cpu(cpu)
+		if (per_cpu(vmxarea, cpu)) {
+			free_vmcs(per_cpu(vmxarea, cpu));
+			per_cpu(vmxarea, cpu) = NULL;
+		}
 }
 
 static __init int alloc_kvm_area(void)