From patchwork Sun Apr 17 11:01:02 2011
X-Patchwork-Submitter: Jan Kiszka
X-Patchwork-Id: 713451
Message-ID: <4DAAC86E.1060904@web.de>
Date: Sun, 17 Apr 2011 13:01:02 +0200
From: Jan Kiszka
To: Avi Kivity
CC: Marcelo Tosatti, kvm
Subject: [PATCH] KVM: Clean up error handling during VCPU creation
References: <4DA4E128.8070401@web.de> <4DAAAFD4.3030900@redhat.com>
In-Reply-To: <4DAAAFD4.3030900@redhat.com>
List-ID: kvm@vger.kernel.org

On 2011-04-17 11:16, Avi Kivity wrote:
> On 04/13/2011 02:32 AM, Jan Kiszka wrote:
>> From: Jan Kiszka
>>
>> If kvm_arch_vcpu_setup failed, we leaked the allocated VCPU structure
>> so far.
>
>> @@ -1609,18 +1609,18 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm
>> *kvm, u32 id)
>>
>>  	r = kvm_arch_vcpu_setup(vcpu);
>>  	if (r)
>> -		return r;
>> +		goto vcpu_destroy;
>>
>
> kvm_arch_vcpu_setup() (at least x86's) does a vcpu->free() on failure.
> I think the current code is correct (if confusing).

Right. How about cleaning this inconsistency up? It looks like there is
no problem calling kvm_arch_vcpu_destroy even if kvm_arch_vcpu_setup
bailed out early.

Jan

------8<------

From: Jan Kiszka

So far, kvm_arch_vcpu_setup is responsible for freeing the vcpu struct
if it fails. Move this confusing responsibility back into the hands of
kvm_vm_ioctl_create_vcpu. Only kvm_arch_vcpu_setup of x86 is affected;
all other archs cannot fail.

Signed-off-by: Jan Kiszka
---
 arch/x86/kvm/x86.c  |  5 -----
 virt/kvm/kvm_main.c | 11 ++++++-----
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1d5a7f4..9826d5d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6002,12 +6002,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 	if (r == 0)
 		r = kvm_mmu_setup(vcpu);
 	vcpu_put(vcpu);
-	if (r < 0)
-		goto free_vcpu;
 
-	return 0;
-free_vcpu:
-	kvm_x86_ops->vcpu_free(vcpu);
 	return r;
 }

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5814645..57b173c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1609,18 +1609,18 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 
 	r = kvm_arch_vcpu_setup(vcpu);
 	if (r)
-		return r;
+		goto vcpu_destroy;
 
 	mutex_lock(&kvm->lock);
 	if (atomic_read(&kvm->online_vcpus) == KVM_MAX_VCPUS) {
 		r = -EINVAL;
-		goto vcpu_destroy;
+		goto unlock_vcpu_destroy;
 	}
 
 	kvm_for_each_vcpu(r, v, kvm)
 		if (v->vcpu_id == id) {
 			r = -EEXIST;
-			goto vcpu_destroy;
+			goto unlock_vcpu_destroy;
 		}
 
 	BUG_ON(kvm->vcpus[atomic_read(&kvm->online_vcpus)]);
@@ -1630,7 +1630,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	r = create_vcpu_fd(vcpu);
 	if (r < 0) {
 		kvm_put_kvm(kvm);
-		goto vcpu_destroy;
+		goto unlock_vcpu_destroy;
 	}
 
 	kvm->vcpus[atomic_read(&kvm->online_vcpus)] = vcpu;
@@ -1644,8 +1644,9 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	mutex_unlock(&kvm->lock);
 	return r;
 
-vcpu_destroy:
+unlock_vcpu_destroy:
 	mutex_unlock(&kvm->lock);
+vcpu_destroy:
 	kvm_arch_vcpu_destroy(vcpu);
 	return r;
 }