From patchwork Mon May 23 08:33:05 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jan Kiszka
X-Patchwork-Id: 808052
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by demeter2.kernel.org (8.14.4/8.14.3) with ESMTP id p4N8XDMA026691
	for ; Mon, 23 May 2011 08:33:13 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754094Ab1EWIdJ (ORCPT ); Mon, 23 May 2011 04:33:09 -0400
Received: from goliath.siemens.de ([192.35.17.28]:18185 "EHLO goliath.siemens.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754060Ab1EWIdI (ORCPT ); Mon, 23 May 2011 04:33:08 -0400
Received: from mail1.siemens.de (localhost [127.0.0.1])
	by goliath.siemens.de (8.13.6/8.13.6) with ESMTP id p4N8X5H0016059;
	Mon, 23 May 2011 10:33:05 +0200
Received: from mchn199C.mchp.siemens.de ([146.254.215.109])
	by mail1.siemens.de (8.13.6/8.13.6) with ESMTP id p4N8X5Sn001364;
	Mon, 23 May 2011 10:33:05 +0200
Message-ID: <4DDA1BC1.9050904@siemens.com>
Date: Mon, 23 May 2011 10:33:05 +0200
From: Jan Kiszka
User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12)
	Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666
MIME-Version: 1.0
To: Avi Kivity , Marcelo Tosatti
CC: kvm
Subject: [PATCH][RESEND] KVM: Clean up error handling during VCPU creation
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org
X-Greylist: IP, sender and recipient auto-whitelisted, not delayed by
	milter-greylist-4.2.6 (demeter2.kernel.org [140.211.167.43]);
	Mon, 23 May 2011 08:33:13 +0000 (UTC)

So far kvm_arch_vcpu_setup is responsible for freeing the vcpu struct if
it fails. Move this confusing responsibility back into the hands of
kvm_vm_ioctl_create_vcpu. Only x86's kvm_arch_vcpu_setup is affected; on
all other archs it cannot fail.

Signed-off-by: Jan Kiszka
---
 arch/x86/kvm/x86.c  |    5 -----
 virt/kvm/kvm_main.c |   11 ++++++-----
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index da48622..aaa3735 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6126,12 +6126,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 	if (r == 0)
 		r = kvm_mmu_setup(vcpu);
 	vcpu_put(vcpu);
-	if (r < 0)
-		goto free_vcpu;
 
-	return 0;
-free_vcpu:
-	kvm_x86_ops->vcpu_free(vcpu);
 	return r;
 }
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3962899..8de7208 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1612,18 +1612,18 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 
 	r = kvm_arch_vcpu_setup(vcpu);
 	if (r)
-		return r;
+		goto vcpu_destroy;
 
 	mutex_lock(&kvm->lock);
 	if (atomic_read(&kvm->online_vcpus) == KVM_MAX_VCPUS) {
 		r = -EINVAL;
-		goto vcpu_destroy;
+		goto unlock_vcpu_destroy;
 	}
 
 	kvm_for_each_vcpu(r, v, kvm)
 		if (v->vcpu_id == id) {
 			r = -EEXIST;
-			goto vcpu_destroy;
+			goto unlock_vcpu_destroy;
 		}
 
 	BUG_ON(kvm->vcpus[atomic_read(&kvm->online_vcpus)]);
@@ -1633,7 +1633,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	r = create_vcpu_fd(vcpu);
 	if (r < 0) {
 		kvm_put_kvm(kvm);
-		goto vcpu_destroy;
+		goto unlock_vcpu_destroy;
 	}
 
 	kvm->vcpus[atomic_read(&kvm->online_vcpus)] = vcpu;
@@ -1647,8 +1647,9 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	mutex_unlock(&kvm->lock);
 	return r;
 
-vcpu_destroy:
+unlock_vcpu_destroy:
 	mutex_unlock(&kvm->lock);
+vcpu_destroy:
 	kvm_arch_vcpu_destroy(vcpu);
 	return r;
 }
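
To make the resulting control flow easier to follow, here is a minimal
standalone sketch of the two-label cleanup scheme this patch establishes:
the setup hook only reports failure, the caller owns destruction, and
errors taken before the lock skip the unlock label. This is plain C with
a pthread mutex standing in for kvm->lock; all demo_* names and the fixed
4-slot table are invented for illustration and are not KVM code.

/*
 * Standalone illustration (not the kernel source) of the error-handling
 * shape of kvm_vm_ioctl_create_vcpu() after this patch.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct demo_vcpu { int id; };
struct demo_vm {
	pthread_mutex_t lock;
	struct demo_vcpu *vcpus[4];
	int online;
};

/* Stand-in for kvm_arch_vcpu_setup(): reports errors, never frees. */
static int demo_vcpu_setup(struct demo_vcpu *vcpu)
{
	return vcpu->id >= 0 ? 0 : -EINVAL;
}

/* Stand-in for kvm_arch_vcpu_destroy(): the caller's cleanup duty. */
static void demo_vcpu_destroy(struct demo_vcpu *vcpu)
{
	free(vcpu);
}

static int demo_create_vcpu(struct demo_vm *vm, int id)
{
	struct demo_vcpu *vcpu;
	int r, i;

	vcpu = calloc(1, sizeof(*vcpu));
	if (!vcpu)
		return -ENOMEM;
	vcpu->id = id;

	r = demo_vcpu_setup(vcpu);
	if (r)
		goto vcpu_destroy;		/* lock not taken yet */

	pthread_mutex_lock(&vm->lock);
	if (vm->online == 4) {
		r = -EINVAL;
		goto unlock_vcpu_destroy;	/* must drop the lock first */
	}
	for (i = 0; i < vm->online; i++)
		if (vm->vcpus[i]->id == id) {
			r = -EEXIST;
			goto unlock_vcpu_destroy;
		}
	vm->vcpus[vm->online++] = vcpu;
	pthread_mutex_unlock(&vm->lock);
	return 0;

unlock_vcpu_destroy:
	pthread_mutex_unlock(&vm->lock);
vcpu_destroy:				/* reached with the lock released */
	demo_vcpu_destroy(vcpu);
	return r;
}

int main(void)
{
	static struct demo_vm vm = { .lock = PTHREAD_MUTEX_INITIALIZER };

	printf("first create: %d\n", demo_create_vcpu(&vm, 1));   /* 0 */
	printf("duplicate id: %d\n", demo_create_vcpu(&vm, 1));   /* -EEXIST */
	printf("bad setup:    %d\n", demo_create_vcpu(&vm, -1));  /* -EINVAL */
	return 0;
}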