
kvm: x86: Do proper cleanup if kvm_x86_ops->vm_init() fails

Message ID 20220729031108.3929138-1-junaids@google.com (mailing list archive)
State New, archived
Series kvm: x86: Do proper cleanup if kvm_x86_ops->vm_init() fails

Commit Message

Junaid Shahid July 29, 2022, 3:11 a.m. UTC
If vm_init() fails [which can happen, for instance, if a memory
allocation fails during avic_vm_init()], we need to clean up some
state in order to avoid resource leaks.

Signed-off-by: Junaid Shahid <junaids@google.com>
---
 arch/x86/kvm/x86.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)


base-commit: a4850b5590d01bf3fb19fda3fc5d433f7382a974
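
For context, the fix follows the kernel's usual goto-based unwind idiom: every
step that succeeded before the failing one is undone by a label, in reverse
order of initialization. A minimal, self-contained userspace C sketch of that
idiom (the step_*() helpers below are stand-ins invented for illustration, not
kernel code):

#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for paired init/teardown steps, e.g. kvm_mmu_init_vm() /
 * kvm_mmu_uninit_vm() or the vendor vm_init() / vm_destroy() hooks. */
static int step_a(void **res) { *res = malloc(64); return *res ? 0 : -1; }
static int step_b(void **res) { *res = malloc(64); return *res ? 0 : -1; }
static int step_c(void)       { return -1; /* simulate an allocation failure */ }

static int init_vm_like(void)
{
	void *a = NULL, *b = NULL;
	int ret;

	ret = step_a(&a);
	if (ret)
		goto out;

	ret = step_b(&b);
	if (ret)
		goto out_undo_a;

	ret = step_c();
	if (ret)
		goto out_undo_b;	/* the previously missing unwind */

	return 0;

out_undo_b:
	free(b);
out_undo_a:
	free(a);
out:
	return ret;
}

int main(void)
{
	printf("init_vm_like() returned %d\n", init_vm_like());
	return 0;
}

The patch adds exactly this kind of label (out_uninit_mmu) for the previously
unhandled kvm_x86_vm_init failure.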

Comments

Sean Christopherson July 29, 2022, 2:36 p.m. UTC | #1
On Thu, Jul 28, 2022, Junaid Shahid wrote:
> If vm_init() fails [which can happen, for instance, if a memory
> allocation fails during avic_vm_init()], we need to clean up some
> state in order to avoid resource leaks.
> 
> Signed-off-by: Junaid Shahid <junaids@google.com>
> ---
>  arch/x86/kvm/x86.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f389691d8c04..ef5fd2f05c79 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12064,8 +12064,14 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  	kvm_hv_init_vm(kvm);
>  	kvm_xen_init_vm(kvm);
>  
> -	return static_call(kvm_x86_vm_init)(kvm);
> +	ret = static_call(kvm_x86_vm_init)(kvm);
> +	if (ret)
> +		goto out_uninit_mmu;
>  
> +	return 0;
> +
> +out_uninit_mmu:
> +	kvm_mmu_uninit_vm(kvm);

Hrm, this works for now (I think), but I really don't like that kvm_apicv_init(),
kvm_hv_init_vm(), and kvm_xen_init_vm() all do something without that something
being unwound on failure.  E.g. both Hyper-V and Xen have a paired "destroy"
function; it just so happens that their destroy paths are guaranteed nops in this
case.

AFAICT, there are no dependencies on doing vendor init at the end, so what if we
hoist it up so that all paths that can fail are at the top?

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5366f884e9a7..7e749be356b2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12042,6 +12042,10 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
        if (ret)
                goto out_page_track;
 
+       ret = static_call(kvm_x86_vm_init)(kvm);
+       if (ret)
+               goto out_uninit_mmu;
+
        INIT_HLIST_HEAD(&kvm->arch.mask_notifier_list);
        INIT_LIST_HEAD(&kvm->arch.assigned_dev_head);
        atomic_set(&kvm->arch.noncoherent_dma_count, 0);
@@ -12077,8 +12081,10 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
        kvm_hv_init_vm(kvm);
        kvm_xen_init_vm(kvm);
 
-       return static_call(kvm_x86_vm_init)(kvm);
+       return 0;
 
+out_uninit_mmu:
+       kvm_mmu_uninit_vm(kvm);
 out_page_track:
        kvm_page_track_cleanup(kvm);
 out:


Calling kvm_apicv_init() after avic_vm_init() is somewhat odd.  If we really want
to avoid that, we could add a dedicated kvm_x86_ops hook to initialize APICv and
then make kvm_x86_ops.vm_init() return void, e.g.:

static int kvm_apicv_init(struct kvm *kvm)
{
	unsigned long *inhibits = &kvm->arch.apicv_inhibit_reasons;

	init_rwsem(&kvm->arch.apicv_update_lock);

	set_or_clear_apicv_inhibit(inhibits, APICV_INHIBIT_REASON_ABSENT, true);

	if (!enable_apicv) {
		set_or_clear_apicv_inhibit(inhibits,
					   APICV_INHIBIT_REASON_DISABLE, true);
		return 0;
	}

	return static_call(kvm_x86_apicv_init)(kvm);
}
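
For illustration only: a hypothetical sketch of how the vendor (SVM) side of
such a hook might be wired. The .apicv_init member and svm_apicv_init() are
invented names for this sketch and do not exist in the kernel; the assumption
is that AVIC's failable allocations would be reported through the dedicated
hook so that vm_init() can become void.

/* Hypothetical sketch -- .apicv_init and svm_apicv_init() are invented. */
static int svm_apicv_init(struct kvm *kvm)
{
	/* avic_vm_init() is where the allocations that can currently fail
	 * live; reporting its result here keeps failure handling in one
	 * dedicated hook. */
	return avic_vm_init(kvm);
}

static struct kvm_x86_ops svm_x86_ops __initdata = {
	/* ... existing callbacks ... */
	.apicv_init = svm_apicv_init,	/* hypothetical new hook */
};
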
Junaid Shahid July 29, 2022, 6:28 p.m. UTC | #2
On 7/29/22 07:36, Sean Christopherson wrote:
> On Thu, Jul 28, 2022, Junaid Shahid wrote:
>> If vm_init() fails [which can happen, for instance, if a memory
>> allocation fails during avic_vm_init()], we need to clean up some
>> state in order to avoid resource leaks.
>>
>> Signed-off-by: Junaid Shahid <junaids@google.com>
>> ---
>>   arch/x86/kvm/x86.c | 8 +++++++-
>>   1 file changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index f389691d8c04..ef5fd2f05c79 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -12064,8 +12064,14 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>>   	kvm_hv_init_vm(kvm);
>>   	kvm_xen_init_vm(kvm);
>>   
>> -	return static_call(kvm_x86_vm_init)(kvm);
>> +	ret = static_call(kvm_x86_vm_init)(kvm);
>> +	if (ret)
>> +		goto out_uninit_mmu;
>>   
>> +	return 0;
>> +
>> +out_uninit_mmu:
>> +	kvm_mmu_uninit_vm(kvm);
> 
> Hrm, this works for now (I think), but I really don't like that kvm_apicv_init(),
> kvm_hv_init_vm(), and kvm_xen_init_vm() all do something without that something
> being unwound on failure.  E.g. both Hyper-V and Xen have a paired "destroy"
> function; it just so happens that their destroy paths are guaranteed nops in this
> case.
> 
> AFAICT, there are no dependencies on doing vendor init at the end, so what if we
> hoist it up so that all paths that can fail are at the top?
> 

Ack. I'll move the vendor init call up. I think I'll skip breaking out a 
separate kvm_x86_op for apicv_init() for now.

Thanks,
Junaid

Patch

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f389691d8c04..ef5fd2f05c79 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12064,8 +12064,14 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm_hv_init_vm(kvm);
 	kvm_xen_init_vm(kvm);
 
-	return static_call(kvm_x86_vm_init)(kvm);
+	ret = static_call(kvm_x86_vm_init)(kvm);
+	if (ret)
+		goto out_uninit_mmu;
 
+	return 0;
+
+out_uninit_mmu:
+	kvm_mmu_uninit_vm(kvm);
 out_page_track:
 	kvm_page_track_cleanup(kvm);
 out: