
[v2,32/45] KVM: Move initialization of preempt notifier to kvm_vcpu_init()

Message ID 20191218215530.2280-33-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series KVM: Refactor vCPU creation

Commit Message

Sean Christopherson Dec. 18, 2019, 9:55 p.m. UTC
Initialize the preempt notifier immediately in kvm_vcpu_init() to pave
the way for removing kvm_arch_vcpu_setup(), i.e. to allow arch specific
code to call vcpu_load() during kvm_arch_vcpu_create().

Back when preemption support was added, the location of the call to init
the preempt notifier was perfectly sane.  The overall vCPU creation flow
featured a single arch specific hook and the preempt notifier was used
immediately after its initialization (by vcpu_load()).  E.g.:

        vcpu = kvm_arch_ops->vcpu_create(kvm, n);
        if (IS_ERR(vcpu))
                return PTR_ERR(vcpu);

        preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);

        vcpu_load(vcpu);
        r = kvm_mmu_setup(vcpu);
        vcpu_put(vcpu);
        if (r < 0)
                goto free_vcpu;

Today, the call to preempt_notifier_init() is sandwiched between two
arch specific calls, kvm_arch_vcpu_create() and kvm_arch_vcpu_setup(),
which needlessly forces x86 (and possibly others?) to split its vCPU
creation flow.  Init the preempt notifier prior to any arch specific
call so that each arch can independently decide how best to organize
its creation flow.
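
For reference, a rough sketch of the resulting ordering in kvm_vcpu_init()
(abridged from the hunk below; the elided lines and the comment are added
here purely for illustration):

        static int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
        {
                ...
                vcpu->preempted = false;
                vcpu->ready = false;
                preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);

                /* Arch hooks invoked from here on can safely vcpu_load(). */
                r = kvm_arch_vcpu_init(vcpu);
                ...
        }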

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 virt/kvm/kvm_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Comments

Cornelia Huck Dec. 20, 2019, 9:50 a.m. UTC | #1
On Wed, 18 Dec 2019 13:55:17 -0800
Sean Christopherson <sean.j.christopherson@intel.com> wrote:

Reviewed-by: Cornelia Huck <cohuck@redhat.com>

Patch

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fd8168b8c0e4..876cf3dd2c97 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -348,6 +348,7 @@ static int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 	kvm_vcpu_set_dy_eligible(vcpu, false);
 	vcpu->preempted = false;
 	vcpu->ready = false;
+	preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);
 
 	r = kvm_arch_vcpu_init(vcpu);
 	if (r < 0)
@@ -2752,8 +2753,6 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	if (r)
 		goto vcpu_uninit;
 
-	preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);
-
 	r = kvm_arch_vcpu_setup(vcpu);
 	if (r)
 		goto vcpu_destroy;