[3/3] KVM: LAPIC: Optimize PMI delivering overhead

Message ID 1633687054-18865-3-git-send-email-wanpengli@tencent.com (mailing list archive)
State New, archived
Series [1/3] KVM: emulate: #GP when emulating rdpmc if CR0.PE is 1

Commit Message

Wanpeng Li Oct. 8, 2021, 9:57 a.m. UTC
From: Wanpeng Li <wanpengli@tencent.com>

The overhead of kvm_vcpu_kick() is huge, owing to the expensive RCU and
memory-barrier operations in rcuwait_wake_up(). It is even worse for
local delivery, where the target vCPU is the one currently running, yet
we still pay the full cost. With ftrace we observe 12us+ for
kvm_vcpu_kick() in the kvm_pmu_deliver_pmi() path before the patch and
6us+ after the optimization.

Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/kvm/lapic.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
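
For context, the check added by this patch relies on kvm_get_running_vcpu(),
which returns the vCPU loaded on the current physical CPU (the per-CPU pointer
is set in vcpu_load() and cleared in vcpu_put()). A paraphrase of the helper,
not a verbatim copy of virt/kvm/kvm_main.c:

	struct kvm_vcpu *kvm_get_running_vcpu(void)
	{
		struct kvm_vcpu *vcpu;

		/* Disable preemption so the per-CPU read is stable. */
		preempt_disable();
		vcpu = __this_cpu_read(kvm_running_vcpu);
		preempt_enable();

		return vcpu;
	}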

Comments

Vitaly Kuznetsov Oct. 8, 2021, 10:52 a.m. UTC | #1
Wanpeng Li <kernellwp@gmail.com> writes:

> From: Wanpeng Li <wanpengli@tencent.com>
>
> The overhead of kvm_vcpu_kick() is huge, owing to the expensive RCU and
> memory-barrier operations in rcuwait_wake_up(). It is even worse for
> local delivery, where the target vCPU is the one currently running, yet
> we still pay the full cost. With ftrace we observe 12us+ for
> kvm_vcpu_kick() in the kvm_pmu_deliver_pmi() path before the patch and
> 6us+ after the optimization.
>
> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> ---
>  arch/x86/kvm/lapic.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index 76fb00921203..ec6997187c6d 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -1120,7 +1120,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
>  	case APIC_DM_NMI:
>  		result = 1;
>  		kvm_inject_nmi(vcpu);
> -		kvm_vcpu_kick(vcpu);
> +		if (vcpu != kvm_get_running_vcpu())
> +			kvm_vcpu_kick(vcpu);

Out of curiosity,

can this be converted into a generic optimization for kvm_vcpu_kick()
instead? I.e. if kvm_vcpu_kick() is called for the currently running
vCPU, there's almost nothing to do, especially when we already have a
request pending, right? (I didn't put too much thought into it)

>  		break;
>  
>  	case APIC_DM_INIT:
Wanpeng Li Oct. 8, 2021, 11:06 a.m. UTC | #2
On Fri, 8 Oct 2021 at 18:52, Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
> Wanpeng Li <kernellwp@gmail.com> writes:
>
> > From: Wanpeng Li <wanpengli@tencent.com>
> >
> > The overhead of kvm_vcpu_kick() is huge, owing to the expensive RCU and
> > memory-barrier operations in rcuwait_wake_up(). It is even worse for
> > local delivery, where the target vCPU is the one currently running, yet
> > we still pay the full cost. With ftrace we observe 12us+ for
> > kvm_vcpu_kick() in the kvm_pmu_deliver_pmi() path before the patch and
> > 6us+ after the optimization.
> >
> > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > ---
> >  arch/x86/kvm/lapic.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> > index 76fb00921203..ec6997187c6d 100644
> > --- a/arch/x86/kvm/lapic.c
> > +++ b/arch/x86/kvm/lapic.c
> > @@ -1120,7 +1120,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
> >       case APIC_DM_NMI:
> >               result = 1;
> >               kvm_inject_nmi(vcpu);
> > -             kvm_vcpu_kick(vcpu);
> > +             if (vcpu != kvm_get_running_vcpu())
> > +                     kvm_vcpu_kick(vcpu);
>
> Out of curiosity,
>
> can this be converted into a generic optimization for kvm_vcpu_kick()
> instead? I.e. if kvm_vcpu_kick() is called for the currently running
> vCPU, there's almost nothing to do, especially when we already have a
> request pending, right? (I didn't put too much thought into it)

I thought about it before; I will do it in the next version since you
also vote for it. :)

    Wanpeng
Sean Christopherson Oct. 8, 2021, 3:59 p.m. UTC | #3
On Fri, Oct 08, 2021, Wanpeng Li wrote:
> On Fri, 8 Oct 2021 at 18:52, Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
> >
> > Wanpeng Li <kernellwp@gmail.com> writes:
> >
> > > From: Wanpeng Li <wanpengli@tencent.com>
> > >
> > > The overhead of kvm_vcpu_kick() is huge, owing to the expensive RCU and
> > > memory-barrier operations in rcuwait_wake_up(). It is even worse for local

Memory barriers on x86 are just compiler barriers.  The only meaningful overhead
is the locked transaction in rcu_read_lock() => preempt_disable().  I suspect the
performance benefit from this patch comes either from avoiding a second lock when
disabling preemption again for get_cpu(), or from avoiding the cmpxchg() in
kvm_vcpu_exiting_guest_mode().
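
For reference, paraphrases (not verbatim copies) of the two helpers in
question, from kernel/exit.c and include/linux/kvm_host.h respectively:

	int rcuwait_wake_up(struct rcuwait *w)
	{
		int ret = 0;
		struct task_struct *task;

		rcu_read_lock();

		/* Pairs with the barrier in rcuwait_wait_event(). */
		smp_mb();

		task = rcu_dereference(w->task);
		if (task)
			ret = wake_up_process(task);
		rcu_read_unlock();

		return ret;
	}

	static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
	{
		/* Orders writes to vcpu->requests before the mode change. */
		smp_mb__before_atomic();
		return cmpxchg(&vcpu->mode, IN_GUEST_MODE, EXITING_GUEST_MODE);
	}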

> > > delivery, where the target vCPU is the one currently running, yet we
> > > still pay the full cost. With ftrace we observe 12us+ for kvm_vcpu_kick()
> > > in the kvm_pmu_deliver_pmi() path before the patch and 6us+ after the
> > > optimization.

Those numbers seem off; I wouldn't expect a few locks to take 6us.

> > > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > > ---
> > >  arch/x86/kvm/lapic.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> > > index 76fb00921203..ec6997187c6d 100644
> > > --- a/arch/x86/kvm/lapic.c
> > > +++ b/arch/x86/kvm/lapic.c
> > > @@ -1120,7 +1120,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
> > >       case APIC_DM_NMI:
> > >               result = 1;
> > >               kvm_inject_nmi(vcpu);
> > > -             kvm_vcpu_kick(vcpu);
> > > +             if (vcpu != kvm_get_running_vcpu())
> > > +                     kvm_vcpu_kick(vcpu);
> >
> > Out of curiosity,
> >
> > can this be converted into a generic optimization for kvm_vcpu_kick()
> > instead? I.e. if kvm_vcpu_kick() is called for the currently running
> > vCPU, there's almost nothing to do, especially when we already have a
> > request pending, right? (I didn't put too much thought into it)
> 
> I thought about it before, I will do it in the next version since you
> also vote for it. :)

Adding a kvm_get_running_vcpu() check before kvm_vcpu_wake_up() in kvm_vcpu_kick()
is not functionally correct, as it's possible to reach kvm_vcpu_kick() from (soft)
IRQ context, e.g. hrtimer => apic_timer_expired() and pi_wakeup_handler().  If
the kick occurs after prepare_to_rcuwait() and the final kvm_vcpu_check_block(),
but before the vCPU is scheduled out, then the kvm_vcpu_wake_up() is required to
wake the vCPU, even if it is the current running vCPU.
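
To illustrate the window, a condensed paraphrase (not a verbatim copy) of the
blocking path in kvm_vcpu_block():

	prepare_to_rcuwait(&vcpu->wait);
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);

		if (kvm_vcpu_check_block(vcpu) < 0)
			break;

		/*
		 * A kick that lands here -- after the final check, but before
		 * schedule() -- must go through kvm_vcpu_wake_up() to set the
		 * task back to TASK_RUNNING, even though the target is still
		 * the current running vCPU.
		 */
		schedule();
	}
	finish_rcuwait(&vcpu->wait);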

The extra check might also degrade performance in many cases, since the full kick
path would need to disable preemption three times, though if the overhead is from
x86's cmpxchg() then it's a moot point.
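
Concretely, naively layering the running-vCPU check into the existing code
would look something like this sketch (hypothetical, shown only to illustrate
the three preemption toggles; it is NOT the diff proposed below):

	void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
	{
		int me, cpu;

		if (kvm_vcpu_wake_up(vcpu))		/* 1st toggle: rcu_read_lock()
							 * in rcuwait_wake_up() */
			return;

		if (vcpu == kvm_get_running_vcpu())	/* 2nd toggle: preempt_disable()
							 * inside the helper */
			return;

		me = get_cpu();				/* 3rd toggle */
		if (kvm_arch_vcpu_should_kick(vcpu)) {
			cpu = READ_ONCE(vcpu->cpu);
			if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
				smp_send_reschedule(cpu);
		}
		put_cpu();
	}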

I think we'd want something like this to avoid extra preempt_disable() as well
as the cmpxchg() when @vcpu is the running vCPU.

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8b7dc6e89fd7..f148a7d2a8b9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3349,8 +3349,15 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 {
        int me, cpu;

-       if (kvm_vcpu_wake_up(vcpu))
-               return;
+       me = get_cpu();
+
+       if (rcuwait_active(&vcpu->wait) && kvm_vcpu_wake_up(vcpu))
+               goto out;
+
+       if (vcpu == __this_cpu_read(kvm_running_vcpu)) {
+               WARN_ON_ONCE(vcpu->mode == IN_GUEST_MODE);
+               goto out;
+       }

        /*
         * Note, the vCPU could get migrated to a different pCPU at any point
@@ -3359,12 +3366,12 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
         * IPI is to force the vCPU to leave IN_GUEST_MODE, and migrating the
         * vCPU also requires it to leave IN_GUEST_MODE.
         */
-       me = get_cpu();
        if (kvm_arch_vcpu_should_kick(vcpu)) {
                cpu = READ_ONCE(vcpu->cpu);
                if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
                        smp_send_reschedule(cpu);
        }
+out:
        put_cpu();
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
Wanpeng Li Oct. 9, 2021, 9:14 a.m. UTC | #4
On Fri, 8 Oct 2021 at 23:59, Sean Christopherson <seanjc@google.com> wrote:
>
> On Fri, Oct 08, 2021, Wanpeng Li wrote:
> > On Fri, 8 Oct 2021 at 18:52, Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
> > >
> > > Wanpeng Li <kernellwp@gmail.com> writes:
> > >
> > > > From: Wanpeng Li <wanpengli@tencent.com>
> > > >
> > > > The overhead of kvm_vcpu_kick() is huge, owing to the expensive RCU and
> > > > memory-barrier operations in rcuwait_wake_up(). It is even worse for local
>
> Memory barriers on x86 are just compiler barriers.  The only meaningful overhead
> is the locked transaction in rcu_read_lock() => preempt_disable().  I suspect the
> performance benefit from this patch comes either from avoiding a second lock when
> disabling preemption again for get_cpu(), or from avoiding the cmpxchg() in
> kvm_vcpu_exiting_guest_mode().
>
> > > > delivery, where the target vCPU is the one currently running, yet we
> > > > still pay the full cost. With ftrace we observe 12us+ for kvm_vcpu_kick()
> > > > in the kvm_pmu_deliver_pmi() path before the patch and 6us+ after the
> > > > optimization.
>
> Those numbers seem off; I wouldn't expect a few locks to take 6us.

Maybe ftrace itself introduces additional overhead.

>
> > > > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > > > ---
> > > >  arch/x86/kvm/lapic.c | 3 ++-
> > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> > > > index 76fb00921203..ec6997187c6d 100644
> > > > --- a/arch/x86/kvm/lapic.c
> > > > +++ b/arch/x86/kvm/lapic.c
> > > > @@ -1120,7 +1120,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
> > > >       case APIC_DM_NMI:
> > > >               result = 1;
> > > >               kvm_inject_nmi(vcpu);
> > > > -             kvm_vcpu_kick(vcpu);
> > > > +             if (vcpu != kvm_get_running_vcpu())
> > > > +                     kvm_vcpu_kick(vcpu);
> > >
> > > Out of curiosity,
> > >
> > > can this be converted into a generic optimization for kvm_vcpu_kick()
> > > instead? I.e. if kvm_vcpu_kick() is called for the currently running
> > > vCPU, there's almost nothing to do, especially when we already have a
> > > request pending, right? (I didn't put too much thought into it)
> >
> > I thought about it before, I will do it in the next version since you
> > also vote for it. :)
>
> Adding a kvm_get_running_vcpu() check before kvm_vcpu_wake_up() in kvm_vcpu_kick()
> is not functionally correct, as it's possible to reach kvm_vcpu_kick() from (soft)
> IRQ context, e.g. hrtimer => apic_timer_expired() and pi_wakeup_handler().  If
> the kick occurs after prepare_to_rcuwait() and the final kvm_vcpu_check_block(),
> but before the vCPU is scheduled out, then the kvm_vcpu_wake_up() is required to
> wake the vCPU, even if it is the current running vCPU.

Good point.

>
> The extra check might also degrade performance in many cases, since the full kick
> path would need to disable preemption three times, though if the overhead is from
> x86's cmpxchg() then it's a moot point.
>
> I think we'd want something like this to avoid extra preempt_disable() as well
> as the cmpxchg() when @vcpu is the running vCPU.

Will do in v2, thanks for the suggestion.

    Wanpeng

Patch

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 76fb00921203..ec6997187c6d 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1120,7 +1120,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 	case APIC_DM_NMI:
 		result = 1;
 		kvm_inject_nmi(vcpu);
-		kvm_vcpu_kick(vcpu);
+		if (vcpu != kvm_get_running_vcpu())
+			kvm_vcpu_kick(vcpu);
 		break;
 
 	case APIC_DM_INIT: