Message ID | 1598001454-11709-1-git-send-email-wanpengli@tencent.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: LAPIC: Don't kick vCPU which is injecting already-expired timer |
On Fri, Aug 21, 2020 at 05:17:34PM +0800, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@tencent.com>
>
> The kick after setting KVM_REQ_PENDING_TIMER is used to handle the timer
> fires on a different pCPU which vCPU is running on, we don't need this
> kick when injecting already-expired timer, this kick is expensive since
> memory barrier, rcu, preemption disable/enable operations. This patch
> reduces the overhead by don't kick vCPU which is injecting already-expired
> timer.

This should also call out the VMX preemption timer case, which also passes
from_timer_fn=false but doesn't need a kick because kvm_lapic_expired_hv_timer()
is called from the target vCPU.

> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> ---
>  arch/x86/kvm/lapic.c | 2 +-
>  arch/x86/kvm/x86.c   | 5 +++--
>  arch/x86/kvm/x86.h   | 2 +-
>  3 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index 248095a..5b5ae66 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -1642,7 +1642,7 @@ static void apic_timer_expired(struct kvm_lapic *apic, bool from_timer_fn)
>  	}
>
>  	atomic_inc(&apic->lapic_timer.pending);
> -	kvm_set_pending_timer(vcpu);
> +	kvm_set_pending_timer(vcpu, from_timer_fn);

My vote would be to open code kvm_set_pending_timer() here and drop the
helper, i.e.

	kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
	if (from_timer_fn)
		kvm_vcpu_kick(vcpu);

with that and an updated changelog:

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

>  }
>
>  static void start_sw_tscdeadline(struct kvm_lapic *apic)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 599d732..2a45405 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1778,10 +1778,11 @@ static s64 get_kvmclock_base_ns(void)
>  }
>  #endif
>
> -void kvm_set_pending_timer(struct kvm_vcpu *vcpu)
> +void kvm_set_pending_timer(struct kvm_vcpu *vcpu, bool should_kick)
>  {
>  	kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
> -	kvm_vcpu_kick(vcpu);
> +	if (should_kick)
> +		kvm_vcpu_kick(vcpu);
>  }
>
>  static void kvm_write_wall_clock(struct kvm *kvm, gpa_t wall_clock)
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 995ab69..0eaae9c 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -246,7 +246,7 @@ static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
>  	return is_smm(vcpu) || kvm_x86_ops.apic_init_signal_blocked(vcpu);
>  }
>
> -void kvm_set_pending_timer(struct kvm_vcpu *vcpu);
> +void kvm_set_pending_timer(struct kvm_vcpu *vcpu, bool should_kick);
>  void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
>
>  void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr);
> --
> 2.7.4
>
On Sat, 22 Aug 2020 at 12:01, Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Fri, Aug 21, 2020 at 05:17:34PM +0800, Wanpeng Li wrote:
> > From: Wanpeng Li <wanpengli@tencent.com>
> >
> > The kick after setting KVM_REQ_PENDING_TIMER is used to handle the timer
> > fires on a different pCPU which vCPU is running on, we don't need this
> > kick when injecting already-expired timer, this kick is expensive since
> > memory barrier, rcu, preemption disable/enable operations. This patch
> > reduces the overhead by don't kick vCPU which is injecting already-expired
> > timer.
>
> This should also call out the VMX preemption timer case, which also passes
> from_timer_fn=false but doesn't need a kick because kvm_lapic_expired_hv_timer()
> is called from the target vCPU.
>
> > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > ---
> >  arch/x86/kvm/lapic.c | 2 +-
> >  arch/x86/kvm/x86.c   | 5 +++--
> >  arch/x86/kvm/x86.h   | 2 +-
> >  3 files changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> > index 248095a..5b5ae66 100644
> > --- a/arch/x86/kvm/lapic.c
> > +++ b/arch/x86/kvm/lapic.c
> > @@ -1642,7 +1642,7 @@ static void apic_timer_expired(struct kvm_lapic *apic, bool from_timer_fn)
> >  	}
> >
> >  	atomic_inc(&apic->lapic_timer.pending);
> > -	kvm_set_pending_timer(vcpu);
> > +	kvm_set_pending_timer(vcpu, from_timer_fn);
>
> My vote would be to open code kvm_set_pending_timer() here and drop the
> helper, i.e.
>
> 	kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
> 	if (from_timer_fn)
> 		kvm_vcpu_kick(vcpu);
>
> with that and an updated changelog:

Agreed.

>
> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

Thanks.
    Wanpeng
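[Editor's note: for reference, a minimal sketch of how the tail of apic_timer_expired() could look with the helper open coded as agreed in the review above. This is not the posted patch; the names are taken from the hunks in this thread, and the rest of the function is elided.]

	atomic_inc(&apic->lapic_timer.pending);

	/*
	 * Setting the request is enough when the expired timer is injected
	 * from the target vCPU's own context; only kick when the hrtimer
	 * callback fired, possibly on a different pCPU than the vCPU's.
	 */
	kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
	if (from_timer_fn)
		kvm_vcpu_kick(vcpu);
}

This keeps the cheap request-only path for the already-expired case while preserving the kick for the cross-pCPU hrtimer case.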
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 248095a..5b5ae66 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1642,7 +1642,7 @@ static void apic_timer_expired(struct kvm_lapic *apic, bool from_timer_fn)
 	}
 
 	atomic_inc(&apic->lapic_timer.pending);
-	kvm_set_pending_timer(vcpu);
+	kvm_set_pending_timer(vcpu, from_timer_fn);
 }
 
 static void start_sw_tscdeadline(struct kvm_lapic *apic)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 599d732..2a45405 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1778,10 +1778,11 @@ static s64 get_kvmclock_base_ns(void)
 }
 #endif
 
-void kvm_set_pending_timer(struct kvm_vcpu *vcpu)
+void kvm_set_pending_timer(struct kvm_vcpu *vcpu, bool should_kick)
 {
 	kvm_make_request(KVM_REQ_PENDING_TIMER, vcpu);
-	kvm_vcpu_kick(vcpu);
+	if (should_kick)
+		kvm_vcpu_kick(vcpu);
 }
 
 static void kvm_write_wall_clock(struct kvm *kvm, gpa_t wall_clock)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 995ab69..0eaae9c 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -246,7 +246,7 @@ static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
 	return is_smm(vcpu) || kvm_x86_ops.apic_init_signal_blocked(vcpu);
 }
 
-void kvm_set_pending_timer(struct kvm_vcpu *vcpu);
+void kvm_set_pending_timer(struct kvm_vcpu *vcpu, bool should_kick);
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
 
 void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr);
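[Editor's note: to illustrate why from_timer_fn is the right signal for the kick, here is a heavily simplified sketch of the two kinds of callers in arch/x86/kvm/lapic.c (bodies trimmed to the relevant call; not the exact upstream code). The hrtimer callback can run on a pCPU other than the one the target vCPU is running on, so it passes true; kvm_lapic_expired_hv_timer(), the VMX preemption timer path Sean mentions, runs on the target vCPU itself and passes false.]

/* hrtimer callback: may fire on any pCPU, so the vCPU must be kicked. */
static enum hrtimer_restart apic_timer_fn(struct hrtimer *data)
{
	struct kvm_timer *ktimer = container_of(data, struct kvm_timer, timer);
	struct kvm_lapic *apic = container_of(ktimer, struct kvm_lapic, lapic_timer);

	apic_timer_expired(apic, true);
	...
}

/* VMX preemption timer expiry: called from the target vCPU, no kick needed. */
void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
{
	struct kvm_lapic *apic = vcpu->arch.apic;
	...
	apic_timer_expired(apic, false);
	...
}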