
[v4,0/3] KVM: Yield to IPI target if necessary

Message ID 1560255830-8656-1-git-send-email-wanpengli@tencent.com
Series KVM: Yield to IPI target if necessary

Message

Wanpeng Li June 11, 2019, 12:23 p.m. UTC
The idea comes from Xen: when sending a call-function IPI-many to vCPUs,
yield if any of the IPI target vCPUs was preempted. A 17% performance
improvement in the ebizzy benchmark can be observed in an over-subscribed
environment. (Tested with kvm-pv-tlb disabled, exercising the TLB flush
call-function IPI-many path, since call-function IPIs are not easily
triggered by userspace workloads.)
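
For reference, a rough sketch of the guest-side idea (illustrative only;
the KVM_HC_SCHED_YIELD hypercall number and the helper name are assumed
here and are actually introduced by the patches below):

	/*
	 * Guest side (arch/x86/kernel/kvm.c context), relying on existing
	 * helpers: native_send_call_func_ipi(), vcpu_is_preempted() and
	 * kvm_hypercall1().
	 */
	static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
	{
		int cpu;

		/* Deliver the call-function IPI as usual. */
		native_send_call_func_ipi(mask);

		/*
		 * If any target vCPU is currently preempted by the host,
		 * donate this vCPU's time slice so the IPI is handled sooner.
		 */
		for_each_cpu(cpu, mask) {
			if (vcpu_is_preempted(cpu)) {
				kvm_hypercall1(KVM_HC_SCHED_YIELD,
					       per_cpu(x86_cpu_to_apicid, cpu));
				break;
			}
		}
	}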

v3 -> v4: 
 * check map->phys_map[dest_id]
 * cleaner kvm_sched_yield() (see the sketch after this changelog)

v2 -> v3:
 * add bounds-check on dest_id

v1 -> v2:
 * check map is not NULL
 * check map->phys_map[dest_id] is not NULL
 * make kvm_sched_yield static
 * change dest_id to unsigned long
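
A minimal sketch of what the host-side helper might look like, pieced
together from the checks listed above (the actual code in patch 2 may
differ in detail):

	/* Sketch only: yield to the vCPU whose APIC ID is dest_id. */
	static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
	{
		struct kvm_vcpu *target = NULL;
		struct kvm_apic_map *map;

		rcu_read_lock();
		map = rcu_dereference(kvm->arch.apic_map);

		/* Bounds-check dest_id and make sure the slot is populated. */
		if (likely(map) && dest_id <= map->max_apic_id &&
		    map->phys_map[dest_id])
			target = map->phys_map[dest_id]->vcpu;

		rcu_read_unlock();

		/* Boost the preempted IPI target so it gets to run sooner. */
		if (target)
			kvm_vcpu_yield_to(target);
	}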

Wanpeng Li (3):
  KVM: X86: Yield to IPI target if necessary
  KVM: X86: Implement PV sched yield hypercall
  KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest

 Documentation/virtual/kvm/cpuid.txt      |  4 ++++
 Documentation/virtual/kvm/hypercalls.txt | 11 +++++++++++
 arch/x86/include/uapi/asm/kvm_para.h     |  1 +
 arch/x86/kernel/kvm.c                    | 21 +++++++++++++++++++++
 arch/x86/kvm/cpuid.c                     |  3 ++-
 arch/x86/kvm/x86.c                       | 21 +++++++++++++++++++++
 include/uapi/linux/kvm_para.h            |  1 +
 7 files changed, 61 insertions(+), 1 deletion(-)

Comments

Wanpeng Li June 18, 2019, 9 a.m. UTC | #1
ping, :)
Wanpeng Li June 28, 2019, 7:29 a.m. UTC | #2
ping again,
Paolo Bonzini July 2, 2019, 4:49 p.m. UTC | #3

Queued, thanks.

Paolo