Message ID | 1531864767-30648-1-git-send-email-longman@redhat.com
---|---
State | New, archived
On 17/07/2018 23:59, Waiman Long wrote:
> On a VM with only 1 vCPU, the locking fast path will always be
> successful. In this case, there is no need to use the PV qspinlock
> code, which has higher overhead on the unlock side than the native
> qspinlock code.
>
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
>  arch/x86/kernel/kvm.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5b2300b..575c9a5 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -748,6 +748,10 @@ void __init kvm_spinlock_init(void)
>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>  		return;
>
> +	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> +	if (num_possible_cpus() == 1)
> +		return;
> +
>  	__pv_init_lock_hash();
>  	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>  	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);

Queued, thanks.

Paolo
On Tue, Jul 17, 2018 at 05:59:27PM -0400, Waiman Long wrote:
> On a VM with only 1 vCPU, the locking fast path will always be
> successful. In this case, there is no need to use the PV qspinlock
> code, which has higher overhead on the unlock side than the native
> qspinlock code.

Why not make this global? That is, for both KVM and Xen and any other
virtualized guest that uses this?

>
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
>  arch/x86/kernel/kvm.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5b2300b..575c9a5 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -748,6 +748,10 @@ void __init kvm_spinlock_init(void)
>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>  		return;
>
> +	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> +	if (num_possible_cpus() == 1)
> +		return;
> +
>  	__pv_init_lock_hash();
>  	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>  	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
> --
> 1.8.3.1
>
On 07/18/2018 09:15 PM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jul 17, 2018 at 05:59:27PM -0400, Waiman Long wrote:
>> On a VM with only 1 vCPU, the locking fast path will always be
>> successful. In this case, there is no need to use the PV qspinlock
>> code, which has higher overhead on the unlock side than the native
>> qspinlock code.
>
> Why not make this global? That is, for both KVM and Xen and any other
> virtualized guest that uses this?

Right, I will send another patch for Xen. The pvqspinlock code has to be
explicitly opted in; right now, both Xen and KVM use it in the tree. I am
not sure about other out-of-tree modules, and there is nothing I can do
for those.

Cheers,
Longman
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5b2300b..575c9a5 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -748,6 +748,10 @@ void __init kvm_spinlock_init(void)
 	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
 		return;

+	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
+	if (num_possible_cpus() == 1)
+		return;
+
 	__pv_init_lock_hash();
 	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
On a VM with only 1 vCPU, the locking fast path will always be
successful. In this case, there is no need to use the PV qspinlock
code, which has higher overhead on the unlock side than the native
qspinlock code.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 arch/x86/kernel/kvm.c | 4 ++++
 1 file changed, 4 insertions(+)