
x86/kvm: Don't use pvqspinlock code if only 1 vCPU

Message ID 1531864767-30648-1-git-send-email-longman@redhat.com (mailing list archive)
State New, archived

Commit Message

Waiman Long July 17, 2018, 9:59 p.m. UTC
On a VM with only 1 vCPU, the locking fast path will always be
successful. In this case, there is no need to use the PV qspinlock
code, which has higher overhead on the unlock side than the native
qspinlock code.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 arch/x86/kernel/kvm.c | 4 ++++
 1 file changed, 4 insertions(+)
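
[Context: the "higher overhead on the unlock side" that the commit message
refers to comes from the PV unlock helper needing an atomic cmpxchg, and
possibly a hypercall to kick a halted waiter vCPU, where the native unlock
is a single release store. A rough sketch of the two paths, modeled on the
upstream qspinlock code with types and helpers elided; this is not a
verbatim quote of the kernel sources:]

	/* Native unlock: one release store of 0 to the locked byte. */
	static __always_inline void native_queued_spin_unlock(struct qspinlock *lock)
	{
		smp_store_release(&lock->locked, 0);
	}

	/*
	 * PV unlock: an atomic cmpxchg even when uncontended, plus a slow
	 * path that looks up and kicks a halted waiter vCPU if one marked
	 * the lock with _Q_SLOW_VAL before going to sleep.
	 */
	__visible void __pv_queued_spin_unlock(struct qspinlock *lock)
	{
		u8 locked = cmpxchg_release(&lock->locked, _Q_LOCKED_VAL, 0);

		if (likely(locked == _Q_LOCKED_VAL))
			return;

		__pv_queued_spin_unlock_slowpath(lock, locked);
	}

On a single-vCPU guest neither the queueing nor the kick can ever be
needed, so skipping the PV setup keeps the cheaper native paths.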

Comments

Paolo Bonzini July 18, 2018, 11:51 a.m. UTC | #1
On 17/07/2018 23:59, Waiman Long wrote:
> On a VM with only 1 vCPU, the locking fast path will always be
> successful. In this case, there is no need to use the PV qspinlock
> code, which has higher overhead on the unlock side than the native
> qspinlock code.
> 
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
>  arch/x86/kernel/kvm.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5b2300b..575c9a5 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -748,6 +748,10 @@ void __init kvm_spinlock_init(void)
>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>  		return;
>  
> +	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> +	if (num_possible_cpus() == 1)
> +		return;
> +
>  	__pv_init_lock_hash();
>  	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>  	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
> 

Queued, thanks.

Paolo
Konrad Rzeszutek Wilk July 19, 2018, 1:15 a.m. UTC | #2
On Tue, Jul 17, 2018 at 05:59:27PM -0400, Waiman Long wrote:
> On a VM with only 1 vCPU, the locking fast path will always be
> successful. In this case, there is no need to use the PV qspinlock
> code, which has higher overhead on the unlock side than the native
> qspinlock code.

Why not make this global? That is, for both KVM and Xen and any
other virtualized guest that uses this?

> 
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
>  arch/x86/kernel/kvm.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5b2300b..575c9a5 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -748,6 +748,10 @@ void __init kvm_spinlock_init(void)
>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>  		return;
>  
> +	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> +	if (num_possible_cpus() == 1)
> +		return;
> +
>  	__pv_init_lock_hash();
>  	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>  	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
> -- 
> 1.8.3.1
>
Waiman Long July 19, 2018, 1:34 p.m. UTC | #3
On 07/18/2018 09:15 PM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jul 17, 2018 at 05:59:27PM -0400, Waiman Long wrote:
>> On a VM with only 1 vCPU, the locking fast path will always be
>> successful. In this case, there is no need to use the PV qspinlock
>> code, which has higher overhead on the unlock side than the native
>> qspinlock code.
> Why not make this global? That is, for both KVM and Xen and any
> other virtualized guest that uses this?

Right, I will send another patch for Xen. The pvqspinlock code has to be
explicitly opted into. Right now, both Xen and KVM use it in the tree. I
am not sure about out-of-tree modules; there is nothing I can do
for those.

Cheers,
Longman
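
[For reference, the analogous check on the Xen side would presumably go
into xen_init_spinlocks() in arch/x86/xen/spinlock.c. This is a
hypothetical sketch only, not the patch that was actually posted; the
assumption that the xen_pvspin flag must also be cleared is the editor's:]

	void __init xen_init_spinlocks(void)
	{
		/* Skip the PV qspinlock setup on a single-vCPU guest. */
		if (num_possible_cpus() == 1) {
			xen_pvspin = false;
			return;
		}
		...
	}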

Patch

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5b2300b..575c9a5 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -748,6 +748,10 @@ void __init kvm_spinlock_init(void)
 	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
 		return;
 
+	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
+	if (num_possible_cpus() == 1)
+		return;
+
 	__pv_init_lock_hash();
 	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);