From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, x86@kernel.org, virtualization@lists.linux-foundation.org
Date: Wed, 6 Sep 2017 19:36:24 +0200
Message-Id: <20170906173625.18158-2-jgross@suse.com>
X-Mailer: git-send-email 2.12.3
In-Reply-To: <20170906173625.18158-1-jgross@suse.com>
References: <20170906173625.18158-1-jgross@suse.com>
Cc: Juergen Gross <jgross@suse.com>, jeremy@goop.org, peterz@infradead.org,
	rusty@rustcorp.com.au, chrisw@sous-sol.org, mingo@redhat.com,
	tglx@linutronix.de, hpa@zytor.com, longman@redhat.com,
	akataria@vmware.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 1/2] paravirt/locks: use new static key for controlling call of virt_spin_lock()

There are cases where a guest tries to switch spinlocks to bare-metal
behavior (e.g. by setting the "xen_nopvspin" boot parameter). Today this
has the downside of falling back to the unfair test-and-set scheme for
qspinlocks, because virt_spin_lock() detects the virtualized environment.

Add a static key controlling whether virt_spin_lock() should be called
or not. When running on bare metal, set the new key to false.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/qspinlock.h     | 11 ++++++++++-
 arch/x86/kernel/paravirt-spinlocks.c |  6 ++++++
 arch/x86/kernel/smpboot.c            |  2 ++
 kernel/locking/qspinlock.c           |  4 ++++
 4 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 48a706f641f2..308dfd0714c7 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -1,6 +1,7 @@
 #ifndef _ASM_X86_QSPINLOCK_H
 #define _ASM_X86_QSPINLOCK_H
 
+#include <linux/jump_label.h>
 #include <asm/cpufeature.h>
 #include <asm-generic/qspinlock_types.h>
 #include <asm/paravirt.h>
@@ -46,10 +47,14 @@ static inline void queued_spin_unlock(struct qspinlock *lock)
 #endif
 
 #ifdef CONFIG_PARAVIRT
+DECLARE_STATIC_KEY_TRUE(virt_spin_lock_key);
+
+void native_pv_lock_init(void) __init;
+
 #define virt_spin_lock virt_spin_lock
 static inline bool virt_spin_lock(struct qspinlock *lock)
 {
-	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+	if (!static_branch_likely(&virt_spin_lock_key))
 		return false;
 
 	/*
@@ -65,6 +70,10 @@ static inline bool virt_spin_lock(struct qspinlock *lock)
 	return true;
 }
 
+#else
+static inline void native_pv_lock_init(void)
+{
+}
 #endif /* CONFIG_PARAVIRT */
 
 #include <asm-generic/qspinlock.h>
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 8f2d1c9d43a8..2fc65ddea40d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -42,3 +42,9 @@ struct pv_lock_ops pv_lock_ops = {
 #endif /* SMP */
 };
 EXPORT_SYMBOL(pv_lock_ops);
+
+void __init native_pv_lock_init(void)
+{
+	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+		static_branch_disable(&virt_spin_lock_key);
+}
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 54b9e89d4d6b..21500d3ba359 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -77,6 +77,7 @@
 #include <asm/i8259.h>
 #include <asm/realmode.h>
 #include <asm/misc.h>
+#include <asm/qspinlock.h>
 
 /* Number of siblings per CPU package */
 int smp_num_siblings = 1;
@@ -1381,6 +1382,7 @@ void __init native_smp_prepare_boot_cpu(void)
 	/* already set me in cpu_online_mask in boot_cpu_init() */
 	cpumask_set_cpu(me, cpu_callout_mask);
 	cpu_set_state_online(me);
+	native_pv_lock_init();
 }
 
 void __init native_smp_cpus_done(unsigned int max_cpus)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 294294c71ba4..838d235b87ef 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -76,6 +76,10 @@
 #define MAX_NODES	4
 #endif
 
+#ifdef CONFIG_PARAVIRT
+DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
+#endif
+
 /*
  * Per-CPU queue node structures; we can never have more than 4 nested
  * contexts: task, softirq, hardirq, nmi.
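
For readers unfamiliar with the jump-label API the patch builds on, here is a
stand-alone sketch of the same pattern in miniature. It is not part of the
patch, and all names in it (example_guest_key, example_guest_path,
example_guest_key_init) are hypothetical: a static key that defaults to true
keeps the guest-only path as the likely branch, and disabling the key once
during boot on bare metal patches that branch out entirely.

/*
 * Illustrative sketch only -- hypothetical code, not part of this patch.
 * It mirrors the pattern used above: a default-true static key guards a
 * guest-only path, and boot code disables the key when no hypervisor is
 * detected, so the branch is patched away on bare metal.
 */
#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/printk.h>
#include <asm/cpufeature.h>

DEFINE_STATIC_KEY_TRUE(example_guest_key);	/* hypothetical key */

static bool example_guest_path(void)
{
	/* Compiles to a patchable jump; returns false once the key is off. */
	if (!static_branch_likely(&example_guest_key))
		return false;

	/* guest-only fallback work would go here */
	return true;
}

static int __init example_guest_key_init(void)
{
	/* Same idea as native_pv_lock_init(): no hypervisor -> key off. */
	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
		static_branch_disable(&example_guest_key);

	pr_info("guest path active: %d\n", example_guest_path());
	return 0;
}
early_initcall(example_guest_key_init);

This is also why the patch uses DEFINE_STATIC_KEY_TRUE rather than a plain
boolean: once the key is disabled, the bare-metal qspinlock fast path skips
virt_spin_lock() without even loading a flag from memory.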