From patchwork Wed Feb 8 18:00:24 2017
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 9562953
From: Waiman Long
To: Jeremy Fitzhardinge, Chris Wright, Alok Kataria, Rusty Russell,
 Peter Zijlstra, Ingo Molnar, Thomas Gleixner, "H. Peter Anvin"
Date: Wed, 8 Feb 2017 13:00:24 -0500
Message-Id: <1486576825-17058-1-git-send-email-longman@redhat.com>
Cc: linux-arch@vger.kernel.org, Juergen Gross, kvm@vger.kernel.org,
 Radim Krčmář, Pan Xinhui, x86@kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, Waiman Long, Paolo Bonzini,
 xen-devel@lists.xenproject.org, Boris Ostrovsky
Subject: [Xen-devel] [PATCH 1/2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function

While running a fio sequential write test against an XFS ramdisk on a
2-socket x86-64 system, the %CPU times reported by perf were as follows:

 71.27%  0.28%  fio  [k] down_write
 70.99%  0.01%  fio  [k] call_rwsem_down_write_failed
 69.43%  1.18%  fio  [k] rwsem_down_write_failed
 65.51% 54.57%  fio  [k] osq_lock
  9.72%  7.99%  fio  [k] __raw_callee_save___kvm_vcpu_is_preempted
  4.16%  4.16%  fio  [k] __kvm_vcpu_is_preempted

So making vcpu_is_preempted() a callee-save function has a pretty high
cost associated with it. As vcpu_is_preempted() is only called within the
spinlock, mutex and rwsem slowpaths, there isn't much to gain by making it
callee-save, so it is now changed to a normal function call instead.

With this patch applied, the aggregate bandwidth of the fio sequential
write test increased slightly, from 2563.3MB/s to 2588.1MB/s (about 1%).
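The change itself is simple: instead of routing vcpu_is_preempted() through
a callee-save thunk (the __raw_callee_save_* wrapper visible in the profile
above), pv_lock_ops holds a plain C function pointer that the
hypervisor-specific init code overrides. The following is a minimal
standalone model of that dispatch pattern, assuming nothing beyond standard
C; the names are illustrative and it is not kernel code:

/*
 * Minimal standalone model (not kernel code) of the dispatch this patch
 * switches to: a plain function pointer in an ops structure, overridden
 * by hypervisor-specific setup code.  All names here are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

struct lock_ops {
        bool (*vcpu_is_preempted)(int cpu);     /* plain pointer, as in the patch */
};

/* Bare-metal default: a physical CPU is never "preempted". */
static bool native_vcpu_is_preempted(int cpu)
{
        (void)cpu;
        return false;
}

/* Hypervisor-specific override, analogous to __kvm_vcpu_is_preempted(). */
static bool hv_vcpu_is_preempted(int cpu)
{
        /* The real code reads preemption state published by the hypervisor. */
        return cpu == 1;        /* pretend vCPU 1 is currently scheduled out */
}

static struct lock_ops lock_ops = {
        .vcpu_is_preempted = native_vcpu_is_preempted,
};

int main(void)
{
        /* Analogous to kvm_spinlock_init()/xen_init_spinlocks() installing the op. */
        lock_ops.vcpu_is_preempted = hv_vcpu_is_preempted;

        printf("cpu 0 preempted? %d\n", lock_ops.vcpu_is_preempted(0));
        printf("cpu 1 preempted? %d\n", lock_ops.vcpu_is_preempted(1));
        return 0;
}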
Signed-off-by: Waiman Long
---
 arch/x86/include/asm/paravirt.h       | 2 +-
 arch/x86/include/asm/paravirt_types.h | 2 +-
 arch/x86/kernel/kvm.c                 | 7 ++-----
 arch/x86/kernel/paravirt-spinlocks.c  | 6 ++----
 arch/x86/xen/spinlock.c               | 4 +---
 5 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 864f57b..2515885 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -676,7 +676,7 @@ static __always_inline void pv_kick(int cpu)
 
 static __always_inline bool pv_vcpu_is_preempted(int cpu)
 {
-	return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
+	return PVOP_CALL1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
 }
 
 #endif /* SMP && PARAVIRT_SPINLOCKS */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index bb2de45..88dc852 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -309,7 +309,7 @@ struct pv_lock_ops {
 	void (*wait)(u8 *ptr, u8 val);
 	void (*kick)(int cpu);
 
-	struct paravirt_callee_save vcpu_is_preempted;
+	bool (*vcpu_is_preempted)(int cpu);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 099fcba..eb3753d 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -595,7 +595,6 @@ __visible bool __kvm_vcpu_is_preempted(int cpu)
 
 	return !!src->preempted;
 }
-PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
 
 /*
  * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
@@ -614,10 +613,8 @@ void __init kvm_spinlock_init(void)
 	pv_lock_ops.wait = kvm_wait;
 	pv_lock_ops.kick = kvm_kick_cpu;
 
-	if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
-		pv_lock_ops.vcpu_is_preempted =
-			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
-	}
+	if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME))
+		pv_lock_ops.vcpu_is_preempted = __kvm_vcpu_is_preempted;
 }
 
 #endif /* CONFIG_PARAVIRT_SPINLOCKS */
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 6259327..da050bc 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -24,12 +24,10 @@ __visible bool __native_vcpu_is_preempted(int cpu)
 {
 	return false;
 }
-PV_CALLEE_SAVE_REGS_THUNK(__native_vcpu_is_preempted);
 
 bool pv_is_native_vcpu_is_preempted(void)
 {
-	return pv_lock_ops.vcpu_is_preempted.func ==
-		__raw_callee_save___native_vcpu_is_preempted;
+	return pv_lock_ops.vcpu_is_preempted == __native_vcpu_is_preempted;
 }
 
 struct pv_lock_ops pv_lock_ops = {
@@ -38,7 +36,7 @@ struct pv_lock_ops pv_lock_ops = {
 	.queued_spin_unlock = PV_CALLEE_SAVE(__native_queued_spin_unlock),
 	.wait = paravirt_nop,
 	.kick = paravirt_nop,
-	.vcpu_is_preempted = PV_CALLEE_SAVE(__native_vcpu_is_preempted),
+	.vcpu_is_preempted = __native_vcpu_is_preempted,
 #endif /* SMP */
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 25a7c43..c85bb8f 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -114,8 +114,6 @@ void xen_uninit_lock_cpu(int cpu)
 	per_cpu(irq_name, cpu) = NULL;
 }
 
-PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
-
 /*
  * Our init of PV spinlocks is split in two init functions due to us
  * using paravirt patching and jump labels patching and having to do
@@ -138,7 +136,7 @@ void __init xen_init_spinlocks(void)
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
 	pv_lock_ops.wait = xen_qlock_wait;
 	pv_lock_ops.kick = xen_qlock_kick;
-	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
+	pv_lock_ops.vcpu_is_preempted = xen_vcpu_stolen;
 }
 
 static __init int xen_parse_nopvspin(char *arg)
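As a follow-on illustration of why vcpu_is_preempted() is called from the
spinlock, mutex and rwsem slowpaths in the first place: an optimistic
spinner wants to stop spinning as soon as the lock holder's vCPU has been
scheduled out by the hypervisor. Below is a minimal standalone sketch of
that pattern, using hypothetical types and helpers rather than the kernel's
actual slowpath code:

/*
 * Standalone sketch, not kernel code: why the lock slowpaths poll the
 * preemption state at all.  The types and helpers are hypothetical
 * stand-ins, not the kernel's osq/mutex/rwsem internals.
 */
#include <stdatomic.h>
#include <stdbool.h>

struct fake_lock {
        atomic_int owner_cpu;           /* -1 when free, else the holder's CPU */
};

/* Stand-in for pv_lock_ops.vcpu_is_preempted after this patch. */
static bool (*vcpu_is_preempted)(int cpu);

/*
 * Spin while the lock is held, but give up as soon as the holder's vCPU
 * has been preempted: spinning then only burns cycles that the holder
 * needs in order to make progress and release the lock.
 */
static bool spin_on_owner(struct fake_lock *lock)
{
        int owner;

        while ((owner = atomic_load(&lock->owner_cpu)) >= 0) {
                if (vcpu_is_preempted(owner))
                        return false;   /* caller should sleep instead */
                /* a cpu_relax()-style pause would go here */
        }
        return true;                    /* lock was released while spinning */
}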