From patchwork Tue Jul 10 19:31:39 2012
From: Raghavendra K T
To: "H. Peter Anvin", Thomas Gleixner, Marcelo Tosatti, Ingo Molnar,
    Avi Kivity, Rik van Riel
Cc: S390, Carsten Otte, Christian Borntraeger, KVM, Raghavendra K T,
    chegu vinod, "Andrew M. Theurer", LKML, X86, Gleb Natapov,
    linux390@de.ibm.com, Srivatsa Vaddagiri, Joerg Roedel
Date: Wed, 11 Jul 2012 01:01:39 +0530
Message-Id: <20120710193138.16440.31791.sendpatchset@codeblue>
In-Reply-To: <20120710193056.16440.40112.sendpatchset@codeblue>
References: <20120710193056.16440.40112.sendpatchset@codeblue>
Subject: [PATCH RFC V2 2/2] kvm PLE handler: Choose better candidate for directed yield

From: Raghavendra K T

Currently the PLE handler can repeatedly do a directed yield to the same
VCPU that has recently done a PL exit, which can degrade performance.
Instead, try to yield to the most eligible candidate by alternating the
yield target.
Specifically, give a chance to a VCPU which has:

 (a) not done a PLE exit at all (it is probably a preempted lock
     holder), or
 (b) been skipped in the last iteration because it did a PL exit and
     has probably become eligible now (the next eligible lock holder).

Signed-off-by: Raghavendra K T
Reviewed-by: Rik van Riel
---
 arch/s390/include/asm/kvm_host.h |    5 +++++
 arch/x86/include/asm/kvm_host.h  |    2 +-
 arch/x86/kvm/x86.c               |   30 ++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c              |    3 +++
 4 files changed, 39 insertions(+), 1 deletions(-)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index dd17537..884f2c4 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -256,5 +256,10 @@ struct kvm_arch{
 	struct gmap *gmap;
 };
 
+static inline bool kvm_arch_vcpu_check_and_update_eligible(struct kvm_vcpu *v)
+{
+	return true;
+}
+
 extern int sie64a(struct kvm_s390_sie_block *, u64 *);
 #endif
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 386f3e6..77c1358 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -966,7 +966,7 @@ extern bool kvm_find_async_pf_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 void kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err);
 
 int kvm_is_in_guest(void);
-
+bool kvm_arch_vcpu_check_and_update_eligible(struct kvm_vcpu *vcpu);
 void kvm_pmu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_reset(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b30c310..bf92ffc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6623,6 +6623,36 @@ bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
 		kvm_x86_ops->interrupt_allowed(vcpu);
 }
 
+/*
+ * Helper that checks whether a VCPU is eligible for directed yield.
+ * The most eligible candidate is decided by the following heuristics:
+ *
+ * (a) A VCPU which has not done a PL exit recently (probably a
+ *     preempted lock holder), indicated by @pause_loop_exited,
+ *     which is cleared just before guest_enter().
+ *
+ * (b) A VCPU which did a PL exit but was skipped last time; it has
+ *     probably become eligible now, since we likely yielded to the
+ *     lock holder then (tracked by toggling @dy_eligible each check).
+ *
+ * Yielding to a recently PL-exited VCPU ahead of a preempted lock
+ * holder leaves an ineligible VCPU burning CPU; preferring a potential
+ * lock holder improves the chance of lock progress.
+ */
+bool kvm_arch_vcpu_check_and_update_eligible(struct kvm_vcpu *vcpu)
+{
+	bool eligible;
+
+	eligible = !vcpu->arch.ple.pause_loop_exited ||
+		   (vcpu->arch.ple.pause_loop_exited &&
+		    vcpu->arch.ple.dy_eligible);
+
+	if (vcpu->arch.ple.pause_loop_exited)
+		vcpu->arch.ple.dy_eligible = !vcpu->arch.ple.dy_eligible;
+
+	return eligible;
+}
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_page_fault);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7e14068..519321a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1595,6 +1595,9 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 			continue;
 		if (waitqueue_active(&vcpu->wq))
 			continue;
+		if (!kvm_arch_vcpu_check_and_update_eligible(vcpu)) {
+			continue;
+		}
 		if (kvm_vcpu_yield_to(vcpu)) {
 			kvm->last_boosted_vcpu = i;
 			yielded = 1;
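
To see the alternating-eligibility heuristic in isolation, here is a minimal
userspace sketch. It is not the kernel code: struct ple_state, the
check_and_update_eligible() copy of the helper, and the main() harness are
invented for illustration; only the eligibility logic mirrors the patch.

#include <stdbool.h>
#include <stdio.h>

/* Userspace stand-in for the per-VCPU PLE state used by the patch. */
struct ple_state {
	bool pause_loop_exited;	/* set on PL exit, cleared before guest entry */
	bool dy_eligible;	/* flipped on every eligibility check */
};

/* Same logic as kvm_arch_vcpu_check_and_update_eligible() above. */
static bool check_and_update_eligible(struct ple_state *ple)
{
	bool eligible = !ple->pause_loop_exited ||
			(ple->pause_loop_exited && ple->dy_eligible);

	if (ple->pause_loop_exited)
		ple->dy_eligible = !ple->dy_eligible;

	return eligible;
}

int main(void)
{
	/* One VCPU that never PL-exited, one that just did. */
	struct ple_state holder  = { .pause_loop_exited = false };
	struct ple_state spinner = { .pause_loop_exited = true };
	int round;

	for (round = 1; round <= 3; round++)
		printf("round %d: holder eligible=%d, spinner eligible=%d\n",
		       round,
		       check_and_update_eligible(&holder),
		       check_and_update_eligible(&spinner));
	return 0;
}

This prints holder eligible=1 in every round, while the spinner alternates
0, 1, 0: it is skipped on the first pass so the likely lock holder gets the
directed yield, then becomes eligible again on the next pass.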