
[RFC,V2,2/2] kvm PLE handler: Choose better candidate for directed yield

Message ID 20120710193138.16440.31791.sendpatchset@codeblue (mailing list archive)
State New, archived
Headers show

Commit Message

Raghavendra K T July 10, 2012, 7:31 p.m. UTC
From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

Currently the PLE handler can repeatedly do a directed yield to the same
VCPU that has recently done a PL exit. This can degrade performance.

Instead, try to yield to the most eligible candidate by alternating yields.
Specifically, give a chance to a VCPU which has:
 (a) not done a PLE exit at all (it is probably a preempted lock holder)
 (b) been skipped in the last iteration because it did a PL exit, and has
 probably become eligible now (the next eligible lock holder)

Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---

 arch/s390/include/asm/kvm_host.h |    5 +++++
 arch/x86/include/asm/kvm_host.h  |    2 +-
 arch/x86/kvm/x86.c               |   30 ++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c              |    3 +++
 4 files changed, 39 insertions(+), 1 deletions(-)



--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Rik van Riel July 10, 2012, 7:40 p.m. UTC | #1
On 07/10/2012 03:31 PM, Raghavendra K T wrote:
> From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>
> Currently PLE handler can repeatedly do a directed yield to same vcpu
> that has recently done PL exit. This can degrade the performance.
>
> Try to yield to most eligible guy instead by alternate yielding.
> Precisely, give chance to a VCPU which has:
>   (a) Not done PLE exit at all (probably he is preempted lock-holder)
>   (b) VCPU skipped in last iteration because it did PL exit, and probably
>   has become eligible now (next eligible lock holder)
>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

Reviewed-by: Rik van Riel <riel@redhat.com>
Ingo Molnar July 10, 2012, 8:50 p.m. UTC | #2
* Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> wrote:

> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1595,6 +1595,9 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>  				continue;
>  			if (waitqueue_active(&vcpu->wq))
>  				continue;
> +			if (!kvm_arch_vcpu_check_and_update_eligible(vcpu)) {
> +				continue;
> +			}

Saw this fly by for a second time and now it annoyed me enough 
to mention it: those curly braces are superfluous and should be 
dropped.

Thanks,

	Ingo
Raghavendra K T July 11, 2012, 3 a.m. UTC | #3
On 07/11/2012 02:20 AM, Ingo Molnar wrote:
>
> * Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>  wrote:
>
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -1595,6 +1595,9 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>>   				continue;
>>   			if (waitqueue_active(&vcpu->wq))
>>   				continue;
>> +			if (!kvm_arch_vcpu_check_and_update_eligible(vcpu)) {
>> +				continue;
>> +			}
>
> Saw this fly by for a second time and now it annoyed me enough
> to mention it: those curly braces are superfluous and should be
> dropped.
>

Oops! Sorry, I will remove the {} and resend with Rik's Reviewed-by. TBH,
the block got simplified over (local) iterations and this one was left
out. My mistake.


Patch

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index dd17537..884f2c4 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -256,5 +256,10 @@  struct kvm_arch{
 	struct gmap *gmap;
 };
 
+static inline bool kvm_arch_vcpu_check_and_update_eligible(struct kvm_vcpu *v)
+{
+	return true;
+}
+
 extern int sie64a(struct kvm_s390_sie_block *, u64 *);
 #endif
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 386f3e6..77c1358 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -966,7 +966,7 @@  extern bool kvm_find_async_pf_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 void kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err);
 
 int kvm_is_in_guest(void);
-
+bool kvm_arch_vcpu_check_and_update_eligible(struct kvm_vcpu *vcpu);
 void kvm_pmu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_reset(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b30c310..bf92ffc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6623,6 +6623,36 @@  bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
 			kvm_x86_ops->interrupt_allowed(vcpu);
 }
 
+/*
+ * Helper that checks whether a VCPU is eligible for directed yield.
+ * The most eligible candidate is decided by the following heuristics:
+ *
+ *  (a) A VCPU which has not done a PL exit recently (a preempted lock
+ *      holder), indicated by @pause_loop_exited, cleared before guest_enter().
+ *
+ *  (b) A VCPU which has done a PL exit but did not get a chance last time
+ *      (it has probably become eligible now, since we likely yielded to
+ *      the lock holder in the last iteration). This is done by toggling
+ *      @dy_eligible each time a VCPU is checked for eligibility.
+ *
+ *   Yielding to a recently PL-exited VCPU before yielding to a preempted
+ *   lock holder results in an ineligible VCPU burning CPU. Giving priority
+ *   to a potential lock holder increases the chance of lock progress.
+ */
+bool kvm_arch_vcpu_check_and_update_eligible(struct kvm_vcpu *vcpu)
+{
+	bool eligible;
+
+	eligible = !vcpu->arch.ple.pause_loop_exited ||
+			(vcpu->arch.ple.pause_loop_exited &&
+			 vcpu->arch.ple.dy_eligible);
+
+	if (vcpu->arch.ple.pause_loop_exited)
+		vcpu->arch.ple.dy_eligible = !vcpu->arch.ple.dy_eligible;
+
+	return eligible;
+}
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_page_fault);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7e14068..519321a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1595,6 +1595,9 @@  void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 				continue;
 			if (waitqueue_active(&vcpu->wq))
 				continue;
+			if (!kvm_arch_vcpu_check_and_update_eligible(vcpu)) {
+				continue;
+			}
 			if (kvm_vcpu_yield_to(vcpu)) {
 				kvm->last_boosted_vcpu = i;
 				yielded = 1;