From patchwork Wed Aug 29 19:21:01 2012
X-Patchwork-Submitter: Raghavendra K T
X-Patchwork-Id: 1385831
From: Raghavendra K T
To: Avi Kivity, Marcelo Tosatti, Rik van Riel
Cc: Srikar, "Nikunj A. Dadhania", KVM, Raghavendra K T, LKML, Srivatsa Vaddagiri, Gleb Natapov
Date: Thu, 30 Aug 2012 00:51:01 +0530
Message-Id: <20120829192100.22412.92575.sendpatchset@codeblue>
Subject: [PATCH RFC 1/1] kvm: Use vcpu_id as pivot instead of last boosted vcpu in PLE handler

The idea of starting from the next vcpu (source of yield_to + 1) seems to work
better for overcommitted guests than using the last boosted vcpu. It also lets
us remove a per-VM variable. After this patch, the iteration over eligible
candidates starts at vcpu source + 1 and ends at source - 1 (after wrapping).

Thanks to Nikunj for his quick verification of the patch.

Please let me know if this patch is interesting and makes sense. A small
userspace sketch of the resulting iteration order follows the patch below.

---
====8<====

From: Raghavendra K T

Currently we use the vcpu after the last boosted vcpu as the starting point
when choosing an eligible vcpu for directed yield. In overcommitted scenarios,
if several vcpus try to do directed yield at the same time, they all start
from the same vcpu, wasting cpu time on failed yields and double runqueue
locking. Since the improved PLE handler already makes it unlikely that the
same vcpu keeps trying directed yield, we can start from the vcpu next to the
source of the yield_to.
Suggested-by: Srikar Dronamraju
Signed-off-by: Raghavendra K T
---
 include/linux/kvm_host.h |  1 -
 virt/kvm/kvm_main.c      | 12 ++++--------
 2 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b70b48b..64a090d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -275,7 +275,6 @@ struct kvm {
 #endif
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 	atomic_t online_vcpus;
-	int last_boosted_vcpu;
 	struct list_head vm_list;
 	struct mutex lock;
 	struct kvm_io_bus *buses[KVM_NR_BUSES];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2468523..65a6c83 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1584,7 +1584,6 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 {
 	struct kvm *kvm = me->kvm;
 	struct kvm_vcpu *vcpu;
-	int last_boosted_vcpu = me->kvm->last_boosted_vcpu;
 	int yielded = 0;
 	int pass;
 	int i;
@@ -1594,21 +1593,18 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 	 * currently running, because it got preempted by something
 	 * else and called schedule in __vcpu_run. Hopefully that
 	 * VCPU is holding the lock that we need and will release it.
-	 * We approximate round-robin by starting at the last boosted VCPU.
+	 * We approximate round-robin by starting at the next VCPU.
 	 */
	for (pass = 0; pass < 2 && !yielded; pass++) {
 		kvm_for_each_vcpu(i, vcpu, kvm) {
-			if (!pass && i <= last_boosted_vcpu) {
-				i = last_boosted_vcpu;
+			if (!pass && i <= me->vcpu_id) {
+				i = me->vcpu_id;
 				continue;
-			} else if (pass && i > last_boosted_vcpu)
+			} else if (pass && i >= me->vcpu_id)
 				break;
-			if (vcpu == me)
-				continue;
 			if (waitqueue_active(&vcpu->wq))
 				continue;
 			if (kvm_vcpu_yield_to(vcpu)) {
-				kvm->last_boosted_vcpu = i;
 				yielded = 1;
 				break;
 			}
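
For reference, here is a minimal userspace sketch (not kernel code) of the
iteration order the patched kvm_vcpu_on_spin() produces. NR_VCPUS, the plain
index loop, and the printed candidate list are illustrative stand-ins for
kvm->vcpus[], kvm_for_each_vcpu() and the actual yield attempts:

/*
 * Sketch of the two-pass wrap-around walk: pass 0 covers vcpu_id + 1
 * .. NR_VCPUS - 1, pass 1 covers 0 .. vcpu_id - 1.
 */
#include <stdio.h>

#define NR_VCPUS 8

int main(void)
{
	int me = 3;	/* vcpu_id of the yielding vCPU (illustrative) */
	int pass, i;

	for (pass = 0; pass < 2; pass++) {
		for (i = 0; i < NR_VCPUS; i++) {
			if (!pass && i <= me) {
				i = me;		/* skip ahead to me + 1 */
				continue;
			} else if (pass && i >= me)
				break;		/* wrapped back to me - 1 */
			printf("candidate vcpu %d\n", i);
		}
	}
	return 0;
}

With me = 3 and NR_VCPUS = 8 this prints candidates 4 5 6 7 0 1 2: the walk
starts at vcpu_id + 1, wraps, and stops at vcpu_id - 1, never visiting the
yielding vCPU itself. That is also why the explicit "if (vcpu == me)" check
can be dropped in the patch.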