From patchwork Mon Oct 29 14:07:18 2012
X-Patchwork-Submitter: Raghavendra K T
X-Patchwork-Id: 1663681
From: Raghavendra K T
To: Peter Zijlstra, "H. Peter Anvin", Avi Kivity, Ingo Molnar, Marcelo Tosatti, Rik van Riel
Cc: Srikar, "Nikunj A. Dadhania", KVM, Raghavendra K T, Jiannan Ouyang, Chegu Vinod, "Andrew M.
Theurer", LKML, Srivatsa Vaddagiri, Gleb Natapov, Andrew Jones
Date: Mon, 29 Oct 2012 19:37:18 +0530
Message-Id: <20121029140717.15448.83182.sendpatchset@codeblue>
In-Reply-To: <20121029140621.15448.92083.sendpatchset@codeblue>
References: <20121029140621.15448.92083.sendpatchset@codeblue>
Subject: [PATCH V2 RFC 3/3] kvm: Check system load and handle different commit cases accordingly

From: Raghavendra K T

The patch introduces a helper function that calculates the system load
(idea borrowed from the loadavg calculation). The load is normalized to
2048, i.e., a return value (threshold) of 2048 implies an approximately
1:1 committed guest.

In undercommit cases (load below threshold/2) we simply return from the
PLE handler. In overcommit cases (load above 1.75 * threshold) we do a
yield(). The rationale is to allow other VMs on the host to run instead
of burning CPU cycles.

Reviewed-by: Srikar Dronamraju
Signed-off-by: Raghavendra K T
---
The idea of yielding in overcommit cases (especially with a large number
of small guests) was Acked-by: Rik van Riel. Andrew Theurer has also
stressed the importance of reducing yield_to overhead and of using
yield().

(let threshold = 2048)

Rationale for using threshold/2 as the undercommit limit:
Requiring the load to be below (0.5 * threshold) avoids (a concern
raised by Rik) scenarios where we still have a preempted lock-holder
vcpu waiting to be scheduled; this arises when the rq length is > 1
even though we are undercommitted.

Rationale for using (1.75 * threshold) for the overcommit scenario:
This is a heuristic for the point where we should probably see rq
length > 1 and a vcpu of a different VM waiting to be scheduled.
 virt/kvm/kvm_main.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e376434..28bbdfb 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1697,15 +1697,43 @@ bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
 }
 #endif
 
+/*
+ * A load of 2048 corresponds to 1:1 overcommit
+ * undercommit threshold is half the 1:1 overcommit
+ * overcommit threshold is 1.75 times of 1:1 overcommit threshold
+ */
+#define COMMIT_THRESHOLD (FIXED_1)
+#define UNDERCOMMIT_THRESHOLD (COMMIT_THRESHOLD >> 1)
+#define OVERCOMMIT_THRESHOLD ((COMMIT_THRESHOLD << 1) - (COMMIT_THRESHOLD >> 2))
+
+unsigned long kvm_system_load(void)
+{
+	unsigned long load;
+
+	load = avenrun[0] + FIXED_1/200;
+	load = load / num_online_cpus();
+
+	return load;
+}
+
 void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 {
 	struct kvm *kvm = me->kvm;
 	struct kvm_vcpu *vcpu;
 	int last_boosted_vcpu = me->kvm->last_boosted_vcpu;
 	int yielded = 0;
+	unsigned long load;
 	int pass;
 	int i;
 
+	load = kvm_system_load();
+	/*
+	 * When we are undercommitted let us not waste time in
+	 * iterating over all the VCPUs.
+	 */
+	if (load < UNDERCOMMIT_THRESHOLD)
+		return;
+
 	kvm_vcpu_set_in_spin_loop(me, true);
 	/*
 	 * We boost the priority of a VCPU that is runnable but not
@@ -1735,6 +1763,13 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 			break;
 		}
 	}
+	/*
+	 * If we are not able to yield, especially in overcommit cases,
+	 * let us be courteous to other VMs' VCPUs waiting to be scheduled.
+	 */
+	if (!yielded && load > OVERCOMMIT_THRESHOLD)
+		yield();
+
 	kvm_vcpu_set_in_spin_loop(me, false);
 	/* Ensure vcpu is not eligible during next spinloop */