From patchwork Mon Sep 24 07:16:12 2012
From: Gleb Natapov
To: Takuya Yoshikawa
Cc: avi@redhat.com, mtosatti@redhat.com, kvm@vger.kernel.org
Subject: Re: [RFC PATCH] KVM: x86: Skip request checking branches in vcpu_enter_guest() more effectively
Date: Mon, 24 Sep 2012 09:16:12 +0200 (IST)
Message-ID: <20120924071612.GE20907@redhat.com>
In-Reply-To: <20120924152447.36c71b8f.yoshikawa_takuya_b1@lab.ntt.co.jp>
List-ID: kvm@vger.kernel.org
On Mon, Sep 24, 2012 at 03:24:47PM +0900, Takuya Yoshikawa wrote:
> This is an RFC since I have not done any comparison with the approach
> using for_each_set_bit() which can be seen in Avi's work.
>
> 	Takuya
> ---
>
> We did a simple test to see which requests we would get at the same time
> in vcpu_enter_guest() and got the following numbers:
>
>    (N)      (E)      (S)      (ES)    others
>   22.3%    30.7%    16.0%    29.5%     1.4%
>
> (N) : Nothing
> (E) : Only KVM_REQ_EVENT
> (S) : Only KVM_REQ_STEAL_UPDATE
> (ES): Only KVM_REQ_EVENT and KVM_REQ_STEAL_UPDATE
>
> * Note that the exact numbers can change for other guests.
>
Yes, for guests that do not enable steal time, KVM_REQ_STEAL_UPDATE
should never be set, but currently it is. The patch below (not tested)
should fix this.

---
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 901ad00..01572f5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1544,6 +1544,8 @@ static void accumulate_steal_time(struct kvm_vcpu *vcpu)
 	delta = current->sched_info.run_delay - vcpu->arch.st.last_steal;
 	vcpu->arch.st.last_steal = current->sched_info.run_delay;
 	vcpu->arch.st.accum_steal = delta;
+
+	kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
 }
 
 static void record_steal_time(struct kvm_vcpu *vcpu)
@@ -1673,8 +1675,6 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 		accumulate_steal_time(vcpu);
 		preempt_enable();
-
-		kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
 
 		break;
 	case MSR_KVM_PV_EOI_EN:
 		if (kvm_lapic_enable_pv_eoi(vcpu, data))
@@ -2336,7 +2336,6 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	}
 
 	accumulate_steal_time(vcpu);
-	kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)

--
			Gleb.