From patchwork Mon Jul  9 17:05:43 2012
X-Patchwork-Submitter: Avi Kivity
X-Patchwork-Id: 1174151
From: Avi Kivity
To: Marcelo Tosatti
Cc: kvm@vger.kernel.org
Subject: [PATCH v3 4/6] KVM: Optimize vcpu->requests checking
Date: Mon, 9 Jul 2012 20:05:43 +0300
Message-Id: <1341853545-3023-5-git-send-email-avi@redhat.com>
In-Reply-To: <1341853545-3023-1-git-send-email-avi@redhat.com>
References: <1341853545-3023-1-git-send-email-avi@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Instead of checking each request linearly, use for_each_set_bit() to
iterate over just the requests that are set (there should be zero or
one set most of the time).
To avoid a useless call to find_first_bit(), add an extra check for no
requests set.  To avoid an extra indent and an unreviewable patch, I
added a rather ugly goto.  This can be fixed in a later patch.

Signed-off-by: Avi Kivity
---
 arch/x86/kvm/x86.c | 62 ++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 39 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 162231f..9296dce 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5217,6 +5217,7 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 
 static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 {
+	unsigned req;
 	int r;
 	bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
 		vcpu->run->request_interrupt_window;
@@ -5225,57 +5226,67 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (unlikely(req_int_win))
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 
-	if (vcpu->requests) {
-		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu)) {
+	if (!vcpu->requests)
+		goto no_requests;
+
+	for_each_set_bit(req, &vcpu->requests, BITS_PER_LONG) {
+		clear_bit(req, &vcpu->requests);
+		switch (req) {
+		case KVM_REQ_MMU_RELOAD:
 			kvm_mmu_unload(vcpu);
 			r = kvm_mmu_reload(vcpu);
 			if (unlikely(r)) {
 				kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
 				goto out;
 			}
-		}
-		if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
+			break;
+		case KVM_REQ_MIGRATE_TIMER:
 			__kvm_migrate_timers(vcpu);
-		if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
+			break;
+		case KVM_REQ_CLOCK_UPDATE:
 			r = kvm_guest_time_update(vcpu);
 			if (unlikely(r))
 				goto out;
-		}
-		if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
+			break;
+		case KVM_REQ_MMU_SYNC:
 			kvm_mmu_sync_roots(vcpu);
-		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
+			break;
+		case KVM_REQ_TLB_FLUSH:
 			kvm_x86_ops->tlb_flush(vcpu);
-		if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
+			break;
+		case KVM_REQ_REPORT_TPR_ACCESS:
 			vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
 			r = 0;
 			goto out;
-		}
-		if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
+		case KVM_REQ_TRIPLE_FAULT:
 			vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
 			r = 0;
 			goto out;
-		}
-		if (kvm_check_request(KVM_REQ_DEACTIVATE_FPU, vcpu)) {
+		case KVM_REQ_DEACTIVATE_FPU:
 			vcpu->fpu_active = 0;
 			kvm_x86_ops->fpu_deactivate(vcpu);
-		}
-		if (kvm_check_request(KVM_REQ_APF_HALT, vcpu)) {
+			break;
+		case KVM_REQ_APF_HALT:
 			/* Page is swapped out. Do synthetic halt */
 			vcpu->arch.apf.halted = true;
 			r = 1;
 			goto out;
-		}
-		if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
+		case KVM_REQ_STEAL_UPDATE:
 			record_steal_time(vcpu);
-		if (kvm_check_request(KVM_REQ_NMI, vcpu))
+			break;
+		case KVM_REQ_NMI:
 			process_nmi(vcpu);
-		req_immediate_exit =
-			kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);
-		if (kvm_check_request(KVM_REQ_PMU, vcpu))
+			break;
+		case KVM_REQ_IMMEDIATE_EXIT:
+			req_immediate_exit = true;
+			break;
+		case KVM_REQ_PMU:
 			kvm_handle_pmu_event(vcpu);
-		if (kvm_check_request(KVM_REQ_PMI, vcpu))
+			break;
+		case KVM_REQ_PMI:
 			kvm_deliver_pmi(vcpu);
-		if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
+			break;
+		case KVM_REQ_EVENT:
 			inject_pending_event(vcpu);
 
 			/* enable NMI/IRQ window open exits if needed */
@@ -5288,9 +5299,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 				update_cr8_intercept(vcpu);
 				kvm_lapic_sync_to_vapic(vcpu);
 			}
+			break;
+		default:
+			BUG();
 		}
 	}
 
+no_requests:
+
 	preempt_disable();
 
 	kvm_x86_ops->prepare_guest_switch(vcpu);