From patchwork Thu May 28 12:49:09 2015
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 6498081
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, borntraeger@de.ibm.com, Paolo Bonzini, Christoffer Dall
Subject: [PATCH] arm/arm64: KVM: Properly account for guest CPU time
Date: Thu, 28 May 2015 14:49:09 +0200
Message-Id: <1432817349-17917-1-git-send-email-christoffer.dall@linaro.org>
List-ID: X-Mailing-List: kvm@vger.kernel.org

Until now we have been calling kvm_guest_exit() before re-enabling
interrupts when we come back from the guest, but this has the
unfortunate effect that CPU time accounting done in the context of timer
interrupts doesn't properly notice that the time since the last tick was
spent in the guest.

Inspired by the comment in the x86 code, simply move the
kvm_guest_exit() call below the local_irq_enable() call, and change
__kvm_guest_exit() to kvm_guest_exit(), because we are now calling this
function with interrupts enabled.  Note that, AFAIU, we don't need an
explicit barrier like x86 does, because the arm/arm64 implementation of
local_irq_(en/dis)able has an implicit barrier.

At the same time, move the trace_kvm_exit() call outside of the atomic
section, since there is no reason for us to do that with interrupts
disabled.
Signed-off-by: Christoffer Dall
---
This patch is based on kvm/queue, because it has the kvm_guest_enter/exit
rework recently posted by Christian Borntraeger.

I hope I didn't get the logic of this wrong; there were two slightly
worrying facts about this:

First, we now enable, disable, and re-enable interrupts on each exit
path, but I couldn't see any performance overhead on hackbench - yes,
the only benchmark we care about.

Second, looking at the power and mips code, they seem to also call
kvm_guest_exit() before enabling interrupts, so I don't understand how
guest CPU time accounting works on those architectures.

 arch/arm/kvm/arm.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index e41cb11..bd0e463 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -559,8 +559,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
 
 		vcpu->mode = OUTSIDE_GUEST_MODE;
-		__kvm_guest_exit();
-		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
+		/*
+		 * Back from guest
+		 *************************************************************/
+
 		/*
 		 * We may have taken a host interrupt in HYP mode (ie
 		 * while executing the guest). This interrupt is still
@@ -574,8 +576,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		local_irq_enable();
 
 		/*
-		 * Back from guest
-		 *************************************************************/
+		 * We do local_irq_enable() before calling kvm_guest_exit so
+		 * that the cputime accounting done in the context of timer
+		 * interrupts properly accounts time spent in the guest as
+		 * guest time.
+		 */
+		kvm_guest_exit();
+		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
+
 		kvm_timer_sync_hwstate(vcpu);
 		kvm_vgic_sync_hwstate(vcpu);