From patchwork Fri Oct 20 11:49:37 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10019967
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Christoffer Dall, Shih-Wei Li, kvm@vger.kernel.org
Subject: [PATCH v4 18/20] KVM: arm/arm64: Avoid phys timer emulation in vcpu entry/exit
Date: Fri, 20 Oct 2017 13:49:37 +0200
Message-Id: <20171020114939.12554-19-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20171020114939.12554-1-christoffer.dall@linaro.org>
References: <20171020114939.12554-1-christoffer.dall@linaro.org>

There is no need to schedule and cancel an hrtimer when entering and
exiting the guest, because we know when the physical timer is going to
fire when the guest programs it, and we can simply program the hrtimer
at that point.

Now that the register modifications from the guest go through the
kvm_arm_timer_set/get_reg functions, which always call
kvm_timer_update_state(), we can simply consider the timer state in
this function and schedule and cancel the timers as needed.

This avoids looking at the physical timer emulation state when entering
and exiting the VCPU, allowing for faster servicing of the VM when
needed.

Signed-off-by: Christoffer Dall
Reviewed-by: Marc Zyngier
---
 virt/kvm/arm/arch_timer.c | 76 +++++++++++++++++++++++++++++++----------------
 1 file changed, 51 insertions(+), 25 deletions(-)
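As additional context for the "register modifications ... go through the
kvm_arm_timer_set/get_reg functions" point above, the sketch below shows
roughly how a write to the emulated EL1 physical timer now ends up arming or
cancelling the soft timer. It is an illustrative sketch only, not part of this
patch: the register id and field names (KVM_REG_ARM_PTIMER_CVAL, cnt_cval) are
assumptions standing in for the real definitions, and it relies on the
kernel-internal types used elsewhere in virt/kvm/arm/arch_timer.c.

	/*
	 * Illustrative sketch only -- not part of this patch.  It shows how
	 * a write to the emulated EL1 physical timer funnels through
	 * kvm_timer_update_state(), which after this patch also arms or
	 * cancels the phys_timer hrtimer via phys_timer_emulate().
	 */
	int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
	{
		struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

		switch (regid) {
		case KVM_REG_ARM_PTIMER_CVAL:		/* assumed register id */
			ptimer->cnt_cval = value;	/* assumed field name */
			break;
		/* ... other timer registers elided ... */
		default:
			return -1;
		}

		/*
		 * kvm_timer_update_state() raises or lowers the ptimer line
		 * and calls phys_timer_emulate(), so the hrtimer is
		 * programmed here, at register-write time, rather than on
		 * every VM entry/exit.
		 */
		kvm_timer_update_state(vcpu);
		return 0;
	}

With that path in place, kvm_timer_vcpu_load() re-arms the hrtimer and
kvm_timer_vcpu_put() cancels it (see the hunks below), so the flush/sync code
run on every guest entry/exit no longer needs to touch the physical timer
emulation at all.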
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 7f4f12d48a1a..c7499364f2ed 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -202,7 +202,27 @@ static enum hrtimer_restart kvm_bg_timer_expire(struct hrtimer *hrt)
 
 static enum hrtimer_restart kvm_phys_timer_expire(struct hrtimer *hrt)
 {
-	WARN(1, "Timer only used to ensure guest exit - unexpected event.");
+	struct arch_timer_context *ptimer;
+	struct arch_timer_cpu *timer;
+	struct kvm_vcpu *vcpu;
+	u64 ns;
+
+	timer = container_of(hrt, struct arch_timer_cpu, phys_timer);
+	vcpu = container_of(timer, struct kvm_vcpu, arch.timer_cpu);
+	ptimer = vcpu_ptimer(vcpu);
+
+	/*
+	 * Check that the timer has really expired from the guest's
+	 * PoV (NTP on the host may have forced it to expire
+	 * early). If not ready, schedule for a later time.
+	 */
+	ns = kvm_timer_compute_delta(ptimer);
+	if (unlikely(ns)) {
+		hrtimer_forward_now(hrt, ns_to_ktime(ns));
+		return HRTIMER_RESTART;
+	}
+
+	kvm_timer_update_irq(vcpu, true, ptimer);
 	return HRTIMER_NORESTART;
 }
 
@@ -256,24 +276,28 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
 }
 
 /* Schedule the background timer for the emulated timer. */
-static void phys_timer_emulate(struct kvm_vcpu *vcpu,
-			       struct arch_timer_context *timer_ctx)
+static void phys_timer_emulate(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
 
-	if (kvm_timer_should_fire(timer_ctx))
-		return;
-
-	if (!kvm_timer_irq_can_fire(timer_ctx))
+	/*
+	 * If the timer can fire now we have just raised the IRQ line and we
+	 * don't need to have a soft timer scheduled for the future. If the
+	 * timer cannot fire at all, then we also don't need a soft timer.
+	 */
+	if (kvm_timer_should_fire(ptimer) || !kvm_timer_irq_can_fire(ptimer)) {
+		soft_timer_cancel(&timer->phys_timer, NULL);
 		return;
+	}
 
-	/* The timer has not yet expired, schedule a background timer */
-	soft_timer_start(&timer->phys_timer, kvm_timer_compute_delta(timer_ctx));
+	soft_timer_start(&timer->phys_timer, kvm_timer_compute_delta(ptimer));
 }
 
 /*
- * Check if there was a change in the timer state (should we raise or lower
- * the line level to the GIC).
+ * Check if there was a change in the timer state, so that we should either
+ * raise or lower the line level to the GIC or schedule a background timer to
+ * emulate the physical timer.
  */
 static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
 {
@@ -295,6 +319,8 @@ static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
 
 	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
 		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+
+	phys_timer_emulate(vcpu);
 }
 
 static void vtimer_save_state(struct kvm_vcpu *vcpu)
@@ -441,6 +467,9 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
 
 	if (has_vhe())
 		disable_el1_phys_timer_access();
+
+	/* Set the background timer for the physical timer emulation. */
+	phys_timer_emulate(vcpu);
 }
 
 bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu)
@@ -472,16 +501,9 @@ bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu)
 void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
-	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
 
 	if (unlikely(!timer->enabled))
 		return;
-
-	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
-		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
-
-	/* Set the background timer for the physical timer emulation. */
-	phys_timer_emulate(vcpu, vcpu_ptimer(vcpu));
 }
 
 void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
@@ -496,6 +518,17 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
 
 	vtimer_save_state(vcpu);
 
+	/*
+	 * Cancel the physical timer emulation, because the only case where we
+	 * need it after a vcpu_put is in the context of a sleeping VCPU, and
+	 * in that case we already factor in the deadline for the physical
+	 * timer when scheduling the bg_timer.
+	 *
+	 * In any case, we re-schedule the hrtimer for the physical timer when
+	 * coming back to the VCPU thread in kvm_timer_vcpu_load().
+	 */
+	soft_timer_cancel(&timer->phys_timer, NULL);
+
 	/*
 	 * The kernel may decide to run userspace after calling vcpu_put, so
 	 * we reset cntvoff to 0 to ensure a consistent read between user
@@ -538,15 +571,8 @@ static void unmask_vtimer_irq(struct kvm_vcpu *vcpu)
  */
 void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
 {
-	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
 
-	/*
-	 * This is to cancel the background timer for the physical timer
-	 * emulation if it is set.
-	 */
-	soft_timer_cancel(&timer->phys_timer, NULL);
-
 	/*
 	 * If we entered the guest with the vtimer output asserted we have to
 	 * check if the guest has modified the timer so that we should lower