From patchwork Tue May 16 20:41:10 2017
X-Patchwork-Submitter: Radim Krčmář
X-Patchwork-Id: 9729639
Date: Tue, 16 May 2017 22:41:10 +0200
From: Radim Krčmář <rkrcmar@redhat.com>
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH] KVM: x86: lower default for halt_poll_ns
Message-ID: <20170516204109.GA15344@potion>
References: <20170418104118.21719-1-pbonzini@redhat.com>
 <4385bc02-2456-e8a7-a22b-cc2783ad6822@redhat.com>
In-Reply-To: <4385bc02-2456-e8a7-a22b-cc2783ad6822@redhat.com>
List-ID: kvm@vger.kernel.org

2017-05-16 18:58+0200, Paolo Bonzini:
> On 18/04/2017 12:41, Paolo Bonzini wrote:
>> In some fio benchmarks, halt_poll_ns=400000 caused CPU utilization to
>> increase heavily even in cases where the performance improvement was
>> small.  In particular, bandwidth divided by CPU usage was as much as
>> 60% lower.
>>
>> To some extent this is the expected effect of the patch, and the
>> additional CPU utilization is only visible when running the
>> benchmarks.  However, halving the threshold also halves the extra
>> CPU utilization (from +30-130% to +20-70%) and has no negative
>> effect on performance.
>>
>> Signed-off-by: Paolo Bonzini
>
> Ping?

I didn't see any regression in crude benchmarks either, and 200 us seems
better anyway (just under half of Windows' timer period).

Queued for rc2 as it is simple enough, thanks.

---
Still, I think we have dynamic polling to mitigate this overhead; how
was it behaving?

I noticed a questionable decision in growing the window: we know how
long the polling should have been (block_ns), but we do not use that
information to set the next halt_poll_ns.  Has something like this been
tried?  It would avoid the case where several halts in a row are
interrupted after 300 us: we would schedule out after polling only 10 us
on the first one, then after 20, 40, 80, and 160 us, and only get a
successful poll once the window reaches 320 us; all the earlier polling
is wasted time, and we start over whenever the window is reset before
that point.

(I really don't like benchmarking ...)

Thanks.

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f0fe9d02f6bb..d8dbf50957fc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2193,7 +2193,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		/* we had a short halt and our poll time is too small */
 		else if (vcpu->halt_poll_ns < halt_poll_ns &&
 			block_ns < halt_poll_ns)
-			grow_halt_poll_ns(vcpu);
+			vcpu->halt_poll_ns = block_ns /* + x ? */;
 	} else
 		vcpu->halt_poll_ns = 0;
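
To make the wasted-time arithmetic above concrete, here is a minimal
user-space sketch; it is only an illustration of this note, not kernel
code.  The 10 us base and factor-of-2 growth match the sequence quoted
in the example (the then-default halt_poll_ns_grow behaviour); all other
names and the output format are made up for the sketch.  It models a
vCPU that is woken 300 us after every halt and compares growing the
window exponentially against setting it from the observed block_ns:

#include <stdio.h>

#define NSEC_PER_USEC	1000ULL

int main(void)
{
	unsigned long long block_ns = 300 * NSEC_PER_USEC; /* wakeup 300 us after halt */
	unsigned long long window = 0, wasted = 0;
	int halts = 0;

	/* Policy A: grow the poll window only after each failed poll. */
	for (;;) {
		halts++;
		if (window >= block_ns)
			break;		/* wakeup finally arrives while polling */
		wasted += window;	/* polled this long, then scheduled out anyway */
		window = window ? window * 2 : 10 * NSEC_PER_USEC;
	}
	printf("grow:     %d halts, %llu us polled in vain, final window %llu us\n",
	       halts, wasted / NSEC_PER_USEC, window / NSEC_PER_USEC);

	/* Policy B: jump straight to block_ns after the first missed poll. */
	printf("block_ns: 2 halts, 0 us polled in vain, final window %llu us\n",
	       block_ns / NSEC_PER_USEC);
	return 0;
}

In this toy model the exponential policy burns 310 us of pure polling
across six unsuccessful halts before the window finally covers the
wakeup, which is exactly the waste that assigning block_ns (plus some
margin) would avoid, at the cost of overshooting when a single long
block is not representative.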