From patchwork Thu May 11 14:41:57 2017
X-Patchwork-Submitter: Paolo Bonzini <pbonzini@redhat.com>
X-Patchwork-Id: 9722125
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Date: Thu, 11 May 2017 16:41:57 +0200
Message-Id: <20170511144208.24075-8-pbonzini@redhat.com>
In-Reply-To: <20170511144208.24075-1-pbonzini@redhat.com>
References: <20170511144208.24075-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 07/18] throttle-groups: only start one coroutine from drained_begin
Cc: famz@redhat.com, qemu-block@nongnu.org, stefanha@redhat.com

Starting all waiting coroutines from bdrv_drain_all is unnecessary:
throttle_group_co_io_limits_intercept calls schedule_next_request as
soon as a coroutine restarts, which in turn restarts the next request
if possible.  If we only start the first request and let the coroutines
dance from there, the code is simpler and there is more reuse between
throttle_group_config, throttle_group_restart_blk and timer_cb.  The
next patch will benefit from this.

We also stop throttle_group_restart_blk from touching the
blkp->throttled_reqs CoQueues when there is no attached throttling
group.  This worked but was not pretty.

The only thing that can interrupt the dance is the QEMU_CLOCK_VIRTUAL
timer when switching from one block device to the next, because the
timer is set to "now + 1" but QEMU_CLOCK_VIRTUAL might not be running;
its deadline would then never be reached.  Set the timer to the present
("now") rather than the future and things work.

Signed-off-by: Paolo Bonzini
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Alberto Garcia
---
v1->v2: new

 block/throttle-groups.c | 45 +++++++++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 20 deletions(-)
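As an aside for reviewers, here is the "dance" in isolation, reduced to
plain C (a minimal sketch with hypothetical names; ordinary function
calls stand in for coroutine wakeups and an array for the CoQueue).
One external kick at the head of the queue is enough, because every
request that runs wakes its successor:

    #include <stdio.h>

    enum { QUEUE_LEN = 4 };

    static int pending[QUEUE_LEN] = { 1, 2, 3, 4 };  /* queued request ids */
    static int next_idx;                             /* head of the queue */

    static void schedule_next_request(void);

    /* A request that wakes up does its I/O and then restarts its
     * successor, just as throttle_group_co_io_limits_intercept calls
     * schedule_next_request once its coroutine resumes. */
    static void run_request(int id)
    {
        printf("request %d runs\n", id);
        schedule_next_request();
    }

    static void schedule_next_request(void)
    {
        if (next_idx < QUEUE_LEN) {
            run_request(pending[next_idx++]);
        }
    }

    int main(void)
    {
        schedule_next_request();  /* one kick runs all four requests */
        return 0;
    }

The real code keeps two such queues per BlockBackend, indexed by
is_write, so reads and writes dance independently.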
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
index 69bfbd44d9..85169ecfb0 100644
--- a/block/throttle-groups.c
+++ b/block/throttle-groups.c
@@ -292,7 +292,7 @@ static void schedule_next_request(BlockBackend *blk, bool is_write)
     } else {
         ThrottleTimers *tt = &blk_get_public(token)->throttle_timers;
         int64_t now = qemu_clock_get_ns(tt->clock_type);
-        timer_mod(tt->timers[is_write], now + 1);
+        timer_mod(tt->timers[is_write], now);
         tg->any_timer_armed[is_write] = true;
     }
     tg->tokens[is_write] = token;
@@ -340,15 +340,32 @@ void coroutine_fn throttle_group_co_io_limits_intercept(BlockBackend *blk,
     qemu_mutex_unlock(&tg->lock);
 }
 
+static void throttle_group_restart_queue(BlockBackend *blk, bool is_write)
+{
+    BlockBackendPublic *blkp = blk_get_public(blk);
+    ThrottleGroup *tg = container_of(blkp->throttle_state, ThrottleGroup, ts);
+    bool empty_queue;
+
+    aio_context_acquire(blk_get_aio_context(blk));
+    empty_queue = !qemu_co_enter_next(&blkp->throttled_reqs[is_write]);
+    aio_context_release(blk_get_aio_context(blk));
+
+    /* If the request queue was empty then we have to take care of
+     * scheduling the next one */
+    if (empty_queue) {
+        qemu_mutex_lock(&tg->lock);
+        schedule_next_request(blk, is_write);
+        qemu_mutex_unlock(&tg->lock);
+    }
+}
+
 void throttle_group_restart_blk(BlockBackend *blk)
 {
     BlockBackendPublic *blkp = blk_get_public(blk);
-    int i;
 
-    for (i = 0; i < 2; i++) {
-        while (qemu_co_enter_next(&blkp->throttled_reqs[i])) {
-            ;
-        }
+    if (blkp->throttle_state) {
+        throttle_group_restart_queue(blk, 0);
+        throttle_group_restart_queue(blk, 1);
     }
 }
 
@@ -376,8 +393,7 @@ void throttle_group_config(BlockBackend *blk, ThrottleConfig *cfg)
     throttle_config(ts, tt, cfg);
     qemu_mutex_unlock(&tg->lock);
 
-    qemu_co_enter_next(&blkp->throttled_reqs[0]);
-    qemu_co_enter_next(&blkp->throttled_reqs[1]);
+    throttle_group_restart_blk(blk);
 }
 
 /* Get the throttle configuration from a particular group. Similar to
@@ -408,7 +424,6 @@ static void timer_cb(BlockBackend *blk, bool is_write)
     BlockBackendPublic *blkp = blk_get_public(blk);
     ThrottleState *ts = blkp->throttle_state;
     ThrottleGroup *tg = container_of(ts, ThrottleGroup, ts);
-    bool empty_queue;
 
     /* The timer has just been fired, so we can update the flag */
     qemu_mutex_lock(&tg->lock);
@@ -416,17 +431,7 @@ static void timer_cb(BlockBackend *blk, bool is_write)
     qemu_mutex_unlock(&tg->lock);
 
     /* Run the request that was waiting for this timer */
-    aio_context_acquire(blk_get_aio_context(blk));
-    empty_queue = !qemu_co_enter_next(&blkp->throttled_reqs[is_write]);
-    aio_context_release(blk_get_aio_context(blk));
-
-    /* If the request queue was empty then we have to take care of
-     * scheduling the next one */
-    if (empty_queue) {
-        qemu_mutex_lock(&tg->lock);
-        schedule_next_request(blk, is_write);
-        qemu_mutex_unlock(&tg->lock);
-    }
+    throttle_group_restart_queue(blk, is_write);
 }
 
 static void read_timer_cb(void *opaque)
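For completeness, why the deadline moves from "now + 1" to "now": a
minimal sketch with a fake stopped clock (hypothetical names; the
expiry check has the same sense as QEMU's timer_expired_ns).  A
deadline strictly in the future never becomes due if the clock cannot
advance, while a deadline equal to the current time is due immediately:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* A virtual clock that is not running: it never advances. */
    static const int64_t clock_now = 1000;

    static bool deadline_due(int64_t deadline)
    {
        return clock_now >= deadline;
    }

    int main(void)
    {
        int64_t now = clock_now;

        /* Old behavior, timer_mod(..., now + 1): never due on a stopped
         * clock, so the queued request would hang until the clock
         * restarts. */
        printf("now + 1 due: %s\n", deadline_due(now + 1) ? "yes" : "no");

        /* New behavior, timer_mod(..., now): already due, so the timer
         * callback runs and the dance continues. */
        printf("now     due: %s\n", deadline_due(now) ? "yes" : "no");
        return 0;
    }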